Realism

In our modern society we prize objective truth, for only objective things, not metaphysical things, can be reliable. But take colour. Although we assume colour is objective, it turns out to be nothing other than our brain’s subjective rendering of photon frequency. The notes we hear on a piano are also subjective: there are no sounds in the physical world, only vibrating molecules. Equally, smell is an interpretation of molecular shape, and even the apparent solidity of matter as sensed through touch was discredited by Rutherford’s scattering experiment.

Society assumes a form of naive realism: what we perceive is an objective representation of the world. But our perceptions are better described as subjective impressions (called sense-data) that we receive and then locally and indirectly render into a subjective image of something we can recognise:
‘Our hermeneutical equipment, then, is formed at the synaptic level, is capable of reformation, and is even now producing the conceptual schemes or imaginative structures by which we make sense of the world around us. My perceptions of the world is based on a network of ever-forming assumptions about my environment, and in a series of well-tested assumptions, shared by others with whom I associate, about the way the world works. Ambiguous data may present different hypotheses, but my mind disambiguates that data according to what I have learned to expect. That is, embodied human life performs like a cultural, neuro-hermeneutic system, locating (and thus making sense of) current realities in relation to our grasp of the past and expectations of the future’[1].
So if all our perceptions so readily reduce to subjective renderings, how can we ever hope to establish what is objectively true? The task becomes even more formidable in light of Gödel’s[2] incompleteness theorem, which proves, for the case of arithmetic at least, that global truth cannot be mathematically justified: ‘no consistent system of axioms listed by an effective procedure is capable of proving all facts about natural numbers. For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system.’
Before attempting to extend the case to language we must identify what type of thing might be the primary bearer of truth claims. Sentence-types are unlikely to be the primary bearer of truth because they cannot resolve indexical claims; for example, is the claim ‘I like nuts’ true or false? Sentence-tokens, being grounded in space and time, can resolve indexical phrases, but unfortunately we can imagine sentences that will never be uttered, and so their corresponding tokens cannot exist. Beliefs and opinions suffer from the fact that if there were no minds or languages then there would be no truth at all: the statement ‘there are no conscious creatures’ should be true in such a world, yet the statement paradoxically cannot exist if there is no language to utter it in. To escape these problems, propositions are usually assigned the role of primary truth bearer. Critically, propositions obey two laws:

Every proposition is true or false - the Law of the Excluded Middle
No proposition is both true and false - the Law of Non-contradiction
According to Tarski’s[3] semantic theory[4], a T-proposition consists of a containing sentence and a contained sentence. For example, consider the sentence:
T: ‘L’herbe est verte’ is true if and only if grass is green.
Tarski claims there must always be two languages: the quoted sentence (French) is an element of the object language, while the outer sentence (English) that applies the truth predicate is an element of the meta-language. To avoid contradiction, then, truth predicates cannot be contained in the object language; they must be contained in the meta-language. This principle is formalised in Tarski’s undefinability theorem[5].
Similar to the case for arithmetic, Tarski’s undefinability theorem concludes that no language is semantically self-representational. Phrasing it informally, we could say that according to the undefinability theorem it is only possible to define truth for a language L in so much as truth exists in some meta-language that has descriptive power beyond L.
For example, take the sentence ‘truth for an English sentence is not definable in English’[6]. Now let’s take the familiar claim ‘this sentence is false’ and call it S. Since ‘this sentence’ refers to S, we have: S if and only if ‘this sentence is false’, if and only if ‘S is false’, if and only if not-S, which is a contradiction, discounting the claim S. For Tarski, truth predicates cannot be proven from within; they can only be tested against a meta-language. The problem, then, is how to identify the constituents of that meta-language.
Coherence theories, on the other hand, try to access the meta-language by methodically comparing a set of proposed truth claims against generally accepted truth claims. Essentially, coherence theories postulate that a set of propositions is not false if and only if the set is both internally and externally coherent.
I will generally adopt this pragmatic, negative approach, which has so far proven effective in establishing some very basic facts about our world.
 
RELATIVITY[7]
During the 1860s Maxwell developed Faraday’s experimental findings into a set of equations that elegantly unified electricity and magnetism. The equations showed that electromagnetic waves propagate at the speed of light, independent of an observer’s velocity[8].
Einstein used this remarkable result to formulate his first axiom: light travels through empty space at the same speed regardless of the motion of the source or observer. His second axiom followed from philosophical persuasion: no experiment can ever be performed that is capable of identifying absolute motion.
To understand the implications of these two axioms, imagine an apparatus in which a beam of light bounces between two mirrors. If the mirrors are one meter apart then the round trip takes 6.7 nanoseconds. So what happens when this simple light clock is placed on a moving train? According to Newton both time and space are absolute, so it stood to reason that the light clock would continue to tick at the same rate: 6.7 nanoseconds. But is this really what happens?
From the perspective of an observer on the train, at rest relative to the light clock, the start point and end point of the round trip are coincident, so, because the light beam travels the same distance as if the clock and observer were at rest, we must agree with Newton that the light clock ticks at the same rate.
But what would a second observer, standing on the platform watching the train move away, measure? For him the start and end points of the light’s round trip are not coincident, so geometry dictates that the light beam must have travelled further. For the observer on the platform, either the light beam took longer to make the round trip or the light beam sped up to keep the clock ticking at the same rate.
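The geometry can be made quantitative. A minimal sketch, taking mirror separation L and train speed v: the platform observer sees the beam trace the hypotenuse of a right triangle, and Pythagoras yields the standard time-dilation factor:

    t_rest = 2L/c = 2 × 1 m / (3 × 10⁸ m/s) ≈ 6.7 ns
    t_platform = t_rest / √(1 − v²/c²)

Since the denominator is less than one for any moving train, the platform observer must measure a longer round trip.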

But only the first option satisfies Maxwell’s equations. Now consider muons, which on average decay after 2.2 microseconds. In this lifetime they should be able to complete about 15 laps of a fourteen-meter-diameter accelerator. But observations show that muons actually complete over 400 laps before they decay. These extra laps are explained by a slower decay rate, in the same way the moving light clock slows when observed from the platform. But there is still an obvious problem. If scientists now jump on a muon, then the very giddy scientists will again measure the decay rate to be 2.2 microseconds; so from this new, albeit uncomfortable, perspective the muons do not live long enough to do 400 laps.
To preserve reality (400 laps is really 400 laps) we must conclude that from this new perspective the distance around the ring has shrunk. In summary, then: if you observe the muons as a stationary observer, the muons live long enough to do 400 laps around the 14 m diameter ring, while if you travel with the muons, the decay rate equals the rest decay rate, but because the circumference of the ring has shrunk you still end up doing 400 laps.
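The numbers hang together. As a rough worked check, taking the muon speed to be essentially c:

    naive laps: (3 × 10⁸ m/s × 2.2 μs) / (π × 14 m) ≈ 660 m / 44 m ≈ 15
    400 observed laps ⇒ γ ≈ 400 / 15 ≈ 27
    lab-frame lifetime: 27 × 2.2 μs ≈ 59 μs
    muon-frame circumference: 44 m / 27 ≈ 1.6 m ⇒ 660 m / 1.6 m ≈ 400 laps

Both perspectives deliver the same 400 laps; only the bookkeeping between time and space differs.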

Einstein sacrificed the invariance of time and the invariance of space. This was the cost he was willing to pay to preserve reality in a Maxwellian universe.
Einstein’s way of thinking about the world takes a little getting used to. Consider the ladder paradox, in which a moving ladder contracts from the perspective of a stationary observer and so will fit into an equally sized garage, while from the perspective of an observer travelling with the ladder it is the garage that shrinks, and so the ladder must be too long to fit. So which is it: does the ladder fit or not?
According to Minkowski[9], if a non-Euclidean space-time measurement is taken, rather than separate temporal and spatial measurements, then reality will be preserved, because the space-time interval remains invariant across constant-velocity reference frames (the ladder has the same space-time length as the garage, so the paradox arises because we are considering only part of the problem). For Minkowski, then, space and time ‘are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality’[10].
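A minimal sketch of the invariant in question: for two events separated by time Δt and distance Δx, every constant-velocity observer computes the same interval

    (Δs)² = (cΔt)² − (Δx)²

however much their individual Δt and Δx measurements disagree. The ladder and garage observers disagree about lengths and simultaneity, but never about Δs.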
Another example is the famous twin paradox. From the perspective of the stationary twin on Earth, it is his brother the astronaut who flies off to the stars. But from the astronaut’s perspective, it is the twin left on Earth who flies off to the stars. Why then, when the twins reunite from across the galaxy, should the Earth-bound twin be older than the astronaut if the case is truly symmetric?
According to Minkowski’s non-Euclidean space-time, the case is not symmetric, for the Earth-bound twin used up only time on his path while his brother used both time and distance. To remain space-time invariant, the astronaut’s path moved through less time (since he moved through more space), explaining the age difference upon their reunion. But how do we make the judgement that one observer uses more time than the other without a preferred frame?
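In terms of the interval above, the time a clock accumulates along its own path (its proper time) can be sketched, for piecewise-constant speeds, as

    τ = Σ Δt √(1 − v²/c²)

Evaluated in any single constant-velocity frame, the astronaut’s high-v legs contribute less τ than the Earth twin’s, so he returns younger; and the sum itself is frame-invariant.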
Suppose you were deep in space with no reference to any external objects: no stars, no galaxies, nothing except you and a handy bucket of water. According to Newton, you cannot know whether you are stationary or moving with constant velocity.

But this is not the case for acceleration. All you would need to do is look at the curvature of the water in the bucket to determine whether you are spinning. But what is the velocity vector of the water changing relative to? As in the twin paradox, why should there be a force to curve the water if there is no preferred frame to measure the change in the velocity vector against?
For Newton this changing velocity vector implied that there had to be a preferred reference frame. But Einstein showed that space and time are not absolute entities; only a combined space-time measurement preserves reality.

So how do we explain the spinning bucket? Do Newton’s absolute time and absolute space re-emerge in the more general case of acceleration? To answer this question Einstein took vanishingly small segments of flat Minkowski space-time (which are only valid for constant-velocity problems) and meshed them together into a new curved non-Euclidean geometry. With what seems a magical wave of the wand, Einstein’s new curved space-time preserved the invariance of Minkowski’s constant-velocity space-time even as the velocity vectors changed.

As it turns out, Einstein’s general relativity essentially agrees with Newton’s conclusion: the concave shape of the water is caused by a constantly changing velocity vector that is measured against a frame of reference. But there is a difference between the two descriptions. Newton measured the force that curves the water relative to absolute time and absolute, immutable space, while Einstein measured the same force relative to a dynamic, contorted, ever-changing space-time field that permeates all space and all time.
But the implications of Einstein’s new dynamic theory were nothing short of diabolical. Suppose, for example, there are two widely separated observers, A and B. If observer A moves away from B, then, because their clocks now tick at different rates, A’s now must move into observer B’s past. Now suppose observer A turns around and approaches B. In that case A’s now will move into observer B’s future. But how can that be? How can A’s now contain B’s future when B has not had the chance to live it yet?
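The size of the shift falls directly out of the Lorentz transformation. As a rough sketch: for an observer moving at speed v, an event at distance x is re-labelled in time by approximately

    Δt ≈ v·x / c²

At galactic distances (x ~ 10²¹ m) even walking pace (v ~ 1 m/s) swings the distant ‘now’ by around 10⁴ seconds, into the past or the future depending on the walker’s direction.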
One explanation is that the now we experience is not ontologically significant. For Einstein[11] ‘it appears more natural to think of physical reality as a four-dimensional existence, instead of, as hitherto, the evolution of a three-dimensional existence’.
This view of the world, known as eternalism, pictures the three spatial dimensions smeared across a fourth temporal dimension. Although this view stands against our every intuition, the mathematics works.
So why do we experience things in the now, remember things from the immutable past and hope for things in the presumably open future? If time has no direction then a dropped glass shattering should be equivalent to a shattered glass assembling (think of playing a video in reverse), but we never experience this. Why do we only experience a temporal arrow that inexorably moves us towards the future? Why does glass shatter, but shattered glass never assemble?
The answer is entropy. Left unattended a house gets messy; an ordered deck of cards thrown into the air almost always lands in a less ordered pile. The reason is simple enough: an ordered arrangement is only one possibility in a sea of unordered possibilities. Entropy, which measures disorder, tells us that the universe started in an extraordinarily ordered state, and processes have been picking out less ordered states ever since.
Just as in the card example, where it is far more likely to find an unordered pile than an ordered pile, our brains select states from the sea of disordered states rather than from the few ordered states, and in so doing inexorably shift our awareness from what we have experienced to what we are experiencing to what we will experience. That is not to say our brains cannot select more ordered states, but the likelihood is astronomically small.
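A back-of-envelope illustration, sketched in Python, of just how lopsided that sea of states is for even a single deck of cards:

    import math

    # Number of distinct orderings of a 52-card deck
    orderings = math.factorial(52)   # about 8.07e67
    # Probability that a random shuffle lands on the one fully sorted deck
    p_sorted = 1 / orderings         # about 1.2e-68
    print(f"{orderings:.2e} possible decks; P(sorted) = {p_sorted:.1e}")

A random process sampling these states will, for all practical purposes, never stumble on the ordered one.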

The arrow of time, then, may not be an ontological concept that dictates our very existence but an artefact of how our brains select new states from the sea of existent states. But surely such a description of the world chains our every move to a determined, clockwork, block universe that just is. It is little wonder Einstein quipped that ‘God does not play dice with the universe’.
 
QUANTUM[12]
Now consider the electron two-slit experiment (electron diffraction was first demonstrated by Davisson and Germer). In the experiment electrons are fired one at a time at a barrier with two slits. Experimenters can accurately measure the source and destination of each electron, but they cannot measure which path the electron takes between the two points.
Classical particle theory predicts two intense bands, one behind each slit, but instead multiple bands of alternating maximum and minimum intensity, consistent with an interference pattern, were observed. But how can interference occur if electrons are released one at a time? Can a discrete particle like an electron really interfere with itself by passing through two space-separated positions at the same time?
Diabolically, when the experimenters tried to detect which slit the electron passed through, the interference pattern collapsed. As it turns out, it is impossible to determine whether the electron is here or there; only probabilities can be assigned to the likelihood that it is here or there.
At first many scientists were uncomfortable with the probabilistic nature of quantum theory. Einstein, Podolsky and Rosen, for example, devised the EPR paradox[13] to question the validity of Heisenberg’s uncertainty principle. The paradox remained untested for decades until Bell finally proposed an inequality that could settle the matter. In a series of experiments Alain Aspect[14] showed that Bell’s inequality could in fact be violated; on this point, it seemed, Einstein was very wrong.
Building upon Aspect’s work, Jean-François Roch of the École Normale Supérieure de Cachan was able to verify Wheeler’s[15] delayed-choice thought experiment. He began by shooting single photons at a half-silvered mirror to cleave the quantum wave into two. After travelling different distances, the two halves were recombined at a second beam splitter some 50 m away. The experiment randomly turned the second beam splitter on or off after the photon had passed the first beam splitter. When the second beam splitter was turned off, the photon took one path or the other with 50% probability, independent of the difference in path length (particle-like behaviour). When the second beam splitter was turned on, interference dependent upon the difference in path length was observed (wave-like behaviour).
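For an idealised interferometer of this kind the two regimes are simple to state. With the second splitter removed, each detector fires with probability 1/2; with it in place, the phase difference accumulated over the path-length difference ΔL sets the odds:

    φ = 2πΔL/λ,  P(detector 1) = cos²(φ/2),  P(detector 2) = sin²(φ/2)

This is a textbook sketch rather than the exact arrangement of the Cachan apparatus.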
The experiment showed that photons do not decide to behave like particles or waves when passing through the first beam splitter, but delay their choice until the second beam splitter is inserted or omitted. To achieve this feat the photons hedge their bets by remaining in a space-separated probabilistic state until collapsing in accordance with the observer’s later choice. Like its predecessor, this experiment verifies that quantum particles exist in a probabilistic state until measured this way or that.
To try to make sense of these findings, Dirac wondered whether space-separated and particle-like behaviour (wave-particle duality) could be described by a quantum rather than a classical field. A classical field, as supposed by Einstein’s general relativity, is a mathematical construct continuously spread out in time and space; it is intrinsically wave-like in nature. For Dirac, quantising the field introduced discrete physical properties that provided the necessary resources to explain particle-like behaviour while still preserving Maxwell’s elegant wave description.
So what could Dirac’s quantum field look like? One way to visualise the field is to use a Fourier series to describe an infinite number of discrete harmonic oscillators. Each oscillator at a particular frequency can be compared to a pendulum. Think of the quantum field as the representation of all these pendulums in their lowest energy state: at rest. But the uncertainty principle, which Bell and Aspect affirmed, will not allow the oscillators to have both a definite position (at the bottom) and a definite momentum (at rest). The oscillators must therefore always be in zero-point motion: enough not to violate the uncertainty principle, but not enough to entangle particles. The quantum field, then, must be a writhing, pulsing, ever-moving sea of activity which in turn influences the trajectories of enduring particles[16].
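A sketch of why rest is forbidden: the allowed energies of a quantum harmonic oscillator of frequency ω are

    Eₙ = (n + ½)ħω,  n = 0, 1, 2, …

so even the ground state retains E₀ = ħω/2, the minimum jitter compatible with Δx·Δp ≥ ħ/2.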
Newton believed space and time were fixed background entities underlying material reality which, as such, participated in the motion of physical objects. Einstein discovered that Newton’s background was nothing other than the malleable and dynamic gravitational field, in which only invariant space-time descriptions could preserve reality. And finally, the two-slit and delayed-choice experiments suggest the field that underpins our reality is not continuous but particulate in nature.
Mathematically we can represent the quantum state of this particulate field as a linear combination of eigenstates. Every eigenstate has an associated phase, which gives the wave function its wave-like character in complex space. In order for the components to combine to produce a superposition state they must cohere; that is to say, they must share a definite phase relationship. This is what happens in the electron two-slit experiment, where constituents possessing the same phase build up an interference pattern.
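Sketched for the two-slit case, writing the state as a ‘slit 1’ plus a ‘slit 2’ component with amplitudes c₁ and c₂ and phases θ₁ and θ₂:

    ψ = c₁ψ₁ + c₂ψ₂,  P = |ψ|² = |c₁ψ₁|² + |c₂ψ₂|² + 2|c₁ψ₁||c₂ψ₂| cos(θ₁ − θ₂)

The cross term is the interference term; it survives only while the phase difference θ₁ − θ₂ remains sharp.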
But when an observation is made at either slit, the measurement photons interact with the passing particle. Each eigenstate of the particle forms a separate entangled state with the measuring photon, changing the overall phase relationship of the wave function. This process, known as decoherence, destroys the coherent in-phase relationship, leaving instead a mire of incoherent phase relationships.
When all the interference terms are sufficiently entangled, the particle’s wave function is said to have collapsed, leaving only one eigenstate. No longer is there any possibility of being here and there; the particle can now be found only here or there, because all the entangled states that created the fuzziness have ‘leaked’ into the environment.
The process of decoherence is in some ways analogous to throwing a rock into the sea. After the splash, the ripples dissipate until they no longer contribute to the initial system. The waves still exist in a complex superposition, adding to and cancelling the effects of other ripples, but it is impossible to recover any resemblance of the initial splash from the now complex superposition of dissipating waves.
Decoherence, then, is not a sudden jump described by the collapse of the wave function but the progressive elimination of the interference terms constituted in the wave function through relentless interaction with the particles and associated entanglements contained in the wider environment.
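In the standard formalism this shows up in the reduced density matrix. A sketch: once each component n becomes entangled with an environment state |Eₙ⟩, every interference term carries an environment overlap,

    ρₘₙ = cₘ cₙ* ⟨Eₙ|Eₘ⟩  for m ≠ n

and as the environment states become effectively orthogonal, ⟨Eₙ|Eₘ⟩ → 0, the off-diagonal interference terms fade while the diagonal probabilities |cₙ|² survive.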
Brian Greene[17] explains, ‘Decoherence forces much of the weirdness of quantum physics to leak from large objects since, bit by bit, the quantum weirdness is carried away by the innumerable impinging particles from the environment’.
Decoherence predicts that there will be only one outcome, which is chosen by an observer (or a system that could be defined as an observer). Before decoherence we might find the particle here or there; but after it is observed the particle can only exist in one place.

This characteristic of the quantum world is intriguing, for it directly brings into question whether our deterministic material world is fundamental or whether a ‘free’ observer is fundamental; which in turn brings into question whether Einstein’s universe is really a clockwork, determined universe or whether it can cater for stochastic behaviour after all.
To address these questions scientists first needed to understand how complex systems behave. Modern chaos theory[18] emerged from May’s investigation of the logistic equation x_{i+1} = r·x_i(1 − x_i). He noted that for r < 3 the equation converged to a stable solution, but as r increased above 3 the outcome refused to settle, oscillating instead between two possible outcomes. Increasing r further produces four outcomes, then eight, sixteen, thirty-two, and so on, until finally outcomes do not settle to any predictable solution at all. Turning r up still further produces emerging regions of new stability within the chaotic region, as new cycles of 3, 6, 12 or 7, 14, 28 emerge.
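The period-doubling cascade is easy to reproduce. A minimal Python sketch (parameter values chosen purely for illustration):

    # Iterate the logistic map x_{i+1} = r*x_i*(1 - x_i) and report the
    # long-run behaviour for a few illustrative values of r.
    def settled_states(r, x=0.5, burn_in=1000, sample=8):
        for _ in range(burn_in):      # discard the transient
            x = r * x * (1 - x)
        states = []
        for _ in range(sample):       # record the settled behaviour
            x = r * x * (1 - x)
            states.append(round(x, 4))
        return states

    for r in (2.8, 3.2, 3.5, 3.9):
        print(r, settled_states(r))
    # 2.8 -> a single fixed point; 3.2 -> a 2-cycle; 3.5 -> a 4-cycle;
    # 3.9 -> no repeating pattern at all (chaos)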
Yorke[19] proved that in any one-dimensional system, if a regular three-cycle appears then the system will go on to display regular cycles of every other length, as well as periods of chaotic behaviour. A critical property of the logistic equation within the chaotic region is its dependence not only on r but also on its initial boundary condition. Lorenz called this property the butterfly effect, suggesting a butterfly fluttering its wings in the Amazon could change weather conditions in North America weeks or months later. In fact the tiniest of disturbances will amplify into entirely unpredictable future outcomes. In this world of amplified perturbations our classical relational calculations must sooner or later fail us.
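The butterfly effect can be seen in the same equation. In this sketch, two trajectories at r = 3.9 start one part in a billion apart:

    # Two logistic-map trajectories whose starting points differ by 1e-9
    # quickly cease to resemble one another.
    r = 3.9
    a, b = 0.5, 0.5 + 1e-9
    for i in range(1, 51):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        if i % 10 == 0:
            print(i, round(abs(a - b), 6))
    # By around iteration 40 the separation is of order 1: the initial
    # perturbation has been amplified to the full size of the system.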
To study these complex non-linear systems, scientists use phase space diagrams to transform equations into visual maps. The goal of a phase space diagram is to identify one of two types of attractor: fixed attractors, which represent steady-state behaviour, or limit cycles, which represent behaviour that repeats continuously and predictably.
An example of a phase space diagram is that of a simple pendulum without friction. If the x axis is defined as position and the y axis as velocity, then a circle representing the motion of the pendulum is scribed around the origin. If there is no friction then the pendulum will follow that circle ad infinitum: a limit cycle. Adding friction produces a system that dissipates energy, reducing the swing and velocity of the pendulum. The phase space diagram spirals into the origin, which represents a fixed attractor where position and velocity equal zero; also a stable and predictable outcome.
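Both behaviours can be sketched numerically. A minimal Python example (step size and friction coefficient chosen only for illustration) integrates a small-angle pendulum with natural frequency 1:

    import math

    # Trace the phase-space point (position x, velocity v) of a pendulum,
    # with and without a friction term c*v.
    def final_point(c, x=1.0, v=0.0, dt=0.01, steps=5000):
        for _ in range(steps):
            v += (-x - c * v) * dt   # restoring force plus optional friction
            x += v * dt
        return x, v

    for c in (0.0, 0.3):
        x, v = final_point(c)
        print(f"c={c}: ({x:+.3f}, {v:+.3f}), radius {math.hypot(x, v):.3f}")
    # c=0.0 -> the point is still circling at radius ~1 (the closed cycle)
    # c=0.3 -> the point has spiralled into (0, 0), the fixed attractor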
To a physicist an attractor represents the promise that the behaviour, at least in the longer term, will settle to something describable. But what happens when dissipative systems refuse to settle toward classical attractors? How can physicists describe that behaviour? 
An everyday example is turbulence. To describe its countless states, conventional attractors would require countless degrees of freedom and correspondingly countless variables, producing an infinitely complex phase space diagram. Such systems, as Richard Feynman lamented, seem beyond description: ‘it bothers me … that it takes a computer an infinite amount of logical operations to figure out … what goes on in no matter how tiny a space … why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do?’
But Ruelle realised that because the path did not settle it had to be infinitely long, yet to be describable it had to have a limited number of degrees of freedom, which meant the path had to be finitely contained.
While this might seem impossible, there are in fact plenty of curves that fit the bill. The Koch curve, for example, can be lengthened ad infinitum by adding more and more triangles between the existing ones. No matter how many triangles are added, and no matter how long the curve becomes, it will always remain contained within a circle circumscribed around the first triangle. Mandelbrot named these types of geometries fractal geometries. Ruelle made the connection, realising the only language that could describe an infinitely long path contained in a finite phase space was Mandelbrot’s.
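The arithmetic behind the Koch curve makes the point. Each iteration replaces the middle third of every segment with two sides of a triangle, multiplying the total length by 4/3, so after n iterations

    Lₙ = L₀ (4/3)ⁿ → ∞ as n → ∞

yet every iteration stays inside the same bounding circle. The curve’s fractal dimension works out to log 4 / log 3 ≈ 1.26: more than a line, less than a plane.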
Ruelle reasoned that the strange attractor responsible for turbulence must then be fractal in nature. Thus in phase space, dissipation directs trajectories along convergent paths toward classical fixed attractors, classical limit cycles or fractal strange attractors; but critically, if a trajectory converges toward a fractal attractor then it will forever escape precise mathematical description.

Intriguingly, if future states cannot, even in principle, be predicted from the sum knowledge of all previous states, then it becomes increasingly difficult to defend the clockwork universe. Instead, the universe not only appears to transcend our three spatial dimensions but also appears open to the possibility of downward causation.



[1] Green, J. ‘Body, Soul and Human Life’ Baker Academic 2008
[2] Gödel, K. ‘Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I’ Monatshefte für Mathematik und Physik 38 (1931) 173-98
[3] Tarski, A. ‘Der Wahrheitsbegriff in den formalisierten Sprachen’ Studia Philosophica 1 (1936) 261-405
[4] A theory is a Tarskian truth theory for language L if and only if, for each sentence S of L, if S expresses the proposition P, then the theory entails a true ‘T-proposition’ of bi-conditional form.
[5] There is no L-formula True(x) which defines T. That is, there is no L-formula True(x) such that for every L-formula x, True(x) ↔ x holds.
[6] Contradiction shown by use of the liar paradox. Let S = ‘this statement is false’. If S is true, then ‘this statement is false’ is true, so S must be false: the hypothesis that S is true leads to the conclusion that S is false, a contradiction. If S is false, then ‘this statement is false’ is false, so S must be true: the hypothesis that S is false leads to the conclusion that S is true, another contradiction.
[7] Recommended reading: Cox, B. and Forshaw, J. ‘Why Does E=mc²?’ Perseus Books 2009
[8] Giving strong evidence that light itself is nothing other than an electromagnetic wave.
[9] Minkowski, H. ‘Raum und Zeit’ (1908) Jahresberichte der Deutschen Mathematiker-Vereinigung: 75-88. English translation: ‘Space and Time’, in The Principle of Relativity (1920) Calcutta: University Press, 70-88
[10] Lorentz, H.A., Einstein, A., Minkowski, H. and Weyl, H. ‘The Principle of Relativity’ Methuen, London 1923, p. 75
[11] Einstein, A. ‘The Principle of Relativity’ 1952
[12] Recommended reading: Polkinghorne, J. ‘Quantum Theory: A Very Short Introduction’ Oxford University Press 2002
[13] Einstein, A., Podolsky, B. and Rosen, N. Phys. Rev. 47 (1935) 777
[14] Aspect, A., Grangier, P. and Roger, G. Phys. Rev. Lett. 49 (1982) 91 & 1804
[15] Wheeler, J.A. ‘The Past and the Delayed-Choice Double-Slit Experiment’, in Mathematical Foundations of Quantum Theory, ed. A. Marlow (1978) New York: Academic Press
[16] The recent discovery of the Higgs boson, direct evidence for a field of this kind, adds significant force to this thesis.
[17] Greene, B. ‘The Fabric of the Cosmos: Space, Time, and the Texture of Reality’ Alfred A. Knopf 2004
[18] Recommended reading: Gleick, J. ‘Chaos’ Vintage 1987
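[19] Li, T.-Y. and Yorke, J.A. ‘Period Three Implies Chaos’ American Mathematical Monthly 82 (1975) 985-992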
