Friday, April 20, 2018

Length contraction is caused by normal forces

Consider this modified version of the Bell-Dewan-Beran thought experiment:

Two spaceships are attached by a rope. They begin accelerating, with exactly equal accelerations, thereby maintaining a constant distance between them, as seen from the original rest frame.*

Now what about the rope? It is moving too, hence it should contract, but since it is stretched between the two ships, and the ships maintain their separation, it cannot contract. Eventually it will break, and people argue about this to this day, but that isn't what we want to focus on.

Instead let's say the ships stop their acceleration before the rope breaks. Now the ships and the rope are moving at constant velocity, but the rope is still uncontracted. What happens next?

What happens next is that we get to observe (mentally!) the contraction process in a very pure form. The rope will contract to its expected length, because only at that length are its atomic bonds free of strain. At the uncontracted length the bonds are highly strained and the rope exerts a contractive force at each end, i.e., on the ships. It will pull the ships together until it shrinks to the expected contracted length. Then the contractive force ceases and the rope becomes slack (but of course the ships will continue moving closer, and will in fact collide).
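To put numbers on it, here is a minimal Python sketch (the rope length and final speed are made-up values) of the strain-free length the rope must shrink to:

    import math

    c = 299_792_458.0  # speed of light, m/s

    def lorentz_factor(v):
        # gamma = 1 / sqrt(1 - v^2/c^2)
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    L0 = 100.0   # rest length of the rope, metres (hypothetical)
    v = 0.6 * c  # final speed of the ships (hypothetical)

    # Only at L0/gamma is the moving rope free of strain; it pulls the
    # ships together until their separation shrinks to this value.
    print(L0 / lorentz_factor(v))  # 80.0 metres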

The contractive force comes about because the fields between protons and electrons are deformed in the way originally calculated by Heaviside and noted by FitzGerald. It is quite real and capable of transferring momentum to other objects - namely the spaceships in this case.

So what about the idea that length contraction is purely a perspectival effect, not entailing any stress or strain? Well, this is true ONLY of the initial and final inertial states. These states are connected by the Lorentz symmetry, and if one is stress-free, then the other is also stress-free. But the Lorentz symmetry tells you nothing about how a system goes from initial to final state. It is, after all, only one of the symmetries of nature - not a complete description of it. Every effect predicted by relativity happens through some particular dynamical process and it is worth understanding the major sorts of mechanisms at work. I tried to compile them here.

* In fact the distance between the ships will not remain exactly constant because they are contracting as they accelerate; however, this doesn't change the point illustrated.





Monday, January 25, 2016

This was my submission for the recent FQXI essay contest on the role of mathematics in physics. It didn't win a prize, but I think it makes some important points, hence I am reposting it here. It connects to a couple of earlier posts in this blog:
http://www.letstalkphysics.com/2014/03/why-i-dont-believe-bostrom-or-tegmark.html
http://www.letstalkphysics.com/2010/02/problem-with-quantum-mechanics.html

There are three basic points in the essay:

  1. The universe must obey mathematical laws since there is no other possible source of structure.
  2. The mathematical laws that we see are plausibly the only ones compatible with life.
  3. Although it must be described by mathematics (point #1), the universe cannot be a mathematical structure since the laws of quantum mechanics cannot be fully axiomatized.

Point #3 means that we cannot think of our universe as being just one member of an infinite "mathematical multiverse" consisting of all mathematical structures. Mathematical structures can be axiomatically defined, but aspects of our universe (indeed, aspects of quantum mechanics) cannot be captured by axioms. Nevertheless, point #1 implies that "everything there is to know" about the universe must be specified by mathematics; hence, our universe seems to lie at a sort of boundary of mathematical description. It is fully mathematical but it is not pure mathematics.

If the universe is not a mathematical structure then it can not be viewed in the Tegmarkian way, as part of an eternal "mathematical multiverse". It must therefore be a singular "creation" of some sort. Point #1 implies that any such creation must be based on mathematics, and point #2 implies that, if evolution of life is the "goal", that mathematics may be essentially unique.

Putting it all together suggests that our universe is a "singular, almost-entirely mathematical" creation which exists for the purpose of evolving life. It satisfies the unique mathematical laws which enable this outcome, so in this sense one could say there was "no choice" in its creation.

Of course this raises the question of what entity "created" the universe in the first place. I don't attempt to resolve this problem; I am only concerned with the universe that we experience, not a larger "meta-universe" in which it may be contained. If one dislikes the overall hypothesis, points 1, 2, and 3 are still separately worthy of consideration.

Response to Horgan

Recently science journalist John Horgan posted an opinion piece called "How Physics Lost Its Fizz". Following is my response, posted there as a comment.

Dear John Horgan,

When it comes to physics, you long ago decided that your popular-level understanding of frontier work is equivalent to the actual work itself. This is why you think there is no "fizz" - because you equate ideas which spring from a mathematically compelling foundation with superficially similar ideas that some philosopher or writer mentioned in the past. This is like saying that particle physics was never very novel because Democritus had already thought of particles.

Black holes were first conceived in 1783 and not convincingly observed until very recently; does that mean that for 200 years the concept of a black hole was either "not fizzy" (because someone already thought of it) or "not science" (because not connected with a direct observation)?

You need to exercise a little bit of modesty in making popular-level critiques of topics whose interest derives from aspects that cannot be understood at the popular level. You, in fact, cannot understand these ideas, and do not understand what is motivating them.

Have some respect for those people who are discovering various ideas through years (in fact decades) of very difficult study and research. You must know that these people are not idiots; each one of them would have been among the best students at their university, if not among the best in the entire world.

Did these very bright students suddenly become stupid after putting in the solid decade of very difficult study necessary to actually understand the frontier of physics? Or is it more likely that you, who have not put in this effort, do not understand the actual state of physics or the actual background of any of these ideas?

Saturday, April 19, 2014

Does believing in god accomplish anything?

This is a bit off-topic for my blog but I just find it very interesting. What do people get out of believing in god?

Belief in god doesn't, by itself, provide any moral guidance. For this one needs a whole set of more detailed beliefs about god's nature, and these beliefs are just free-standing moral beliefs that don't really have any connection to whether god exists or not. This is the famous Euthyphro dilemma of Socrates.

Belief in god also doesn't provide any meaningful explanation of ultimate origins. One may say the universe exists because god created it, and then the obvious followup is what created god? The typical answer to this is that god always existed and didn't need to be created; to which the equally obvious response is, then why can't the universe have always existed without ever being created?

Belief in god provides no hints of any of the remarkable things that have been discovered through science. To learn about these things the believer has to study the exact same books as the atheist, and with the exact same amount of dedication.

Lastly, belief in god doesn't provide any understanding of the purpose of existence. Most religions explicitly state that god's purposes are beyond our understanding, hence not explicable to either believers or non-believers. If a religion does provide some concept of purpose, then this again is a free-standing belief relating to values, similar to a moral belief, without any clear connection to the existence of god.

I'm certainly not the first person to ever point these things out. Many believers are more or less aware of these problems as well, yet belief persists. Why?

My best guess is that, even though one doesn't gain any direct information about anything, belief in god still provides a feeling that things might be meaningful or purposeful in some way that isn't possible without god. It is difficult for people - including me - to accept that the whole universe has no more meaning than a bunch of math equations, and that we, ourselves, are just bunches of atoms of no significance to existence as a whole.

Nevertheless, having a feeling that things might be meaningful doesn't tell you what the meaning is, and having a feeling that morals really matter doesn't tell you how to act morally. Belief in god doesn't seem to add anything but confusion to human-scale discussions of any of these topics. So next time someone asks whether you believe in god, answer them with a question: "what difference does it make?"




Why the speed of light is constant in relativity

This is a perennially confusing question which I attempted to answer on Quora (http://www.quora.com/What-does-the-constancy-of-the-speed-of-light-is-deduced-from-the-principle-of-relativity-mean-exactly). I also wrote a whole book, "Relativity Made Real", to provide a more detailed answer. Contrary to popular expositions, there is no really short and meaningful answer to this question, but here was my effort on Quora to condense the issue (and my book!) to its essence:

Special relativity is best divided into two conceptual pieces. The first, and most important, is a qualitative observation that all objects must be affected by motion. This is really not very surprising when one considers the structure of matter, i.e., electrons and nuclei held together by electric forces. The electric forces take time to transmit between the particles, so when the object is set into motion there are obviously some very complex changes to the forces inside the matter. It would be remarkable indeed if its shape and size did not change, along with the rate of any processes it happens to be experiencing (e.g., ticking, if it is a clock).

A good analogy is to imagine the individual electrons and nuclei as tiny people holding megaphones. They try to arrange themselves in a nice crystalline structure by shouting back and forth at their neighbors through the megaphones, and estimating their distances based on the response times and the volumes of the voices. The sound waves they exchange are analogous to the electric field, which also is transmitted by a type of wave, namely electromagnetic waves. 

If we now set this "object" made from tiny people into motion, it will be thrown into disarray, because now the sound waves will take more or less time getting from one person to the next, the voice volumes will be changed, and all the calculations will be off. The collection of people will not be able to maintain the same shape, and likewise a collection of moving atoms cannot maintain its same shape.  
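To make the disarray quantitative, here is a small Python sketch using sound-speed numbers (all values hypothetical): the round-trip time for a shout between two neighbors spaced along the direction of motion grows by exactly a factor of gamma squared.

    c = 343.0  # speed of the signal (sound, in the megaphone picture), m/s
    d = 10.0   # separation between two neighbors along the motion, metres

    for v in [0.0, 100.0, 300.0]:  # speed of the whole collection
        # chasing the front neighbor takes d/(c - v); the reply returns in d/(c + v)
        t = d / (c - v) + d / (c + v)
        print(v, t, t / (2 * d / c))  # last column is 1/(1 - v^2/c^2), i.e. gamma squared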

But if all objects are physically changed by motion, then so are measuring devices like clocks and rulers, which immediately implies that moving observers will also measure different values for almost every quantity. There is nothing sacred about measurements; they are carried out by ordinary physical objects, and the results they produce are dictated not by prior principles or philosophy, but by the physical system they are embedded in. 

The situation we just described could clearly become extremely complicated, with arbitrarily complicated motion effects. It is indeed very easy to write down arbitrarily complicated laws of this kind, and we wouldn't (and probably couldn't) live in such a universe. Here is where the second piece of the relativity puzzle comes into play: the "principle of relativity" postulates that our actual laws come from a subset of this larger, possible set, a subset which has a very special property.

When one studies the effect of motion on objects which are "held together by waves", as described above, one finds that there is a very surprising mathematical possibility for their behavior. The various waves involved can be structured in such a way that the effects on moving objects are exactly calibrated so that all observers will always measure the same speed for the waves ("speed of light"). It is extremely non-obvious that this is possible, and it certainly is not necessary in any way; our universe could have been built otherwise. (And it could also have been built without any waves at all, in which case Einstein's relativity would be impossible, and we would be discussing only the relativity of Newton/Galileo).  

This "principle of relativity" essentially postulates that our universe has the simplest kind of laws it can have, given that it is built on a foundation of waves. It places great restrictions on the allowed wave laws, to the point that many effects can be computed without even knowing anything else about those laws. That is why the "principle of relativity" appears to function as a free-standing law on its own, even though it is really a property of the underlying quantum wave laws of physics (for example, one can write down the Standard Model of particle physics without ever saying the word "relativity"). 

So the principle of relativity does, indeed, imply that everyone measures the same speed for light; in fact, that is the entire content of the principle. But under the hood what it is doing is picking out a certain very special subset of the enormous collection of possible wave theories, and postulating, rather hopefully, that our universe is described by only these kinds of wave. It certainly didn't have to be that way, but if it wasn't then it would be so complex that living creatures would probably never have evolved to discuss it.

Monday, April 14, 2014

Why Quantum Mechanics Requires Complex Numbers

Why do complex numbers feature so prominently in quantum mechanics, when classical mechanics got by just fine without them? 

Scott Aaronson gives this explanation, which revolves around the idea of wanting to assign a meaning to "negative probabilities":
http://www.scottaaronson.com/democritus/lec9.html

I don't find this convincing, because I don't see a reason why nature should care about using one sort of probability over another. There's no physical gain from doing this, in the sense of enabling a universe that we can live in. 

Rather, I would argue that the wavefunction has to be complex in order to have enough information to encode both position and velocity of particles into one function. A real-valued function works for position or velocity separately, but to have both in one function one needs the complex phase.
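Here is a little numerical sketch (numpy, units with hbar = 1) of that encoding: the velocity of a wave packet lives entirely in its complex phase, and throwing the phase away destroys the motion information while leaving the position information intact.

    import numpy as np

    x = np.linspace(-50, 50, 2048)
    k0 = 2.0  # momentum of the packet, stored purely in the complex phase
    psi = np.exp(-x**2 / 8) * np.exp(1j * k0 * x)

    k = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0]))
    spectrum = np.fft.fftshift(np.fft.fft(psi))
    print(k[np.argmax(np.abs(spectrum))])  # ~2.0: the velocity, read off the phase

    # a real-valued function keeps the position information but loses the motion
    spectrum_flat = np.fft.fftshift(np.fft.fft(np.abs(psi)))
    print(k[np.argmax(np.abs(spectrum_flat))])  # ~0.0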

But why does one want to stick both x and p into one wavefunction in the first place? Here is where the "physical" benefit comes in. Having both x and p encoded into one single wavefunction partially removes their independence, by making them connected through the uncertainty principle. This has very profound effects at the microscopic level, and most importantly it allows things in the universe to be stable.

Let's back up for a second and try to imagine a world built entirely using classical physics. Atoms would then be like little solar systems - and this would be terrible because classical orbiting systems are all different, so no two atoms would be alike, and they are also generically unstable. For example, classical particles can orbit as close as they like to the center, so over time they will give up bits and pieces of energy (e.g., through weak interactions with other atoms) and gradually fall to the center. It would be completely impossible to evolve living creatures using this kind of inconsistent and unstable building block.  

Similar problems afflict the classical theories of fields, in particular electromagnetism. There is an infinite range of frequencies, and classical physics allows each frequency to hold any amount of energy, however small. All the energy in the universe would then leak gradually into higher and higher frequency electromagnetic waves, and would become essentially useless. This is the so-called "ultraviolet catastrophe" which Max Planck was trying to solve when he discovered the quantum. 

So, classical physics is just not suitable as the underlying theory for a universe that can support life, because its components simply have too much freedom. The particle motions are not constrained enough to form consistent and stable building blocks, such as atoms, and the fields are infinite energy sinks that drain away all available energy. 

These problems are solved by quantum mechanics, and in particular what solves them is to encode position and velocity both into one wavefunction. Being intertwined in one function means they are not fully independent, and in fact the relationship is exactly the famous uncertainty principle (see e.g. http://www.letstalkphysics.com/2009/11/where-uncertainty-principle-really.html).

The uncertainty principle for particles means that squeezing a particle into a smaller space causes it to have higher velocity. Now remember the problem (one of them) with atoms in classical physics, namely that the electrons can orbit as close as they want to the center. This can't happen anymore because squeezing the electron close to the center makes it move faster, which carries it away from the center again. In other words there is a minimum size for the electron orbit - the "ground state" - and moreover its size and shape are completely determined by the uncertainty principle, hence are exactly the same for all atoms. This creates the stable and consistent building blocks needed to evolve life. (Of course things get more complicated for additional electrons and the higher energy orbits, but the principle is the same).
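The minimum orbit is easy to estimate. Taking the uncertainty principle as p ~ hbar/r and minimizing E(r) = hbar^2/(2 m r^2) - k e^2/r over r gives the Bohr radius; the sketch below is rough in spirit, but it lands on the exact textbook numbers:

    hbar = 1.054571817e-34  # J s
    m_e  = 9.1093837e-31    # electron mass, kg
    e    = 1.602176634e-19  # elementary charge, C
    k_e  = 8.9875517923e9   # Coulomb constant, N m^2 / C^2

    # dE/dr = 0  gives  r = hbar^2 / (m k e^2)
    r_min = hbar**2 / (m_e * k_e * e**2)
    E_min = hbar**2 / (2 * m_e * r_min**2) - k_e * e**2 / r_min

    print(r_min)      # ~5.3e-11 m: the Bohr radius, identical for every atom
    print(E_min / e)  # ~-13.6 eV: the hydrogen ground-state energy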

Now consider the electromagnetic field. Here the uncertainty principle implies that a mode with small wavelength ("squeezed into a small space") must oscillate faster, i.e., have higher energy. Again there is a tradeoff, and the result is that for each wavelength there is a minimum unit of energy that it can transfer - the quantum. The smaller the wavelength, the larger the unit, and this prevents energy from dribbling bit by bit into that infinite pool of wave modes, because for short wavelengths the mode can only accept large chunks of energy at a time. Lesser amounts of energy are therefore stabilized and don't get drained away.
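A few lines of Python show the drain shutting off. Compare the classical equipartition energy kT with Planck's mean energy per mode, hf/(e^(hf/kT) - 1), at room temperature (a sketch; the frequencies are arbitrary picks):

    import math

    h  = 6.62607015e-34  # Planck constant, J s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    T  = 300.0           # temperature, K

    for f in [1e12, 1e13, 1e14, 1e15]:  # infrared toward ultraviolet
        quantum = h * f  # smallest chunk of energy the mode can accept
        mean_E = quantum / math.expm1(quantum / (kB * T))
        print(f, mean_E, kB * T)  # classically EVERY mode would get kT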

In short, it’s just very hard to build stable systems on a classical foundation because classical particles (and fields) have too much freedom. Hence the subject of classical chaos theory, which has no real analog in quantum mechanics. Quantum mechanics solves this stability problem, and the complex-valued wavefunction lies at the heart of the solution. I haven’t seen any proof that QM is the *only* way to solve the stability problem, but I haven’t seen any other way either.


Wednesday, April 9, 2014

Demarcating the difference between science and non-science

Here is an article that lays out pretty clearly the current conventional wisdom about what distinguishes science from non-science, the so-called "demarcation problem":
http://scientopia.org/blogs/ethicsandscience/2006/12/02/has-the-demarcation-problem-been-solved/

In my opinion this is, unfortunately, not an adequate view. I use the term "unfortunately" here with full intent, because it really is unfortunate that the true distinction between science and non-science is something more abstract and not easily comprehensible to a layperson.

In fact the only possible demarcation between science and non-science is mathematizability. Scientific theories are those which could, in principle, arise from an underlying fully mathematical structure. This obviously includes evolution, which arises inevitably from the molecular basis of life, which in turn arises from the purely mathematical theory of elementary particles.

By contrast, any theory which involves a god is inherently not reducible to mathematics. Indeed, this could be taken as the definition of a god; I doubt any believer's conception of god corresponds to an entity whose every aspect and action is governed by mathematical formulas.

If, on the other hand, the universe is not described by mathematics at the deepest levels, then there is no underlying structure and no point even talking about science. If there is no underlying structure then anything is possible at any time. Any regularity that we happen to observe and study with "science" is not a reflection of underlying order, which by hypothesis does not exist, but rather is just the whim of gods or something like that.

Personally, I think this scenario is not only incompatible with the existence of science, but actually impossible, because regularity is necessary for existence, and regularity comes only from mathematics. Hence, in my opinion, all universes which can possibly exist will have science, and all for the same reason, namely that they are founded on mathematics.

What about falsifiability? In the view I propose here, this is closely entwined with science, but not absolutely essential in all cases.

Note first that, absent a mathematical foundation, falsifiability is clearly impossible, since it is impossible to formulate a falsifiable statement that is not compatible with reduction to mathematics. A falsifiable statement is something like "95% of chicken eggs have one end more pointy than the other". This statement is compatible with reduction to mathematics because, in fact, it arises in our world through the mathematical theory of the atoms from which chicken DNA and whole chickens are built. In general, any falsifiable statement must reflect a regularity of the universe which we can observe, and any such regularity is compatible with reduction to mathematics, since mathematics consists of the study of all regular structures.

Non-falsifiable statements, unfortunately, are not quite so simple. They fall into two types (that I am aware of). The first type involves entities, such as gods, which by definition are not reducible to mathematics. They can never be falsified because the entities involved are not, by definition, governed by any kind of mathematical laws, and hence they don't obey any rules that could conceivably be tested. They just "do what they want" (except that, as I mentioned above, I don't believe such entities can exist at all).

There are, however, other sorts of non-falsifiable statements which are compatible with mathematics. As we push the possible bounds of human knowledge, we are running into such statements in the form of multiple-universe theories and the anthropic principle. Multiple universes are easily and naturally predicted by many mathematical theories, yet it is difficult to see how we could ever have clear evidence for their existence. Nevertheless they clearly make sense, and could exist, and hence our notions of science must expand to encompass these ideas.

The reason such ideas remain science, rather than pseudoscience, is precisely that they are compatible with a fully mathematical theory of the universe. Indeed, the best evidence we are likely ever to have for the multiverse is that it seems to be an inevitable outcome of some extremely compelling mathematical theory that explains many other things that are fully falsifiable in our own universe.

Falsifiability, therefore, retains its central role, in the sense that we can never believe any theory, no matter how mathematically amazing it is, if it doesn't make some predictions that we can actually test. However, a theory need not have only falsifiable consequences, and we must expand our thinking to include this possibility. Part of this expansion requires that philosophers of science and laymen both must finally come to accept the absolutely central, and completely non-accidental, role that mathematics plays at the deepest levels of physical existence.

Friday, March 21, 2014

Why I don't believe Bostrom or Tegmark

Nick Bostrom has famously argued that we probably live in a simulated world. Max Tegmark has famously argued that all mathematical objects exist, and our world is just one of them. (These two arguments are not completely separate, since every computer program is a math object, hence simulated worlds are a subset of Tegmarkian worlds.)

I don't believe either of these arguments, and the reason can be summed up in two words: Quantum Mechanics.

To answer Bostrom: If one were to simulate a universe, Quantum Mechanics would be a very bizarre foundation to choose. It is notoriously difficult to simulate and does not add anything essential (that I am aware of) to a simulation. Instead one would just write a normal program, as done for video games or cellular automata (e.g. the "Game of Life"). Hence, our universe does not (so far) look at all like a generic instance of computer simulation.

And to answer Tegmark: Mathematical objects are clear-cut and well-defined. If we lived in a generic math object, then we should not experience any problems with definitions or consistency. However, the so-called "measurement problem" of Quantum Mechanics is exactly such a problem, as I described in my earlier post "The Problem with Quantum Mechanics" (Feb. 2010). Indeed, if the current framework of QM proves to be fundamental, then our universe exists at the very edge of what can be described by mathematics. It cannot be  axiomatized (because there is no way to fully define what a measurement is) and it is definitely not the sort of universe one would expect to see if universes were chosen at random from the full set of math objects.

In my earlier post I conclude this way: "For some reason, our universe chooses to exist at the very boundary of conceivability. Perhaps it is a joke of some kind, or perhaps for some reason this is the only kind of existence that is really possible." The Tegmarkian or Bostromian hypotheses seem, at first, equally abstruse or difficult to conceive, but in fact, in comparison to the actual universe we observe, both of these possibilities are just too simple.








Tuesday, July 30, 2013

Zeno's Quantum Arrow

Recently I was reading Fulvio Melia's book "Cracking the Einstein Code", and I was struck by his discussion of an old (very old) "paradox" known as "Zeno's Arrow".  The paradox goes like this: imagine the passage of time as a succession of frozen snapshots of each moment. In any given snapshot there is no movement, since it shows only a single instant. Any moving object just appears frozen at its current position.

Then the question is, if there's no motion at a given instant, where does motion come from? Or, what "connects" these motionless states? There appears to be no information in the individual snapshots that would allow them to be stitched together into a physical time flow. Zeno argued that motion was, therefore, impossible - one of several arguments he used to support the idea that the universe was, contrary to appearance, timeless and changeless.

Of course, I should hasten to say that I'm not sure there's any real paradox. A pictorial "snapshot" of a moment in time simply doesn't capture all of the relevant information; each object also has an instantaneous velocity, which isn't captured in an image. There appears to be no problem modeling this situation in a mathematically consistent way, as embodied in the classical physics of Newton.

All the same, something doesn't seem quite right. It is troubling that motion, in the classical view, seems to only be definable in terms of the change that occurs between two successive times, yet it also needs to exist as a property of a single point in time. It is, then, rather interesting to note that quantum mechanics precisely removes this dichotomy, making it so that full information about an object's position (the "snapshot") actually contains full information about its movement as well.

In quantum mechanics, all the information about an object's position is contained in its "wave function", which is just a function which shows how likely the object is to be found at each position.* One might think that each object would also have a second wave function showing how likely it is to have any given velocity, but this isn't the case.

Quantum mechanics represents a much deeper change than just adding some probabilities to classical physics, and the real quantum trick is that all the information about an object's motion is also encoded in the same wave function which describes its position. The "motion" of an object is, in effect, just another way to look at its "position"; in mathematical terms, it is the Fourier transform. The type of "position-only" snapshot imagined by Zeno simply doesn't exist.
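One can even watch a single "snapshot" generate its own future. In the numerical sketch below (a free particle in units hbar = m = 1), the only input is the wavefunction at t = 0; the subsequent motion falls out of its complex phase via the Fourier transform:

    import numpy as np

    x = np.linspace(-100, 100, 4096)
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    psi0 = np.exp(-x**2 / 8) * np.exp(2j * x)  # one snapshot; velocity 2 hides in the phase

    t = 10.0
    # free Schroedinger evolution: each momentum component picks up a phase exp(-i k^2 t / 2)
    psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))

    for psi in (psi0, psi_t):
        prob = np.abs(psi)**2
        print(np.sum(x * prob) / np.sum(prob))  # mean position: 0.0, then ~20.0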

Clearly what this is saying is that motion and position are both fundamentally quite different from how we imagine them, and perhaps unified in a way which isn't yet fully reflected in our theories of spacetime. It also shows the true content of the uncertainty principle, as I blogged about earlier: http://www.letstalkphysics.com/2009_11_01_archive.html.

And I do find it quite compelling that quantum mechanics so exactly resolves this ancient dilemma of Zeno; perhaps it is telling us that the "paradox" really was paradoxical after all?

* This is an oversimplification since the wave function is a complex number and hence contains more information than is strictly necessary to give the probability of being at a particular position. In fact it contains exactly twice as much information, which is not surprising since it's encoding velocity too.  Nevertheless, the fact remains that both position and velocity are inextricably intertwined in a single function, and can't be separately specified.


Friday, April 5, 2013

New Edition of Relativity Made Real

Warning: this blog entry is devoted to shameless self-promotion!

I've been working over the past 6 months or so to expand and improve my book on special relativity, Relativity Made Real, and I am pleased to announce that the second edition is now available on Amazon (print and Kindle). It was quite a labor of love, I must say, and I will be publishing two related papers, one of which should appear in the American Journal of Physics before too long.

The purpose of the book is to give a more physical, "nuts and bolts" treatment of relativity, to counterbalance the rather abstract spacetime-oriented viewpoint which one finds almost everywhere else. Just as one example, consider time dilation, the phenomenon by which moving clocks run slower than stationary clocks. What causes this to happen in specific, physically-constructed clocks? Every clock consists of some kind of matter which is constructed to execute some kind of repetitive, cyclical process; looking at specific sorts of clocks, it should be possible to understand how motion affects their internal processes without appealing to any abstract generalities about time or space. By doing this, one gains a more concrete understanding of the predictions of relativity, thereby (hopefully) making the theory seem more "real", which inspired the title of the book.

Of course, there is not a completely different set of mechanisms for each kind of clock and for each different phenomenon of relativity. There is an underlying, unifying theme, and this is the behavior of waves. Indeed, relativity grew out of the very first wave-based theory, the theory of electromagnetism, and this is no coincidence; furthermore, in our present quantum-mechanical understanding, every object in the universe is actually described by an underlying wave-based theory ("quantum field theory"). This deeply wavy foundation is what produces the strange phenomena of relativity.

And these phenomena are, in fact, not that strange when one thinks of them in terms of waves. You probably would not try to build a rigid object using waves - and relativity predicts that there are no rigid objects (things shrink when they move). Things made from waves inevitably "slosh" when they move, hence motion must affect them in all sorts of ways, and it is not that hard to understand the effects qualitatively by building on one's everyday experience with waves at the ocean, waves on a jumprope, or sound waves in air.

Anyway, that's a preview of the basic approach taken by the book; please check it out and let me know what you think. It is, I believe, completely unique in the popular relativity literature. Mermin's book (It's About Time) contains some similar material in its last chapter, and there is another book called Physical Relativity, by Harvey Brown, which definitely follows the same philosophy, but is not pitched at a popular level, and also places a much greater emphasis on philosophical disputes, historical viewpoints, and general relativity (Einstein's theory of gravity).

Thursday, February 28, 2013

How to Identify Cranks

Why does physics attract so many more cranks than any other field? I don't know, but here's how you can identify them.

1. They claim to get huge new results without new mathematics

This never happens. Consider: Newtonian physics required the invention of calculus. Electromagnetism brought in the whole machinery of field theory, including partial differential equations, gauge invariance, Green functions, and many other things unknown to physicists of the prior century. Quantum mechanics brought in infinite dimensional Hilbert spaces and operator algebras. General relativity brought in tensor calculus and Riemann spaces. Quantum field theory brought in...itself, a mathematical smorgasbord as yet not fully characterized.

Think any of these are unnecessary? Think they could be replaced by some kind of pictures or verbal explanations, if only a more incisive thinker came along? Then congratulations, you're about 25% of the way towards crankdom.

New physics requires new mathematics because, essentially, working out the results of old mathematics is a matter of effort, not creativity. Mathematics is highly structured, by definition, and if you put a few hundred smart people to work for a couple decades within a given mathematical structure, they will extract everything of physical relevance. You just can't get new wine from old grapes (and definitely not from sour grapes, see item 4).

2. They haven't mastered existing theories

Nobody advances physics without a complete mastery of the current state of the art. Most cranks think that Einstein did this, but they are completely wrong. He completed the full course of physics studies, all the way through graduate level, and then on his own he studied obsessively.

You aren't playing at Carnegie Hall without practice, and the same goes with physics.

3. They don't publish conventionally

Cranks have the idea that there have been some great physicists of the past, mainly Einstein, whose work was ignored or not published in conventional venues. This is not really true. Einstein completed his first three great papers in 1905. When were they published? 1905. Where? Annalen der Physik, a mainstream journal. Even Boltzmann, whose work on statistical mechanics met with great resistance, was a full professor and a mainstream physicist.

If mainstream journals won't publish your works of physics, they aren't works of physics.

4. They blame their failures on the attitudes of others

Cranks believe that the "establishment" is lined up against their ideas and that is why they don't succeed. When the community (largely) ignores them or fails to follow up on these "brilliant" new developments, the reason is not that the developments aren't worthwhile, but rather that the community is too narrow-minded and dominated by entrenched interests to see the truth.

Your classic crank meets all 4 of these criteria, knows very little real mathematics, and is easily ignored. However, there are some people, superficially very knowledgeable, who pass 1-3 but still fail item 4, hence qualifying as 25% cranks. They publish sour-grapes books with titles like "The trouble with physics" or "Not even wrong". They think a bunch of "big egos" are standing in the way of progress, even though this never happened before in the history of physics.*

Folks, when good ideas appear you can tell. How? First, all the smartest people jump on them. Why? Because that's how they make their careers. What does any theoretical physicist have to gain by *not* pouncing on a new idea? Nothing. What does s/he have to lose? Just the opportunity for success, fame, and a place in the history books. And the second indicator that a new idea is good: it produces mountains of new and unfamiliar mathematics, see item 1. Good ideas are very fertile, and the form which fertility takes in theoretical physics is new equations. With luck, they lead to new experimental tests. Nothing guarantees that a correct physical idea has to be testable - that depends on the specific design of our universe - but of course it will be a drag if the correct theories are, in fact, not testable in practice, so that we can never know the truth.


* See http://www.theregister.co.uk/2013/02/20/carver_mead_on_the_future_of_science/

Tuesday, February 14, 2012

Why the Multiverse is A Good Thing

In a nutshell: the multiverse makes it possible for our universe to be described by a beautiful theory.

Why does beauty imply a multiverse? Because of two things. First, beautiful theories seem to be quite rare. The concept of theoretical beauty involves unique and highly constrained combinations of mathematical structures, combinations which are rarely discovered and whose discovery usually heralds revolutions in both mathematics and physics. And second, if we rule out multiverses then each theory can correspond to exactly one universe. Each theory has just "one shot" to get it right, producing the conditions for life. What's the chance that one of the very rare beautiful theories could also jump this very high hurdle? It seems pretty small to me.

But with a multiverse theory there is no problem. We can have a fantastically beautiful theory that has no adjustable parameters at all - for example, string theory. And this theory can also accommodate the existence of life, in some of the many allowed paths in the evolution of the multiverse.

But what about the lack of "explanatory power"? Haven't we given up the most important thing that a "scientific" theory is supposed to have?

Not at all. Take a look at any of the various definitions of science in a philosophy of science textbook: none of them specify that science must explain why the current state of the universe is exactly as it is. What science is supposed to do is to predict the outcome of future experiments, and a fundamental multiverse theory would by definition be able to do this for all possible experiments, at least in the probabilistic sense of quantum mechanics.

Moreover, the non-multiverse theories don't actually do a better job of explaining the current state of the universe, anyway. The "standard model", for example, has over 20 adjustable parameters, many of which must be incredibly finely tuned in order for life to exist; what explains this fine tuning? Nothing - it just is. At least with a multiverse theory we have some sort of explanation, namely that all the possible parameter sets are realized in different universes, and we happen to live in this one.

The more one thinks about it, indeed, the more inevitable the multiverse seems. It fits very easily within the probabilistic structure of quantum mechanics. It is a natural extension of the Copernican insight, which has survived every challenge for five centuries now. And it is the only plausible way that our universe could be described by a really beautiful theory, an expectation which is admittedly irrational, but which most theorists deeply believe to be true.

So it's time to stop worrying and learn to love the multiverse. It's not going away!

Wednesday, February 23, 2011

Why General Relativity is Easier to Understand than Special

Reason 1:

In General Relativity, the effects on clock rates and rulers have a concrete cause, namely the gravitational field. It's hardly surprising that an all-pervading field can affect the lengths of things or the rates of clocks. One could easily write down equations for other fields that do this same sort of thing.

With Special Relativity, on the other hand, there is no external field causing the effects on moving objects. One then has a puzzle as to "why" the moving objects are affected. The customary explanation is that "spacetime" affects the moving objects, but this just changes the question to why spacetime should affect relativistic matter when it did not affect Newtonian matter. In fact the difference is field theory: motion alters the propagation of the fields within the objects, whereas it does not affect the action-at-a-distance forces of Newtonian physics.
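As a concrete example of a field setting clock rates, here is a back-of-the-envelope Python sketch (using the weak-field rate dtau/dt ~ 1 - GM/(r c^2)) of the purely gravitational part of the famous GPS clock offset:

    G = 6.674e-11  # gravitational constant, m^3 / (kg s^2)
    M = 5.972e24   # mass of the Earth, kg
    c = 2.998e8    # speed of light, m/s

    r_ground, r_sat = 6.371e6, 2.6571e7  # metres; the larger is the GPS orbit radius

    # weak field: dtau/dt ~ 1 - G*M/(r*c^2), so the higher clock runs faster
    rate_diff = G * M / c**2 * (1 / r_ground - 1 / r_sat)
    print(rate_diff * 86400 * 1e6)  # ~46 microseconds gained per day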

Reason 2:

Simultaneity is not an issue in General Relativity. In GR there is no concept of a global reference frame for an observer, hence one does not try to extend an observer's concept of "now" to distant locations.

In Special Relativity, by contrast, one has global inertial frames and one can compare different observers' definitions of "now". One finds that these definitions disagree and this leads to the various "paradoxes" such as the twin paradox.

The twin paradox, for example, arises in SR when one incorrectly applies the inertial frame of the "traveling twin". It does not arise in GR, because one doesn't define an inertial frame for either twin. Of course one could, if one assumes that there is actually no gravitational field (i.e., spacetime is flat), but without gravity one is back to doing Special Relativity.

Reason 3 (really a broader way to state Reason 2):

In GR, one is not concerned with comparing the viewpoints of different observers. Rather, one is concerned with calculating the effects of the gravitational field. An observer at point A is affected by the gravitational field at point A, and likewise for observer B and point B. There is nothing more to say about their viewpoints.

In SR, by contrast, each observer has a global reference frame that encompasses the whole universe and all other observers. One then has questions like how each observer can see the other's clocks to be running slow. In GR one never addresses such questions because a moving clock and a stationary clock are not at the same place to be compared.

Reason 4:

Again this is a variation on the same theme.

In GR, one is not concerned with measurement. There is no discussion about how different observers measure things, and there doesn't need to be. In studying the gravitational red shift, for example, one doesn't get into a big discussion about how the observers at different altitude make their measurements; the issue never even arises.

In SR, by contrast, one has to deal with the question of how moving observers can each see the other's clocks running slow, and rulers shorter. This means discussing the process of measurement and the effects of simultaneity on it. It is a very confusing aspect of SR and does not arise at all in GR, because it has nothing to do with gravity.

I discuss some of these things further in my book Relativity Made Real,
www.relativitymadereal.com.

Shortcomings of the Spacetime View of Special Relativity

One often hears that Special Relativity is a "Spacetime Theory". Indeed, this is the predominant way to view the theory, and has been since Minkowski's famous pronouncements on the subject.

Certainly the spacetime framework is a very elegant one, and summarizes very concisely and graphically the results of the theory. But I want to emphasize that one word very clearly: results. The spacetime framework gives us a good way to visualize what the theory predicts, but it gives us little or no understanding of why the theory predicts such things.

For example, a moving object contracts. Why? In the spacetime paradigm this is "explained" by the differences in coordinate systems used by the two observers, and particularly by their different definitions of simultaneity.

But this is rather circular. Coordinate systems create the appearance of contraction, but what creates the coordinate systems? Well...obviously the observers create them themselves, by measuring things out with their own rulers (and clocks). So actually we need to understand the rulers first, before we can understand the coordinate systems, and not vice versa.

Let me give a specific problem that is hard from the spacetime viewpoint. Consider a spaceship which is accelerating constantly, moving faster and faster. We know that it will be contracting; but exactly how does this happen? Does the nose contract towards the tail, or vice versa, or do both contract towards a point in the center? The question does have a definite answer, because both the nose and tail of the ship have a definite trajectory, fully predictable by physics. But I challenge anyone to produce this answer by drawing spacetime diagrams, or computing Lorentz transformations.

I will give my own answer in a future post. For now I will only point out that, in reality, the contraction of a moving object is caused by changes to its internal forces and fields, most notably the electromagnetic field. Understanding this, one can tackle the problem and it is not particularly hard. One also gets past the circularity described above, because one sees that moving rulers (and clocks) are altered by concrete physical mechanisms, so that observers measuring things with them will naturally build different coordinate systems using them.

The energy/mass relation is also quite mysterious from the spacetime viewpoint. Consider this simple scenario: an electron and proton come together to form a hydrogen atom. This process gives off light, hence the atom has less energy than the electron and proton did separately, hence the atom has less mass than the separate electron plus proton. But why? Why is it harder to accelerate an electron and proton bound into an atom than to accelerate them when separated? I have no idea how to address this question within the spacetime viewpoint, but it is quite simple if one thinks in terms of the physical mechanisms which give rise to the mass/energy formula.
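For scale, the hydrogen mass defect is a one-line application of E = mc^2 (a quick sketch using the 13.6 eV binding energy):

    eV  = 1.602176634e-19  # J
    c   = 2.998e8          # speed of light, m/s
    m_H = 1.6735e-27       # mass of a hydrogen atom, kg

    dm = 13.6 * eV / c**2  # mass carried away by the emitted light
    print(dm)              # ~2.4e-35 kg
    print(dm / m_H)        # ~1.5e-8: a few parts in a hundred million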

I discuss these sorts of things in more detail in my new book, Relativity Made Real (www.relativitymadereal.com). Indeed, these sorts of questions are the reason I wrote the book (although I don't explicitly answer the first one, because it is a bit too in-depth for a popular book).

Wednesday, February 9, 2011

Relativity Made Real

So, after thinking about the subject for many years, I've written a book on Relativity! It is called "Relativity Made Real", reflecting my hope that it can make the phenomena of Relativity seem more concrete and real to people, rather than difficult and abstract.

Here is the link: http://www.relativitymadereal.com.

Essentially, what I do is approach the topic in a very physical way, explaining how the phenomena arise from the underlying, mechanical properties of matter. I do not start from mysterious "postulates" nor from abstract "spacetime" concepts (although these are discussed in the second half of the book).

Rather, I start from the fundamental nature of electromagnetism and other "field theories", and show how this leads to the effects of Relativity. In this way one gets a very concrete picture of why moving clocks run slow, why moving things get shorter, and why energy and mass are interchangeable. Then one is in position to understand the meaning of Einstein's "postulates", and also the origin and significance of the spacetime concept.

Thursday, March 25, 2010

Removing from Kindle

Apologies to my Kindle followers - I just can't post at a rate that justifies selling the blog on Kindle. My goal is not just to relay the physics news of the day, which you can find on physorg or other sites, but rather to go in-depth on certain topics; however, these kinds of posts take time, time that I don't always have.

So, I have to sign off of Kindle Publishing, but thanks for reading, and be sure to check out www.letstalkphysics.com periodically!

Sunday, February 28, 2010

Darkness and light

Things are heating up in the field of dark matter.

One interesting idea making the rounds is that the very first stars may have actually been powered by dark matter!

It sounds paradoxical, since dark matter is dark now because it doesn't interact with anything, so how can it burn to power a star? But back in the early universe there could have been a lot more of it around, enough so that the dark matter particles could annihilate against each other to make large amounts of energy.

These "dark stars" would be pretty odd creatures. They could reach vast proportions, comparable to the size of our solar system, and weighing in at 1000 times a much as our own sun. They would be extremely bright as well, around one million times the luminosity of our own sun. And oddly enough the dark matter would actually form just a tiny fraction of their mass, the vast majority of which is normal matter.

When the dark matter runs out, after a few hundred thousand years, the normal matter would collapse to a supermassive normal star, and ultimately into a black hole. This could help resolve a puzzle: very large black holes appear to exist in the early universe, but nobody understands how they could grow so large in the time available.

To see the papers, check out Katherine Freese's website: http://www-personal.umich.edu/~ktfreese/index.html.

And in other dark matter news...

In the same Minnesota mine where the CDMSII experiment reported two possible dark matter detections earlier this month, another experiment called CoGeNT is reporting hundreds of events: http://www.nature.com/news/2010/100226/full/news.2010.97.html. These events are doubly interesting because they suggest an unexpectedly light dark matter particle.

Well, we should not get too excited just yet, since it is only one experiment and there are many possible complicating factors - but we can get a little excited...

and especially so, because the LHC turned on again yesterday, hopefully for real this time! With luck we will see results from the world of 7 trillion electron volts before year-end.

Tuesday, February 23, 2010

Jan de Boer colloquium

Here's a popular-level online talk which goes over the current state of thinking on the whole fascinating mix of topics relating to black holes, string theory, "holography", and the AdS/CFT correspondence:

http://agenda.albanova.se/conferenceDisplay.py?confId=1900

This is really some pretty remarkable stuff. About half the talk is "ancient history" from the 70's, 80's and 90's, not new anymore but still fascinating (you can read about it also in the book "The Black Hole War"). The rest is on newer developments, particularly the application of string theory to quark/gluon physics and to high-temperature superconductivity. This is a story still very much under development.

At the end he says something which I find highly dubious. He claims that a person falling into a black hole would gradually lose consciousness as they hit the event horizon, and this, as far as I know, is not the accepted viewpoint at all. The generally accepted view is that the horizon is undetectable by someone falling across it. Indeed, we could be falling across one right now - perhaps for a huge black hole whose horizon is light years across - and we won't know the difference for millions of years until we start to approach the actual singularity at the heart of the hole.

But on the other hand, it is also generally accepted that if you watch someone falling into a hole from the outside, then you see them get closer and closer to the horizon but never actually fall in. And furthermore, the horizon has a temperature, although generally a low one. So from the outside it looks like a person should be encountering warm temperatures as they fall in, which might dissolve them or cook them or something.

This relates to the idea of "Black hole complementarity", according to which there are two equally valid but complementary ways to look at a black hole: the view from outside, and the view falling in. But this scenario seems to violate the principle, because the person falling in could be sending radio messages back home, and those messages would say, "situation normal, nothing to report". But if the infalling person actually sees a temperature and is getting cooked, then their messages would surely mention this fact.

So there is a conflict here, one which has been debated for several decades now, apparently without resolution. Personally I don't believe that the infalling observer would see anything, at least for big black holes. To me the view "from outside" seems pathological and highly suspect, because of the strong warping of time near the event horizon, relative to a distant spot.

That's my .02, but I've been wrong before!

Friday, February 19, 2010

Signs of spring

Could it be that springtime is near, not just in Earth's climate (in the Northern hemisphere at least!), but also in experimental particle physics?

It has been a long, trying winter, with few really exciting observations since the early 1970's. Experiments at Fermilab, SLAC, and CERN confirmed and refined the so-called "Standard Model" of particle physics, for which Nobel prizes were duly dished out during the 70's, 80's, 90's, and even into the last decade. The tau lepton and top quark were confirmed (c. 1975 and 1995, respectively), filling out most of the missing pieces of the bestiary, with the Higgs boson remaining the one stubborn holdout.

These are terrifically important results, don't get me wrong. Four decades is not a long time to test and verify such a complex theory as the Standard Model. Nevertheless it has been frustrating for theorists, who - we can be honest here - find the Standard Model rather clunky and unloveable, and feel certain that it must be incomplete. Candidates to extend it abound, from supersymmetry to technicolor to strings, but very little data exists to constrain them. The cancellation of the SSC in 1993 was a major disappointment; arguably, the most fruitful development to emerge from particle physics laboratories during this period was the World Wide Web, invented at CERN in 1989 (just 21 years ago - but it feels like a century!).

However, all that may be poised to change. The Large Hadron Collider at CERN is finally almost ready to take data, and it should be powerful enough to go beyond the Standard Model. At the very least it should discover the Higgs or, failing that, blast a big hole in the Model.

But what prompted me to write this post was a recent, tantalizing result on dark matter. This is the mysterious matter which seems to comprise the large majority of the matter in the universe, but which has never been directly seen.

At least until now - perhaps. An experiment called "CDMS II", utilizing fantastically sensitive detectors buried in a mine in Minnesota, reported in December the detection of two possible dark matter collisions (the paper came out in Science last week). Unfortunately, two events are not enough to confidently claim a discovery; the researchers estimated a 75% probability that the events were due to dark matter, rather than background noise.

Although not definitive, this is very exciting since it would be the first detection ever of a particle from beyond the Standard Model. Indeed, the most favored dark matter candidate at present is the so-called "Lightest Supersymmetric Particle", and theorists would love to get their hands on any concrete information about this creature.

So there's still, speaking literally, nothing to report. But there are gathering signs of promise everywhere. Punxsutawney Phil may have predicted a long winter this year - but what does a groundhog know about particle physics anyway?

Monday, February 15, 2010

Applied String Theory!?

Now here's a surprising twist in the string theory story, to say the least...

I blogged a little bit the other day about the "AdS/CFT" correspondence, which relates string theory in certain spaces to non-string theories on the surface of those spaces. This bizarre dimension-shifting idea is 13 years old now, but its ramifications continue to expand. Juan Maldacena's paper proposing the idea was, as of last year, the second most-cited paper of all time in the Spires high-energy physics database, and will certainly hit number one soon. (I have disqualified an unfair review paper which actually sits at number one).

When first conceived, it seemed like a novel way to figure out things about string theory and therefore, perhaps, about quantum gravity. It seemed like one more bit of cool but ultimately arcane mathematics coming out of string theory.

But in the last few years that logic has been turned on its head, and physicists have found it very fruitful to go the other way - to use string theory to understand the surface theories, which are "quantum field theories" quite a bit like the one believed to describe quarks in atomic nuclei.

Now, the quark theory ("QCD") is very hard, because it is "strongly interacting". However, strongly interacting theories are precisely the ones with good "AdS/CFT" dual descriptions. So we have the bizarre phenomenon of actual observable properties of colliding nuclei - messy, hot globs of quarks and gluons - being described in terms of 5-dimensional gravity, strings, membranes, and black holes! I don't think, 15 years ago, that anyone in their wildest thoughts had imagined that black hole physics could be relevant in any way to nuclear interactions; let alone black hole physics in 5 dimensions!

And, more speculatively, some condensed matter systems (e.g. high-temperature superconductors), near their phase transition temperatures, can also be connected to a dual gravity description. This, I believe, is still much more tentative than the quark connection.

Note, nobody is saying that actual black holes or other quantum gravity effects are created in nuclear collisions or high-temperature superconductors. The string theory and gravity here are just a "dual description", or equivalent way of looking at them. What's acting like a "string" in the quark-gluon soup would actually be a chain of gluons or something like that. What's acting like the "5th dimension" would actually be the energy scale of the reaction. And now I am getting out of my depth and cannot comment in further detail.

For those of you who have read about this elsewhere in the media, I am sorry to probably not add much more. For those who haven't, I hope you find this development as remarkable as I do! I mean seriously, black holes in nuclear physics, of all places!

Wednesday, February 10, 2010

The Black Hole War

I just finished a really good popular physics book, the best I can remember reading for a long time. It is "The Black Hole War", by Lenny Susskind, an eminent Stanford professor of physics. Among other major achievements, Susskind has a strong claim to be the inventor of string theory, and - unlike some other current popular authors - everything he says can be taken extremely seriously.

Susskind's topic is one that is close to my heart; indeed, I did my dissertation on it, more or less. I was part of the Santa Barbara group of string theory physicists - a.k.a. "the enemy" in Susskind's book, at least as far as the "black hole war" goes. My vote was counted in the tally shown on p. 262 of the book; unfortunately, I'm pretty sure I voted for the "wrong" side, along with the rest of the Santa Barbara crew.

The problem, and the subject of the "War" which Susskind recounts, is simple: what happens to matter swallowed up by a black hole? One possibility is that it just vanishes forever, and this was the general belief until Hawking - in one of the most beautiful computations ever carried out, and the first to combine general relativity and quantum mechanics in any substantial way - showed that black holes have a temperature and they radiate energy like every other warm object. Eventually, they "evaporate" completely and vanish.

But Hawking's calculation opened a huge can of worms because it indicated no connection at all between the matter which went in and that which came out. In other words, the evaporating black hole creates "something from nothing". Energy is conserved, to be sure, but everything else about the matter - all of its "information" - is erased, in a mathematically complete sense, and replaced by a featureless, memoryless, random collection of particles.

Now, this is not how physics has ever worked. In physics, the situation now comes from the situation before, through a one-to-one connection. The situation now does not just arise spontaneously from nothing, in some random state. That just sounds wrong, and it seems mathematically impossible to implement.

However, wrong as this consequence seemed, Hawking's calculation seemed right, and most physicists didn't see the big deal since there were no black holes handy to test with anyway.

But a few physicists, most notably Susskind and 't Hooft, recognized the problem as a critical matter of principle that should be resolved. And they felt quite strongly that Hawking's picture was wrong, and that proving it so would teach us profound things about gravity and the universe.

In 1994 the paradox seemed completely impenetrable; but by 1997 it had been resolved, more or less, and Susskind and 't Hooft were proved right.

History will record these three years as among the most momentous in science. Below I present their chronology, with some introductory years added for context, to give the reader some feeling for the times, which were a strange admixture of excitement and despair. People were waiting for something big to happen, not really believing that it would - and then it did. There's a lesson in there, not least for yours truly, who quit the field just before it exploded. I was at Santa Barbara from 1989-94, a student of Steve Giddings.

March, 1991

Witten discovers a simplified, 2-dimensional black hole solution in string theory. It is exciting both because it is simple, and because it exists within string theory, a partial theory of quantum gravity, suggesting that it might illuminate the paradox of Hawking.

November, 1991

Callan, Giddings, Harvey, and Strominger propose the "CGHS" model of black hole formation and evaporation, based on Witten's black hole.

1992

The "black hole information problem" takes the string theory community by storm, sparked by the string-inspired CGHS model, and helped by a lull in progress in string theory itself. I began working with Giddings and we wrote a followup to the CGHS paper.

1993

The Santa Barbara Black Hole Conference, a.k.a. "The Battle of Santa Barbara" in Susskind's dramatic rendition. Heated debate, fascinating ideas - but no resolutions.

In fact the most important result, by far, to be announced during the conference is Andrew Wiles's proof of Fermat's Last Theorem.

Meanwhile, in a major blow to the particle physics community, the SSC accelerator is canceled by Congress. My thesis advisor Giddings is quoted in a major news magazine saying that, had he known that would happen, he would have gone to law school.

1994

The calm before the storm. Black hole work mushrooms in string theory, and the ideas remain tantalizing, but true solutions seem wholly out of reach. Many, including yours truly, are very discouraged.

March, 1995. University of Southern California.

At the Strings '95 conference, Witten informs a stunned audience that string theory, previously thought to reside in 10 dimensions, actually has a hidden, 11th dimension. The most famous scientific talk in recent memory, it sparks a revolution in string theory.

The significance of it all was still pretty unclear though. At the conference final dinner, I listened to Susskind's wrapup speech, in which he described the whole field as "angels dancing on the head of a pin". I am sure he never really believed that, and if you read his book you won't believe it either - but it still might be true!

October, 1995

String theory expands yet again, as Joe Polchinski of Santa Barbara discovers 10 additional types of matter hidden within it, the "D-branes". D-brane theory is so beautiful and compelling that once you study it, you can't believe that string theory could not be right.

Polchinski wrote me a letter of recommendation upon my graduation; however, I suspect that it was not a very good letter! At any rate, I left the field several months before his historic discovery.

January, 1996

Vafa and Strominger use D-branes to build a model black hole for which they can identify the internal states directly and see that information is not lost. The problem is unraveling.

November, 1997

Juan Maldacena, using D-branes as well as most other major ideas of the previous two decades of theoretical physics research, conjectures that string theory in a 5-dimensional space is equivalent to a "dual", non-string theory in 4 dimensions. It is both mind-blowing and arcane, but it has over 6000 citations and appears to solve problems even in the previously-fossilized field of nuclear physics.

Shortly thereafter, Witten shows that creating a black hole in the 5-dimensional space is the same as adding temperature to the dual 4-dimensional theory.

The veil of the Black Hole is lifted, at least in part, and nobody believes any more that information is sucked into a hole, never to return. The "war" described in Susskind's book is over.

Friday, February 5, 2010

The problem with Quantum Mechanics

Everyone knows Quantum Mechanics is weird. Many of its principles sound paradoxical.

Matter is both wave and particle. Position and velocity can't be simultaneously specified. Particles have spin even though they can't be spun. Particles carry entanglements across space, allowing a form of teleportation. "Empty" space seethes with activity. Small-scale physics is unpredictable and fundamentally random.

Weird, for sure. But is there any real problem here? Does the theory have some kind of inconsistency or mathematical difficulty, or does it just conflict with our inborn intuitions?

I say mathematical difficulty because that is the only kind of problem that would be a real problem (aside from experimental contradiction). If a theory makes mathematical sense then there's no reason to believe it couldn't represent a universe, no matter how badly it contravenes "common sense". Indeed, mathematics is just extrapolated common sense, so anything that makes mathematical sense can be assimilated into our intuition eventually.

But Quantum Mechanics has resisted this assimilation for almost a century now. The reason for this lies not with any of the oddities cited in the second paragraph; they are all perfectly comprehensible with a bit of study.

The problem with Quantum Mechanics is that it contains no consistent way to say what exists. This is usually referred to as the "measurement problem", because physicists encounter it when studying the measurement process, but in truth virtually everything is a kind of measurement. To even say that something exists, even something as seemingly obvious as a rhinoceros or a planet, is to make a type of measurement.

In Quantum Mechanics the universe consists of the "wave function", Ψ. However, Ψ doesn't describe any actual particles, fields, or rhinoceroses, but only the probabilities that they might exist. In order for them to actually exist, there must be a "measurement". But a measurement requires a measurer, and the theory doesn't tell us what or who the measurers are.

Many perfectly valid Quantum Mechanical universes indeed have no actual measurements. Consider a sparsely-filled box of electrons, and imagine that this is the entire universe. Nothing in this universe provides any possibility of measurement, and therefore nothing in this universe really exists. We say the box "contains electrons" because we wrote down our normal Ψ theory for electrons, but beyond this there is nothing - no events, no history, and therefore no real electrons either.

But, one might object, surely in this electron universe one can talk about the possible locations and collisions of the electrons, and compute their probabilities? Maybe there's nobody around to see them, but so what - can't they still exist?

Alas, no. In the electron box there are no computable probabilities because the future trajectories of the electrons continue to interfere with each other. It is this interference which is the root of the problem we are discussing. The function of a "measuring device", or "observer", is to wash out the interference of outcomes in the future. Once the interference is washed out, the different outcomes are distinguishable and their probabilities make sense to a high degree - which, for probabilities, means that they very nearly add up to one.

And here is the mathematical crux of the problem: the probabilities never quite add up to one. There's no such thing as a perfect measuring device, because everything is just globs of matter in the first place. A big, complicated thing like a human being does a pretty good job of washing out interference (or to use the more technical lingo, "creating decoherence") but it is never perfect.

So we have "probabilities" that, mathematically speaking, aren't probabilities at all, because they don't add to one. It's like saying there's a 50% chance of flipping heads and a 51% chance of tails; in fact that's exactly what the prediction could be in extreme cases of interference, like the electron box world.
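
Here is a toy numerical sketch of that bookkeeping failure (my own illustration, not anything from the literature): two paths through a Mach-Zehnder-style interferometer, recombined at a 50/50 beamsplitter, with a crude "environment" whose record of the path has overlap eta between its two states.

```python
import numpy as np

# eta = 1: environment keeps no record (full interference).
# eta = 0: a perfect measuring device (interference fully washed out).
a1 = a2 = 1 / np.sqrt(2)           # equal amplitudes for the two paths

# Fine-grained "history" probabilities: took path i AND exited port '+'.
p1 = abs(a1 / np.sqrt(2)) ** 2     # = 0.25
p2 = abs(a2 / np.sqrt(2)) ** 2     # = 0.25

for eta in (1.0, 0.3, 0.0):
    # The actual quantum probability of the coarse event "exited '+'"
    # carries a cross (interference) term, suppressed by decoherence:
    cross = 2 * eta * np.real((a1 / np.sqrt(2)) * np.conj(a2 / np.sqrt(2)))
    print(f"eta={eta:.1f}:  P(+) = {p1 + p2 + cross:.2f},  "
          f"sum of its parts = {p1 + p2:.2f}")

# With eta=1 (no decoherence), P(+) = 1.00 while its parts sum to 0.50:
# the candidate probabilities are simply not additive. Only as eta -> 0
# does the bookkeeping become consistent - and in the real world eta is
# tiny, but never exactly zero.
```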

The most accepted solution to this problem, as far as I know, is the one I just alluded to: large blobs of matter create "decoherence" in the things they touch, allowing them to wash out interference to a very high degree. In other words, the probabilities almost add up to one. They come so close that one can argue that the discrepancy can never be noticed in practice.

Well, we don't "notice" the inconsistency between General Relativity and Quantum Mechanics in practice, either - but that hasn't stopped two generations of physicists from trying to resolve it. This problem of probabilities that don't add up to one is equally embarrassing, but gets little attention because nobody has a clue where to start. It is built so deeply into the structure of Quantum Mechanics, and that structure seems so impervious to tinkering, that the effort seems futile.

Personally, I am torn. I believe that anything that exists must rest on a consistent mathematical foundation. The fact that our own universe is built from mathematics suggests to me that this view is right. If it could have been some other way - then why isn't it?

But inconsistent mathematics is as bad as no mathematics at all. Inconsistent mathematics has all the same problems as gods or magic or any of the other non-mathematical fairy tales people have dreamed up over the eons. So, given that the universe clearly uses mathematics, why would it slip in an inconsistency at the very lowest level? Why bother with math at all, in that case?

Yet, I have a feeling that the problems with Quantum Mechanics will not be resolved. The probabilities add up nearly to one in the universe we have right here, so even though it doesn't make any sense in principle, and it wouldn't make sense for some other universes, it's what we will be stuck with - like it or lump it.

For some reason, our universe chooses to exist at the very boundary of conceivability. Perhaps it is a joke of some kind, or perhaps for some reason this is the only kind of existence that is really possible.

Sunday, January 31, 2010

Gravity as entropy? Sounds cool but....

A cool paper has appeared on the arXiv by Erik Verlinde, arguing that gravity should be thought of not as a fundamental force, but rather as arising from the second law of thermodynamics - an "entropic" force (link: http://arxiv.org/abs/1001.0785).

The neat thing about the paper is that it ties together many different general principles which have emerged from string and gravity theories. I'll just sketch the ideas involved. It is a long sketch, which I hope will intrigue readers to read more elsewhere - although Verlinde's idea itself seems untenable.

Verlinde first assumes "holography", which is the notion that the physics in a volume of space actually comes from objects that live on the surface enclosing that volume. In other words, the universe really has one less dimension than it appears; the position variable "x" for one whole dimension is really not a fundamental variable, but is an "emergent" property, or grouping of objects in the surface theory.

The first hints of holography came from the fascinating subject of "black hole thermodynamics". It is fairly well established now that black holes have a temperature and an entropy, and that the size of their entropy is given by their surface area, suggesting that the physics of their volume is really all captured at their surface.
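
Quantitatively, the entropy is one quarter of the horizon area measured in Planck units:

$$S_{\rm BH} = \frac{k_B c^3}{4 G \hbar}\,A,$$

a formula notable both for being enormous for any astrophysical hole and for scaling with area rather than volume.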

Further support for holography has arisen within string theory, where the much-heralded "AdS/CFT" correspondence appears to provide an explicit example. Here, string theories inside certain volumes of space are believed to be fully equivalent to other theories residing on the surfaces of these volumes (theories which are not string theories and do not even have gravity). AdS/CFT was discovered by Juan Maldacena, and his Scientific American article is worth reading.

The second thread which Verlinde's paper picks up is the unexpected appearance of temperature in gravitational and relativistic physics. The most famous of these phenomena is the black hole temperature, discovered by Hawking in 1974. The horizon of a black hole has a temperature and the hole radiates like a light bulb, eventually dissipating to nothing.

Less well known is the "Unruh effect", named after its discoverer, Bill Unruh. Unruh calculated that an accelerating observer should perceive himself to be immersed in a heat bath, with higher temperature the higher the acceleration. The calculation is quite simple and results from the fact that the quantum-mechanical vacuum contains virtual particles which can hit the accelerating observer and then change to real particles.

The Unruh effect then combines with another classic idea, the "equivalence principle" of Einstein. This principle states that the physics in a gravitational field is the same as that seen by an observer experiencing the corresponding acceleration. Applied to Unruh's effect, this means that a stationary observer in a gravitational field (who "feels" the gravity as weight), sees himself immersed in a heat bath, while a freely-falling observer in the same location (who "feels" weightless) sees nothing but empty vacuum. Both of these Unruh effects are generally accepted as true, although they are too small to measure.
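
For the record, the Unruh temperature for an observer with acceleration $a$ is

$$T = \frac{\hbar a}{2\pi c k_B} \approx 4\times 10^{-20}\ \mathrm{K} \quad \text{for } a = 9.8\ \mathrm{m/s^2},$$

which shows just how hopelessly small the effect is at everyday accelerations.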

These are the ideas Verlinde is playing with. I'm sure this preamble was long enough to tax my readers' patience, yet still not long enough to make any sense; for those who wish to learn more, I highly recommend Susskind's book "The Black Hole War".

Verlinde takes holography as his starting point. He assumes that the physics of a region is actually derived from physics on a surface, or "screen", bordering that region. He assumes that the screens have a temperature which is given by the Unruh effect, i.e., the temperature a stationary observer would see if sitting at the screen location - which seems eminently reasonable.

Then the question he is trying to answer is, where does gravity come from in this picture? It has to "emerge" from the screen physics, just as the extra dimension of space emerges. In the AdS/CFT correspondence mentioned above, this happens through string-like groupings of particles in the screen.

Verlinde suggests a quite different possibility: that gravity is an "entropic" force. This means that two masses attract each other gravitationally because, as represented in the "screen" theory, the configurations which are interpreted as "closer together" have greater entropy than those where the masses are "farther apart". The second law of thermodynamics ensures that entropy increases, and therefore the masses will draw together.

To better understand what an entropic force is, Verlinde provides a nice example, which I here modify slightly. Consider a jumprope in a room filled with rapidly bouncing basketballs. The basketballs are bouncing back and forth off the walls, and they hit the jumprope. If you want to hold the jumprope straight it takes force, because the balls keep bouncing off of it, which tends to bend it. If you pull the rope straight and attach two masses to its ends, then when you let go the jumprope will start to fold up and pull the masses together: they will "attract" each other.

From this "microscopic" perspective, it is obvious where the force comes from. Bouncing basketballs provide the force, through innumerable separate impacts. But we can also take a "macroscopic" perspective, where we ignore all the details and focus on the big picture. In this picture, the straight configuration of the rope requires force to maintain because it is extremely unlikely to arise at random. There are countless folded configurations but only one straight configuration, so a straight rope is going to become folded quite easily, while a folded rope is very unlikely to ever straighten out again. Amazingly enough, one can make this mathematically precise, leading to the subject of thermodynamics.

Verlinde argues that gravity not only can, but must, be of the same nature as the jump-rope force described above (given the hypothesis of holography to screens which have a temperature). His reasoning is simply that he can derive the entire force from these assumptions, so there isn't any residual effect left to explain. He produces a really rather elegant "dictionary" which maps the usual quantities of gravity and acceleration onto the temperature and entropy of the screens.
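
To give the flavor of the dictionary (this is my compressed rendering of the paper's central steps): posit Bekenstein's entropy jump for a mass $m$ displaced by $\Delta x$ near a screen, assign the screen the Unruh temperature, and demand $F\,\Delta x = T\,\Delta S$:

$$\Delta S = 2\pi k_B\,\frac{mc}{\hbar}\,\Delta x, \qquad k_B T = \frac{\hbar a}{2\pi c} \quad\Longrightarrow\quad F = ma.$$

Newton's second law pops out. Adding holography - a spherical screen of area $A = 4\pi r^2$ carrying $N = A c^3/(G\hbar)$ bits, with the enclosed energy equipartitioned as $Mc^2 = \tfrac{1}{2} N k_B T$ - turns the same bookkeeping into $F = GMm/r^2$.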

It is beautiful, and nice to read because it draws together so many ideas with simple formulas, and it has already generated numerous followup papers.

However, I no longer believe it can be right, after reading the devastating commentary on Lubos Motl's blog "The Reference Frame" (which included direct responses from Verlinde).

Motl's criticisms were directed not at holography - which clearly seems possible - but at the attempt to derive zero-temperature physics from an underlying theory which does not have zero temperature. In the case at hand, Verlinde is trying to derive gravity in empty space - zero temperature - from a theory on a screen about which we know nothing except that its temperature is non-zero.

There are two major problems here. One is that the underlying theory will always have some analog of the "bouncing basketballs" which actually cause the forces, and these interactions will have measurable quantum mechanical effects. In quantum mechanics we can take two particles and match their wave functions, and then check them again later to see if they still match - which they will not do if each has been subject to a bunch of random particle interactions. Such experiments have been done with neutrons to high precision.

The second problem is that an entropic force should be irreversible. If a falling ball is really caused by increasing entropy, then it should not be easy at all to make a ball rise - just as it isn't easy to put a broken glass back together. But in fact we can raise a ball just by throwing it. This argument to me is powerful but not as decisive as the first one.

There I will leave it. The paper is relatively accessible and worth reading.

Tuesday, January 26, 2010

Black Holes at the LHC?

Worrisome-sounding news in the blogosphere today: two scientists (Matthew Choptuik and Frans Pretorius) report computer simulations showing that black holes really can form in the collision of two particles. At first glance it sounds like more fodder for those who oppose the Large Hadron Collider in fear of Earth-gobbling black holes.

So - scary, or not very?

I would say not very. We already knew, for all intents and purposes, that particle collisions can make a black hole at high enough energies. It's nice to see it actually happen in a full computer simulation of classical General Relativity, but it doesn't add greatly to the debate, especially as the energies simulated were far beyond the LHC's.

The LHC debate really hinges on two issues: 1) does "new physics" (such as extra dimensions) permit formation of holes at lower energies, such as found at the LHC; and 2) could those holes grow uncontrollably. Issue (2) is clearly the big one, and the general consensus is that tiny holes evaporate immediately and do not grow, but one can be forgiven for finding this less than fully reassuring.

Personally, I am not worried about rampaging black holes for the following reason. Powerful cosmic rays strike the Earth every second, many with energies far beyond those of the LHC. This has been going on for 4 billion years, so if black holes could form and grow from such collisions then the Earth would have been swallowed long before we ever materialized to worry about it.
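
For scale, here's the back-of-envelope comparison (my own arithmetic, using standard values):

```python
import math

m_p = 0.938e9      # proton rest energy, eV
e_cosmic = 1e20    # energy of the most powerful observed cosmic rays, eV

# Fixed-target center-of-mass energy: sqrt(s) ~ sqrt(2 * E * m c^2)
sqrt_s = math.sqrt(2 * e_cosmic * m_p)
print(f"cosmic ray sqrt(s) ~ {sqrt_s/1e12:.0f} TeV vs the LHC's 14 TeV")
# ~430 TeV: nature has been running a far more powerful collider on
# our own atmosphere for billions of years, with no disaster yet.
```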

I can't help but reflect on how much fuss would have been spared had the U.S. gone ahead and built the SSC back in the 90's, before the possibility of black hole formation had even been imagined. Furthermore, physicists would have been spared two more decades of theorizing in a vacuum, with no data; and finally, the SSC would have been considerably more powerful than the LHC will be.

Ah, what might have been.

Sunday, December 13, 2009

Why moving clocks run slow

The slowing of moving clocks ("time dilation") is one of the most famous results of Einstein's Special Theory of Relativity. Because it relates to time, it sounds very arcane and mysterious, but in truth it is very easy to get a concrete picture of how it happens.

This is because someone invented the wonderful example of the "light clock". I will first go over this example, and then explain why the same mechanism also applies to more normal clocks (not to mention every other physical process, e.g. aging).



[Fig. 1: Light clock at rest.]

Figure 1 shows a light clock sitting still with respect to us. The clock consists of two mirrors between which a light beam bounces back and forth. A counter counts the bounces, allowing us to measure time. I'm not sure whether such a clock has been built, but it's certainly possible in principle.



[Fig. 2: Light clock in motion (aboard a fast-moving spacecraft).]

Figure 2 shows a light clock flying past us aboard a spaceship. The light still bounces back and forth between the mirrors, but what we see now is that it takes a longer path than it did before, because the mirrors keep moving. Since the light still moves with its accustomed speed (denoted "c"), each bounce takes longer. It's actually easy to compute the exact factor of slowing using this picture; see the Wikipedia article.
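
In fact, here is the computation. Let the mirrors be a distance $L$ apart, so a tick takes $\Delta t_0 = 2L/c$ at rest. Seen from the ground, during each half-tick the light traverses the hypotenuse of a right triangle whose legs are $L$ and $v\,\Delta t/2$:

$$\left(\frac{c\,\Delta t}{2}\right)^2 = L^2 + \left(\frac{v\,\Delta t}{2}\right)^2 \quad\Longrightarrow\quad \Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}}.$$

The moving clock's ticks are longer by the famous factor $\gamma = 1/\sqrt{1 - v^2/c^2}$.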

So the moving light clock slows down because the light beam has trouble catching up to the mirrors it is bouncing between. The main ingredient that creates this result is that the speed of light doesn't depend on the thing that emitted it; light, unlike a tennis ball, doesn't move any faster after bouncing off of a moving mirror.

And why is this? It is because light is a wave, and the speed of a wave depends on its medium, not on the emitter. Think of a boat: its wake moves at the same speed regardless of how fast the boat was moving.

The great revolution of 19th century physics was Maxwell's wave theory of light, and it is from this that Relativity directly sprang. Currently all of our theories of physics are wave theories and therefore they all embody similar Relativistic effects; indeed, their mathematics is matched up such that they all embody the exact same effects.

Finally let's consider more "normal" clocks, like ticking mechanical clocks or digital clocks. These clocks are really just light clocks in disguise, because they work using electrical forces, and electrical forces are carried by the electromagnetic field, the same field whose vibrations we call light. Mechanical clocks consist of atoms which are bound together by electrical forces, while digital clocks consist of electrons being shunted around by various electrical forces.

A normal clock is basically the same as a light clock, but with many, many mirrors. Each atom is like a mirror, and the electrical forces between them are like the light wave bouncing in the light clock. If the clock is moving rapidly, the forces between the atoms are transmitted more slowly, causing the operation of the whole clock to slow down.

Obviously there is much more that should be said here. For one thing, there are processes that don't involve light at all, e.g. those mediated by the strong or weak force, but as pointed out above these forces are also waves sharing the same basic properties which cause the light clock to slow down.

And one should really discuss the "paradoxical" symmetry of the time dilation: each observer sees the other's clock running slow. It seems impossible, but that's obviously how the light clocks behave, so it can't really be a paradox - and it isn't. But I have to leave it there.

Tuesday, December 8, 2009

Heat pump efficiency

Here's something that surprised me...I guess I didn't pay close enough attention in thermodynamics class....

What's the most efficient way to heat a house, a) burn natural gas, or b) run an electrically powered "heat pump" system?

I would have answered a), thinking that nothing could be more efficient than to burn an energy source directly into heat. But this is totally wrong. It is actually far more efficient to let the power company burn that gas to generate electricity, and then use the electricity to run your heat pump.

The outside air may be colder than the inside, but it still stores plenty of heat - the only trick is how to get it from the outside to the inside. Heat doesn't naturally move from a colder place to a hotter place, so it takes energy to pump it, but there is a multiplier factor: a given amount of energy can transfer several times that amount of heat.

Heat pumps really seem like a case of "something for nothing". How can energy E magically pump 3E or 4E of heat from the freezing outdoors to the inside of your house?

The first key is that the outside and inside temperatures are actually not that different when measured on the Kelvin scale, i.e., starting from -460 Fahrenheit. The difference between 68 degrees indoors and 32 outdoors is only about 7% on this scale.

The second key is that the obstacle to transferring heat from outdoors to in is not energy, but entropy. After all, we are not talking about creating any energy - just moving it around. Energy ordinarily doesn't move from cold to hot places because it has lower entropy in the hot place¹; however, the entropy difference is not that great for normal temperatures because it depends on the temperature difference in Kelvin.

The third key is that the energy we use to run the heat pump has to be in a very low-entropy form, such as natural gas, rather than a high-entropy form such as the air inside the house. (We could not use the energy in that air to power the heat pump!)

To pump energy from outside to inside, then, all we have to do is make up the relatively small difference in entropy, which we can do by taking a little bit of low-entropy energy and converting it to high entropy.
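
One can make this precise. Pump heat $Q_C$ out of the outdoors at temperature $C$, spend work $W$, and deliver $Q_H = Q_C + W$ indoors at temperature $H$ (both in kelvin). The second law demands that the total entropy not decrease:

$$\frac{Q_C + W}{H} - \frac{Q_C}{C} \ge 0 \quad\Longrightarrow\quad \frac{Q_H}{W} \le \frac{H}{H - C},$$

which is exactly the inverse-heat-engine formula in the note added below.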

Of course this doesn't tell us how to make a heat pump, it just tells us something about the performance we can expect. But a heat pump is not complicated - it is just a refrigerator or air conditioner turned backwards.

Added 8/4/2013: Since the heat pump is just a standard heat engine running in reverse, its efficiency is the inverse of that of a heat engine. The efficiency of an ideal heat engine is W/Q=(H-C)/H where H is the hot temperature and C is the cold, and W is the work done while Q is the heat transferred.  The "efficiency" of a heat pump is just the inverse of this, Q/W, and it will always be greater than one, at least for a reasonably well constructed engine.
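
A quick numerical check of that formula (my own sketch; the only inputs are the Fahrenheit-to-kelvin conversions):

```python
def ideal_cop(hot_k, cold_k):
    """Ideal (Carnot-limit) heating 'efficiency' Q/W = H / (H - C)."""
    return hot_k / (hot_k - cold_k)

indoors = 293.15    # 68 F, in kelvin
outdoors = 273.15   # 32 F, in kelvin
print(f"ideal COP = {ideal_cop(indoors, outdoors):.1f}")   # ~14.7
# Real heat pumps fall far short of the Carnot limit, but the factor
# of 3 or 4 mentioned above is routine - still several times better
# than burning the fuel directly.
```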

I learned this from the book "Sustainable Energy - Without the Hot Air", by David MacKay, a book I highly recommend.

¹ Hot energy has lower entropy than cold energy because the cold energy spreads over more "degrees of freedom". For example, one Joule at a cold temperature may be shared over N particles, while at a hot temperature it is only shared over N/2 particles, because the particles are all moving around faster on average.

Monday, December 7, 2009

Seeing quantum gravity?

Here's a beautiful line of research: look at very distant objects with the best possible telescopes, and see if their images are blurred by spacetime fluctuations caused by quantum gravity. This is the subject of a recent paper posted to the arXiv with the evocative name "A Cosmic Peek at Spacetime Foam" (authors Wayne Christiansen, David Floyd, Y.J. Ng, and Eric Perlman).

The mixing of scales involved in this scenario is breathtaking. Photons coming to us from the most powerful objects in the universe - quasars - and traversing the longest distances we can measure - billions of light years - bring to us traces of the smallest entities ever conceived, namely the tiny fluctuations of quantum spacetime. Throw in some black hole theory and some quantum information theory, which are used to try to estimate the expected blurring effects, and one definitely gets what they call a "sexy" scientific paper.

So has quantum gravity been observed? Well...perhaps. There is a hint, but no more than that, of the behavior one would expect from one particular model (the behavior being the dependence of blurring on wavelength). It will take a better instrument to convert the hint into something meaningful.
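
For the curious: as I understand them, the models being tested parametrize the distance uncertainty accumulated over a distance $\ell$ as

$$\delta\ell \sim \ell^{1-\alpha}\,\ell_P^{\alpha},$$

where $\ell_P \approx 1.6\times10^{-35}$ m is the Planck length and different exponents $\alpha$ label different pictures of the foam (the "holographic" model has $\alpha = 2/3$). The resulting phase scrambling depends on the photon wavelength, which is the signature the telescopes look for.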

This is definitely exciting data to look forward to from future, more accurate telescopes!

Below I've attached the actual quasar images used in the study, not because they really convey anything by themselves, but just because they are fun to think about...

Sunday, December 6, 2009

Falsifiability

What makes a theory "scientific"? Probably the most widely accepted notion is that it should be falsifiable, i.e., there should be some way, at least in principle, to disprove the theory. This sounds reasonable, but unfortunately it is logically possible - and looking increasingly likely - that the true underlying "theory of everything" is not falsifiable.

What if the theory predicts, for example, the existence of many different universes, unable to communicate with one another? This is a perfectly reasonable possibility, yet we could never disprove it. These "multiverse" theories even have explanatory value in helping us understand why our universe has the particular constants of nature necessary for life (because each universe has a different, random set of constants, so eventually the ones suitable for life will crop up).

Falsifiability is also a very tricky criterion to use in discriminating science from pseudoscience. For a theory to be falsifiable there have to be two "possible" universes, one in which the theory is true and one where it isn't. So we need to know what kind of universes are "possible"; but once we decide this then we don't need falsifiability anymore, since we will already know which theories are possible.

To me it seems that there is a very simple criterion for which universes can exist, namely reducibility to mathematics. As I have argued in another post, any possible universe must be founded on mathematics because only mathematical objects can actually be defined. This implies (as a trivial consequence) that the only "scientific" theories are those compatible with reduction to mathematics.

This criterion immediately rules out any theories involving gods or "supernatural" beings. One can argue at length over the hypothetical characteristics of these entities, but one thing their supporters will never agree to is that they might have a rigorous mathematical basis - because that would defeat the entire psychological purpose of believing in them.

The criterion may seem simplistic and reductive, but it does cast a clear light upon the issues - and one which happens to build upon, rather than shrugging off, the mathematical foundation we have discovered in our own universe.

My criterion is also more honest, I believe, since generally when scientists argue that certain things are "non-scientific" what they really mean is that those things could not possibly exist. For example, if "spirits" could exist and influence events, then those spirits could be studied by science and would not be "unscientific". To say they are "unscientific" is pointless - for that is the exact reason why believers want to believe in them; what one really means by "unscientific" is that they could not possibly exist.

Of course, to believe that only mathematical entities can exist is a belief; we can never prove this. However, it is a belief which matches our discoveries about our own universe, and which makes logical sense, and this is more than one can say about its numerous competing belief systems.