Friday, March 21, 2014

Why I don't believe Bostrom or Tegmark

Nick Bostrom has famously argued that we probably live in a simulated world. Max Tegmark has famously argued that all mathematical objects exist, and our world is just one of them. (These two arguments are not completely separate, since every computer program is a math object, hence simulated worlds are a subset of Tegmarkian worlds.)

I don't believe either of these arguments, and the reason can be summed up in two words: Quantum Mechanics.

To answer Bostrom: If one were to simulate a universe, Quantum Mechanics would be a very bizarre foundation to choose. It is notoriously difficult to simulate and does not add anything essential (that I am aware of) to a simulation. Instead one would just write a normal program, as done for video games or cellular automata (e.g. the "Game of Life"). Hence, our universe does not (so far) look at all like a generic computer simulation.
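For contrast, here is how little code a "normal program" of the Game-of-Life sort requires (a sketch in Python with numpy; the wrap-around edges are my simplifying choice):

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a 0/1 array (wrap-around edges)."""
    # Count each cell's 8 neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Live next step: exactly 3 neighbors, or alive now with exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker" (three cells in a row) oscillates with period 2.
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
assert (life_step(life_step(g)) == g).all()
```

A deterministic local update on a grid: trivial to run forward, and nothing like the measurement-laden machinery of QM.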

And to answer Tegmark: Mathematical objects are clear-cut and well-defined. If we lived in a generic math object, then we should not experience any problems with definitions or consistency. However, the so-called "measurement problem" of Quantum Mechanics is exactly such a problem, as I described in my earlier post "The Problem with Quantum Mechanics" (Feb. 2010). Indeed, if the current framework of QM proves to be fundamental, then our universe exists at the very edge of what can be described by mathematics. It cannot be axiomatized (because there is no way to fully define what a measurement is) and it is definitely not the sort of universe one would expect to see if universes were chosen at random from the full set of math objects.

In my earlier post I conclude this way: "For some reason, our universe chooses to exist at the very boundary of conceivability. Perhaps it is a joke of some kind, or perhaps for some reason this is the only kind of existence that is really possible." The Tegmarkian or Bostromian hypotheses seem, at first, equally abstruse or difficult to conceive, but in fact, in comparison to the actual universe we observe, both of these possibilities are just too simple.








Tuesday, July 30, 2013

Zeno's Quantum Arrow

Recently I was reading Fulvio Melia's book "Cracking the Einstein Code", and I was struck by his discussion of an old (very old) "paradox" known as "Zeno's Arrow".  The paradox goes like this: imagine the passage of time as a succession of frozen snapshots of each moment. In any given snapshot there is no movement, since it shows only a single instant. Any moving object just appears frozen at its current position.

Then the question is, if there's no motion at a given instant, where does motion come from? Or, what "connects" these motionless states? There appears to be no information in the individual snapshots that would allow them to be stitched together into a physical time flow. Zeno argued that motion was, therefore, impossible - one of several arguments he used to support the idea that the universe was, contrary to appearance, timeless and changeless.

Of course, I should hasten to say that I'm not sure there's any real paradox. A pictorial "snapshot" of a moment in time simply doesn't capture all of the relevant information; each object also has an instantaneous velocity, which isn't captured in an image. There appears to be no problem modeling this situation in a mathematically consistent way, as embodied in the classical physics of Newton.
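In code, the classical resolution is just that the state of a particle is the pair (position, velocity), not the position alone. A toy sketch (the names and step sizes are my own illustrative choices):

```python
def evolve(x, v, a, dt, steps):
    """March a classical particle forward: the state is (x, v), not x alone."""
    for _ in range(steps):
        x += v * dt   # position changes according to the stored velocity...
        v += a * dt   # ...and velocity according to the acceleration
    return x, v

# A particle at x=0 with v=1 and no force drifts to x=1 after unit time;
# the same "snapshot" with v=0 goes nowhere. The extra number v is exactly
# the information Zeno's frozen picture leaves out.
x1, _ = evolve(0.0, 1.0, 0.0, 0.01, 100)
x0, _ = evolve(0.0, 0.0, 0.0, 0.01, 100)
assert abs(x1 - 1.0) < 1e-9 and x0 == 0.0
```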

All the same, something doesn't seem quite right. It is troubling that motion, in the classical view, seems to be definable only in terms of the change that occurs between two successive times, yet it also needs to exist as a property of a single point in time. It is, then, rather interesting to note that quantum mechanics precisely removes this dichotomy, making it so that full information about an object's position (the "snapshot") actually contains full information about its movement as well.

In quantum mechanics, all the information about an object's position is contained in its "wave function", which is just a function showing how likely the object is to be found at each position.* One might think that each object would also have a second wave function showing how likely it is to have any given velocity, but this isn't the case.

Quantum mechanics represents a much deeper change than just adding some probabilities to classical physics, and the real quantum trick is that all the information about an object's motion is also encoded in the same wave function which describes its position. The "motion" of an object is, in effect, just another way to look at its "position"; in mathematical terms, it is the Fourier transform. The type of "position-only" snapshot imagined by Zeno simply doesn't exist.
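A numerical sketch of that claim, using numpy's FFT (the grid sizes and Gaussian packet are my own illustrative choices): the same array psi yields both distributions, and a phase factor that is invisible in the position picture shifts the momentum peak.

```python
import numpy as np

# One wave function, two readings: |psi(x)|^2 is the position distribution,
# and the Fourier transform of psi gives the momentum distribution.
N, L, k0 = 1024, 40.0, 2.0                     # grid, box size, momentum boost
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
psi = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)  # Gaussian packet "moving" at k0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # momentum grid matching the FFT
phi = np.fft.fft(psi)                          # position picture -> momentum picture

# The phase exp(i*k0*x) is invisible in |psi|^2 (still centered at x=0),
# but it shifts the momentum peak to k0: the "snapshot" encodes the motion.
peak_k = k[np.argmax(np.abs(phi))]
assert abs(peak_k - k0) < 2 * np.pi / L
assert abs(x[np.argmax(np.abs(psi))]) < L / N
```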

Clearly what this is saying is that motion and position are both fundamentally quite different from how we imagine them, and perhaps unified in a way which isn't yet fully reflected in our theories of spacetime. It also shows the true content of the uncertainty principle, as I blogged about earlier: http://www.letstalkphysics.com/2009_11_01_archive.html.

And I do find it quite compelling that quantum mechanics so exactly resolves this ancient dilemma of Zeno; perhaps it is telling us that the "paradox" really was paradoxical after all?

* This is an oversimplification since the wave function is a complex number and hence contains more information than is strictly necessary to give the probability of being at a particular position. In fact it contains exactly twice as much information, which is not surprising since it's encoding velocity too.  Nevertheless, the fact remains that both position and velocity are inextricably intertwined in a single function, and can't be separately specified.


Friday, April 5, 2013

New Edition of Relativity Made Real

Warning: this blog entry is devoted to shameless self-promotion!

I've been working over the past 6 months or so to expand and improve my book on special relativity, Relativity Made Real, and I am pleased to announce that the second edition is now available on Amazon (print and Kindle). It was quite a labor of love, I must say, and I will be publishing two related papers, one of which should appear in the American Journal of Physics before too long.

The purpose of the book is to give a more physical, "nuts and bolts" treatment of relativity, to counterbalance the rather abstract spacetime-oriented viewpoint which one finds almost everywhere else. Just as one example, consider time dilation, the phenomenon by which moving clocks run slower than stationary clocks. What causes this to happen in specific, physically-constructed clocks? Every clock consists of some kind of matter which is constructed to execute some kind of repetitive, cyclical process; looking at specific sorts of clocks, it should be possible to understand how motion affects their internal processes without appealing to any abstract generalities about time or space. By doing this, one gains a more concrete understanding of the predictions of relativity, thereby (hopefully) making the theory seem more "real", which inspired the title of the book.
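One concrete clock of this sort - the standard "light clock", a pulse of light bouncing between two mirrors, which is my illustrative choice here rather than necessarily an example from the book - can be worked out in a few lines:

```python
import math

def light_clock_period(height, v, c=1.0):
    """Tick period of a light clock (pulse bouncing between mirrors a distance
    `height` apart) carried along at speed v.

    Seen from the ground, each half-tick takes t with (c*t)^2 = height^2 + (v*t)^2,
    because the pulse must chase the moving mirror along a diagonal.
    """
    return 2 * height / math.sqrt(c**2 - v**2)

rest = light_clock_period(1.0, 0.0)      # 2.0 time units per tick
moving = light_clock_period(1.0, 0.6)    # 2.5: the moving clock runs slow
gamma = 1 / math.sqrt(1 - 0.6**2)        # relativity's predicted factor, 1.25
assert abs(moving / rest - gamma) < 1e-12
```

The slowdown follows from the clock's internal workings (light chasing a moving mirror), with no appeal to abstract generalities about time.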

Of course, there is not a completely different set of mechanisms for each kind of clock and for each different phenomenon of relativity. There is an underlying, unifying theme, and this is the behavior of waves. Indeed, relativity grew out of the very first wave-based theory, the theory of electromagnetism, and this is no coincidence; furthermore, in our present quantum-mechanical understanding, every object in the universe is actually described by an underlying wave-based theory ("quantum field theory"). This deeply wavy foundation is what produces the strange phenomena of relativity.

And these phenomena are, in fact, not that strange when one thinks of them in terms of waves. You probably would not try to build a rigid object using waves - and relativity predicts that there are no rigid objects (things shrink when they move). Things made from waves inevitably "slosh" when they move, hence motion must affect them in all sorts of ways, and it is not that hard to understand the effects qualitatively by building on one's everyday experience with waves at the ocean, waves on a jumprope, or sound waves in air.

Anyway, that's a preview of the basic approach taken by the book; please check it out and let me know what you think. It is, I believe, completely unique in the popular relativity literature. Mermin's book (It's About Time) contains some similar material in its last chapter, and there is another book called Physical Relativity, by Harvey Brown, which definitely follows the same philosophy, but is not pitched at a popular level, and also places a much greater emphasis on philosophical disputes, historical viewpoints, and general relativity (Einstein's theory of gravity).

Thursday, February 28, 2013

How to Identify Cranks

Why does physics attract so many more cranks than any other field? I don't know, but here's how you can identify them.

1. They claim to get huge new results without new mathematics

This never happens. Consider: Newtonian physics required the invention of calculus. Electromagnetism brought in the whole machinery of field theory, including partial differential equations, gauge invariance, Green functions, and many other things unknown to physicists of the prior century. Quantum mechanics brought in infinite dimensional Hilbert spaces and operator algebras. General relativity brought in tensor calculus and Riemann spaces. Quantum field theory brought in...itself, a mathematical smorgasbord as yet not fully characterized.

Think any of these are unnecessary? Think they could be replaced by some kind of pictures or verbal explanations, if only a more incisive thinker came along? Then congratulations, you're about 25% of the way towards crankdom.

New physics requires new mathematics because, essentially, working out the results of old mathematics is a matter of effort, not creativity. Mathematics is highly structured, by definition, and if you put a few hundred smart people to work for a couple decades within a given mathematical structure, they will extract everything of physical relevance. You just can't get new wine from old grapes (and definitely not from sour grapes, see item 4).

2. They haven't mastered existing theories

Nobody advances physics without a complete mastery of the current state of the art. Most cranks think that Einstein did this, but they are completely wrong. He completed the full course of physics studies, all the way through graduate level, and then on his own he studied obsessively.

You aren't playing at Carnegie Hall without practice, and the same goes with physics.

3. They don't publish conventionally

Cranks have the idea that there have been some great physicists of the past, mainly Einstein, whose work was ignored or not published in conventional venues. This is not really true. Einstein completed his first three great papers in 1905. When were they published? 1905. Where? Annalen der Physik, a mainstream journal. Even Boltzmann, whose work on statistical mechanics met with great resistance, was a full professor and a mainstream physicist.

If mainstream journals won't publish your works of physics, they aren't works of physics.

4. They blame their failures on the attitudes of others

Cranks believe that the "establishment" is lined up against their ideas and that is why they don't succeed. When the community (largely) ignores them or fails to follow up on these "brilliant" new developments, the reason is not that the developments aren't worthwhile, but rather that the community is too narrow-minded and dominated by entrenched interests to see the truth.

Your classic crank meets all 4 of these criteria, knows very little real mathematics, and is easily ignored. However, there are some people, superficially very knowledgeable, who pass 1-3 but still fail item 4, hence qualifying as 25% cranks. They publish sour-grapes books with titles like "The Trouble with Physics" or "Not Even Wrong". They think a bunch of "big egos" are standing in the way of progress, even though this never happened before in the history of physics.*

Folks, when good ideas appear you can tell. How? First, all the smartest people jump on them. Why? Because that's how they make their careers. What does any theoretical physicist have to gain by *not* pouncing on a new idea? Nothing. What does s/he have to lose? Just the opportunity for success, fame, and a place in the history books. And the second indicator that a new idea is good: it produces mountains of new and unfamiliar mathematics, see item 1. Good ideas are very fertile, and the form which fertility takes in theoretical physics is new equations. With luck, they lead to new experimental tests. Nothing guarantees that a correct physical idea has to be testable - that depends on the specific design of our universe - but of course it will be a drag if the correct theories are, in fact, not testable in practice, so that we can never know the truth.


* See http://www.theregister.co.uk/2013/02/20/carver_mead_on_the_future_of_science/

Tuesday, February 14, 2012

Why the Multiverse is A Good Thing

In a nutshell: the multiverse makes it possible for our universe to be described by a beautiful theory.

Why does beauty imply a multiverse? Because of two things. First, beautiful theories seem to be quite rare. The concept of theoretical beauty involves unique and highly constrained combinations of mathematical structures, combinations which are rarely discovered and whose discovery usually heralds revolutions in both mathematics and physics. And second, if we rule out multiverses then each theory can correspond to exactly one universe. Each theory has just "one shot" to get it right, producing the conditions for life. What's the chance that one of the very rare beautiful theories could also jump this very high hurdle? It seems pretty small to me.

But with a multiverse theory there is no problem. We can have a fantastically beautiful theory that has no adjustable parameters at all - for example, string theory. And this theory can also accommodate the existence of life, in some of the many allowed paths in the evolution of the multiverse.

But what about the lack of "explanatory power"? Haven't we given up the most important thing that a "scientific" theory is supposed to have?

Not at all. Take a look at any of the various definitions of science in a philosophy of science textbook: none of them specify that science must explain why the current state of the universe is exactly as it is. What science is supposed to do is to predict the outcome of future experiments, and a fundamental multiverse theory would by definition be able to do this for all possible experiments, at least in the probabilistic sense of quantum mechanics.

Moreover, the non-multiverse theories don't actually do a better job of explaining the current state of the universe, anyway. The "standard model", for example, has over 20 adjustable parameters, many of which must be incredibly finely tuned in order for life to exist; what explains this fine tuning? Nothing - it just is. At least with a multiverse theory we have some sort of explanation, namely that all the possible parameter sets are realized in different universes, and we happen to live in this one.

The more one thinks about it, indeed, the more inevitable the multiverse seems. It fits very easily within the probabilistic structure of quantum mechanics. It is a natural extension of the Copernican insight, which has survived every challenge for five centuries now. And it is the only plausible way that our universe could be described by a really beautiful theory, an expectation which is admittedly irrational, but which most theorists deeply believe to be true.

So it's time to stop worrying and learn to love the multiverse. It's not going away!

Wednesday, February 23, 2011

Why General Relativity is Easier to Understand than Special

Reason 1:

In General Relativity, the effects on clock rates and rulers have a concrete cause, namely the gravitational field. It's hardly surprising that an all-pervading field can affect the lengths of things or the rates of clocks. One could easily write down equations for other fields that do this same sort of thing.

With Special Relativity, on the other hand, there is no external field causing the effects on moving objects. One then has a puzzle as to "why" the moving objects are affected. The customary explanation is that "spacetime" affects the moving objects, but this just changes the question to why spacetime should affect relativistic matter, when it did not affect Newtonian matter. In fact the difference is field theory, because motion alters the propagation of the fields within the objects, whereas it does not affect the action-at-a-distance forces of Newtonian physics.
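To illustrate how concrete the GR cause is: the clock-rate effect of the gravitational field is a simple formula. A sketch using the standard Schwarzschild rate factor (the constants and weak-field setup are textbook standards, not from this post):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def clock_rate(M, r):
    """Ticking rate of a static clock at radius r from mass M, relative to a
    clock far away: the Schwarzschild factor sqrt(1 - 2GM/(r c^2))."""
    return math.sqrt(1 - 2 * G * M / (r * c**2))

# A clock on Earth's surface runs slow by about 7 parts in 10^10.
M_earth, R_earth = 5.972e24, 6.371e6
deficit = 1 - clock_rate(M_earth, R_earth)
assert 6e-10 < deficit < 8e-10
```

The field strength appears explicitly in the formula, which is exactly the sense in which the effect has a "concrete cause".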

Reason 2:

Simultaneity is not an issue in General Relativity. In GR there is no concept of a global reference frame for an observer, hence one does not try to extend an observer's concept of "now" to distant locations.

In Special Relativity, by contrast, one has global inertial frames and one can compare different observers' definitions of "now". One finds that these definitions disagree and this leads to the various "paradoxes" such as the twin paradox.

The twin paradox, for example, arises in SR when one incorrectly applies the inertial frame of the "traveling twin". It does not arise in GR, because one doesn't define an inertial frame for either twin. Of course one could, if one assumes that there is actually no gravitational field (i.e., spacetime is flat), but without gravity one is back to doing Special Relativity.

Reason 3 (really a broader way to state Reason 2):

In GR, one is not concerned with comparing the viewpoints of different observers. Rather, one is concerned with calculating the effects of the gravitational field. An observer at point A is affected by the gravitational field at point A, and likewise for observer B and point B. There is nothing more to say about their viewpoints.

In SR, by contrast, each observer has a global reference frame that encompasses the whole universe and all other observers. One then has questions like how each observer can see the other's clocks to be running slow. In GR one never addresses such questions because a moving clock and a stationary clock are not at the same place to be compared.

Reason 4:

Again this is a variation on the same theme.

In GR, one is not concerned with measurement. There is no discussion about how different observers measure things, and there doesn't need to be. In studying the gravitational red shift, for example, one doesn't get into a big discussion about how the observers at different altitude make their measurements; the issue never even arises.

In SR, by contrast, one has to deal with the question of how moving observers can each see the other's clocks running slow, and rulers shorter. This means discussing the process of measurement and the effects of simultaneity on it. It is a very confusing aspect of SR and does not arise at all in GR, because it has nothing to do with gravity.

I discuss some of these things further in my book Relativity Made Real,
www.relativitymadereal.com.

Shortcomings of the Spacetime View of Special Relativity

One often hears that Special Relativity is a "Spacetime Theory". Indeed, this is the predominant way to view the theory, and has been since Minkowski's famous pronouncements on the subject.

Certainly the spacetime framework is a very elegant one, and summarizes very concisely and graphically the results of the theory. But I want to emphasize that one word very clearly: results. The spacetime framework gives us a good way to visualize what the theory predicts, but it gives us little or no understanding of why the theory predicts such things.

For example, a moving object contracts. Why? In the spacetime paradigm this is "explained" by the differences in coordinate systems used by the two observers, and particularly by their different definitions of simultaneity.

But this is rather circular. Coordinate systems create the appearance of contraction, but what creates the coordinate systems? Well...obviously the observers create them themselves, by measuring things out with their own rulers (and clocks). So actually we need to understand the rulers first, before we can understand the coordinate systems, and not vice versa.
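The coordinate bookkeeping behind that circularity is the Lorentz transformation itself. A minimal sketch (units with c = 1; the rod and speed are my own illustrative choices) shows how the "shortened rod" emerges from a simultaneity choice:

```python
import math

def lorentz(t, x, v):
    """Transform event (t, x) into a frame moving at velocity v (units c = 1)."""
    g = 1 / math.sqrt(1 - v**2)
    return g * (t - v * x), g * (x - v * t)

# A rod of rest length 1 moves at v = 0.8 through the lab. The lab "measures"
# it by locating both ends at the same lab time t = 0: the trailing end at
# x = 0, the leading end at x = 0.6. Transforming the lab event (t=0, x=0.6)
# into the rod's frame lands exactly on the far end x' = 1, but at a
# *different* rod-frame time t' = -0.8: that offset is the simultaneity shift.
v = 0.8
tp, xp = lorentz(0.0, 0.6, v)
assert abs(xp - 1.0) < 1e-12 and abs(tp + 0.8) < 1e-12
# So the lab length is 0.6 = 1/gamma: the familiar contraction.
```

This reproduces the result, but note that it says nothing about why the rulers behave this way, which is the point of the paragraph above.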

Let me give a specific problem that is hard from the spacetime viewpoint. Consider a spaceship which is accelerating constantly, moving faster and faster. We know that it will be contracting; but exactly how does this happen? Does the nose contract towards the tail, or vice versa, or do both contract towards a point in the center? The question does have a definite answer, because both the nose and tail of the ship have a definite trajectory, fully predictable by physics. But I challenge anyone to produce this answer by drawing spacetime diagrams, or computing Lorentz transformations.

I will give my own answer in a future post. For now I will only point out that, in reality, the contraction of a moving object is caused by changes to its internal forces and fields, most notably the electromagnetic field. Understanding this, one can tackle the problem and it is not particularly hard. One also gets past the circularity described above, because one sees that moving rulers (and clocks) are altered by concrete physical mechanisms, so that observers measuring things with them will naturally build different coordinate systems using them.

The energy/mass relation is also quite mysterious from the spacetime viewpoint. Consider this simple scenario: an electron and proton come together to form a hydrogen atom. This process gives off light, hence the atom has less energy than the electron and proton did separately, hence the atom has less mass than the separate electron plus proton. But why? Why is it harder to accelerate an electron and proton bound into an atom, than to accelerate them when separated? I have no idea how to address this question within the spacetime viewpoint, but it is quite simple if one thinks in terms of the physical mechanisms which give rise to the mass/energy formula.
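Just to attach a number to that scenario (standard constants; this is the arithmetic, not the mechanism):

```python
# Binding energy of hydrogen (13.6 eV) divided by c^2: the mass the atom
# sheds when the electron and proton bind and the light carries energy away.
eV = 1.602e-19                      # joules per electron-volt
c = 2.998e8                         # m/s
m_e, m_p = 9.109e-31, 1.673e-27     # electron and proton masses, kg

dm = 13.6 * eV / c**2               # mass deficit in kg
fraction = dm / (m_e + m_p)         # about 1.5e-8 of the total mass
assert 1e-8 < fraction < 2e-8
```

A part in a hundred million: tiny, but real, and crying out for a physical explanation.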

I discuss these sort of things in more detail in my new book, Relativity Made Real (www.relativitymadereal.com). Indeed, these sorts of questions are the reason I wrote the book (although I don't explicitly answer the first one, because it is a bit too in-depth for a popular book).

Wednesday, February 9, 2011

Relativity Made Real

So, after thinking about the subject for many years, I've written a book on Relativity! It is called "Relativity Made Real", reflecting my hope that it can make the phenomena of Relativity seem more concrete and real to people, rather than difficult and abstract.

Here is the link: http://www.relativitymadereal.com.

Essentially, what I do is approach the topic in a very physical way, explaining how the phenomena arise from the underlying, mechanical properties of matter. I do not start from mysterious "postulates" nor from abstract "spacetime" concepts (although these are discussed in the second half of the book).

Rather, I start from the fundamental nature of electromagnetism and other "field theories", and show how this leads to the effects of Relativity. In this way one gets a very concrete picture of why moving clocks run slow, why moving things get shorter, and why energy and mass are interchangeable. Then one is in position to understand the meaning of Einstein's "postulates", and also the origin and significance of the spacetime concept.

Thursday, March 25, 2010

Removing from Kindle

Apologies to my Kindle followers - I just can't post at a rate that justifies selling the blog on Kindle. My goal is not to just relay the physics news of the day, which you can find on physorg or other sites, but rather to go in-depth on certain topics; however, these kinds of posts take time, time that I don't always have.

So, I have to sign off of Kindle Publishing, but thanks for reading, and be sure to check out www.letstalkphysics.com periodically!

Sunday, February 28, 2010

Darkness and light

Things are heating up in the field of dark matter.

One interesting idea making the rounds is that the very first stars may have actually been powered by dark matter!

It sounds paradoxical, since dark matter is dark now because it doesn't interact with anything, so how can it burn to power a star? But back in the early universe there could have been a lot more of it around, enough so that the dark matter particles could annihilate against each other to make large amounts of energy.

These "dark stars" would be pretty odd creatures. They could reach vast proportions, comparable to the size of our solar system, and weighing in at 1000 times as much as our own sun. They would be extremely bright as well, around one million times the luminosity of our own sun. And oddly enough the dark matter would actually form just a tiny fraction of their mass, the vast majority of which is normal matter.

When the dark matter runs out, after a few hundred thousand years, the normal matter would collapse to a supermassive normal star, and ultimately into a black hole. This could help resolve a puzzle: very large black holes appear to exist in the early universe, but nobody understands how they could grow so large in the time available.

To see the papers, check out Katherine Freese's website: http://www-personal.umich.edu/~ktfreese/index.html.

And in other dark matter news...

In the same Minnesota mine where the CDMSII experiment reported two possible dark matter detections earlier this month, another experiment called CoGeNT is reporting hundreds of events: http://www.nature.com/news/2010/100226/full/news.2010.97.html. These events are doubly interesting because they suggest an unexpectedly light dark matter particle.

Well, we should not get too excited just yet, since it is only one experiment and there are many possible complicating factors - but we can get a little excited...

and especially so, because the LHC turned on again yesterday, hopefully for real this time! With luck we will see results from the world of 7 trillion electron volts before year-end.

Tuesday, February 23, 2010

Jan de Boer colloquium

Here's a popular-level online talk which goes over the current state of thinking on the whole fascinating mix of topics relating to black holes, string theory, "holography", and the AdS/CFT correspondence:

http://agenda.albanova.se/conferenceDisplay.py?confId=1900

This is really some pretty remarkable stuff. About half the talk is "ancient history" from the 70's, 80's and 90's, not new anymore but still fascinating (you can read about it also in the book "The Black Hole War"). The rest is on newer developments, particularly the application of string theory to quark/gluon physics and to high-temperature superconductivity. This is a story still very much under development.

At the end he says something which I find highly dubious. He claims that a person falling into a black hole would gradually lose consciousness as they hit the event horizon, and this, as far as I know, is not the accepted viewpoint at all. The generally accepted view is that the horizon is undetectable by someone falling across it. Indeed, we could be falling across one right now - perhaps for a huge black hole whose horizon is light years across - and we won't know the difference for millions of years until we start to approach the actual singularity at the heart of the hole.

But on the other hand, it is also generally accepted that if you watch someone falling into a hole from the outside, then you see them get closer and closer to the horizon but never actually fall in. And furthermore, the horizon has a temperature, although generally a low one. So from the outside it looks like a person should be encountering warm temperatures as they fall in, which might dissolve them or cook them or something.
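To put a number on "generally a low one": the standard Hawking temperature formula (my sketch, with textbook constants) gives, for a solar-mass hole:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

# Hawking temperature T = hbar c^3 / (8 pi G M kB): for one solar mass,
# about 6e-8 K, far colder than the cosmic microwave background (~2.7 K).
T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
assert 5e-8 < T < 7e-8
```

Note the temperature falls as the mass grows, so for big holes the "cooking" worry is correspondingly weaker.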

This relates to the idea of "Black hole complementarity", according to which there are two equally valid but complementary ways to look at a black hole: the view from outside, and the view falling in. But this seems to violate the principle, because the person falling in could be sending radio messages back home, and those messages would say, "situation normal, nothing to report". But if the infalling person actually sees a temperature and is getting cooked, then their messages would surely mention this fact.

So there is a conflict here, one which has been debated for several decades now, apparently without resolution. Personally I don't believe that the infalling observer would see anything, at least for big black holes. To me the view "from outside" seems pathological and highly suspect, because of the strong warping of time near the event horizon, relative to a distant spot.

That's my .02, but I've been wrong before!

Friday, February 19, 2010

Signs of spring

Could it be that springtime is near, not just in Earth's climate (in the Northern hemisphere at least!), but also in experimental particle physics?

It has been a long, trying winter, with few really exciting observations since the early 1970's. Experiments at Fermilab, SLAC, and CERN confirmed and refined the so-called "Standard Model" of particle physics, for which Nobel prizes were duly dished out during the 70's, 80's, 90's, and even into the last decade. The tau lepton and top quark were confirmed (c. 1975 and 1995, respectively), filling out most of the missing pieces of the bestiary, with the Higgs boson remaining the one stubborn holdout.

These are terrifically important results, don't get me wrong. Four decades is not a long time to test and verify such a complex theory as the Standard Model. Nevertheless it has been frustrating for theorists, who - we can be honest here - find the Standard Model rather clunky and unloveable, and feel certain that it must be incomplete. Candidates to extend it abound, from supersymmetry to technicolor to strings, but very little data exists to constrain them. The cancellation of the SSC in 1993 was a major disappointment; arguably, the most fruitful development to emerge from particle physics laboratories during this period was the World Wide Web, invented at CERN in 1989 (just 21 years ago - but it feels like a century!).

However, all that may be poised to change. The Large Hadron Collider at CERN is finally almost ready to take data, and it should be powerful enough to go beyond the Standard Model. At the very least it should discover the Higgs or, failing that, blast a big hole in the Model.

But what prompted me to write this post was a recent, tantalizing result on dark matter. This is the mysterious matter which seems to comprise 75% or so of the matter in the universe, but which has never been directly seen.

At least until now - perhaps. An experiment called "CDMS II", utilizing fantastically sensitive detectors buried in a mine in Minnesota, reported in December the detection of two possible dark matter collisions (the paper came out in Science last week). Unfortunately, this was not enough events to confidently claim a discovery; the researchers estimated a 75% probability of being due to dark matter, rather than background noise.

Although not definitive, this is very exciting since it would be the first detection ever of a particle from beyond the Standard Model. Indeed, the most favored dark matter candidate at present is the so-called "Lightest Supersymmetric Particle", and theorists would love to get their hands on any concrete information about this creature.

So there's still, speaking literally, nothing to report. But there are gathering signs of promise everywhere. Punxsutawney Phil may have predicted a long winter this year - but what does a groundhog know about particle physics anyway?

Monday, February 15, 2010

Applied String Theory!?

Now here's a surprising twist in the string theory story, to say the least...

I blogged a little bit the other day about the "AdS/CFT" correspondence, which relates string theory in certain spaces to non-string theories on the surface of those spaces. This bizarre dimension-shifting idea is 13 years old now but its ramifications continue to expand. Juan Maldacena's paper proposing the idea was, as of last year, the second most-cited paper of all time in the SPIRES high-energy physics database, and will certainly hit number one soon. (I am discounting a review paper which, somewhat unfairly, actually sits at number one.)

When first conceived, it seemed like a novel way to figure out things about string theory and therefore, perhaps, about quantum gravity. It seemed like one more bit of cool but ultimately arcane mathematics coming out of string theory.

But in the last few years that logic has been turned on its head, and physicists have found it very fruitful to go the other way - to use string theory to understand the surface theories, which are "quantum field theories" quite a bit like the one believed to describe quarks in atomic nuclei.

Now, the quark theory ("QCD") is very hard, because it is "strongly interacting". However, strongly interacting theories are precisely the ones with good "AdS/CFT" dual descriptions. So we have the bizarre phenomenon of actual observable properties of colliding nuclei - messy, hot globs of quarks and gluons - being described in terms of 5-dimensional gravity, strings, membranes, and black holes! I don't think that, 15 years ago, anyone had imagined even in their wildest thoughts that black hole physics could be relevant in any way to nuclear interactions - let alone black hole physics in 5 dimensions!

And, more speculatively, some condensed matter systems (e.g. high temperature superconductors) at the temperatures of their phase transitions, also can be connected to a dual gravity description. This, I believe, is still much more tentative than the quark connection.

Note, nobody is saying that actual black holes or other quantum gravity effects are created in nuclear collisions or high-temperature superconductors. The string theory and gravity here are just a "dual description", or equivalent way of looking at them. What's acting like a "string" in the quark-gluon soup would actually be a chain of gluons or something like that. What's acting like the "5th dimension" would actually be the energy scale of the reaction. And now I am getting out of my depth and cannot comment in further detail.

For those of you who have read about this elsewhere in the media, I am sorry to probably not add much more. For those who haven't, I hope you find this development as remarkable as I do! I mean seriously, black holes in nuclear physics, of all places!

Wednesday, February 10, 2010

The Black Hole War

I just finished a really good popular physics book, the best I can remember reading for a long time. It is "The Black Hole War", by Lenny Susskind, an eminent Stanford professor of physics. Among other major achievements, Susskind has a strong claim to be the inventor of string theory, and - unlike with some other current popular authors - everything he says can be taken extremely seriously.

Susskind's topic is one that is close to my heart, indeed I did my dissertation on it, more or less. I was part of the Santa Barbara group of string theory physicists - a.k.a. "the enemy" in Susskind's book, at least as far as the "black hole war" goes. My vote was counted in the tally shown on p. 262 of the book; unfortunately, I'm pretty sure I voted for the "wrong" side, along with the rest of the Santa Barbara crew.

The problem, and the subject of the "War" which Susskind recounts, is simple: what happens to matter swallowed up by a black hole? One possibility is that it just vanishes forever, and this was the general belief until Hawking - in one of the most beautiful computations ever carried out, and the first to combine general relativity and quantum mechanics in any substantial way - showed that black holes have a temperature and they radiate energy like every other warm object. Eventually, they "evaporate" completely and vanish.

But Hawking's calculation opened a huge can of worms because it indicated no connection at all between the matter which went in and that which came out. In other words, the evaporating black hole creates "something from nothing". Energy is conserved, to be sure, but everything else about the matter - all of its "information" - is erased, in a mathematically complete sense, and replaced by a featureless, memoryless, random collection of particles.

Now, this is not how physics has ever worked. In physics, the situation now comes from the situation before, through a one-to-one connection. The situation now does not just arise spontaneously from nothing, in some random state. That just sounds wrong, and it seems mathematically impossible to implement.

However, wrong as this consequence seemed, Hawking's calculation seemed right, and most physicists didn't see the big deal since there were no black holes handy to test with anyway.

But a few physicists, most notably Susskind and 't Hooft, recognized the problem as a critical matter of principle that should be resolved. And they felt quite strongly that Hawking's picture was wrong, and that proving it wrong would teach us profound things about gravity and the universe.

In 1994, the paradox seemed completely impenetrable; but by 1997 it had been resolved, more or less, and Susskind and 't Hooft were proved right.

History will record these three years as among the most momentous in science. Below I present their chronology, with some introductory years added for context, to give the reader some feeling for the times, which were a strange admixture of excitement and despair. People were waiting for something big to happen, not really believing that it would - and then it did. There's a lesson in there, not least for yours truly, who quit the field just before it exploded. I was at Santa Barbara from 1989-94, a student of Steve Giddings.

March, 1991

Witten discovers a simplified, 2-dimensional black hole solution in string theory. It is exciting both because it is simple and because it exists within string theory, a partial theory of quantum gravity, suggesting that it might illuminate Hawking's paradox.

November, 1991

Callan, Giddings, Harvey, and Strominger propose the "CGHS" model of black hole formation and evaporation, based on Witten's black hole.

1992

The "black hole information problem" takes the string theory community by storm, sparked by the string-inspired CGHS model, and helped by a lull in progress in string theory itself. I began working with Giddings and we wrote a followup to the CGHS paper.

1993

The Santa Barbara Black Hole Conference, a.k.a. "The Battle of Santa Barbara" in Susskind's dramatic rendition. Heated debate, fascinating ideas - but no resolutions.

In fact the most important result, by far, to be announced during the conference is the proof of Fermat's Last Theorem.

Meanwhile, in a major blow to the particle physics community, the SSC accelerator is canceled by Congress. My thesis advisor Giddings is quoted in a major news magazine saying that, had he known that would happen, he would have gone to law school.

1994

The calm before the storm. Black hole work mushrooms in string theory, and the ideas remain tantalizing, but true solutions seem wholly out of reach. Many, including yours truly, are very discouraged.

March, 1995. University of Southern California.

At the Strings '95 conference, Witten informs a stunned audience that string theory, previously thought to reside in 10 dimensions, actually has a hidden, 11th dimension. The most famous scientific talk in recent memory, it sparks a revolution in string theory.

The significance of it all was still pretty unclear though. At the conference final dinner, I listened to Susskind's wrapup speech, in which he described the whole field as "angels dancing on the head of a pin". I am sure he never really believed that, and if you read his book you won't believe it either - but it still might be true!

October, 1995

String theory expands yet again, as Joe Polchinski of Santa Barbara discovers 10 additional types of matter hidden within it, the "D-branes". D-brane theory is so beautiful and compelling that once you study it, you can't believe that string theory could not be right.

Polchinski wrote me a letter of recommendation upon my graduation; however, I suspect that it was not a very good letter! At any rate, I left the field several months before his historic discovery.

January, 1996

Vafa and Strominger use D-branes to build a model black hole for which they can identify the internal states directly and see that information is not lost. The problem is unraveling.

November, 1997

Juan Maldacena, using D-branes as well as most other major ideas of the previous two decades of theoretical physics research, conjectures that string theory in certain 5-dimensional spaces is equivalent to a "dual", non-string theory in 4 dimensions. It is both mind-blowing and arcane, but it has over 6000 citations and appears to solve problems even in the previously-fossilized field of nuclear physics.

Shortly thereafter, Witten shows that creating a black hole in the 5-dimensional space is the same as adding temperature to the dual 4-dimensional theory.

The veil of the Black Hole is lifted, at least in part, and nobody believes any more that information is sucked into a hole, never to return. The "war" described in Susskind's book is over.

Friday, February 5, 2010

The problem with Quantum Mechanics

Everyone knows Quantum Mechanics is weird. Many of its principles sound paradoxical.

Matter is both wave and particle. Position and velocity can't be simultaneously specified. Particles have spin even though they can't be spun. Particles carry entanglements across space, allowing a form of teleportation. "Empty" space seethes with activity. Small-scale physics is unpredictable and fundamentally random.

Weird, for sure. But is there any real problem here? Does the theory have some kind of inconsistency or mathematical difficulty, or does it just conflict with our inborn intuitions?

I say mathematical difficulty because that is the only kind of problem that would be a real problem (aside from experimental contradiction). If a theory makes mathematical sense then there's no reason to believe it couldn't represent a universe, no matter how badly it contravenes "common sense". Indeed, mathematics is just extrapolated common sense, so anything that makes mathematical sense can be assimilated into our intuition eventually.

But Quantum Mechanics has resisted this assimilation for almost a century now. The reason for this lies not with any of the oddities cited in the second paragraph; they are all perfectly comprehensible with a bit of study.

The problem with Quantum Mechanics is that it contains no consistent way to say what exists. This is usually referred to as the "measurement problem", because physicists encounter it when studying the measurement process, but in truth virtually everything is a kind of measurement. To even say that something exists, even something as seemingly obvious as a rhinoceros or a planet, is to make a type of measurement.

In Quantum Mechanics the universe consists of the "wave function", Ψ. However, Ψ doesn't describe any actual particles, fields, or rhinoceroses, but only the probabilities that they might exist. In order for them to actually exist, there must be a "measurement". But a measurement requires a measurer, and the theory doesn't tell us what or who the measurers are.

Many perfectly valid Quantum Mechanical universes indeed have no actual measurements. Consider a sparsely-filled box of electrons, and imagine that this is the entire universe. Nothing in this universe provides any possibility of measurement, and therefore nothing in this universe really exists. We say the box "contains electrons" because we wrote down our normal Ψ theory for electrons, but beyond this there is nothing - no events, no history, and therefore no real electrons either.

But, one might object, surely in this electron universe one can talk about the possible locations and collisions of the electrons, and compute their probabilities? Maybe there's nobody around to see them, but so what - can't they still exist?

Alas, no. In the electron box there are no computable probabilities because the future trajectories of the electrons continue to interfere with each other. It is this interference which is the root of the problem we are discussing. The function of a "measuring device", or "observer", is to wash out the interference of outcomes in the future. Once the interference is washed out, the different outcomes are distinguishable and their probabilities make sense to a high degree - which, for probabilities, means that they very nearly add up to one.

And here is the mathematical crux of the problem: the probabilities never quite add up to one. There's no such thing as a perfect measuring device, because everything is just globs of matter in the first place. A big, complicated thing like a human being does a pretty good job of washing out interference (or to use the more technical lingo, "creating decoherence") but it is never perfect.

So we have "probabilities" that, mathematically speaking, aren't probabilities at all, because they don't add to one. It's like saying there's a 50% chance of flipping heads and a 51% chance of tails; in fact that's exactly what the prediction could be in extreme cases of interference, like the electron box world.

The most accepted solution to this problem, as far as I know, is the one I just alluded to: large blobs of matter create "decoherence" in the things they touch, allowing them to wash out interference to a very high degree. In other words, the probabilities almost add up to one. They come so close that one can argue that the discrepancy can never be noticed in practice.

Well, we don't "notice" the inconsistency between General Relativity and Quantum Mechanics in practice, either - but that hasn't stopped two generations of physicists from trying to resolve it. This problem of probabilities that don't add up to one is equally embarrassing, but gets little attention because nobody has a clue where to start. It is built so deeply into the structure of Quantum Mechanics, and that structure seems so impervious to tinkering, that the effort seems futile.

Personally, I am torn. I believe that anything that exists must rest on a consistent mathematical foundation. The fact that our own universe is built from mathematics suggests to me that this view is right. If it could have been some other way - then why isn't it?

But inconsistent mathematics is as bad as no mathematics at all. Inconsistent mathematics has all the same problems as gods or magic or any of the other non-mathematical fairy tales people have dreamed up over the eons. So, given that the universe clearly uses mathematics, why would it slip in an inconsistency at the very lowest level? Why bother with math at all, in that case?

Yet, I have a feeling that the problems with Quantum Mechanics will not be resolved. The probabilities add up nearly to one in the universe we have right here, so even though it doesn't make any sense in principle, and it wouldn't make sense for some other universes, it's what we will be stuck with - like it or lump it.

For some reason, our universe chooses to exist at the very boundary of conceivability. Perhaps it is a joke of some kind, or perhaps for some reason this is the only kind of existence that is really possible.

Sunday, January 31, 2010

Gravity as entropy? Sounds cool but....

A cool paper appeared on arXiv by Erik Verlinde, arguing that gravity should be thought of not as a fundamental force but rather as arising from the second law of thermodynamics - an "entropic" force (link: http://arxiv.org/abs/1001.0785).

The neat thing about the paper is that it ties together many different general principles which have emerged from string and gravity theories. I'll just sketch the ideas involved. It is a long sketch, which I hope will intrigue readers to read more elsewhere - although Verlinde's idea itself seems untenable.

Verlinde first assumes "holography", which is the notion that the physics in a volume of space actually comes from objects that live on the surface enclosing that volume. In other words, the universe really has one less dimension than it appears; the position variable "x" for one whole dimension is really not a fundamental variable, but is an "emergent" property, or grouping of objects in the surface theory.

The first hints of holography came from the fascinating subject of "black hole thermodynamics". It is fairly well established now that black holes have a temperature and an entropy, and that the size of their entropy is given by their surface area, suggesting that the physics of their volume is really all captured at their surface.

Further support for holography has arisen within string theory, where the much-heralded "AdS/CFT" correspondence appears to provide an explicit example. Here, string theories inside certain volumes of space are believed to be fully equivalent to other theories residing on the surfaces of these volumes (theories which are not string theories and do not even have gravity). AdS/CFT was discovered by Juan Maldacena, and his Scientific American article is worth reading.

The second thread which Verlinde's paper picks up is the unexpected appearance of temperature in gravitational and relativistic physics. The most famous of these phenomena is the black hole temperature, discovered by Hawking in 1974. The horizon of a black hole has a temperature and the hole radiates like a light bulb, eventually dissipating to nothing.

Less well known is the "Unruh effect", named after its discoverer, Bill Unruh. Unruh calculated that an accelerating observer should perceive himself to be immersed in a heat bath, with higher temperature the higher the acceleration. The calculation is quite simple and results from the fact that the quantum-mechanical vacuum contains virtual particles which can hit the accelerating observer and then change to real particles.

The Unruh effect then combines with another classic idea, the "equivalence principle" of Einstein. This principle states that the physics in a gravitational field is the same as that seen by an observer experiencing the corresponding acceleration. Applied to Unruh's effect, this means that a stationary observer in a gravitational field (who "feels" the gravity as weight), sees himself immersed in a heat bath, while a freely-falling observer in the same location (who "feels" weightless) sees nothing but empty vacuum. Both of these Unruh effects are generally accepted as true, although they are too small to measure.
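The Unruh temperature has a simple closed form, T = ħa/(2πc·kB), and it is easy to see just how small "too small to measure" really is. A quick sketch using the standard constants (the sample accelerations are my own choices):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c    = 2.99792458e8     # speed of light, m/s
kB   = 1.380649e-23     # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar*a / (2*pi*c*kB) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * kB)

g = 9.81  # Earth-surface acceleration
print(unruh_temperature(g))         # ~4e-20 K: hopelessly tiny
print(unruh_temperature(1e20 * g))  # ~4 K: it takes ~1e20 g's to reach a few Kelvin
```

An observer accelerating at 1 g sees a heat bath of about 4×10^-20 Kelvin, which is why nobody has ever measured it.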

These are the ideas Verlinde is playing with. I'm sure this preamble was long enough to tax my readers' patience, yet still not long enough to make any sense; for those who wish to learn more, I highly recommend Susskind's book "The Black Hole War".

Verlinde takes holography as his starting point. He assumes that the physics of a region is actually derived from physics on a surface, or "screen", bordering that region. He assumes that the screens have a temperature which is given by the Unruh effect, i.e., the temperature a stationary observer would see if sitting at the screen location - which seems eminently reasonable.

Then the question he is trying to answer is: where does gravity come from in this picture? It has to "emerge" from the screen physics, just as the extra dimension of space emerges. In the AdS/CFT correspondence mentioned above, this happens through string-like groupings of particles in the screen.

Verlinde suggests a quite different possibility: that gravity is an "entropic" force. This means that two masses attract each other gravitationally because, as represented in the "screen" theory, the configurations which are interpreted as "closer together" have greater entropy than those where the masses are "farther apart". The second law of thermodynamics ensures that entropy increases, and therefore the masses will draw together.

To better understand what an entropic force is, Verlinde provides a nice example, which I here modify slightly. Consider a jumprope in a room filled with rapidly bouncing basketballs. The basketballs are bouncing back and forth off the walls, and they hit the jumprope. If you want to hold the jumprope straight it takes force, because the balls keep bouncing off of it, which tends to bend it. If you pull the rope straight and attach two masses to its ends, then when you let go the jumprope will start to fold up and pull the masses together: they will "attract" each other.

From this "microscopic" perspective, it is obvious where the force comes from. Bouncing basketballs provide the force, through innumerable separate impacts. But we can also take a "macroscopic" perspective, where we ignore all the details and focus on the big picture. In this picture, the straight configuration of the rope requires force to maintain because it is extremely unlikely to arise at random. There are countless folded configurations but only one straight configuration, so a straight rope is going to become folded quite easily, while a folded rope is very unlikely to ever straighten out again. Amazingly enough, one can make this mathematically precise, leading to the subject of thermodynamics.

Verlinde argues that gravity not only can, but must, be of the same nature as the jump-rope force described above (given the hypothesis of holography to screens which have a temperature). His reasoning is simply that he can derive the entire force from these assumptions, so there isn't any residual effect left to explain. He produces a really rather elegant "dictionary" which maps the usual quantities of gravity and acceleration onto the temperature and entropy of the screens.

It is beautiful, and nice to read because it draws together so many ideas with simple formulas, and it has already generated numerous followup papers.

However, I no longer believe it can be right, after reading the devastating commentary on Lubos Motl's blog "The Reference Frame" (which included direct responses from Verlinde).

Motl's criticisms were directed not at holography - which clearly seems possible - but at the attempt to derive zero-temperature physics from an underlying theory which does not have zero temperature. In the case at hand, Verlinde is trying to derive gravity in empty space - zero temperature - from a theory on a screen about which we know nothing except that its temperature is non-zero.

There are two major problems here. One is that the underlying theory will always have some analog of the "bouncing basketballs" which actually cause the forces, and these interactions will have measurable quantum mechanical effects. In quantum mechanics we can take two particles and match their wave functions, and then check them again later to see if they still match - which they will not do if each has been subject to a bunch of random particle interactions. Such experiments have been done with neutrons to high precision.

The second problem is that an entropic force should be irreversible. If a falling ball is really caused by increasing entropy, then it should not be easy at all to make a ball rise - just as it isn't easy to put a broken glass back together. But in fact we can raise a ball just by throwing it. This argument to me is powerful but not as decisive as the first one.

There I will leave it. The paper is relatively accessible and worth reading.

Tuesday, January 26, 2010

Black Holes at the LHC?

Worrisome-sounding news in the blogosphere today: two scientists (Matthew Choptuik and Frans Pretorius) report computer simulations showing that black holes really can form in a collision of two particles. At first glance it sounds like more fodder for those who oppose the Large Hadron Collider in fear of Earth-gobbling black holes.

So - scary, or not very?

I would say not very. We already knew, for all intents and purposes, that particle collisions could make a black hole at high enough energies. It's nice to see it actually happen in a full computer simulation of classical General Relativity, but it doesn't add greatly to the debate, especially as the energies simulated were far beyond the LHC's.

The LHC debate really hinges on two issues: 1) does "new physics" (such as extra dimensions) permit formation of holes at lower energies, such as found at the LHC; and 2) could those holes grow uncontrollably. Issue (2) is clearly the big one, and the general consensus is that tiny holes evaporate immediately and do not grow, but one can be forgiven for finding this less than fully reassuring.

Personally, I am not worried about rampaging black holes for the following reason. Powerful cosmic rays strike the Earth every second, many with energies far beyond those of the LHC. This has been going on for 4 billion years, so if black holes could form and grow from such collisions then the Earth would have been swallowed long before we ever materialized to worry about it.

I can't help but reflect how much fuss would have been spared had the U.S. gone ahead and built the SSC back in the '90s, before the possibility of black hole formation had even been imagined. Furthermore, physicists would have been spared two more decades of theorizing in a vacuum, with no data; and finally, the SSC would have been considerably more powerful than the LHC will be.

Ah, what might have been.

Sunday, December 13, 2009

Why moving clocks run slow

The slowing of moving clocks ("time dilation") is one of the most famous results of Einstein's Special Theory of Relativity. Because it relates to time, it sounds very arcane and mysterious, but in truth it is very easy to get a concrete picture of how it happens.

This is because someone invented the wonderful example of the "light clock". I will first go over this example, and then explain why the same mechanism also applies to more normal clocks (not to mention every other physical process, e.g. aging).



Fig. 1
Light clock at rest.

Figure 1 shows a light clock sitting still with respect to us. The clock consists of two mirrors between which a light beam bounces back and forth. A counter counts the bounces, allowing us to measure time. I'm not sure whether such a clock has been built, but it's certainly possible in principle.



Fig. 2
Light clock in motion (aboard a fast-moving spacecraft).

Figure 2 shows a light clock flying past us aboard a spaceship. The light still bounces back and forth between the mirrors, but what we see now is that it takes a longer path than it did before, because the mirrors keep moving. Since the light still moves with its accustomed speed (denoted "c"), each bounce takes longer. It's actually easy to compute the exact factor of slowing using this picture; see the Wikipedia article.
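Here is that computation as a few lines of code - a sketch of the standard textbook derivation, where the one-meter mirror separation is just an arbitrary choice. One solves (c·t/2)² = L² + (v·t/2)² for the round-trip time t:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def tick_time(L, v):
    """Round-trip time for light bouncing between two mirrors a distance L
    apart (perpendicular to the motion), as seen by an observer who watches
    the clock move past at speed v. Solving (c*t/2)**2 = L**2 + (v*t/2)**2
    for t gives the familiar time-dilation factor."""
    return (2.0 * L / c) / math.sqrt(1.0 - (v / c) ** 2)

L = 1.0  # a one-meter light clock
print(tick_time(L, 0.0))                          # rest tick: 2L/c, about 6.7 ns
print(tick_time(L, 0.8 * c) / tick_time(L, 0.0))  # gamma = 1/0.6 = 1.666...
```

At 80% of light speed each tick takes 1.67 times longer, exactly the 1/sqrt(1 - v²/c²) factor from the Wikipedia derivation.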

So the moving light clock slows down because the light beam has trouble catching up to the mirrors it is bouncing between. The main ingredient that creates this result is that the speed of light doesn't depend on the thing that emitted it; light, unlike a tennis ball, doesn't move any faster after bouncing off of a moving mirror.

And why is this? It is because light is a wave, and the speed of a wave depends on its medium, not on the emitter. Think of a boat: its wake moves at the same speed regardless how fast the boat was moving.

The great revolution of 19th century physics was Maxwell's wave theory of light, and it is from this that Relativity directly sprang. Currently all of our theories of physics are wave theories and therefore they all embody similar Relativistic effects; indeed, their mathematics is matched up such that they all embody the exact same effects.

Finally let's consider more "normal" clocks, like ticking mechanical clocks or digital clocks. These clocks are really just light clocks in disguise, because they work using electrical forces, and electrical forces are carried by the electromagnetic field, the same field whose vibrations we call light. Mechanical clocks consist of atoms which are bound together by electrical forces, while digital clocks consist of electrons being shunted around by various electrical forces.

A normal clock is basically the same as a light clock, but with many, many mirrors. Each atom is like a mirror, and the electrical forces between them are like the light wave bouncing in the light clock. If the clock is moving rapidly, the forces between the atoms are transmitted more slowly, causing the operation of the whole clock to slow down.

Obviously there is much more that should be said here. For one thing, there are processes that don't involve light at all, e.g. those mediated by the strong or weak force, but as pointed out above these forces are also waves sharing the same basic properties which cause the light clock to slow down.

And one should really discuss the "paradoxical" symmetry of the time dilation: each observer sees the other's clock running slow. It seems impossible, but that's obviously how the light clocks behave, so it can't really be a paradox - and it isn't. But I have to leave it there.

Tuesday, December 8, 2009

Heat pump efficiency

Here's something that surprised me...I guess I didn't pay close enough attention in thermodynamics class....

What's the most efficient way to heat a house, a) burn natural gas, or b) run an electrically powered "heat pump" system?

I would have answered a), thinking that nothing could be more efficient than to burn an energy source directly into heat. But this is totally wrong. It is actually far more efficient to let the power company burn that gas to generate electricity, and then use the electricity to run your heat pump.

The outside air may be colder than the inside, but it still stores plenty of heat - the only trick is how to get it from the outside to the inside. Heat doesn't naturally move from a colder place to a hotter place, so it takes energy to pump it, but there is a multiplier factor: a given amount of energy can transfer several times that amount of heat.

Heat pumps really seem like a case of "something for nothing". How can energy E magically pump 3E or 4E of heat from the freezing outdoors to the inside of your house?

The first key is that the outside and inside temperatures are actually not that different when measured on the Kelvin scale, i.e., starting from -460 Fahrenheit. The difference between 68 degrees indoors and 32 outdoors is only about 7% on this scale.
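The arithmetic is just unit conversion, but it is worth checking:

```python
def f_to_kelvin(f):
    """Convert degrees Fahrenheit to Kelvin."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

inside = f_to_kelvin(68.0)   # ~293 K
outside = f_to_kelvin(32.0)  # ~273 K
print((inside - outside) / inside)  # ~0.068: the two are only ~7% apart
```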

The second key is that the obstacle to transferring heat from outdoors to in is not energy, but entropy. After all, we are not talking about creating any energy - just moving it around. Energy ordinarily doesn't move from cold to hot places because it has lower entropy in the hot place [1]; however, the entropy difference is not that great for normal temperatures because it depends on the temperature difference in Kelvin.

The third key is that the energy we use to run the heat pump has to be in a very low-entropy form, such as natural gas, rather than a high-entropy form such as the air inside the house. (We could not use the energy in that air to power the heat pump!)

To pump energy from outside to inside, then, all we have to do is make up the relatively small difference in entropy, which we can do by taking a little bit of low-entropy energy and converting it to high entropy.

Of course this doesn't tell us how to make a heat pump, it just tells us something about the performance we can expect. But a heat pump is not complicated - it is just a refrigerator or air conditioner turned backwards.

Added 8/4/2013: Since the heat pump is just a standard heat engine running in reverse, its efficiency is the inverse of a heat engine's. The efficiency of an ideal heat engine is W/Q = (H-C)/H, where H is the hot temperature and C the cold (both in Kelvin), W is the work done, and Q is the heat transferred. The "efficiency" of a heat pump is just the inverse of this, Q/W, and it will always be greater than one, at least for a reasonably well-constructed machine.
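To make the arithmetic concrete, here is a quick sketch of the ideal (Carnot) numbers for the 68-degree/32-degree example above; real pumps fall well short of this limit, but it shows where the multiplier comes from:

```python
# Illustrative sketch: ideal (Carnot) heat-pump figures for the example above.
def f_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

hot = f_to_kelvin(68)                  # indoors: ~293 K
cold = f_to_kelvin(32)                 # outdoors: ~273 K

relative_diff = (hot - cold) / hot     # the "small" temperature gap
cop = hot / (hot - cold)               # ideal heat delivered per unit of work
print(f"temperature gap: {relative_diff:.1%}")   # ~6.8%
print(f"ideal heat-pump factor Q/W: {cop:.1f}")  # ~14.7; real pumps manage 3-4
```

The ideal factor of nearly 15 is far above the 3 or 4 that actual machines achieve, but it makes clear that a factor of several is nowhere near any thermodynamic limit.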

I learned this from the book "Sustainable Energy - Without the Hot Air", by David MacKay, a book I highly recommend.

¹ Hot energy has lower entropy than cold energy because cold energy is spread over more "degrees of freedom". For example, one Joule at a cold temperature may be shared among N particles, while at a hot temperature it is shared among only N/2 particles, because the particles are each moving around faster on average.

Monday, December 7, 2009

Seeing quantum gravity?

Here's a beautiful line of research: look at very distant objects with the best possible telescopes, and see if their images are blurred by spacetime fluctuations caused by quantum gravity. This is the subject of a recent paper posted to the arXiv with the evocative name "A Cosmic Peek at Spacetime Foam" (authors Wayne Christiansen, David Floyd, Y.J. Ng, and Eric Perlman).

The mixing of scales involved in this scenario is breathtaking. Photons coming to us from the most powerful objects in the universe - quasars - and traversing the longest distances we can measure - billions of light years - bring to us traces of the smallest entities ever conceived, namely the tiny fluctuations of quantum spacetime. Throw in some black hole theory and some quantum information theory, which are used to try to estimate the expected blurring effects, and one definitely gets what they call a "sexy" scientific paper.

So has quantum gravity been observed? Well...perhaps. There is a hint, but no more than that, of the behavior one would expect from one particular model (the behavior being the dependence of blurring on wavelength). It will take a better instrument to convert the hint into something meaningful.

This is definitely exciting data to look forward to from future, more accurate telescopes!

Below I've attached the actual quasar images used in the study, not because they really convey anything by themselves, but just because they are fun to think about...

Sunday, December 6, 2009

Falsifiability

What makes a theory "scientific"? Probably the most widely accepted notion is that it should be falsifiable, i.e., there should be some way, at least in principle, to disprove the theory. This sounds reasonable, but unfortunately it is logically possible - and looking increasingly likely - that the true underlying "theory of everything" is not falsifiable.

What if the theory predicts, for example, the existence of many different universes, unable to communicate with one another? This is a perfectly reasonable possibility, yet we could never disprove it. These "multiverse" theories even have explanatory value in helping us understand why our universe has the particular constants of nature necessary for life (because each universe has a different, random set of constants, so eventually the ones suitable for life will crop up).

Falsifiability is also a very tricky criterion to use in discriminating science from pseudoscience. For a theory to be falsifiable there have to be two "possible" universes, one in which the theory is true and one where it isn't. So we need to know what kind of universes are "possible"; but once we decide this then we don't need falsifiability anymore, since we will already know which theories are possible.

To me it seems that there is a very simple criterion for which universes can exist, namely reducibility to mathematics. As I have argued in another post, any possible universe must be founded on mathematics because only mathematical objects can actually be defined. This implies (as a trivial consequence) that the only "scientific" theories are those compatible with reduction to mathematics.

This criterion immediately rules out any theories involving gods or "supernatural" beings. One can argue at length over the hypothetical characteristics of these entities, but one thing their supporters will never agree to is that they might have a rigorous mathematical basis - because that would defeat the entire psychological purpose of believing in them.

The criterion may seem simplistic and reductive, but it does cast a clear light upon the issues - and one which happens to build upon, rather than shrugging off, the mathematical foundation we have discovered in our own universe.

My criterion is also more honest, I believe, since generally when scientists argue that certain things are "non-scientific" what they really mean is that those things could not possibly exist. For example, if "spirits" could exist and influence events, then those spirits could be studied by science and would not be "unscientific". To say they are "unscientific" is pointless - for that is the exact reason why believers want to believe in them; what one really means by "unscientific" is that they could not possibly exist.

Of course, to believe that only mathematical entities can exist is a belief; we can never prove this. However, it is a belief which matches our discoveries about our own universe, and which makes logical sense, and this is more than one can say about its numerous competing belief systems.

Wednesday, December 2, 2009

What is a clock?

What is a clock, what is a "good" clock, and why do any good clocks exist?

A clock is just a physical system which goes through a repeating cycle. The cycle can be anything from the swinging of a pendulum to the vibrations of the electromagnetic field. By counting the cycles we can attempt to measure the passage of time.

Of course our measurements won't be very useful unless the time taken for each cycle remains constant. A clock whose cycle time remains constant is a "good" clock, and we really need at least one of these to make any sense of time at all.

Fortunately, we can expect almost any reasonably-constructed clock to be good, in almost any reasonably-imaginable universe. The reason for this is that the laws of a reasonable universe have a symmetry known as "time translation invariance", which is a fancy way of saying that the laws today are the same as the laws tomorrow. This means that identical starting conditions give rise to identical evolution, regardless of time. Since each clock cycle starts with an identical configuration, each clock cycle unfolds the same, and takes the same amount of time.
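This argument can be checked in miniature. Below is an illustrative sketch (not any particular physical clock) that integrates a pendulum under a time-independent law and measures successive cycle times; because each cycle starts from the same configuration and the law never changes, the periods come out equal:

```python
import math

# Illustrative sketch: under a time-independent law, every swing of a pendulum
# takes the same time, because each cycle starts from the same configuration.
def upward_crossings(theta0, steps=200000, dt=1e-4, g_over_l=9.8):
    theta, omega = theta0, 0.0
    t, prev = 0.0, theta0
    crossings = []
    for _ in range(steps):
        omega -= g_over_l * math.sin(theta) * dt   # symplectic Euler step
        theta += omega * dt
        t += dt
        if prev < 0.0 <= theta:                    # pendulum swings up through center
            crossings.append(t)
        prev = theta
    return crossings

c = upward_crossings(0.5)                          # 0.5 rad starting amplitude
periods = [b - a for a, b in zip(c, c[1:])]
print("periods:", [round(p, 4) for p in periods])  # all equal to within dt
```

Making g_over_l a function of t instead of a constant breaks the invariance, and the periods immediately start to drift.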

How do we know that this "time translation invariance" is actually true? We don't know for sure, but it is true for our current best-guess theories, and it is hard to see how life or any interesting structure could evolve in a universe without it. Evolution couldn't work if the next generation was subject to different laws from the current one - and the Earth itself probably could not sustain a regular orbit around a star.

So time translation invariance is a very fundamental assumption/observation of physics. Perhaps it is not surprising then that it is intimately connected to the most fundamental quantity of physics - energy. Energy conservation is the flip side of time translation symmetry. Mathematically they are simply different statements of the same thing; but this will have to be the subject of another post.

Saturday, November 14, 2009

Where the Uncertainty Principle Really Comes From

If you have read much popular physics literature, chances are you have encountered this formulation of the Heisenberg Uncertainty Principle:
"The measurement of position necessarily disturbs a particle's momentum, and vice versa". This "measurement" formulation comes from Heisenberg himself and has lodged itself firmly in the popular imagination; indeed, it has spread beyond physics to become an intellectual paradigm used in all sorts of contexts.

Unfortunately, the measurement formulation is a very inaccurate explanation of the Uncertainty Principle. It is not false, exactly, but it is circular and does not get to the root of the phenomenon. Ironically enough, it is probably more correct in its applications outside the field of physics than it is for the actual physical phenomenon.

For readers who have grown comfortable with the measurement formulation, I will now explain why it is inadequate; those who wish may skip straight to the next section where I present a more accurate explanation.

The Problem with the "Heisenberg Microscope"


The "measurement formulation" of the Uncertainty Principle was invented by Heisenberg himself, who illustrated it with a famous thought experiment known as the "Heisenberg Microscope".

The setup is simple: a scientist is trying to locate a particle by means of light. In other words, he or she is trying to figure out where the particle is by bouncing light off of it and observing the reflected light. This is just the normal operation of a microscope, but we imagine applying it at the tiny scale of individual particles.

What our scientist finds is that the particle's location is not easily pinned down. He or she may try to increase the precision of the observation, but the effort becomes self-defeating, and in the end the scientist must be satisfied with limited knowledge.

In more detail, the way to increase the precision is by using a shorter wavelength of light. Shorter wavelengths allow the resolution of smaller distances (the "Rayleigh Limit", which I will discuss in a subsequent blog post).

But now the quantum rears its enigmatic head, for light, as we know now, is made up of discrete particles known as photons, and photons of lower wavelength have higher momentum. (This was the 1905 discovery for which Albert Einstein won the Nobel Prize. If momentum is an unfamiliar concept, you can substitute "energy" instead for the purpose of this explanation).

So the shorter the wavelength we try to use, the higher the momentum of the photons - and the more they knock around the particle we are trying to observe. By using very short wavelengths, we can know very precisely where it was when the photon hit it - but only at the cost of losing any idea where it is afterwards. Conversely, if we use very long wavelength photons, we will get only a vague idea about where the particle was - but at least we will know that it didn't get knocked away from there.

And voila, the "Uncertainty Principle": we can't observe the particle without disrupting it. It sounds great - once one accepts all the things we said about photons.

Therein lies the rub, which makes this explanation circular. The problem arose because shorter wavelength photons have higher momentum. But why is this? Why are there no short wavelength, low-momentum photons which we could use to nail down our particle definitively?

The reason is that the Uncertainty Principle applies to photons too. Photons whose location can be known precisely - i.e., short-wavelength photons - necessarily have a big uncertainty in momentum, which is essentially the same as saying they have high momentum (because something with definitely low momentum cannot have much momentum uncertainty).

So the Heisenberg Microscope "explains" the Uncertainty Principle for other particles only by assuming the same principle for photons. It is a circular explanation.

The True Origin of the Principle


The true origin of the Uncertainty Principle lies at the heart of Quantum Mechanics (QM) itself. It is deeper and more interesting than the "measurement" explanation - but also a bit more abstract and mathematical. We will have to look at a couple of graphs to understand it.

The key to the Uncertainty Principle is that position and velocity (technically, momentum) are not separate in QM. In Classical Mechanics (CM) position and velocity are just two sets of numbers with no connection to each other at a given moment. Of course, the velocity tells how the position will change in the next moments - but that doesn't change the fact that particles can have any position and any velocity at one particular moment.

In QM the situation is very different. Neither position nor velocity is a fundamental quantity in QM; rather, every particle is defined by a "wave function", symbolized by Ψ. Ψ gives the probability that the particle might be seen at different positions. It can be visualized as a simple graph showing probability vs. position; two examples are shown in Figure 1 below:


Figure 1
Two examples of the particle wavefunction, Ψ.
A. Position more certain, velocity less certain
B. Position less certain, velocity more certain



(Actually I have simplified the situation slightly, because Ψ is really a complex-valued function and the square of its magnitude gives the probability. This is incredibly important for physics but not for understanding the Uncertainty Principle!)

The graph shown in Fig. 1A depicts a particle whose position is relatively certain. We can tell this because the graph is very narrow, meaning that the probability of seeing the particle is concentrated in a small region. Conversely, Fig. 1B is wide, and depicts a particle having very uncertain position.

Already here we can see that to have complete certainty about position is an unusual, even pathological case in QM. For complete certainty, the particle's graph would have to be so narrow that it covered only one point of space, being zero everywhere else. Such a weird graph isn't going to arise in a normal physical situation.

So the particle's position information is given by the graph of Ψ. Where is the velocity information? The sensible thing would be to have another wavefunction for velocity; however, nature doesn't always choose to be sensible!

No, it turns out that the velocity information is magically encoded right into the position wavefunction. This one graph gives us both position and velocity, making these two quantities indivisible in QM (the technical term is "complementary"). This close relationship is the root of the Uncertainty Principle.

The encoding of velocity information in Ψ is simple but not at all obvious. Velocity is represented, roughly speaking, by the steepness of the slopes on the graph. A graph with steeper slopes - a "bumpier" graph - encodes higher velocities than a smooth, non-bumpy graph. Of course it is not one particular velocity which the graph encodes, but rather the probabilities of different velocities, just as with position. Bumpier graphs have a higher spread of velocities than smooth graphs.

Now if we look back at Fig. 1, we can see exactly where the Uncertainty Principle is coming from. Figure 1A, having the more certain position, also has much steeper slopes than Fig 1B. Therefore, the velocity of the particle in Fig 1A is less certain than that of Fig 1B.

This simple example shows the essential rule: the more we try to squeeze the particle's position - as in Fig. 1A - the steeper the slopes on its graph, and the more uncertainty is present in velocity. Conversely, the more we try to pinpoint the velocity - which means smoothing out the slopes in the graph, as in Fig. 1B - the wider the graph grows, and the less certain the position becomes.
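The squeeze-and-spread rule can even be checked numerically. The sketch below (an illustration, with ħ set to 1) builds two Gaussian wavefunctions of different widths, reads the velocity (momentum) information out of the same graph via a Fourier transform, and shows that the two spreads always multiply to about 1/2, the quantum minimum:

```python
import numpy as np

# Illustrative sketch (hbar = 1): the position spread and momentum spread of a
# Gaussian wave packet, both computed from the same graph psi, multiply to ~1/2.
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

def spreads(sigma):
    psi = np.exp(-x**2 / (4 * sigma**2))            # Gaussian wavefunction
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize
    prob_x = np.abs(psi)**2
    delta_x = np.sqrt(np.sum(prob_x * x**2) * dx)   # position spread
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)    # momentum grid (p = k)
    prob_k = np.abs(np.fft.fft(psi))**2             # "bumpiness" spectrum
    prob_k /= prob_k.sum()
    delta_p = np.sqrt(np.sum(prob_k * k**2))        # momentum spread
    return delta_x, delta_p

for sigma in (0.5, 2.0):
    dx_, dp_ = spreads(sigma)
    print(f"width {sigma}: dx={dx_:.3f}, dp={dp_:.3f}, product={dx_*dp_:.3f}")
```

Narrowing the packet (smaller sigma) shrinks the position spread and blows up the momentum spread in exact compensation; the product stays pinned near 1/2.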

This then is the true origin of the Uncertainty Principle. It is not related to the process of measurement, which should not be surprising given that measurement is also necessary in Classical Mechanics. Rather, the Principle comes from the very different mathematical definition of a particle in Quantum Mechanics, the basics of which were sketched above.

The fact that position and velocity are united in Quantum Mechanics gives rise to no end of surprising phenomena, and almost seems to suggest that the concepts of space and motion might be unified in the underlying theory of spacetime, whatever that may be. It seems a bit gratuitous to have a spacetime capable of supporting independent particle velocities and positions, when the particles themselves don't possess them.

Friday, May 2, 2008

Yes, Everything is Math

The student of physics cannot help but notice, and perhaps resent, the central place of mathematics in the curriculum. Doing physics is nearly synonymous with writing equations, and the centrality and sophistication of mathematical methods in physics have only grown over time.

Why is this? Why does mathematics seem to be the language of physics, and even of science as a whole? Is it an accident, something which might have been otherwise? Is it something even to be regretted - a bad joke played upon us by the creator, who could have created a more-fun universe but chose not to?

I believe it is not an accident. In the following I will argue that any possible universe must have a rigorous mathematical basis, just as we have discovered in our own.

Observe first that a universe must have rules. Donkeys don't fly, horses don't turn into supernovae, bugs don't become buggies...an endless list of constraints operates at all times in our universe, and it seems clear that a similar list must apply in any universe. Any universe must contain things which are distinct from other things, and every distinction implies some kind of rule - thing A doesn't spontaneously change into thing B. No rules implies no things, i.e., no-thingness - nothing.


So where do these rules come from? Could we design a universe by listing out each of them in "plain-english" form as above? You can try but - good luck.

For starters, you would need an infinite number of rules. More seriously, it isn't even possible to define any of the terms used in such rules. What, for example, is a bug? If a frog eats a bug, is it still a bug? When does it stop being a bug? Is a partially developed larva a bug? Is an animation of a bug a bug? Is a mutant or genetically altered bug a bug? Virtually all plain-english concepts are impossible to define, and therefore not adequate for specifying a universe.

For example, we said a bug can't become a buggy. If we define a bug as, say, something which has more than 4 legs and an exoskeleton, then we have not ruled out the possibility that a mutant bug with 3 legs and no exoskeleton could turn into a buggy. We also have not ruled out the possibility that a bug could turn into a Lincoln Continental; indeed, we can't even start this discussion without first defining the terms "leg" and "exoskeleton", which we will find impossible to do.

Philosophers have wrestled with these problems of definition for millennia, and there is no solution. The higher-level concepts embodied in plain-language terms cannot be fully defined, but are inherently fuzzy and subjective.

Now, why in reality does a bug not become a buggy? Obviously it is because the bug is made of atoms, and the atoms don't spontaneously rearrange themselves or change their characteristics. And why is this? Is it because we have some fundamental rules governing atoms, rules like "sodium can't spontaneously turn into chlorine"? No - because atoms are not fundamental building blocks either, and cannot be rigorously defined any more than bugs (is an ionized atom still an atom? Is an unstable atom still an atom?)

Atoms act the way they do because they are made from electrons, protons, and neutrons. Protons and neutrons, in turn, act the way they do because they are made from quarks.

And now we are getting someplace, because both electrons and quarks are fundamental, mathematical objects (at least in current theories). In other words, they can be defined completely. We can write down by means of equations exactly what they are and what rules they obey, under all circumstances, with no caveats or gaps. These are the kind of rules on which a universe can be based, and they are called the laws of physics.

So I would argue that the universe is built on mathematical objects because these are the only objects which can be comprehensively defined. No other kinds of objects can exist except as aggregated constructs of underlying mathematical building blocks (e.g., a bug is built from electrons and quarks). The underlying laws of physics are mathematical because no other kinds of laws exist. In creating a universe, the choice is not whether to base it on mathematics, but only which mathematics to use.

None of these arguments are original to me, of course, although I haven't heard them expressed in quite this form. The essential ideas, including my discussion of things not changing to other things, go all the way back to the original Materialists, as recorded by Lucretius. The original Materialists, interestingly enough, based their Materialism not on "scientific evidence", as is the custom today, but rather on exactly the philosophical arguments outlined above. Of course neither philosophy, nor science, nor any other technique can ever prove anything definitively, so we don't claim to prove that math must underlie everything; however, we do claim that the case is pretty strong.

Recognition of the primacy of mathematics allows us to formulate a different conception of science and the scientific method, one which frames the debate with Creationists and other pseudoscientists in a different light. Science is the study of the underlying mathematical laws of the universe and the effort to connect all observed phenomena to them. The "scientific method" is nothing but common sense applied to this effort. There is no single method of science, just as there is no single type of argument in a legal case; however, there is a single goal to the endeavor of science, and it is by reference to this goal that we, in fact, distinguish science from pseudoscience. Science proposes explanations which are potentially connected to an underlying mathematical order; pseudoscience proposes explanations which are not. The concept of "refutability", which is very slippery to define in general, becomes crystal clear from this perspective: non-scientific theories are "irrefutable" because they can't be founded on mathematics and therefore do not follow any definable rules - and that which is not bound by rules can never be refuted.

We also find a new perspective on the concept of Materialism. Materialism is inseparable from Mathematics, and the "material" to which it refers can only be a mathematical construct - because no other construct is possible. This "Mathematical Materialism" is the necessary foundation of any universe. In doing science, we don't "discover" that the universe is mathematical, but merely what kind of mathematics it employs.

Sunday, March 23, 2008

Fields and their Discontents

How does science make progress? In school we all learned about the "scientific method": data, hypothesis, experiment, new hypothesis...resulting in incremental improvement to our models of the world. When it comes to fundamental physics, however, this paradigm is rather inadequate, because it gives the impression that hypotheses are arbitrary, unconstrained, and concocted as needed to fit new data. Nothing could be farther from the truth.

Hypotheses in fundamental physics take the form of mathematical theories of the underlying structure of the universe, and mathematics is neither arbitrary nor unconstrained. Only certain mathematical structures exist and our theories must be built using these. New structures can be discovered, of course, but the latitude for constructing them is tightly limited by the requirement of logical consistency. For this reason, very sweeping hypotheses may often be put forth on the basis of little, or even no new data, but simply by investigating the mathematical consequences of our existing theories and fixing purely mathematical flaws.

In this case theorizing proceeds, not inductively, by gathering more data and seeking models to fit it, but rather deductively, by seeking some new mathematical structures which can resolve problems in the existing framework. Often enough there is only one mathematical structure which can achieve this. Of course it must be validated by experiments before we believe it, but if we find a complete, mathematically coherent hypothesis, chances are good that it is correct, because such hypotheses are not common. One or two key experiments may be all it takes to convince the community of a new theory when no mathematically compelling rival has been found.

It is not too much of an exaggeration to say that all of modern physics was born in this fashion. Over the middle decades of the 19th century, Michael Faraday and James Clerk Maxwell introduced a major new mathematical concept to the world, the "field", and argued convincingly that the phenomena of light, electricity, and magnetism could all be unified in a new theory based on this concept. The new theory was called Electromagnetism, and it was the first major advance in physics since Newton's laws. The new theory scored success after success, but after several decades some clear thinkers began to notice that the field concept contained certain inherent difficulties. These were Lord Kelvin's famous "two small clouds" on the horizon of physics, and they would grow, respectively, into the revolutionary storms of Relativity and Quantum Mechanics.

To describe these problems let's back up and review physics as it was before the advent of the field. Before fields, there were particles. Particles were discrete bundles of matter, not subject to further analysis, and had a definite location at each moment in time. They exerted forces on each other by instantaneous action-at-a-distance (Newton's law of gravity, and the similar laws of electric charge attraction and repulsion).

The phenomenon of light, however, is very difficult to understand with a particle model. Its diffraction, refraction, and interference behaviors can only be explained by assuming light is a wave. But a wave of what? Something has to be "waving", just like the water whose up-and-down movement constitutes water waves; and that something is the newly invented concept of the field.

A field, unlike a particle, exists everywhere. In every nook and cranny of space, at all times, within and without any other matter, the field is there. It is somewhat analogous to temperature and pressure in the Earth's atmosphere; for every point in the space above the Earth, there is a temperature number and a pressure number. Likewise, a field is described by a certain set of numeric values at every point in space and time (for electromagnetism, there are six values). The larger the values, and the more rapidly they are changing, the more energy the field contains at a particular location. A disturbance at one location, like a pebble dropped in water, spreads by waves into the surrounding space.

We can't go further into the physics of fields and waves here, but the important point to grasp is that the field is a new kind of mathematical beast. Particles are defined by a location; fields are defined by a value at every possible location.

Every possible location is a lot of locations, and therein lies the first, and most crucial problem with fields: they have too much energy-storage capacity. You can always pack more energy into a given little region just by making the field fluctuate more rapidly in that region. This, unfortunately, makes it impossible to cook food! An oven works by heating up the surroundings of the food, so that heat is transferred to the food. The surroundings of the food include, of course, the electromagnetic field, so this must be heated up. But no matter how much energy you pump into the field in the oven, there is always room for more - there are always higher frequency modes of fluctuation which are not yet filled. The field, therefore, can never be heated to any temperature; both the oven, and the food in it, would see all of their energy sucked away by the field, making them colder than they started (indeed, taking them to absolute zero).

This paradox of fields was known for technical reasons as the "ultraviolet catastrophe", and it shows quite starkly that a classical field theory such as electromagnetism cannot be a fundamental theory of nature. No matter how well it seems to match many experiments, it is not mathematically possible for it to truly represent a universe in which any structure, e.g. life, could exist.
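The contrast is easy to see numerically. The sketch below (an illustration using the standard Rayleigh-Jeans and Planck spectral formulas, at an arbitrary oven temperature of 500 K) integrates the energy stored in the field up to increasing frequency cutoffs: the classical total grows without bound as the cutoff rises, while Planck's total converges to a finite value:

```python
import math

# Illustrative sketch of the oven paradox: the classical (Rayleigh-Jeans) field
# soaks up unbounded energy at high frequencies, while Planck's formula converges.
h = 6.626e-34    # Planck's constant (J*s)
kB = 1.381e-23   # Boltzmann's constant (J/K)
c = 2.998e8      # speed of light (m/s)
T = 500.0        # arbitrary oven temperature (K)

def rayleigh_jeans(nu):      # classical spectral energy density (J/m^3 per Hz)
    return 8 * math.pi * nu**2 * kB * T / c**3

def planck(nu):              # Planck's spectral energy density
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (kB * T))

def total(density, cutoff, n=20000):   # midpoint-rule integral up to a cutoff
    dnu = cutoff / n
    return sum(density((i + 0.5) * dnu) for i in range(n)) * dnu

for cutoff in (1e14, 1e15):
    print(f"cutoff {cutoff:.0e} Hz: "
          f"classical {total(rayleigh_jeans, cutoff):.2e} J/m^3, "
          f"Planck {total(planck, cutoff):.2e} J/m^3")
```

Raising the cutoff tenfold multiplies the classical total by a thousand (it grows as the cutoff cubed), while the Planck total stops changing: the quantum formula caps how much energy the high-frequency modes can hold.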

Max Planck was obsessed with this problem and, in perhaps the most remarkable bout of theorizing in the history of physics, he concocted a mathematical formula to resolve the oven problem, and a profoundly non-intuitive mechanism to underlie it. Planck's formula was ad-hoc and just the tip of the iceberg - the first glimpse of a new, consistent mathematical structure which contains the old field theory, mostly, and resolves its problem of "too muchness". The new structure, called Quantum Field Theory (QFT), was created in the 1930's and its profound mathematical depths are being plumbed to this day.

Between Planck's discovery, in 1900, and the advent of QFT in the 1930's, physicists were engaged in working out a preliminary stage of this theory, namely Quantum Mechanics. Quantum Mechanics is a theory of particles, not fields, and this has obscured the fact that it came into existence to resolve a problem with fields. The universe could be made of classical, Newtonian particles; or, it could be made of Quantum particles; but it cannot be made of classical, Faraday/Maxwell fields.
Classical field theories cannot underlie a real universe because of the oven problem, and so far no way has been found to resolve this outside of the Quantum. It appears that the "purpose" of the Quantum is to make field theories mathematically possible.

Thus Quantum physics arose out of the mathematical difficulties of fields. The Theory of Relativity also arose from the mathematics of fields, not as a problem but rather a very unexpected mathematical consequence.

Recall that energy propagates through a field by waves, just like the water waves when a pebble drops. So what? Well, the funny thing about waves is that they have a predetermined speed. You can't push on water waves to make them go any faster; any kind of splashing or pushing you do just makes more waves, but the new waves move at the same, predetermined speed as the old ones. This is completely different from particles, which move faster if you push them harder.
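This "splashing harder doesn't help" behavior can be demonstrated with the simplest possible field: the discretized one-dimensional wave equation. In the illustrative sketch below, a pulse is launched with two very different amplitudes, and a probe far down the line records when the wave arrives; the bigger splash makes a bigger wave, but it arrives at exactly the same time:

```python
import numpy as np

# Illustrative sketch: in the 1D wave equation, a bigger initial "splash" makes
# a bigger wave, not a faster one - the pulse reaches a distant probe at the
# same time regardless of amplitude.
def arrival_time(amplitude, c=1.0, L=100.0, n=1000, steps=700):
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    dt = dx / c                              # Courant number 1: scheme is exact
    u_prev = amplitude * np.exp(-((x - 20.0) / 2.0)**2)   # initial pulse at x=20
    u = u_prev.copy()                        # zero initial velocity
    probe = []
    for _ in range(steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                        + (c*dt/dx)**2 * (u[2:] - 2*u[1:-1] + u[:-2]))
        u_next[0] = u_next[-1] = 0.0         # fixed ends
        u_prev, u = u, u_next
        probe.append(u[800])                 # watch the field at x of about 80
    return (np.argmax(probe) + 1) * dt       # time when the pulse peaks there

t_small = arrival_time(1.0)
t_big = arrival_time(5.0)
print(f"arrival with amplitude 1: {t_small:.2f}, with amplitude 5: {t_big:.2f}")
```

Both arrivals land at the travel time set by the wave speed c, about 60 time units for the 60-unit trip; only the height of the passing wave differs.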

Now let's imagine that everything in the universe is described by a field of one kind or another (which, in fact, is believed to be the case). Imagine an object, for example a wristwatch, which consists of various different parts. These parts have to communicate with each other in order for the watch to work. The communication happens by waves of the fields, and these waves move at a certain speed. Now here's the kicker: what if the watch itself is moving at a speed close to the wave speed? Then the waves emitted from the parts behind are going to have an awfully hard time "catching up" to the parts ahead. This moving watch is very unlikely to tick at the same rate as a stationary watch; indeed, when we look at it this way it seems surprising that it can keep working at all.

Einstein thought very hard about this problem, albeit from a somewhat different angle, and the result is his famous Theory of Relativity, in which moving clocks run slow, moving objects shrink, and matter equates to energy. I will fill in more of the logical steps here in a later blog, but the point to take away is that things built from fields act funny when they move, because waves travel with a fixed speed. Depending what the fields are like exactly, moving things can act funny in a simple way or in arbitrarily complex ways. Einstein's hypothesis is that they act funny in the simplest possible way. His theory is often regarded as a theory "about space and time", but I think it is more correct to regard it as a theory about the behavior of moving matter; however, this discussion must wait for a later blog.

In closing let me note that the problems and mathematical developments brought about by the field concept are far from finished. It turns out that most Quantum Field Theories still suffer from the problem of "too muchness". In Quantum Physics, particles (i.e., local field fluctuations) can pop into existence temporarily from nothing, and if there are too many possible modes of fluctuation (roughly speaking) the theory doesn't make mathematical sense. This appears to be the case for any possible Quantum theory of gravity, so that gravity cannot coexist with the theories we have now for other types of matter. Something beyond a QFT is needed - and so far the only compelling candidate is String Theory.

Therefore, with some exaggeration, we can say that all of modern fundamental physics, from Relativity to Quantum Physics to String Theory, was implicit in the purely mathematical difficulties which arise from the field hypothesis. Had all scientific experimentation stopped in 1850, it is quite possible that all of modern physics would still have been discovered by mathematicians, and that they would have become convinced of its truth based on consistency alone, and lack of any other discoverable alternatives.