Entropic Forces

This mysterious-sounding concept is widely used in biology, chiefly because life is, in fact, a struggle against the Second Law of thermodynamics.

The Second Law stipulates that the entropy of a closed system, which quantifies its disorder, always increases. Disorder increases, that is, until a maximum is reached, after which nothing new can possibly happen: no further dynamics is possible because, once disorder is maximal, there is nowhere else to go.
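
To make “disorder” quantitative, recall Boltzmann’s formula: the entropy is (essentially) the logarithm of the number of microscopic arrangements W compatible with what we see macroscopically, and the Second Law simply says that this number never goes down in a closed system:

```latex
S = k_{B} \ln W, \qquad \Delta S \ge 0 \quad \text{(closed system)}.
```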

In the state of maximum entropy, therefore, life would not be possible. Schrödinger used the concept of neg-entropy (“neg” for “negative”) to argue that the primary role of the food living systems consume is to import neg-entropy from the environment, in order to keep the living being as far away as possible from the state of maximum disorder. Eating food, in other words, acts to slow down the overall increase in entropy and thus reduce the speed at which we approach death. This is an interesting point because we usually think of food in terms of the energy it gives us, but Schrödinger claimed that we should, instead, focus on its entropy content (in other words, how many bits of neg-entropy does a Mars bar have?).

Now, biologists call this tendency towards disorder an entropic force. But the word “force” is a bit misleading, because nothing “forces” things to go towards maximum entropy – it simply happens spontaneously. It is a universal statistical trend in the macroscopic world, which is why we gave it the grand name it has (the Second Law; the First Law being energy conservation).

Let me elaborate a bit on why the word force is inappropriate. Imagine that you put all the molecules of air in one half of your living room, leaving the other half a vacuum. The molecules move randomly in all possible directions – some going up, others down, some left, others right – in equal numbers every which way.

Soon after the separation, however, you will notice that the empty side of the room is no longer empty: some of the molecules have crossed into the vacuum from the other side. The trend continues simply because, even though the molecules are moving randomly in all directions, there are more of them on one side than the other, so more cross from the denser into the less dense region.

This tendency is simply a consequence of statistics and, in particular, of the law of large numbers. The process continues until there is an equal number of molecules on both sides. When this happens there is an equal flux of molecules going each way, and this is therefore the state of maximum entropy. Nothing new can happen, as any motion to the left is matched by an equal motion to the right, any upward motion is countered by an equal downward one, and so on. So there is no mysterious force acting to distribute the molecules in equal numbers all over your living room – it just happens spontaneously, due to underlying stochastic motion that is completely oblivious to the fact that the resulting effect is an increase in the overall disorder.
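
If you would like to see this spontaneity for yourself, here is a minimal simulation sketch in Python (the particle number and hopping probability are illustrative choices of mine). Every molecule hops between the two halves of the room completely at random; no force appears anywhere in the code, and yet the populations drift inexorably towards fifty-fifty:

```python
import random

N = 10_000        # number of molecules (an illustrative choice)
p_cross = 0.01    # chance per step that a given molecule changes sides
left = N          # start with all the air in the left half of the room

for step in range(1, 1001):
    # each molecule hops independently and randomly, with no preferred direction
    to_right = sum(random.random() < p_cross for _ in range(left))
    to_left = sum(random.random() < p_cross for _ in range(N - left))
    left += to_left - to_right
    if step % 200 == 0:
        print(f"step {step:4d}: left = {left:5d}, right = {N - left:5d}")
```

More molecules cross from the fuller side simply because there are more of them there to cross; once the two sides are equal, the fluxes balance and nothing statistically new happens – maximum entropy.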

The same appears to be true for living systems. By importing neg-entropy from the environment, living systems pump ions across their cell membranes, thereby maintaining a difference between the positive and negative charges on the two sides. (Much like the initial difference in air molecules in the above example.) This provides the needed electrostatic energy – like in a battery – that enables many processes in the cell; in particular, it enables the mitochondria to run the relevant chemical cycles (ATP and all that) in order to power life, which in turn is used to maintain the electrostatic difference across the membrane and keep the cell as far away from equilibrium as possible. Yes, the whole story is much more complicated (and still far from fully understood), but I just wanted to give you a rough idea of why biologists like entropic forces.
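
To get a feel for the battery analogy, here is a rough back-of-envelope calculation in Python. The inputs (a resting membrane potential of about 70 mV and roughly 30.5 kJ/mol released by ATP hydrolysis) are standard textbook ballpark figures, not numbers derived in this article:

```python
e = 1.602e-19        # elementary charge, C
N_A = 6.022e23       # Avogadro's number, 1/mol
V = 0.07             # typical resting membrane potential, ~70 mV (textbook ballpark)

work_per_ion = e * V        # energy to push one unit charge across the membrane, J
atp_energy = 30.5e3 / N_A   # ~30.5 kJ/mol of ATP hydrolysis, per molecule, J

print(f"work per ion crossing: {work_per_ion:.2e} J")   # ~1.1e-20 J
print(f"energy from one ATP:   {atp_energy:.2e} J")     # ~5.1e-20 J
print(f"ions pumped per ATP:   {atp_energy / work_per_ion:.1f}")
```

Reassuringly, the real sodium-potassium pump moves a handful of ions (three sodium out, two potassium in) per ATP molecule, so the orders of magnitude hang together.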

Anyway, here is a question that a physicist wants to ask. If life doesn’t need an élan vital to maintain itself, could it be that there are other things out there that we physicists call forces but which – equally – should actually be thought of in terms of entropy? A striking suggestion of this kind came from Ted Jacobson (Physical Review Letters, 1995), who argued that gravity might itself be an entropic force.

Without going into unnecessary detail, I can give you a simple, redux version of the argument. First, we acknowledge the fundamental thermodynamic relationship that entropy times temperature equals heat. Heat itself is nothing but a form of energy, which according to Einstein equals mass times the speed of light squared. We now assume that the entropy is proportional to area (the so-called holographic principle) – in other words, to the radius squared (divided by the Planck length squared, as it happens). The temperature is, according to Paul Davies and Bill Unruh, proportional to acceleration, which – in turn – is force divided by mass (from Newton’s second law). Putting all this together gives us that the force equals the product of the masses divided by the distance squared, namely Newton’s gravity! (Newton’s gravitational constant comes from the Planck length squared.) Jacobson did this at the level of General Relativity, but the argument is more or less the same, only using the geometric concepts of curved spacetime.
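
For those who like to see the steps written out, here is the whole chain in symbols, with all numerical factors of order one dropped (they are where the 2πs and the famous 1/4 of black hole thermodynamics live):

```latex
% heat = energy; entropy ~ area (holography); temperature ~ acceleration (Unruh-Davies)
T S \sim M c^{2}, \qquad
S \sim k_{B}\,\frac{R^{2}}{l_{P}^{2}}, \qquad
k_{B} T \sim \frac{\hbar a}{c} = \frac{\hbar F}{m c}

% substituting the last two relations into the first:
\frac{\hbar F}{m c}\,\frac{R^{2}}{l_{P}^{2}} \sim M c^{2}
\;\Longrightarrow\;
F \sim \frac{l_{P}^{2} c^{3}}{\hbar}\,\frac{M m}{R^{2}} = \frac{G M m}{R^{2}},
\qquad\text{since } l_{P}^{2} = \frac{\hbar G}{c^{3}}.
```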

How watertight is this argument? I mean, how much of gravity really is just the Second Law of thermodynamics? The derivation involves two crucial relationships that some believe to be true but which, at present, have no experimental evidence to support them. First, the relationship between entropy and area – the aforementioned holographic principle – has never been tested. Initially argued within black hole thermodynamics by Jacob Bekenstein, the relationship was elevated to a universally held principle by Leonard Susskind; although it is supported by a host of other theoretical ideas, it remains without experimental confirmation. Secondly, the beautiful Unruh-Davies formula linking acceleration to temperature requires accelerations far beyond what can presently be achieved in order to generate any detectable heat. It is also debatable whether the effect of acceleration creating temperature is real in the first place, since it was derived assuming a classical (not quantum) background spacetime.
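
Just how far beyond? The Unruh-Davies temperature is T = ħa/(2πck_B), and plugging in numbers (the sample accelerations below are my own choices) shows how hopeless a direct measurement currently is:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh-Davies temperature seen by an observer with proper acceleration a."""
    return hbar * a / (2 * math.pi * c * k_B)

for a in (9.81, 1e15, 1e20):  # from Earth's gravity to an extreme acceleration
    print(f"a = {a:.2e} m/s^2  ->  T = {unruh_temperature(a):.2e} K")
```

Even an acceleration of 10^20 m/s^2 yields less than half a kelvin, and Earth’s gravity corresponds to a laughable 10^-20 K or so.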

However, lack of experimental evidence aside, there are more fundamental objections that might invalidate the whole logic. One is that the entropy-area connection is not exact and requires additional corrections. This realisation comes from a number of directions, including my own field – quantum information theory. Secondly, the physicist Danny Terno, from Macquarie University in Sydney, has argued that the geometric entropy (the one related to area) in quantum field theory is not a Lorentz scalar, while the black hole entropy is. His main point, published in 2004 in Physical Review Letters, is that different observers would each have to introduce a different temperature, preventing any description in terms of the single Unruh-Davies temperature that features in Jacobson’s derivation.

The jury is still out on entropic gravity, and we don’t know whether there could be other entropic things in physics, but I’d like to conclude with one more observation. In quantum physics, entropy is a consequence of quantum entanglement. One can say that the entropy of the universe as a whole is in fact zero, and that it stays that way throughout the universe’s evolution. This is just a reflection of the fact that the universe operates according to the reversible laws of quantum physics and is a closed system in a pure state.

However, because different parts of the universe interact with each other, they become entangled, leading to an entropy increase in the individual subsystems within the universe. In fact, the arrow of time – the colourful name Eddington gave to the tendency of natural processes to increase disorder – could be thought of as the universe starting in a state of no entanglement and then moving in the direction of ever greater entanglement as various interactions unfold over time.
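
A toy “universe” of just two qubits makes this concrete. Here is a minimal sketch in Python (nothing beyond standard quantum mechanics is assumed): prepare a Bell state, then compare the entropy of the whole with the entropy of one part:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log2 rho) in bits, skipping zero eigenvalues."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return max(0.0, float(-np.sum(eigs * np.log2(eigs))))

# two qubits in the maximally entangled Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)    # the global state: pure, hence zero entropy

# the state of one qubit alone: partial trace over its partner
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(f"entropy of the whole: {von_neumann_entropy(rho):.3f} bits")    # 0.000
print(f"entropy of one part:  {von_neumann_entropy(rho_A):.3f} bits")  # 1.000
```

The whole is in a pure state with exactly zero entropy, yet each half on its own carries a full bit of entropy – all of it due to entanglement.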

I first read about this idea that the “direction of entanglement increase is the arrow of time” as a high-school student, in the book The Ghost in the Atom. It’s a great collection of interviews Paul Davies conducted in the 1980s with some of the leading quantum physicists of the time – and in his interview with Davies, David Deutsch talks about entanglement as underpinning the Second Law of thermodynamics. Amazing stuff.

But there is more. Given that entanglement gives rise to entropy, could it be that the things we call entropic forces should ultimately be called “entanglement forces”? This leaves us with a fascinating possibility: that life and gravity might be, in the final quantum analysis, manifestations of entanglement forces. Spooky action for real! I don’t know about you, but thoughts like this make me happy to be a physicist (and, since we are here, how much entanglement does a Mars bar contain?).
