# [WARNING: Physics geeks only. If you are not into physics, move right along, nothing to see.]


A physicist, **Dr. Alexander D. Wissner-Gross**, has written a
cellular automaton program, designed to maximise the entropy in a
simulation (think of it as a 3 year old child), that he's named
"Entropica".

He then published a paper that's provoking quite a bit of reaction. He claims that because the program appears to solve various problems that normally require mammalian intelligence, this has implications for the nature of intelligence - and may even explain why we live in a universe that has entropy in it: such universes are more likely to contain the conditions for intelligence to exist.

If this is the sort of thing you find interesting, here are a few links to further information:

- http://physics.aps.org/articles/v6/46
- http://physicsbuzz.physicscentral.com/2013/04/physicist-proposes-new-way-to-think.html
- http://phys.org/news/2013-04-emergence-complex-behaviors-causal-entropic.html
- http://robots.net/article/3578.html
- http://www.alexwg.org/publications/

Thoughts?


**Clairwil's Handy Dandy Guide to Entropy**

Ludwig Boltzmann was a physicist and a lifelong sufferer from depression. After his death by hanging, his grave was engraved with one of the greatest equations ever discovered in physics:

S = k log W

I'm going to try to explain to you what that equation means.

Imagine a helium balloon.

Inside the balloon are lots and lots of identical helium atoms moving about.

In the classical view, at any one time, each atom has a position and momentum ("momentum" is a fancy name for the atom's mass times its velocity - since all our helium atoms are identical, you can just think of it as the speed the atom is going at, combined with the direction it is going in) that we can know absolutely. Each unique combination of values for position and momentum is known as a "microstate". It is possible for several atoms to all share the same microstate (if they happen to be in the same location, and travelling in the same direction at the same speed).

You might ask yourself: "How can two atoms be in the same position at the same time?" and "Isn't it fantastically unlikely that they'd have precisely the same speed, if there are an infinite number of possible speeds between 3 miles per hour and 4 miles per hour (e.g. 3.1, 3.2, 3.17, 3.18, 3.174, 3.175, etc.)?"

However, when we look at things on a quantum level, we find that not only is our knowledge of the atom's position and momentum a bit uncertain, but the values they can hold are not infinitely divisible - instead there's a smallest possible change, and the value 'jumps' between these quantum levels. (So a "quantum leap" doesn't mean a massive change - it actually means the smallest possible change!)

Suppose, in our balloon, there are 1,000,000 helium atoms (ok, that's actually a tiny tiny balloon, but it makes the numbers more manageable for people not used to big numbers).

And suppose there are only 5 possible microstates in which these atoms can exist within the balloon.

So, for instance, at some particular time, we might have in our tiny balloon:

300,000 atoms in microstate 1

100,000 atoms in microstate 2

0 atoms in microstate 3

200,000 atoms in microstate 4

400,000 atoms in microstate 5

The way physicists write this sort of information is using variables and subscripts. So they say there are N atoms (in our example, N = 1,000,000). And for the i'th microstate, there are Ni atoms (in our example N1 = 300,000 ; N2 = 100,000 ; N3 = 0 ; N4 = 200,000 ; N5 = 400,000).
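To make the notation concrete, here's the balloon example in a few lines of Python (a sketch of my own, not code from the paper) - the occupancy numbers are just a list, and N is their sum:

```python
# Occupancy counts N1..N5 for the five microstates in the balloon example
N_i = [300_000, 100_000, 0, 200_000, 400_000]

N = sum(N_i)  # total number of atoms
print(N)      # → 1000000
```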

We call these things "microstates", by the way, rather than just "states", because there's also the "macrostate", which means the state of the whole system (the balloon, in our case).

We're now in a position to define what the "W" in the equation on the grave means. "W" stands for the German word "*Wahrscheinlichkeit*" ("probability"), and it is a measure of how likely our particular macrostate is to occur, which we can calculate using permutation theory by looking at the underlying microstates:

W = the factorial of N, divided by the factorial of each of the Ni.

(In maths, a "factorial" is shown using an exclamation mark symbol:

4! = 4 x 3 x 2 x 1 = 24

7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040

And 0! is defined to be 1.)
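If you want to check those with a computer, Python's standard library does factorials directly (this snippet is just an illustration, not part of the original explanation):

```python
import math

print(math.factorial(4))  # → 24
print(math.factorial(7))  # → 5040
print(math.factorial(0))  # → 1 (0! is defined to be 1)
```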

So, for the example of our balloon:

W = N! / (N1! x N2! x N3! x N4! x N5!)

W = 1,000,000! / (300,000! x 100,000! x 0! x 200,000! x 400,000!)

S is the entropy of that particular macrostate of the balloon.

And k is a numerical constant, the one we now name after Boltzmann, that links together S and the natural log of W.

k = 1.38 × 10^{−23} Joules per Kelvin

So that's what entropy actually is:

S = k log W

That's not a metaphor. That is a 100% accurate mathematical description. There are a number of equivalent ways of arriving at a numerical value for S, but they all are based upon the underlying statistics of permutations of microstates - how likely it is that the atoms in a balloon will split into a particular division between equally likely microstates, by chance.
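As a sanity check, we can actually compute S for the balloon example. 1,000,000! is far too large to ever write out, so the sketch below (my own, using only the formulas above) works with log-factorials instead, via the standard identity ln(n!) = lgamma(n + 1):

```python
import math

k = 1.380649e-23  # Boltzmann's constant, in Joules per Kelvin

def ln_factorial(n):
    # ln(n!), computed without ever forming the astronomically large n!
    return math.lgamma(n + 1)

N_i = [300_000, 100_000, 0, 200_000, 400_000]
N = sum(N_i)  # 1,000,000 atoms

# ln W = ln(N!) - [ln(N1!) + ... + ln(N5!)]
ln_W = ln_factorial(N) - sum(ln_factorial(n) for n in N_i)

S = k * ln_W  # S = k log W, where "log" means the natural log
print(ln_W)   # ≈ 1.28 million (dimensionless)
print(S)      # ≈ 1.77e-17 Joules per Kelvin - tiny, because k is tiny
```

Note how taking logs turns the division in W into a subtraction - which is exactly why the k log W form is so convenient to work with.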

Knowing *what* something is, however, is rather different from understanding it, or its implications. :-)

Well, seems to me chaos has increased variables compared to neat & tidy orderly systems...

more variables means more possibilities

Possibilities for discovery, learning, problem solving...

oh wait...that's just my 12 y.o's rationalization for how she keeps her desk :D


# Entropy law linked to intelligence, say researchers

A modification to one of the most fundamental laws of physics may provide a link to the rise of intelligence, cooperation - even upright walking.

The idea of entropy describes the way in which the Universe heads inexorably toward a higher state of disorder.

A mathematical model in Physical Review Letters proposes that systems maximise entropy in the present and the future.

Simple simulations based on the idea reproduce a variety of real-world cases that reflect intelligent behaviour.

The idea of entropy is fundamentally an intuitive one - that the Universe tends in general to a more disordered state.

The classic example is a dropped cup: it will smash into pieces, but those pieces will never spontaneously recombine back into a cup. Analogously, a hot cup of coffee will always cool down if left - it will never draw warmth from a room to heat back up.

But the idea of "causal entropy" goes further, suggesting that a given physical system not only maximises the entropy within its current conditions, but that it reaches a state that will allow it more entropy - in a real sense, more options - in the future.

Alex Wissner-Gross of Harvard University and the Massachusetts Institute of Technology in the US, and Cameron Freer from the University of Hawaii at Manoa, have now put together a mathematical model that ties this causal entropy idea - evident in a range of recent studies - into a single framework.

"In the past 10 to 15 years, there have been many hints from a variety of different disciplines that there was a deep link between entropy production and intelligence," Dr Wissner-Gross told BBC News.

"This paper is really the first result that clarifies what that link precisely is... to the point that it's prescriptive - it actually allows you to calculate in a sensible way answers to questions that couldn't reasonably be answered before."

The simplistic model considers a number of examples, such as a pendulum hanging from a moving cart. Simulations of the causal entropy idea show that the pendulum ends up pointing upward - an unstable situation, but one from which the pendulum can explore a wider variety of positions.

The researchers liken this to the development of upright walking.

Further simulations showed how the same idea could drive the development of tool use, social network formation and cooperation, and even the maximisation of profit in a simple financial market.

"While there were hints from a variety of other fields such as cosmology, it was so enormously surprising to see that one could take these principles, apply them to simple systems, and effectively for free have such behaviours pop out," Dr Wissner-Gross said.
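The paper's real machinery ("causal entropic forces" over ensembles of future paths) is well beyond a forum post, but the flavour of it can be caricatured in a few lines. In this toy sketch (entirely my own construction, not the authors' algorithm), an agent on a line bounded by two walls greedily steps toward whichever neighbouring position leaves it more distinct reachable positions over a fixed horizon - so it drifts away from the walls, "keeping its options open":

```python
def reachable(pos, horizon, lo, hi):
    # Count the distinct positions reachable within `horizon` unit steps,
    # staying between the walls at lo and hi.
    frontier, seen = {pos}, {pos}
    for _ in range(horizon):
        frontier = {p + d for p in frontier for d in (-1, 1) if lo <= p + d <= hi}
        seen |= frontier
    return len(seen)

def step(pos, horizon=5, lo=0, hi=10):
    # Greedily move to whichever neighbour has the most future options.
    moves = [p for p in (pos - 1, pos + 1) if lo <= p <= hi]
    return max(moves, key=lambda p: reachable(p, horizon, lo, hi))

pos = 1  # start near the left wall
for _ in range(10):
    pos = step(pos)
print(pos)  # the agent ends up hovering around the middle of the line
```

No entropy appears explicitly here - counting reachable states is standing in for it - but the behaviour (moving to where future options are maximal) is the intuition the paper formalises.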

'Beyond luck'

Raphael Bousso of the University of California Berkeley said: "It has always mystified me how well this principle models intelligent observers, and it would be wonderful if Alex's work could shed some light on this."

Prof Bousso showed in a 2007 paper in Physical Review D that models of the Universe that incorporated causal entropy were more likely to come up with a Universe that contains intelligent observers - that is, us.

However, he cautions that although the new paper bolsters the case for causal entropy, the idea still lacks explanatory power.

"The paper argues that intelligent behaviour, which is hard to quantify, can be reduced to maximising one's options, which is relatively easy to quantify. But it cannot explain intelligent behaviour from first principles," he told BBC News.

"It cannot explain how that 'intelligent agent' evolved in the first place, and why it seeks to maximise future options."

Axel Kleidon of the Max Planck Institute for Biogeochemistry in Germany, who authored a 2010 paper in Physics of Life Reviews using maximised entropy to consider the machinery of life on Earth, said that the work "shows some very intriguing examples" but that only time would tell if causal entropy was as fundamental as it may seem.

"It seems that it is beyond just luck and coincidence," he told BBC News.

"On the other hand, I know from my own research that applying thermodynamics to real-world systems is anything but simple and straightforward... I think it is through more examples that (we will see) how practical their approach will be, compared to other thermodynamic approaches."

Quoting ..MoonShine..: Will have to look at this with DH tomorrow.

Thoughts?

**I completely forgot. I'm sorry. Making a note for myself to watch it later.**

Quoting Clairwil: Quoting ..MoonShine..: Will have to look at this with DH tomorrow.

Thoughts?

If intelligence comes from maximizing one's options, then those who promote Marxist thinking and big government are dumbing down individuals so they can't think for themselves. Oh. Wait... ;)

My other thought is that this is reaching, because just like DNA, I think we are going to find there is more to entropy, and it's going to lead to an intelligent designer, not chaos and disorder. But what do I know.


- Clairwil

on Apr. 28, 2013 at 2:31 AM