Layzer vs. Laplace's Demon

Discussions on the philosophical foundations, assumptions, and implications of science, including the natural sciences.


Postby hyksos on December 28th, 2016, 10:36 pm 

Is the universe completely deterministic, even in theory?

The most famous early answer to this question came from Pierre-Simon Laplace in 1814.

https://en.wikipedia.org/wiki/Laplace's_demon

Obviously, the above article is chock full of rebuttals, which in turn make up the bulk of its content. Laplace, writing in 1814, had no conception of quantum fields, nor of the quantum-mechanical nature of "fundamental particles". Perhaps most damaging, Laplace could not have known that fundamental particles, smashed together in colliders, can transform into one another.

Laplace's classical determinism required "eternal" particles that act like small pieces of dust: they can collide, but they cannot be created or annihilated, nor transform into one another. Laplace knew nothing of the procedure by which the missing energy in a particle collision at CERN is attributed to an unseen, nearly massless neutrino.

Laplace, living in 1814, also could not have imagined a universe in which two observers moving at different velocities can disagree about the timing and ordering of events (Einstein's special relativity). The alleged "absolute position" and "absolute velocity" of a particle are rendered physically meaningless in a universe that obeys special relativity. In the end, we can only ask about velocities relative to an observer's reference frame.

Even in General Relativity (our most accurate theory of gravity), there is no gravitational force field in the Newtonian sense. We can only calculate the curvature of spacetime at a particular point.

The 2nd Law of Thermodynamics is only empirically true. Boltzmann and others attempted to show that the 2nd Law follows as a natural consequence of first principles alone. But in every case, certain assumptions placed in the initial premises of these "proofs" turned out to be equivalent to assuming the 2nd Law itself. Boltzmann's "derivation" of the 2nd Law thus turned out to be circular reasoning.

https://en.wikipedia.org/wiki/Loschmidt%27s_paradox

Edwin Jaynes' complaint against proofs of the 2nd Law is probably the most interesting of all.

I think most people would say the paradox is resolved - but, as the answers to this question make clear, they wouldn't necessarily agree about who resolved it or what precisely the resolution is. For my money the paradox was elegantly resolved by Edwin Jaynes in this 1965 paper. In Jaynes' argument, the symmetry is broken by the fact that we, as experimenters, have the ability to directly intervene in the initial conditions of an (isolated) system, but we can only affect the final conditions indirectly, by changing the initial conditions.

Of course, this then leaves open the question of why our ability to interact with physical systems is time-asymmetric in this way. This is not a paradox but rather a physical fact in need of explanation.


We gain a sudden flash of insight here. The 2nd Law is always observed to be true not because of some mystery in the way the universe is, but because of the way in which human beings conduct scientific experiments. We are always information-processing open thermodynamic systems (warm mammals with a computer in our heads): we manipulate a physical system into an initial state, and then we "let go and see what happens."

We can expect that a robot running on batteries would also see the 2nd Law hold, for the same reason. The robot is expending energy to shift a system into an initial state for an experiment -- "winding up the rubber bands" and "letting them go," so to speak.

The dirty secret that keeps pervading these conversations about entropy is the question of the existence of Time in the universe. The entire debate hinges on the debaters themselves having already agreed to a Laplacian type of determinism before even arriving at the debate stage.

This type of "demon which knows the position and velocity of all" presumes the following binding axioms:

  • That the total information in the universe is constant.
  • That the total information in the universe is large, but finite.

In other words: the universe is not creating more information over time.

In order to bring David Layzer into this conversation and pit him against Laplace's Demon, we must open our minds to the possibility that some of the "axioms" listed above may be false. In the context of this discussion, it comes down to the possibility that irreversible processes exist in nature. Worse, anyone on this forum who contends that some processes are necessarily irreversible would be breaking one of the above axioms in a wild way, even if they are not conscious of doing so.

We have had some of these conversations here before (tangentially). One simplified way of putting it is this: if you believe that quantum mechanics has an arrow of time built directly into it, then one way or another you are presuming that information from some metaphysical random source is nudging, bumping, and perturbing the real universe and its real particles. At the level of information CONTENT, you are committing to the idea that some quantum process could never be reversed, because the system's past states cannot be derived from its present states. Necessarily, you are suggesting that the total information content of the system must have grown larger somewhere in that process.

Maybe before now you didn't know you were (inadvertently) committing yourself to these metaphysical stances and their consequences.

This next article link introduces David Layzer

http://www.informationphilosopher.com/p ... ecurrence/

[attachment: dlayzer.png -- quoted passage from the linked David Layzer article]


This is a wordy way of putting it, but going back to Boltzmann: Layzer is suggesting that the "sides of the container" are expanding so fast that some pockets of gas in Boltzmann's idealized model will never run into each other at all. Because they can never meet, they cannot exchange energy, and hence they cannot equilibrate with each other. Structures will necessarily persist. And if structures are being built and persisting, we must admit that the information content of the "whole universe" is going up.
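
To make that concrete, here is a minimal toy sketch of the idea (all numbers invented for illustration; nothing below is taken from Layzer's actual papers). In a box whose scale factor grows exponentially, the total comoving distance a molecule can ever cover converges, so pockets that start out farther apart than that bound can never interact:

```python
from math import exp

# Toy model: scale factor a(t) = exp(H * t).  A molecule with fixed thermal
# speed v covers comoving distance v * integral(dt / a(t)), which converges.
H = 2.0    # expansion rate, made-up units
v = 1.0    # thermal speed of a molecule, made-up units

def comoving_reach(t_end, dt=1e-3):
    """Numerically integrate v / a(t) from 0 to t_end."""
    reach, t = 0.0, 0.0
    while t < t_end:
        reach += v / exp(H * t) * dt
        t += dt
    return reach

for t_end in (1, 5, 20, 100):
    print(t_end, round(comoving_reach(t_end), 3))

# The reach saturates near v / H = 0.5: two pockets of gas that start more than
# 0.5 comoving units apart can never exchange energy, never equilibrate with
# one another, and whatever structure distinguishes them persists.
```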

Okay, this is cool and all, but why does it matter? We get a pretty clear picture that you cannot hold both of the following to be true at the same time:

(A.) "The universe is deterministically and causally closed."
(B.) "The universe has an arrow of time."

A and B do not appear, on the surface, to be related to each other at all. But in fact each one implies that the other is false. You cannot adopt a position that contains both of these beliefs, because you will be contradicting yourself.

For the mathematically minded reader: any dynamical information system that differentiates future states from past ones by its inability to reverse present states into past ones (i.e. that has an "arrow of time") must necessarily be increasing its total information content.
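
Here is that claim as a runnable toy (an 8-state system and an update rule I invented purely for illustration; nothing below is a model of real physics). A reversible update is just a permutation of the state space and leaves the Shannon entropy of our description unchanged; an update that injects a fresh random bit each step -- i.e. one whose past cannot be recovered from its present -- makes the description grow:

```python
import random
from collections import Counter
from math import log2

N = 8                                   # toy state space {0, ..., 7}
perm = [3, 0, 7, 5, 1, 6, 2, 4]         # an arbitrary invertible update (a permutation)

def shannon_bits(samples):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    counts, total = Counter(samples), len(samples)
    h = -sum(c / total * log2(c / total) for c in counts.values())
    return h + 0.0                      # normalise -0.0 to 0.0 in the one-state case

def entropy_after(step, ensemble, t=20):
    """Run every copy of the system forward t steps, then measure our uncertainty."""
    for _ in range(t):
        ensemble = [step(x) for x in ensemble]
    return shannon_bits(ensemble)

random.seed(0)
ensemble = [0] * 10_000                 # every copy prepared in state 0: zero initial uncertainty

reversible = lambda x: perm[x]                            # past always recoverable
noisy = lambda x: (perm[x] + random.randint(0, 1)) % N    # one fresh random bit per step

print(entropy_after(reversible, ensemble))   # 0.0  -- information content unchanged
print(entropy_after(noisy, ensemble))        # ~3.0 -- describing the present now takes new bits
```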

(...now we can corral the horses and really drive this one home...)

There is no reasonable or logical way that the universe is deterministically Laplacian, and simultaneously the 2nd Law of Thermodynamics is true. That possibility was toyed with by scientists from about 1870 to 1930. On deeper inspection, it is just plain wrong.


Re: Layzer vs. Laplace's Demon

Postby wolfhnd on December 29th, 2016, 2:51 am 

Nicely written; it will take some time to digest.


Re: Layzer vs. Laplace's Demon

Postby Natural ChemE on December 29th, 2016, 5:09 am 

tl;dr - The following long post is an attempt at a different sort of presentation style. The content is about how entropy, determinism, etc. aren't universal truths, but rather relative concepts in the eye of the beholder.

hyksos,

I must say that it's interesting to see how classical thinkers used to approach some of these issues before they were commonly understood. Today, we get entropy, determinism, etc., pretty well, such that the classical points of confusion are ancient history. But, I don't know too many good sources that tell this story directly; rather, it's been communicated in academia largely through inference.

So, I want to try constructing a thought experiment that demonstrates what entropy really is, and why it's a relative, observer-dependent concept. Ditto for determinism, etc. I don't usually tell thought-experiment stories like this, but it seems like a fun thing to try.

Day 1: You're bored and flip 10 quarters around
Say that we have 10 quarters on the ground, each heads up. Every few seconds, you pick up a random quarter and flip it. You're bored, so you do this for a few hours. At the end of a few hours, how many heads vs. tails? We don't really know, but probably about 5-to-5, right?

This was a binary entropic process:
    [image: plot of the entropy of the 10-coin system vs. number of heads -- zero at either end, maximal at 5 heads / 5 tails]
If all of the quarters were heads or tails at the start, we'd have had zero entropy, corresponding to either the left side or right side of the above plot. And, statistically speaking, we tend to expect the middle result - that the coins will be roughly 50/50 heads/tails by the end. That's the Second Law of Thermodynamics, as we understand it today - entropy will tend to increase 'til it's maximized, then it sorta wobbles around the equilibrium point (in this case, 50/50 heads-to-tails) thereafter.

Also, notice that the entropy changes more for a single flip near the extremes than at the central equilibrium. It's the same with the coins: if you have 10 heads and flip them all, it's extremely unlikely that they'll all be heads again, and pretty unlikely they'll even be close. But if you have 5 heads and 5 tails and flip them all, it's not that unlikely that you'll get a similar result, or even the same one. That's entropy.
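
(For the curious, the curve in that plot is nothing exotic. This little sketch just counts arrangements of the 10 quarters; the only inputs are the coin count and the heads count, so there's nothing here beyond the story above.)

```python
from math import comb, log2

# Entropy of the 10-quarter system as a function of how many are heads, taking
# "entropy" as log2 of the number of arrangements (microstates) compatible
# with a given heads count (the macrostate).
N = 10
for heads in range(N + 1):
    microstates = comb(N, heads)          # ways to choose which coins are heads
    print(heads, microstates, round(log2(microstates), 2))

# 0 or 10 heads: 1 arrangement, 0 bits (the ends of the plot).
# 5 heads: 252 arrangements, ~7.98 bits -- the maximum in the middle, which is
# why hours of random flipping drift toward roughly 5-and-5 and stay there.
```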

Day 2: You get a coin-flipping robot
So, it turns out that flipping coins is kinda relaxing, sorta like watching the clouds roll by on a lazy summer afternoon. But, it's also a tad tedious, so you opt to get a coin-flipping robot. Maybe you buy it, or maybe you make your own. Either way.

You spend the rest of the afternoon watching the robot flip coins for you. It's like Day 1, but you get to relax.

Day 3: You notice quirks
Okay, now it's Day 3, and you're a little worried about your own sanity. I mean, you're just flipping lots of quarters, and even got a robot! But, hey, if you're gonna be crazy, may as well have fun, right? Not a bad kinda crazy to be.

The weirdest part is that you've noticed defects in the process. You've been flipping the same 10 coins, and after all this time, you've made enough observations to realize that they're not perfectly fair, and they're not even biased the same way. That shiny 1976 quarter's particularly biased towards Heads; perhaps only 51%-to-49% odds on heads-vs.-tails, but you notice that now.

You pick up on other stuff, too. Like, that robot of yours is picking "random" quarters, but apparently it's using a fairly simple pseudo-random number generator. You've picked up the pattern, and you can now guess which "random" quarter is next.
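
(As an aside, here's roughly what "picking up the pattern" amounts to, as a toy sketch. The generator, its constants, and the seed are all invented for the example; no claim about how any real robot chooses quarters.)

```python
def lcg_picks(seed, a, c, m=256):
    """Yield which of the 10 quarters gets flipped, per a toy linear congruential generator."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x % 10

def recover(observed, m=256):
    """Brute-force the tiny parameter space until something reproduces what we saw."""
    for a in range(1, 32):
        for c in range(32):
            for s in range(m):
                g = lcg_picks(s, a, c, m)
                if [next(g) for _ in range(len(observed))] == observed:
                    return a, c, s

robot = lcg_picks(seed=42, a=21, c=7)          # the robot's hidden rule
seen = [next(robot) for _ in range(12)]        # Day 3: just watching

a, c, s = recover(seen)
mimic = lcg_picks(s, a, c)
for _ in range(12):
    next(mimic)                                # fast-forward past what we already watched

print("predicted next picks:", [next(mimic) for _ in range(5)])
print("robot actually picks:", [next(robot) for _ in range(5)])
```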

Heck, you've even gotten good at watching the coins flip in the air; just after they go up, you can already tell what they're going to land on, provided that you can see the coin and the area that it's thrown on. You even get how the landing site affects the outcome; it's a very small effect in most cases, but you've got it down.

Day 4: I'm bored and come join you
So, I get bored, too, and you've made some interesting-sounding posts about the quarters. I come over and watch, thinking that you don't get statistics or entropy or anything. In fact, I use my awesome grade-school statistics tricks to estimate the likelihood of certain events, e.g. the number of heads after a few rounds, and agree to place some wagers with you when the odds seem to be in my favor.

After a while of this, I'm a little annoyed. My betting strategy seems sound, but somehow Lady Luck doesn't seem to be on my side, as you're winning more often than it seems like you should. I complain, explaining the statistics and entropy of the process. And you smile, playing dumb; you figure that you'll let me in on the hard-won knowledge after winning a few more bets.

It's weird to me, but you place higher bets when we're talking after the coin's already in the air. And you seem to win those wagers pretty consistently. In fact, you gave me 10-to-1 odds that a coin would come up Tails, which seemed great, but then it did and you won. Surely a fluke, so I tried again on the next flip, and you won again. What's happening here?

Day 5: You're bored of winning
Okay, I kinda hate you at this point, you lucky bastard. But, you've had your fun and got your winnings, so now it's time to chat.

Turns out that my entropy calculations were correct; it's not like I suddenly forgot basic physics, or how to gamble, as my methods were sound. But, you won, using different values for entropy than what I had. My coin-flips-are-50/50 model wasn't horrible, but yours was obviously better.

And, that's where we are with entropy. It's about your models and perspective; it's about your physical knowledge. For me, the coin flips already in the air were pretty much random, but they were deterministic for you, 'cause you could see them.

Enough of that!
Anyway, long thought experiment short, the gist is that entropy's a modeling concept, relative to the knowledge base that it's applied to. The numbers and math differ with your model; there's not some "true" or "absolute" value of entropy.
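
Here's that relativity in one tiny calculation (the 51/49 figure is just the story's made-up bias): the same physical coin, three observers, three different entropies.

```python
from math import log2

def entropy(p_heads):
    """Shannon entropy, in bits, of one flip under a given model of the coin."""
    h = -sum(p * log2(p) for p in (p_heads, 1 - p_heads) if p > 0)
    return h + 0.0          # avoid printing "-0.0" for the certain case

models = {
    "me (assumes a fair coin)":           0.50,
    "you (noticed the 51/49 bias)":       0.51,
    "you (coin already seen in the air)": 1.00,   # outcome effectively known
}

for observer, p in models.items():
    print(f"{observer:40s} entropy per flip = {entropy(p):.4f} bits")

# Same coin, three entropies -- and the lower-entropy observer is the one who
# can price bets better, which is exactly how Day 4's wagers were lost.
```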

And, because there's no absolute concept or value of entropy - it's just a relative, knowledge-related metric - it can't be used to prove any universal truth, such as a claim about physical determinism. All we can ever say is that we personally don't currently have a reliable method for making precise predictions. That is, we can't say anything about whether or not the universe itself is deterministic; the claim isn't merely wrong, the very premise is confused.

The danger in this fallacy is that people stop looking. Your brain would've never figured out that those coin flips weren't fair if some part of you wasn't watching, observing, and trying. Detractors who'd claim that coin flips are necessarily 50/50 are mistaking a practical, working approximation for truth, then using that mistaken belief to legitimize their ignorance.


PS - Also wanted to mention that, as in the thought experiment above, entropy can spontaneously decrease, contrary to the Second Law of Thermodynamics. It's just that it tends to increase, such that in large systems (e.g. many coins being flipped) the overall entropy is very unlikely to move noticeably downward for very long. Or, if it does, then the physical models on which that entropy calculation was based are themselves unreliable, implying that we have a good target for constructing new physical laws (correlations).


Re: Layzer vs. Laplace's Demon

Postby Dave_Oblad on December 29th, 2016, 5:35 am 

Hello Hyksos,

Hyksos wrote:(A.) "The universe is deterministically and causally closed."
(B.) "The universe has an arrow of time."

A and B do not appear, on the surface, to be related to each other at all. But in fact each one implies that the other is false. You cannot adopt a position that contains both of these beliefs, because you will be contradicting yourself.

I will argue that both (A) and (B) can be true without contradiction.

Let's use the Cellular Automaton as a Model for our Reality:

We allow the "Now" generation to be derived from its history.. but we never erase said history. Thus if we rewind "Time" back to any point in history and let it replay.. the Generation we marked as "Now" will always come back into the exact same pattern eventually. That is Hard Determinism and is Causally Closed.

The Arrow of Time is noticed by the fact that a Glider can not go backwards based on its Geometry, there can only be progression in one direction as the system evolves to each new layer of Time. Thus such a Universe does manifest such an Arrow of Time feature.

Thus (A) and (B) are both true in this scenario without contradiction.

Agreed?

Best Regards,
Dave :^)


Re: Layzer vs. Laplace's Demon

Postby Braininvat on December 29th, 2016, 12:56 pm 

The Information Theory I'm getting here is a bit confusing. When you have a chaotic system, you need more information to describe it. After all, you have a bunch of gas molecules flying around randomly. Mapping them all takes an enormous amount of information. A structured system has less information. Neatly ordering something, you need far less information to describe where everything is and where it's going. For this reason, saying that increasing entropy means you need more information is essentially tautological. Well, yes, I can tell you where all the atoms are in a tidy little quartz crystal with far fewer bits than I would need to track those atoms after I vaporize the crystal. So, yes, of course information has increased. Same thing with a beaker of cream and a cup of black coffee. Fewer bits needed to say where the coffee and the cream molecules are. Stir cream into the coffee and, voila, I've "created" more information. There's no unstirring the coffee, in fact. The increase in information is clear and unambiguous. The same could be said for beta decay - more information is needed to describe the expanding electron field as it propagates outwards in a random decay, than to describe it imprisoned in the nucleus.

So, maybe I'm misunderstanding, Hyksos, when you say,

"If structures are being built and persisting, we must admit that the information content of the "whole universe" is going up."

If structures are being built up, then the information content is actually going down, isn't it? Structure requires less information than does chaos. Structure usually allows codes and maps and such that efficiently account for a huge number of particle locations and paths. Structure allows compression. Less information. The demon's job gets much easier when there is structure and orderliness.
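
A quick way to see the compression point concretely (purely illustrative; zlib is just standing in for the demon's bookkeeping, and the "crystal" and "vapor" byte strings are invented on the spot):

```python
import random
import zlib

crystal = bytes(range(256)) * 400        # 102,400 bytes in a neat repeating lattice
vapor = bytearray(crystal)
random.seed(0)
random.shuffle(vapor)                    # the same "atoms", scrambled

print("ordered:  ", len(zlib.compress(crystal, 9)), "bytes after compression")
print("scrambled:", len(zlib.compress(bytes(vapor), 9)), "bytes after compression")

# The ordered string compresses to a few hundred bytes; the scrambled one stays
# close to the full 102,400 -- it takes nearly the whole string to describe it.
```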


Re: Layzer vs. Laplace's Demon

Postby hyksos on December 29th, 2016, 3:25 pm 

The Arrow of Time is noticed by the fact that a Glider can not go backwards based on its Geometry, there can only be progression in one direction as the system evolves to each new layer of Time. Thus such a Universe does manifest such an Arrow of Time feature.

Thus (A) and (B) are both true in this scenario without contradiction.

Agreed?


Oh but Dave, I am mildly disappointed in you! ;)

Go back to the Cellular Automata basics again. In a fundamental way, nothing is actually moving in a CA grid. Remember, what you are perceiving as "motion" is the turning on and off of cells. A glider does not "move" per se; rather, it keeps reproducing itself, shifted one cell diagonally every four generations.


Re: Layzer vs. Laplace's Demon

Postby hyksos on December 29th, 2016, 3:48 pm 

the gist is that entropy's a modeling concept, relative to the knowledge base that it's applied to. The numbers and math differ with your model; there's not some "true" or "absolute" value of entropy.

And, because there's no absolute concept or value of entropy - it's just a relative, knowledge-related metric - it can't be used to prove any universal truth, such as a claim about physical determinism. All we can ever say is that we personally don't currently have a reliable method for making precise predictions. That is, we can't say anything about whether or not the universe itself is deterministic; the claim isn't merely wrong, the very premise is confused.


This position has a name: it's called the "Subjectivist account of Probability", or sometimes just "Subjectivist Probability". Its biggest proponent was Edwin T. Jaynes.

https://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes

From my perspective, the most important thing Jaynes ever told us is that it does not make physical sense to say that an object has "an entropy". Rather, it only makes sense to talk about a particular aspect of the physical system having an entropy.

Let's take the example of molten iron. Iron atoms are known to have a strong magnetic moment, as if each were a "little bar magnet" that may or may not line up with its neighbors; if they do line up, the resulting bar of metal can itself be used as a magnet. Jaynes says we can talk about the entropy of the magnetic orientations of the iron atoms. Or we can refer to the entropy of the kinetic energies of the individual atoms. Or we can refer to the entropy of the positions of the iron atoms. Each of these entropies has a physical meaning, but "the entropy of the iron bar", full stop, makes no sense at all.
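
A toy illustration of that point (two-state spins and ten crude speed bins that I just made up; none of this is real iron): one and the same collection of atoms yields different entropies depending on which variable you ask about.

```python
import random
from collections import Counter
from math import log2

random.seed(1)
atoms = [
    {"spin": random.choices(["up", "down"], weights=[0.9, 0.1])[0],   # mostly aligned
     "speed_bin": random.randrange(10)}                               # thermally scrambled
    for _ in range(100_000)
]

def entropy_of(key):
    counts, n = Counter(a[key] for a in atoms), len(atoms)
    return -sum(c / n * log2(c / n) for c in counts.values())

print("entropy of the spin orientations: ", round(entropy_of("spin"), 2), "bits/atom")       # ~0.47
print("entropy of the speed distribution:", round(entropy_of("speed_bin"), 2), "bits/atom")  # ~3.32

# Two very different numbers for the same lump of metal: "the entropy of the
# bar", with no variable specified, names neither of them.
```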

This distinction is likely what lies behind the dangerous (and pervasive) misunderstanding that entropy is somehow the same thing as temperature.


Re: Layzer vs. Laplace's Demon

Postby Natural ChemE on December 29th, 2016, 8:50 pm 

hyksos » December 29th, 2016, 2:48 pm wrote:This position has a name: it's called the "Subjectivist account of Probability", or sometimes just "Subjectivist Probability". Its biggest proponent was Edwin T. Jaynes.

I dislike assigning names to simple facts, as though they were one of a set of competing interpretations. The nature of entropy is on par with 1+1=2 in that it's a truism, as was demonstrated back in the late 1800s. It's ancient stuff.

But, surely you knew this? I mean you titled this thread after Laplace's demon. Laplace's demon was a thought experiment demonstrating that, if you had complete physical knowledge, then you could level out all entropy. Which is simply true; it's like how you didn't suffer from entropy in predicting coin tosses that were already in the air; you destroyed entropy by having physical insight that most can't achieve with their eyes. Laplace's demon's just that ideal taken to the extreme; it destroys all entropy as it's a perfect oracle. Of course, Laplace's demon only destroys entropy in its own eyes. Even if the demon were sitting there, playing out its game, human scientists would still perceive entropy, because we don't have the demon's knowledge.

Incidentally, the Wikipedia article on Laplace's demon has a lot of crackpottery in it. While I'm normally a big fan of Wikipedia, whoever's been editing that article clearly doesn't get the concepts.


Re: Layzer vs. Laplace's Demon

Postby hyksos on December 29th, 2016, 9:26 pm 

Laplace's demon was a thought experiment demonstrating that, if you had complete physical knowledge, then you could level out all entropy. Which is simply true;


"simply true"

Analytically true, or physically true?

In 2016 I would go as far as to say that we know the universe is not a flat, static collection of tiny eternal dust atoms. I would even go as far as to claim that such a (classical) universe does not even match the facts of what we measure.


Re: Layzer vs. Laplace's Demon

Postby Natural ChemE on December 29th, 2016, 10:01 pm 

hyksos,

It'd be a tautology.

Statistical mechanics was a grand triumph for humanity; its insights helped bridge the classical world of Newton to the quantum world of today. With regard to entropy, we had an empirical correlation - classical thermodynamic entropy - that just seemed to work, divorced from the other physical laws of nature and inelegant for it. Statistical mechanics showed how these seemingly disconnected rules could not only be reconciled with mechanical understanding, but be derived from it.

Today, entropy's a tool in our tool box. Folks talk about junk like black hole entropy, an analog to classical entropy, not because we observed it as in classical days, but because it's a modeling thingie now, about reconciling our mechanical models and such.

And closer to home, chemical engineers have to use classical thermodynamics when we design plants and whatnot. It's hard to find good examples of modern models in the public domain, but the non-random two-liquid (NRTL) model is a classic that we've built on, using those mechanical insights to predict classical quantities that at first seemed weird and unexplained.

I'd add that, when I do my calculations, they're model-dependent; selecting a different thermodynamic model will get me different numbers. And, some models are more effective than others, like how your coin-flipping model was better than my 50/50 model. But, as long as engineers are reasonably consistent about how they have models interact - being careful to avoid logical fallacies - the results are usually workable.
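
For a flavor of what "different model, different numbers" means, here are the two simplest equations of state side by side - not NRTL, whose parameters I won't trust from memory, just ideal gas vs. van der Waals, with rough textbook-style constants for CO2 that should be treated as illustrative only:

```python
R = 0.083145           # L·bar/(mol·K)
a, b = 3.64, 0.04267   # approximate van der Waals constants for CO2 (L²·bar/mol², L/mol)

T = 300.0              # K
V = 0.5                # molar volume, L/mol

p_ideal = R * T / V
p_vdw = R * T / (V - b) - a / V**2

print(f"ideal gas:     {p_ideal:.1f} bar")   # ~49.9 bar
print(f"van der Waals: {p_vdw:.1f} bar")     # ~40.0 bar

# Same state, two models, answers about 10 bar apart.  The engineering judgment
# is in knowing which model's assumptions are being violated, and by how much.
```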


Re: Layzer vs. Laplace's Demon

Postby Dave_Oblad on December 29th, 2016, 11:23 pm 

Hello Hyksos,

Hyksos wrote:Oh but Dave, I am mildly disappointed in you! ;)

Go back to the Cellular Automata basics again. In a fundamental way, nothing is actually moving in a CA grid. Remember, what you are perceiving as "motion" is the turning on and off of cells. A glider does not "move" per se; rather, it keeps reproducing itself, shifted one cell diagonally every four generations.

Now who is disappointed... ;)
The Cells may be stationary but the information content within the Cells is Propagating. Your statement above is self conflicting.. when Information is Shifted.. then Information has Moved!

Here is Gosper's gun:

[animation: LifeDemo2.gif -- Gosper's gun in 2D]

Are the Gliders Moving? Is the information content being propagated above?

But now I will show you why you are almost right.

Here is a Block Model of that same exact Gun, but in 3D..
where the visible added dimension is Time (from Top.. Downwards):

[image: Gun3D.png -- Gosper's gun in 3D, with Time running from top to bottom]

In the above example, Time is flowing from Top towards the Bottom. So in a Block Model we see no motion.. it appears static. There is no Motion in a Block Model. However.. the information content is indeed still propagating (shifting/moving) on the Temporal Axis.

Sidebar: I watched a YouTube Video of some famous Physicist's lecture and he made the statement that the Block Model is False because it has no Motion. He then proceeded to flap his arms to indicate Real Motion.. thus proving the Static Block Model is False. I literally wanted to reach through my screen and throttle that guy for such a stupid statement. Stupid.. because he obviously didn't understand the Block Model. Shame on him.. lol.

I stand by my observation that the Propagation of Information is still Motion of said Information, even if said Propagation (Motion) is on the Temporal Axis.

Regards,
Dave :^)


Re: Layzer vs. Laplace's Demon

Postby hyksos on January 1st, 2017, 2:47 pm 

Dave_O --

If you say that the universe is undergirded by a CA, then you must adopt all the consequences that follow logically from it.

One of those consequences is that within the cellular automaton's rules there is no explicit copying of cell contents from one cell to another. Yes, a glider is a configuration in a Turing-complete rule which can realize the motion of information. Nobody is denying that. But Turing-complete algorithms can also realize other, more profound things, like addition, multiplication, and even division of integers.

IMHO, a glider in the example you have shown is not even really "carrying" information. It is acting as little more than a switch acts in a circuit. A glider does not pick up a value and carry that value to another location; it acts as a simple switch. After all, you are dealing with one of the simplest Turing-complete rules known to man and science. The rule is so simple that it takes literally millions of cells to realize something as basic as an ADDER.

Another of those consequences is that motion is not physically real. Of course I am using the phrase "physically real" in the most grandiose metaphysical sense. Sure, there may be large-scale humans and other intelligent life forms in the universe who, being something like 10^24 times larger than the metaphysical quantum bits, would perceive an illusion of motion. Yes, our eyes perceive motion in the CA grid, but the motion is not "actually" there.

These distinctions really, really matter in discussions of time. The scientists of the Enlightenment believed motion to be both continuous in the sense of the calculus, and absolutely, physically real -- fundamentally true. If the universe were actually like that, it would have a huge impact on what the nature of time must be like; not in any exhaustive sense, but as a minimum requirement. That is, any metaphysical claims you make about time must (at least) account for truly continuous motion in continuous space.

If you have anything to say about Loschmidt's Paradox or David Layzer, feel free to comment in that direction.


Re: Layzer vs. Laplace's Demon

Postby hyksos on January 1st, 2017, 3:00 pm 

In the above example, Time is flowing from Top towards the Bottom. So in a Block Model we see no motion.. it appears static. There is no Motion in a Block Model. However.. the information content is indeed still propagating (shifting/moving) on the Temporal Axis.


Let me be a little more specific here. I said NOTHING about block-model universes. This is a consequence of the way a cellular automaton algorithm actually functions.


the information content is indeed still propagating

It indeed is not. The CA rules will determine the state of a cell in the next time step. I will show the fricking source code!
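
Here is a sketch of that source code -- my own throwaway Python rendering of the standard B3/S23 rule, not anything canonical. Notice there is no "copy this value over there" operation anywhere in it: each generation is computed fresh from neighbour counts, and a glider only "moves" in the sense that the rule re-creates the same shape one cell over.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life (B3/S23) over a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)    # birth on 3, survival on 2 or 3
    }

# A glider.  After 4 generations the rule has simply re-created the same shape
# shifted one cell diagonally; nothing was picked up and carried anywhere.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

print(state == {(x + 1, y + 1) for (x, y) in glider})    # True
```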

You are looking at one of the simplest Turing-complete rules known to man. The rule is so stripped down that you are way below the level of logic gates like AND, OR, NOT, NAND, etc. These highly simplified grids can only carry out "computation" by stringing together millions of cells in a very specific way. One particular instruction in any useful suite of "computation" is the act of copying a value to another location in RAM. Does Conway's GOL copy numeric values from one RAM slot to another, natively? It certainly does not. Conway's GOL does not even natively perform a fricking NOT-gate.

A fricking NOT-gate!!

It takes a huge bunch of orchestrated glider guns and such to even realize a NOT-gate in Conway. While entirely possible -- it is not very 'natural'.


Re: Layzer vs. Laplace's Demon

Postby Dave_Oblad on January 2nd, 2017, 3:59 am 

Hi Hyksos,

I don't disagree with you. The logic in Conway's GOL is simple voting logic.

But are we restricted to simple voting logic? Can an Automaton be generated using Cells as Logic Gates. (Yes.. it can.) Is it a requirement that such Cells only be connected to closest Neighbors? (No.. it's not.) We are free to make up any rules we want and any level of complexity for the number of Inputs and Outputs to a given Cell, how it connects to even distant neighbors, the Rules of Logic.. including mixed And, Nand, Or, Nor, or even Exclusive Nor Logic.. and any number of Dimensions. Then throw it all into a simulator and see what arises.

Automatons are not restricted to Conway's GOL. I know I'm preaching to the choir here, so please forgive me.

But it's true that Switches move information from place to place. That's the basics of all Computers. An Accumulator is a Nexus Point inside a Computer.. a short cut to simplify the hardware requirements for moving information around. Any argument that Information is not Moving in an Automaton would be dependent on the type of Automaton. You yourself expressed a statement that an Automaton can exist on a single substrate, with the right rules. I only assumed such an Automaton wasn't a static one.

I wouldn't bring all this up.. except this thread doesn't seem to be limited to Conway's GOL rule set.

Highest regards,
Dave :^)


Re: Layzer vs. Laplace's Demon

Postby hyksos on January 8th, 2017, 4:48 pm 

So, yes, of course information has increased. Same thing with a beaker of cream and a cup of black coffee. Fewer bits needed to say where the coffee and the cream molecules are. Stir cream into the coffee and, voila, I've "created" more information.


No, I think you have merely mixed the cream and the coffee. That is to say: by stirring cream into coffee, you have neither destroyed information nor created any new information. And I would surmise that the entire situation could be reliably simulated using classical physics alone.

A structured system has less information. Neatly ordering something, you need far less information to describe where everything is and where it's going.


This is correct; I won't deny it. But the argument goes that the galaxies have not disintegrated into a featureless gas at equilibrium.

You could reply to that observation by saying I am engaging in "selection bias" and that all we need to do is wait and that will happen eventually. But I have to wait until you actually adopt that stance first.


Re: Layzer vs. Laplace's Demon

Postby hyksos on January 8th, 2017, 4:49 pm 

Dave_Oblad » January 2nd, 2017, 11:59 am wrote:I don't disagree with you. The logic in Conway's GOL is simple voting logic.

But are we restricted to simple voting logic? Can an Automaton be generated using Cells as Logic Gates. (Yes.. it can.) Is it a requirement that such Cells only be connected to closest Neighbors? (No.. it's not.) We are free to make up any rules we want and any level of complexity for the number of Inputs and Outputs to a given Cell, how it connects to even distant neighbors, the Rules of Logic.. including mixed And, Nand, Or, Nor, or even Exclusive Nor Logic.. and any number of Dimensions. Then throw it all into a simulator and see what arises.

Automatons are not restricted to Conway's GOL. I know I'm preaching to the choir here, so please forgive me.

But it's true that Switches move information from place to place. That's the basics of all Computers. An Accumulator is a Nexus Point inside a Computer.. a short cut to simplify the hardware requirements for moving information around. Any argument that Information is not Moving in an Automaton would be dependent on the type of Automaton. You yourself expressed a statement that an Automaton can exist on a single substrate, with the right rules. I only assumed such an Automaton wasn't a static one.

I wouldn't bring all this up.. except this thread doesn't seem to be limited to Conway's GOL rule set.

My reply to this is contained in another thread

viewtopic.php?f=19&t=32176


Re: Layzer vs. Laplace's Demon

Postby neuro on January 9th, 2017, 9:41 am 

hyksos » January 8th, 2017, 9:48 pm wrote:
So, yes, of course information has increased. Same thing with a beaker of cream and a cup of black coffee. Fewer bits needed to say where the coffee and the cream molecules are. Stir cream into the coffee and, voila, I've "created" more information.


No I think you have merely mixed the cream and coffee. That is to say for cream into coffee, you have neither destroyed information nor created any new information. And I would surmise that the entire situation could be reliably simulated using classical physics only.


I may be wrong, but when you mix cream and coffee you get a system which has higher information entropy (and actually has higher entropy): this means that a large amount of missing information is needed to determine the microstate given the macrostate.
However, again if I am not wrong, the actual (useful) information the system gives is defined as the decrease in information entropy (negentropy), so that the cream separated from the coffee actually offers you more information (the amount of "missing" information needed to determine the microstate has decreased) than a purely random situation (maximum entropy).

In other words, the information content of a system (information entropy) tells you how much information is added when you determine the microstate (i.e. the position of each particle), with respect to the knowledge of the macrostate only:
- high info content (added by knowing the location of each particle) if each cream and coffee particle has the same probability of being at any place (macrostate knowledge)
- lower information content (added by knowing the location of each particle) if cream particles are almost completely confined to the top and coffee particles to the rest of the cup (macrostate knowledge).
But this means that if you "need" less info to know the microstate when cream and coffee are separated, you already have more information on the system (microstates) when they are separated than when they are mixed up.

Increased entropy --> increased "missing" info --> decreased available information on the system.
Decreased entropy --> decreased "missing" info --> increased available information on the system.
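
A toy calculation of the same point (the numbers -- 100 cells in the cup, 25 "cream" molecules -- are invented): the missing information is just log2 of how many microstates are compatible with what you know at the macro level.

```python
from math import comb, log2

cells, cream = 100, 25

mixed = comb(cells, cream)        # cream could occupy any 25 of the 100 cells
separated = comb(30, cream)       # cream confined to the 30 cells nearest the top

print("missing info, mixed:    ", round(log2(mixed), 1), "bits")       # ~77.7 bits
print("missing info, separated:", round(log2(separated), 1), "bits")   # ~17.1 bits

# Stirring raises the missing information by about 60 bits: higher entropy,
# less available information about the microstate, exactly as stated above.
```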


Re: Layzer vs. Laplace's Demon

Postby Braininvat on January 9th, 2017, 4:21 pm 

I put "creating" in quotes, re information, to suggest a touch of irony...that information is a description of how we experience a system, rather than a real thing. In the long run, the U grows more disordered and requires more bits of information to describe its microstates. Which is why less information is available to an observer. No one cares to know all the microstates of a thin cloud of gas. Nor could any feasible amount of observation supply such information. The availability of information is inversely proportional to the amount of information that is potentially out there, that is to say the number of disordered microstates. An immaculate Porsche 911 has few disordered microstates, ergo it has high availability of information that will serve as complete description. Vaporize it and let it disperse and you have the reverse. Information is about classification and description for a sentient being, it has no discernible ontological status in itself. Information isn't created or destroyed, it is found out.


Re: Layzer vs. Laplace's Demon

Postby hyksos on January 16th, 2017, 12:21 am 

I'm not entirely comfortable using the word "information" to refer to something that I would normally call "minimum description length" or even (in some cases) "Kolmogorov Complexity".

https://en.wikipedia.org/wiki/Entropy_(information_theory)


Re: Layzer vs. Laplace's Demon

Postby neuro on March 5th, 2017, 8:56 am 

If I am not wrong, the point of using the term "information" in connection with "minimum description length" is that the information CONTENT of a set of data is the minimum number of bits needed to reproduce all the information contained in that set. The rest of the data are redundant, in terms of information, because they arise from correlations and can therefore be recovered from the "minimum description".


Re: Layzer vs. Laplace's Demon

Postby hyksos on March 6th, 2017, 12:28 am 

Okay if we are going to hold hard-and-fast to the conclusion that all the fundamental findings of statistical mechanics DO NOT refer to any kind of objective information found in the physical system under study, but

(hmm... this sentence is going to turn into a run-on, so let me re-parse what I'm trying to communicate).

Some people have joined this thread and decided to preach at me that all the fundamental theorems and laws of statistical mechanics only refer to observer-side information... or "information maps" of a system. As one person has pointed out "nobody cares about all the microstates in a thin cloud."

Well before I say anything else, we need to get some sort of baseline of communication going.

Entropy is not the compressibility of a description of a system (say, the degree to which you could compress an image made of pixel data). Entropy is, rather, (the logarithm of) the number of microstates which could justifiably be underlying the same macro-scale state. If there is a gigantic set of such microstates corresponding to "cloud in equilibrium", then cloud-in-equilibrium has high entropy. If there is only a tiny set of microstates which could correspond to "cloud moving upwards", then cloud-moving-upwards has low entropy.
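
Written out, this is just Boltzmann's standard definition, with the thread's earlier coin example plugged in as the worked case (the binomial counts are the only numbers):

```latex
S = k_B \ln \Omega, \qquad
\Omega = \text{number of microstates compatible with the macrostate}

\Omega(\text{5 heads of 10}) = \binom{10}{5} = 252 \quad \text{(high entropy)}, \qquad
\Omega(\text{10 heads of 10}) = \binom{10}{10} = 1 \quad \text{(zero entropy)}
```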

The problem here is about language. Throughout the 20th century, the word "information" has been used in many different contexts: information theory, the social sciences, espionage, statistical mechanics, and even recent quantum mechanics, as in the "entropy of a black hole" or the holographic principle. It is only natural that these different versions of "information" get conflated. That is likely what is happening in this thread.


Re: Layzer vs. Laplace's Demon

Postby Braininvat on March 6th, 2017, 10:22 am 

Agree we need to nail down a single workable definition of information. I don't think anyone was "preaching" however. The idea of information as a description sufficient to reproduce a system can certainly lead to false equivalences, I'll go with that. Some assembly required....back later.


Re: Layzer vs. Laplace's Demon

Postby Braininvat on March 6th, 2017, 12:31 pm 

Maybe we could go with the simplest physical definition: when information about a system decreases, entropy increases (a mundane example would be generating heat). As microstates of the system become unavailable to the observer (it's just a thin cloud of gas; I can't tell you where any atom is located), entropy increases.

We can form a union of thermodynamic entropy and information theory entropy by saying, "Entropy is simply that portion of the (classical) physical information contained in a system of interest whose identity (as opposed to amount) is unknown - from the POV of an observer."


