Epiphenomenalism: more questions


Re: Epiphenomenalism: more questions

Postby Dave_C on February 6th, 2018, 10:48 pm 

Hi Asp,
Before moving on to solutions, I think it’s important to elaborate on the problem of epiphenomenalism, because what you find offered as ‘solutions’ to the problem is often an appeal (in one way or another) to extreme complexity, to nonlinear physical systems, or to equating physical interactions with information, followed by an appeal to downward causation. So when you’re faced with an argument that runs along the lines of “the amount of information is astronomical” or “nonlinear systems are more than the sum of the parts”, I think we should be wary, because at their heart those systems must still follow the rules laid out for any phenomenon which supervenes on the interaction of matter at the classical level. As Christof Koch put it,
Although brains obey quantum mechanics, they do not seem to exploit any of its special features. …

Two key biophysical operations underlie information processing in the brain: chemical transmission across the synaptic cleft, and the generation of action potentials. These both involve thousands of ions and neurotransmitter molecules, coupled by diffusion or by the membrane potential that extends across tens of micrometers. Both processes will destroy any coherent quantum states. Thus, spiking neurons can only receive and send classical, rather than quantum, information.

Ref: https://www.nature.com/articles/440611a (sorry, it's behind a paywall)

I would accept that as an axiom which doesn’t need to be challenged; it’s very widely accepted in neuroscience. So this means that, per the standard paradigm of mind, classical physics ‘is to blame’ for the emergence of phenomenal consciousness in a brain. Obviously, classical physics does not have a branch labeled “subjective experience mechanics”, but that’s essentially what neuroscience is telling us there must be. We should expect to find a classical physics of phenomenal consciousness somehow: a higher-level science which reduces to the interactions of neurons through bridge laws.

No one believes that, which is why nonreductive explanations have arisen and are commonly appealed to for an explanation. More on that another day.

The point, however, is that classical mechanics is what’s used to model the brain. Neuroscience builds computational models using numerical analysis, just as engineers such as myself do. In engineering we typically call these models Finite Element Analysis (FEA), and they are used for structural, fluid, thermal, heat transfer, electromechanical and all sorts of other analysis. In neuroscience they don’t call it FEA; they call it “compartment models”. The original compartment model was the Hodgkin & Huxley model of the 1950s.
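To make that concrete, here is a minimal sketch of a single Hodgkin–Huxley-style compartment stepped forward with forward Euler. The rate functions and conductances are the standard textbook values; the time step, run length and injected current are my own illustrative choices, and none of this is code taken from NEURON, GENESIS or any other simulator.

[code]
# Minimal sketch: one Hodgkin-Huxley "compartment" integrated with forward Euler.
# Standard textbook parameters; dt, duration and injected current are illustrative.
import math

# Membrane capacitance (uF/cm^2), maximal conductances (mS/cm^2), reversal potentials (mV)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting membrane potential and gating states
dt, t_end, I_ext = 0.01, 50.0, 10.0   # ms, ms, uA/cm^2

t = 0.0
while t < t_end:
    # Ionic currents determined entirely by the compartment's own state
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Derivatives from the current state, then a forward-Euler step
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    V, m, h, n = V + dt * dV, m + dt * dm, h + dt * dh, n + dt * dn
    t += dt

print(f"membrane potential after {t_end} ms: {V:.1f} mV")
[/code]

The point of the sketch isn’t biological accuracy; it’s that the whole calculation is just ordinary classical circuit equations stepped forward in time on a digital computer.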

What all these models have in common is their adherence to the separability of classical mechanics (classical physics). I provided an explanation of separability here:
viewtopic.php?p=276121

Because the interactions between neurons and parts thereof are characterized by some gross aggregate of molecules, we can ignore the fine detail at the molecular level, and we can even consider the bulk properties at various points in a neuron to all be the same without any loss of fidelity. There are many programs used in neuroscience for doing this kind of analysis; the two most common ones I’ve seen are called “Neuron” and “Genesis”. There’s a very detailed book online that goes through how Genesis was developed. Chapter 2 says:
When constructing detailed neuronal models that explicitly consider all of the potential complexities of a cell, the increasingly standard approach is to divide the neuron into a finite number of interconnected anatomical compartments. … Each compartment is then modeled with equations describing an equivalent electrical circuit (Rall 1959) [similar to Hodgkin & Huxley models]. With the appropriate differential equations for each compartment, we can model the behavior of each compartment as well as its interactions with neighboring compartments.
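To illustrate what that quoted passage describes, here’s a toy version of a compartmentalized cell: a passive (non-spiking) chain of RC compartments, where each compartment’s update depends only on its own state and its two immediate neighbors through an axial conductance. All the numbers are made up for illustration; this is my own sketch of the equivalent-circuit idea, not code from GENESIS or NEURON.

[code]
# Toy compartment model: a passive cable divided into N compartments, each an RC
# circuit coupled only to its immediate neighbours. All values are illustrative.
N = 100              # number of compartments
C = 1.0              # membrane capacitance per compartment (arbitrary units)
g_leak = 0.1         # leak conductance back toward the resting potential
g_axial = 0.5        # coupling conductance between adjacent compartments
E_rest = -65.0       # resting potential (mV)
dt = 0.01            # time step (ms)

V = [E_rest] * N
V[0] = -20.0         # depolarise one end and watch the disturbance spread

for step in range(5000):
    V_new = V[:]
    for i in range(N):
        left  = V[i - 1] if i > 0 else V[i]       # sealed ends: no axial current
        right = V[i + 1] if i < N - 1 else V[i]
        I_leak  = g_leak * (E_rest - V[i])
        I_axial = g_axial * (left - V[i]) + g_axial * (right - V[i])
        # The update for compartment i uses only compartments i-1, i, i+1: strictly local.
        V_new[i] = V[i] + dt * (I_leak + I_axial) / C
    V = V_new

print(f"V[0]={V[0]:.1f} mV, V[50]={V[50]:.1f} mV, V[99]={V[99]:.1f} mV")
[/code]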


The method of breaking large, highly complex, massively parallel “information” (i.e., physical interaction) systems down into finite systems that can be modeled on digital computers (think “Turing machine”) is a standard approach to modeling the brain today. The Blue Brain Project will not be using Genesis, but it will use the other very common finite-compartment program, “Neuron”. The plan is to model larger and larger sections of brain in order to better understand the interactions between neurons, the various types of neurons and all the surrounding material, so that predictions can be made and interactions understood.

So what’s the importance of any of this? And what does this have to do with epiphenomenalism? And why do you care?

Any system that exhibits a phenomenon which can be characterized using classical physics can be modeled, to a degree of accuracy sufficient to capture all objectively observable phenomena, using this type of finite method. In the case of brains we call it a compartment method, in the case of fluid mechanics we call it computational fluid dynamics, and in just about any other field of classical physics we call it finite element analysis. These systems are being characterized today using digital computers, and the expectation is that, regardless of how complex or nonlinear the physical systems are, and no matter how we want to equate information with those physical states, science can use digital computational methods, based on the separability of classical mechanics, to model them. In other words, any phenomenon we wish to understand in detail we should be able to model using digital computers, or even a Turing machine for that matter. There’s no extra physics created by nonlinear interactions, parallel architecture, etc.

Now consider what that means for phenomenal consciousness. It basically forces one to accept epiphenomenalism. If you can model a person’s brain (in principle; we can’t do it today) and produce, using digital computers, the human behaviors we are out to characterize, then there is no space left over for the gross aggregate of brain cells to cause a change at the local level (i.e., between two adjacent neuron compartments). Two adjacent compartments interact, and that interaction propagates through the brain at a rate which is a function of the means of propagation. If that’s hard to understand, remember that the digital computer is what’s modeling the system, and it only allows neighboring switches to interact. If we have 10 billion massively parallel switches interacting, we might believe the system is so complex that we can’t possibly know how it will act, and therefore can’t know whether the system could have a physical state that influences the lower level in some way (i.e., downward causation). But if we have 1 billion people, each watching 10 switches to make sure they all interact in a way consistent with the local interactions between switches, I’m sure all 1 billion people will tell you they saw nothing but local interactions between the switches.
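The “people watching switches” argument can be put in toy form: evolve a chain of units by a purely local rule, then have an “observer” re-derive each unit’s next state from its three-cell neighborhood alone and confirm the two always agree. The rule and the numbers below are arbitrary illustrative assumptions, not a brain model; the point is only that when the dynamics are local by construction, no observer ever finds anything but local interactions to report.

[code]
# Toy version of the "observers watching local switches" argument.
import random

def local_rule(left, centre, right):
    # Any fixed function of the neighbourhood will do for the point being made.
    return 0.25 * left + 0.5 * centre + 0.25 * right

def step(state):
    n = len(state)
    return [local_rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

random.seed(0)
state = [random.random() for _ in range(1000)]

for _ in range(100):
    nxt = step(state)
    # Each "observer" checks one unit using nothing but its local neighbourhood.
    n = len(state)
    for i in range(n):
        expected = local_rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
        assert nxt[i] == expected, "non-local influence detected"
    state = nxt

print("every update was fully accounted for by local interactions")
[/code]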

If the entire operation of a brain can be reduced to local compartments, modeled by computer switches, each of which interacts only with its immediate neighbors such that every switch (transistor) operates deterministically, just like dominoes falling over, then there’s no room for the higher-level physical state to influence anything. And if the higher-level physical state says “I’m in pain” or “I see the color red”, you can be sure it says that because the phenomenon is determined, in whole and without remainder, by the local physical interactions between the computer’s parts. So if our paradigm of mind is correct, it predicts epiphenomenalism, and it predicts that we can’t know anything about our experiences. If that weren’t so obviously flawed, neuroscientists, philosophers of mind, physicists, etc. would have agreed on some explanation of “the hard problem” long ago.


Re: Epiphenomenalism: more questions

Postby Asparagus on February 7th, 2018, 9:50 am 

Dave_C
Thanks for the info. The information on separability is new to me, so it'll take me a while to make sense of it. Does separability apply to time as well as space? If you answered that in the background info you gave, just ignore the question. I'll get to it. :)

The brand of determinism I'm most familiar with is Schopenhauer-esque and, among some, goes by the name actualism. It's essentially the recognition that, a priori, an event can only have one outcome. That, along with an assumed interrelatedness of all the parts of the universe, creates an image of a monolithic universal Event in which the concept of choice doesn't seem to make any sense. Actualism is obviously constructed entirely of a priori parts, and so is in limbo pending some reason to believe our a priori intuitions are telling us something about the world (something along the lines of Kant).

To what extent is separability something more than a priori?

