Before moving on to solutions, I think it’s important to elaborate on the problem of epiphenomenalism, because what you find offered as ‘solutions’ to the problem is often an appeal (in one way or another) to extreme complexity, to nonlinear physical systems, or to equating physical interactions with information, followed by an appeal to downward causation. So when you’re faced with an argument that runs along the lines of “the amount of information is astronomical” or “nonlinear systems are more than the sum of their parts”, I think we should be wary, because at their heart those systems must still follow the rules laid out for any phenomenon which supervenes on the interaction of matter at the classical level. As Christof Koch put it,

Although brains obey quantum mechanics, they do not seem to exploit any of its special features. …

Two key biophysical operations underlie information processing in the brain: chemical transmission across the synaptic cleft, and the generation of action potentials. These both involve thousands of ions and neurotransmitter molecules, coupled by diffusion or by the membrane potential that extends across tens of micrometers. Both processes will destroy any coherent quantum states. Thus, spiking neurons can only receive and send classical, rather than quantum, information.

Ref: https://www.nature.com/articles/440611a (sorry, it's behind a paywall)

I would accept that as an axiom which doesn’t need to be challenged; it’s very widely accepted in neuroscience. This means that, per the standard paradigm of mind, classical physics ‘is to blame’ for the emergence of phenomenal consciousness in a brain. Obviously, classical physics does not have a branch labeled “subjective experience mechanics”, but that’s essentially what neuroscience is telling us there must be: we should somehow expect to find a classical physics of phenomenal consciousness – a higher-level science which reduces to the interactions of neurons through bridge laws.

No one believes that, which is why nonreductive explanations have arisen and are commonly appealed to instead. More on that another day.

The point, however, is that classical mechanics is used to model the brain. Neuroscience builds computational models using numerical analysis, just as engineers such as myself do. In engineering, we typically call these models Finite Element Analysis (FEA), and they are used for structural, fluid, thermal, heat-transfer, electromechanical, and all sorts of other analyses. In neuroscience, they don’t call it FEA; they call it “compartment models”. The original compartment model was the Hodgkin & Huxley model of the 1950s.

What all these models have in common is their adherence to the separability of classical mechanics (classical physics). I provided an explanation of separability here:

viewtopic.php?p=276121

Because the interactions between neurons and parts thereof are characterized by some gross aggregate of molecules, we can ignore the fine detail at the molecular level, and we can even treat the bulk properties at various points in a neuron as uniform without any meaningful loss of fidelity. There are many programs used in neuroscience for doing this kind of analysis; the two most common ones I’ve seen are called “Neuron” and “Genesis”. There’s a very detailed book online that goes through how Genesis was developed. In Chapter 2, it says:

When constructing detailed neuronal models that explicitly consider all of the potential complexities of a cell, the increasingly standard approach is to divide the neuron into a finite number of interconnected anatomical compartments. … Each compartment is then modeled with equations describing an equivalent electrical circuit (Rall 1959) [similar to Hodgkin & Huxley models]. With the appropriate differential equations for each compartment, we can model the behavior of each compartment as well as its interactions with neighboring compartments.
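The equivalent-circuit scheme that quote describes can be sketched in a few lines: a chain of passive compartments, each one updated from nothing but its own state and its two immediate neighbors. (The parameters below are illustrative, not taken from any real cell.)

```python
import numpy as np

# Toy passive cable: N compartments, each an RC circuit coupled to its
# immediate neighbors by an axial conductance. A bare-bones version of
# the compartmental equivalent-circuit scheme; illustrative parameters.

N = 50          # number of compartments
dt = 0.01       # time step, ms
C = 1.0         # membrane capacitance per compartment
g_leak = 0.1    # leak conductance toward rest (rest taken as 0 mV)
g_axial = 2.0   # coupling conductance between adjacent compartments

V = np.zeros(N)                 # voltage deviation from rest
for step in range(2000):
    I_inj = np.zeros(N)
    I_inj[0] = 1.0              # constant current into compartment 0
    # Axial current: each compartment sees ONLY its immediate neighbors.
    left  = np.roll(V, 1);  left[0] = V[0]      # reflecting boundaries
    right = np.roll(V, -1); right[-1] = V[-1]
    I_axial = g_axial * (left - V) + g_axial * (right - V)
    V += dt * (I_inj - g_leak * V + I_axial) / C

# Voltage decays with distance from the injection site.
print(V[0], V[-1])
```

Every update is strictly local, yet the familiar cable behavior – voltage attenuating with distance from the injection site – falls out of it. That locality is the whole point of the argument that follows.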

The method of breaking down large, highly complex, massively parallel “information” (i.e., physical interaction) systems into finite systems that can be modeled on digital computers (think “Turing machine”) is a standard approach to modeling the brain today. The Blue Brain Project will not be using Genesis; it will use the other very common finite compartment-method program, “Neuron”. The plan is to model larger and larger sections of brain in order to better understand the interactions between neurons, the various types of neurons, and all the surrounding material, so that predictions can be made and interactions understood.

So what’s the importance of any of this? And what does this have to do with epiphenomenalism? And why do you care?

Any system that exhibits a phenomenon which can be characterized using classical physics can be modeled, using this type of finite method, to a degree of accuracy sufficient to characterize all objectively observable phenomena. In the case of brains, we call it a compartment method; in the case of fluid mechanics, we call it computational fluid dynamics; and in the case of just about any other field of classical physics, we call it finite element analysis. These systems are being characterized today using digital computational methods based on the separability of classical mechanics, and the expectation is that those methods suffice regardless of how complex or how nonlinear the physical systems are, and no matter how we want to equate information to those physical states. In other words, any phenomenon we wish to understand in detail, we should be able to model on a digital computer – or even a Turing machine, for that matter. There’s no extra physics created by nonlinear interactions, parallel architecture, etc.

Now consider what that means for phenomenal consciousness. It basically forces one to accept epiphenomenalism. If you can model a person’s brain (in principle – we can’t do it today) and produce the human behaviors we are out to characterize using digital computers, then there is no space left over for the gross aggregate of brain cells to cause a change at the local level (i.e., between two adjacent neuron compartments). The two adjacent compartments interact, and that interaction propagates through the brain at a rate which is a function of the means of propagation. If that’s hard to understand, remember that the digital computer is what’s modeling the system, and it only has the ability for neighboring switches to interact. If we have 10 billion massively parallel switches interacting, we might believe the system is so complex that we can’t possibly know how it will act, and therefore can’t know whether the system could have a physical state that somehow influences the lower level (i.e., downward causation). But if we have 1 billion people, each watching 10 switches to make sure they all interact in a way consistent with the local interaction between switches, I’m sure all 1 billion people will tell you they saw nothing but local interactions between the switches.

If the entire operation of a brain can be reduced to local compartments, modeled by computer switches, each of which interacts only with its immediate neighbors such that every switch (transistor) operates deterministically, just like dominoes falling over, then there’s no room for the higher-level physical state to influence anything. And if the higher-level physical state says, “I’m in pain” or “I see the color red”, then you can be sure it says that because the phenomenon is determined, in whole and without remainder, by the local physical interactions between the computer’s parts. So if our paradigm of mind is correct, it predicts epiphenomenalism, and it predicts that we can’t know anything about our experiences. If that conclusion weren’t so obviously flawed, neuroscientists, philosophers of mind, physicists, etc. would have agreed on some explanation of “the hard problem” long ago.