Incomplete Nature - Part 17

THE COMPLEMENT TO COMPUTATION.

Although there is little debate about whether the operations of human thinking and feeling involve physical processes taking place in the brain, the question that divides the field is whether the physical account as framed in terms of computation, even in its widest sense, is sufficient to explain mental experience. The computational (machine) analogy has been undeniably useful. Besides providing a methodology for formulating more precise models of cognitive processes that offer testable empirical predictions, it offered a clue to the resolution of the classical paradox concerning the bifurcation of the mental and physical realms. Computation assumes that information processes correspond to the manipulation of specific physical tokens. As a model of cognition, it implies that the processes of perceiving and reasoning about the world are also constituted by algorithms "instantiated" as physical processes, whether in symbolic or subsymbolic form. And if mental processes can be fully understood in terms of the organized manipulation of physical tokens, then the operations of thought will inherit whatever physical consequences these manipulations might be capable of. Simply stated: If computation is a mechanical process and thought is computation, then thoughts can be physically efficacious. The mind/body problem disappears. Or does it?

I take this to be an invaluable insight. However, there are two problems that it leaves us with. First, it does not in itself commit us to any specific theory concerning the nature of the physical process that constitutes the representational and sentient capacities of the brain. It applies quite generally to any account of cognition that is based on a physical process, not merely what is usually meant by computation. If what we mean by computation is any physical process used for sensing, representing information, performing inferences, and controlling effectors, then it would seem that to deny that brains compute is to also deny the possibility of any physical account of mental processes. If all that was meant was that thought is a physically embodied process, there would be little controversy, and little substance to the claim. But the concept of computation, even in this most broadly stated form, is not as generic as this. Implicit in this account is the assumption that computation is constituted by a specific physical process (such as a volley of neuronal action potentials or the changes in connection weights in a neural net) that corresponds to a specific teleological process (such as solving a mathematical operation or categorizing a physical pattern).

This assumption is the basis for a second, even more serious problem. What constitutes correspondence? And what is the difference between just any physical correspondence (the swirling of the wind and the rotation of a merry-go-round) and one that involves representation? In all computational theories-including so-called neural net theories-representation is understood in terms of a rule-governed mapping of specific extrinsic properties or meanings to correspondingly specific machine states. Although this may at first sound like an innocuous claim, it actually begs the fundamental question: How is representation constituted? Simply asserting the existence of a mapping between properties in the world and properties of a mechanistic process presumed to take place in the brain implicitly frames the relationship in dualistic terms. There are two domains: the physical processes, and whatever constitutes the representational relationships between them. Everything depends on the nature of the correspondence, and specifically, what makes it different from either an isomorphic or an arbitrary one-to-one mapping between components or phases of two distinct processes. Computation is merely a way of describing (or prescribing) machine processes. It is not an intrinsic property of any given machine process.

There is an instructive parallel between the logic of computation and the logic of natural selection, as we described it in the previous chapter. As we've seen, although natural selection in any form requires processes that generate, reconstitute, and reproduce these selectively favored traits, the specific details of these processes are irrelevant to the logic and the efficacy of the process. Natural selection is in this sense agnostic with respect to how these adapted traits are physically generated. It is an abstraction about certain kinds of processes and their consequences. Similarly, the logic of computation is defined in a way that is agnostic to how (or whether) representation, inference, and interaction with the world are implemented. These issues are bracketed from consideration in order to define the notion of an algorithm: an abstract description of a machine process that corresponds to a description of or instructions for performing a semiotic process, such as solving a mathematical equation or organizing a list. This agnosticism is the source of the power of computation. As Alan Turing first demonstrated, a generic machine implementation can be found for any reasoning process (or semiotic process in general) that can be precisely described in terms of a token-manipulation process. This generic description, an algorithm, and its generic mechanistic assumptions, guarantee its validity in any of an infinite number of possible physically manifested forms.
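
To make this substrate-agnosticism concrete, consider a minimal sketch of Turing's insight, written here in Python purely for illustration. The machine below knows nothing about arithmetic or meaning; it only rewrites tokens according to a lookup table. The names (run_turing_machine, flip_bits) and the particular three-entry rule table, which inverts every binary digit on its tape, are invented for this example, yet the same generic read-write-move loop could host any precisely specified algorithm.

```python
# A generic token-manipulation machine: the rule table, not the
# mechanism, carries the "semiotic" content. Rules map
# (state, symbol) -> (new_symbol, head_move, new_state).

def run_turing_machine(tape, rules, state="start", pos=0):
    tape = list(tape)
    while state != "halt":
        new_symbol, move, state = rules[(state, tape[pos])]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
        if pos == len(tape):          # extend the tape with blanks
            tape.append("_")
    return "".join(tape)

# An invented rule table that flips every bit it scans.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110_", flip_bits))  # -> 1001__
```

Nothing in the loop depends on what the tokens mean; the mapping from token patterns to "bit flipping" exists only in the description we impose on it.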

As we discovered in the case of biological evolution, however, there is only a rather restricted range of dynamics that can actually fulfill the criteria required for natural selection to occur. Analogously, in the case of mental processes, we will find that only a very restricted form of dynamics is capable of constituting mental representation. So it cannot be merely assumed to exist by fiat without ignoring what will likely turn out to be important restrictions. Further, without representation, computation is just machine operation, and without human intentionality, machine operation is just physical change. So, while at one time it was possible to argue that the computational conception of the mind/brain relationship was the only viable game in town, it now appears that it never really addressed the problem at all, but only assumed that it didn't matter.

Whether described in terms of machine code, neural nets, or symbol processing, computation is an idealized physical process in the sense that the thermodynamic details of the process can be treated as irrelevant. In most cases, these physical details must be kept from interfering with the state-to-state transitions being interpreted as computations. And because it is an otherwise inanimate mechanism, there must also be a steady supply of energy to keep the computational process going. Any microscopic fluctuations that might otherwise blur the distinction between different states assigned a representational value must also be kept below some critical threshold. This insulation from thermodynamic unpredictability is a fundamental design principle for all forms of mechanism, not just computing devices. In other words, we construct our wrenches, clocks, motors, musical instruments, and computers in such a way that they can only assume a certain restricted number of macro states, and we use inflexible metals, regularized structures, bearings, oil, and numerous thresholds for interactions, in order to ensure that incidental thermodynamic effects are minimized. In this way, only those changes of state described as functional can be favored.

A comparison with what we have learned in previous chapters about living processes can begin to remedy some of the limitations of this view. Although living processes also must be maintained within quite narrow operating conditions, the role that thermodynamic factors play in this process is basically the inverse of its role in the design of tools and the mechanisms we use for computation. The constituents of organisms are largely malleable, only semi-regular, and are constantly changing, breaking down, and being replaced. More important, the regularity achieved in living processes is not so much the result of using materials that intrinsically resist modification, or using component interactions that are largely insensitive to thermodynamic fluctuation, but rather due to using thermodynamic chemical processes to generate regularities; that is, via morphodynamic organization.

Generally speaking, these two distinct design strategies might be contrasted as regularity achieved by means of thermodynamic isolation in machines, and regularity achieved by means of thermodynamic openness in life. In addition, there is a fundamental inversion with respect to the source of the constraints that determine the critical dynamical properties. In machines, the critical constraints are imposed extrinsically, from the top down, so to speak, to defend against the influence of lower-level thermodynamic effects. In life, the critical constraints are generated intrinsically and maintained by taking advantage of the amplification of lower-level thermodynamic and morphodynamic effects. The teleological features of machine functions are imposed from outside, a product of human intentionality. The teleodynamic features of living processes emerge intrinsically and autonomously.

But there is more than just an intrinsic/extrinsic distinction between computation and cognition. Kant's distinction between a mere mechanism and an organism applies equally well to mental processes. To rephrase Kant's definition of organism as a definition of mind, we could describe mind as possessing a self-amplifying, self-differentiating, self-rectifying formative power. A computation exhibits merely motive power but a mind also exhibits formative power. Or to put this in emergent dynamical terms: computation only transfers extrinsically imposed constraints from substrate to substrate, while cognition (semiosis) generates intrinsic constraints that have a capacity to propagate and self-organize. The difference between computation and mind is a difference in the source of these formal properties. In computation, the critical formal properties are descriptive distinctions based on selected features of a given mechanism. In cognition, they are distinctive regularities which are generated by recursive dynamics, and which progressively amplify and propagate constraints to other regions of the nervous system.

Like the mechanistic conception of life, the computational theory of mind, even in its most general form, assumes implicitly that the physical properties that delineate an information-bearing unit are only extrinsically defined glosses of otherwise simple physical mechanics. It also assumes that the machine operations that represent semiotic operations of any given type (e.g., face recognition or weather prediction) are also merely externally imposed re-descriptions of some extrinsically assessed correspondence relationship. Thus the mapping from computer operation to some interpreted physical process is nothing more than an imposed descriptive gloss. In the same way that Shannonian information, prior to interpretation, is only potentially information about something, the operations of a computing device before they are assigned an interpretation are also just potential computational operations. What we need is an account where the relevant interpretive process is intrinsic, not extrinsic.

COMPUTING WITH MEAT.

In a now-famous very short story about explorers from an alien planet investigating the source of a radio broadcast from Earth, the science fiction writer Terry Bisson portrays a conversation they have about the humans they've discovered. Taking their dialogue from a midpoint in the discussion: "They're meat all the way through."

"No brain?"

"Oh, there is a brain all right. It's just that the brain is made out of meat!"

"So . . . what does the thinking?"

"You're not understanding, are you? The brain does the thinking. The meat."

"Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?"

"OmiG.o.d. You're serious then. They're made out of meat."4 With exaggerated irony, Bisson points out that we humans are pretty sloppy computing devices. It may be tongue-in-cheek, but it hints at an important challenge to the computer theory of mind: the design principles of brains just don't compute! Consider the basic building blocks of brains compared to those necessary to build devices with the formal properties that are required to produce computations. Neurons are living cells. These cells have been adapted, over the course of evolution, to use some of their otherwise generic metabolic and intercellular communication capabilities for the special purpose of conveying point-to-point signals between one another over long (in cellular terms) distances. Because they were not "designed" for this function, but recruited during the course of evolution from cells that once had more generic roles to play, they tend to be a bit unruly, noisy, and only modestly reliable transducers and conveyors of signals. Additionally, they still have to perform a great many non-signal transmission tasks that are necessary for any cell to stay alive, and this too constrains their capacity to perform purely signaling functions.

But how unruly? It would probably not be too far off the mark to estimate that of all the output activity that a neuron generates, a small percentage is precisely correlated with input, while at least as much is the result of essentially unpredictable molecular-metabolic noise; and the uncorrelated fraction might be a great deal more.5 Neurons are effectively poised at the edge of chaos, so to speak. They continually maintain an ionic potential across their surface by incessantly pumping positively charged ions (such as the sodium ions split from dissolved salt) outside their membrane. On this electrically unstable surface, hundreds of synapses from other neurons are tweaking the local function of these pumps, causing or preventing what amount to ion leaks, which destabilize the cell surface. As a result, neurons constantly generate output signals driven by their intrinsic instability and modified by these many inputs.
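
The flavor of this entanglement of input and intrinsic noise can be caricatured with a toy model. The sketch below is a leaky integrate-and-fire unit with a Gaussian noise term standing in for molecular-metabolic fluctuation; every parameter is invented, and nothing here is biophysically calibrated. The point is only that, for a fixed subthreshold input, the cell's output can be driven almost entirely by its own instability.

```python
import random

# Toy leaky integrate-and-fire unit: the leak term stands in for the
# restorative ion pumps, the noise term for molecular-metabolic
# fluctuation. Same input drive, very different output rates.

def count_spikes(input_drive, noise_sd, steps=5000,
                 leak=0.1, threshold=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_drive - leak * v + random.gauss(0.0, noise_sd)
        if v >= threshold:            # fire and reset
            spikes += 1
            v = 0.0
    return spikes

random.seed(1)
for noise_sd in (0.0, 0.1, 0.3):
    print(f"noise={noise_sd:.1f}  spikes={count_spikes(0.05, noise_sd)}")
```

With zero noise the membrane settles at half the threshold and never fires; as the noise term grows, the same input produces an increasing stream of spikes that no longer tracks the input at all.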

In addition, neural networks, in even modestly large brains (such as those of small vertebrates), are quite degenerate, in the sense that precise control of connectivity cannot be obtained from the minimal genetic specification that comes from embryogenesis, or from the remarkably limited number of molecular signals that appear to determine cell phenotypes and connections, when compared with the astronomical number of precise connections that would need to be specified. In fact, contrary to what might be expected, large mammalian brains (such as our own) do not appear to use significantly more genetic information for their construction than the smallest mammalian brains (such as mouse brains), even though the numbers of connections must differ by many orders of magnitude. This is a humbling thought, but it is not the whole story. The specification of brain architecture is highly conserved across the entire range of animal forms, and in vertebrate brains connection patterns are also significantly dependent upon body structure and environmental information for their final specifications. The point of bringing this up in the present context is simply to make it clear that brains are not wired up in point-to-point detail. Not even close. So we could hardly expect neuronal connection patterns to be the biological equivalent of logic gates or computer circuits.

In summary, it doesn't appear on the surface that brains are made of signal-processing and circuitry components that are even vaguely well suited to fill the requirements for computations in the normal sense of this word.

So what is minimally required to carry out computations in the way a computer does them? In a word, predictability. What about messy neural computers? What happens if the circuits in a computer happen to garble the signals-as one should expect would happen in a biocomputer? The "virtual machine" breaks down, of course. Sort of like damaging a gear in a real machine, it will at the very least fail to function as expected, fail to compute, and it may even cease to operate altogether: it might crash. Now imagine that all the connections and all the logic elements in your computer were a little unpredictable and unreliable. Certainly, computing in the typical way would be impossible under these conditions. But this infirmity doesn't by itself rule out biological computing in brains.

The first computer designers had to take this problem seriously, because at the time, in the late 1940s and early fifties, the tens of thousands of logic elements needed for a computer were composed of vacuum tubes and mechanical relays, and tended to be somewhat unreliable. To deal with this, they employed a technique long used for dealing with other forms of unreliable communication, and which had been mathematically formalized only a few years earlier by another pioneer of information technology, Claude Shannon. As we saw earlier, the answer is to include a bit of redundancy in the process-send the message many times in sequence, or send multiple messages simultaneously, or do the same computation multiple times, and then compare the results for consistency. So, even if brains are made of messy elements, it is at least conceivable that some similar logic of noise reduction might allow brains to compute, if with some inefficiency due to the need for redundant error-correcting processes. At least in principle, then, brains could be computers, provided the noise problem wasn't so bad that error correction got too far out of hand.
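
As a sketch of this redundancy strategy, here is a hypothetical unreliable logic element rescued by majority voting. The failure probability and the function names are invented for illustration; this is a toy of von Neumann-style redundancy, not a model of any actual neural mechanism.

```python
import random

# An AND gate that returns the wrong answer with probability p_fail,
# and a redundant version that runs it several times and votes.

def unreliable_and(a, b, p_fail=0.1):
    result = a and b
    return (not result) if random.random() < p_fail else result

def voted_and(a, b, copies=5):
    votes = sum(unreliable_and(a, b) for _ in range(copies))
    return votes > copies // 2

random.seed(0)
trials = 10_000
single = sum(unreliable_and(True, True) for _ in range(trials)) / trials
voted = sum(voted_and(True, True) for _ in range(trials)) / trials
print(f"single gate correct: {single:.3f}")   # roughly 0.90
print(f"5-way vote correct:  {voted:.3f}")    # roughly 0.99
```

The cost, as noted above, is inefficiency: five gate evaluations buy one reliable answer, and the overhead grows as the elements get noisier.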

Unfortunately, there are a number of reasons to think that it would get out of hand. The first is that brains the size of average mammal brains are astronomically huge, highly interconnected, highly re-entrant networks. In such networks, noise tends to get wildly amplified, and even very clean signal processing can produce unpredictable results; this is often called "dynamical chaos." But additionally, many of the parts of mammal brains most relevant for "higher" cognitive functions include an overwhelmingly large number of excitatory connections-a perfect context for amplifying chaotic noisy activity.
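
A toy simulation makes the amplification point vivid. In the sketch below (a random tanh network with recurrent gain above 1; the architecture and every parameter are invented for illustration, following the standard "edge of chaos" setup from the dynamical systems literature), two runs that differ by one part in a billion in a single unit decorrelate within a few dozen time steps.

```python
import math, random

# Two copies of the same recurrent network, differing by 1e-9 in one
# unit. With recurrent gain > 1 the difference grows exponentially
# until the two trajectories are unrelated.

random.seed(42)
N, gain = 50, 1.5
W = [[random.gauss(0.0, gain / math.sqrt(N)) for _ in range(N)]
     for _ in range(N)]

def step(x):
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)))
            for i in range(N)]

x1 = [random.gauss(0.0, 1.0) for _ in range(N)]
x2 = x1[:]
x2[0] += 1e-9                         # one-part-in-a-billion perturbation

for t in range(1, 31):
    x1, x2 = step(x1), step(x2)
    if t % 5 == 0:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
        print(f"t={t:2d}  separation={d:.2e}")
```

No error-correcting scheme based on repeating the computation can keep up with this kind of exponential divergence; redundancy helps with independent gate failures, not with chaos in the global dynamics.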

We are forced to conclude from this list of computationally troublesome characteristics of brains that they appear to be organized according to a very different logic than one would expect of a computing device in any of the usual senses of the word. Indeed, if we turn the question around and ask what these structural features seem to suggest, we would conclude that noise must be a "design feature" of brains, not a "bug" to be dealt with and eliminated. But does this make any sense? Is it a realistic possibility that a major function of brains is to generate and communicate and amplify noise, such as is produced by the random variations of molecular metabolic processes in nerve cells? Is it even possible to reconceive of "computation" in a way that could accommodate this sort of process? Or will we need to scrap the computational model altogether for something else? And if something else, what else?

In fact, there is another approach. It is suggested by the very facts that make the computational approach appear so ill-fitted: by the significant role played by noise in living brains, and by a neural circuit architecture which should tend to amplify rather than damp this noisiness. We need to think of neural processes the way we think about emergent dynamic processes in general. Both self-organizing and evolutionary processes epitomize the way that lower-order unorganized dynamics-the dynamical equivalent of noise-can under special circumstances produce orderliness and high levels of dynamical correlations. Although unpredictable in their details, globally these processes don't produce messy results. This is the starting point for a very different way to link neuronal processes to mental processes.

Life and sentience are deeply interrelated, because they are each manifestations of teleodynamic processes, although at different levels. Sentience is not just a product of biological evolution, but in many respects a micro-evolutionary process in action. To put this hypothesis in very simple and unambiguous terms: the experience of being sentient is what it feels like to be evolution. And evolution is a process dependent on component teleodynamic processes and their relationships to the environment on which they in turn depend, and on the perturbing influence of the second law of thermodynamics-noise, and its tendency to degrade constraint and correlation.

The presumption that we can equate the information constituting mental content to the patterns produced by specific neuronal signals or even groups of signals, as suggested by the computer metaphor, is in fact aimed an order of magnitude too low. These signals are efficacious, and indeed control body functions, though in vertebrate nervous systems the efficacy of neural signals is almost certainly a product of activity statistics rather than specific patterns. There is sentience at the level of these cells and their relations to other cells, but it does not reach the level of whole-brain sentience. That is a function of dynamical regularities at a more global level. The activity of any single neuron is no more critical to the regularities of neuronal activity that give rise to experience than are the movements of individual molecules responsible for Bénard cell formation. Focusing on the neuronal level treats these individual processes as though they are computational tokens, when instead they are more like individual thermodynamic fluctuations among the vast numbers of interacting elements that contribute to the formation of global morphodynamic effects.

Focusing on the way that the activity of individual neurons correlates with specific stimuli is a seductive methodological strategy. Indeed, as external observers of these correlations, neuroscientists can even sometimes predict the relevant stimulus features or behaviors that individual neurons are responding to. So the search for stimulus correlations with neuronal activity in experimental animals has become one of the major preoccupations of neurophysiology. In this respect, large brains are not as stochastic in their organization as many non-biological thermodynamic systems. Interactions between neurons are not merely local, and are far more complex and context-sensitive than are simple dynamical interactions. The highly plastic network structures constituting brains provide the ground for a vast range of global interaction constraints which can be effected by adjustments of neuronal responsiveness, and which cycles of recursive neuronal activity can amplify to the point of global morphodynamic expression.

But neurophysiologists are extrinsic observers. They are able to independently observe both the activity of neurons and the external stimulus presented to them, whether these stimuli are applied directly in the form of electrical stimulation or indirectly through sensory pathways. Yet this in no way warrants the inference that there is any intrinsic way for the nervous system to likewise interpret these regularities, and indeed, there is no independent way to compare stimuli with their neuronal influences. The computational metaphor depends on such an extrinsic homunculus to assign an interpretation to any given correlation, but this function must arise intrinsically and autonomously in the brain. This is another reason to expect that the phenomena which constitute sensory experiences and behavioral intentions are dynamically supervenient on these neuronal dynamics and not directly embodied by them. Since morphodynamic regularities emerge as the various sources of interaction constraints compound with one another, they necessarily reflect the relationship between network structural constraints and extrinsic constraints. And different morphodynamic processes brought into interaction can generate orthograde or contragrade effects, including teleogenic effects.

Another important caveat is that correlations between neuronal activity patterns in different brain regions, and between neuronal activity and extrinsic stimuli, are not necessarily reflections of simple causal linkages. They may be artifacts of simply being included in some higher-order, network-level morphodynamic process, in the same way that causally separated but globally correlated dynamics can emerge even in quite separated regions in non-biological morphodynamic processes. For example, recall that molecules separated by many millimeters in separate Bénard cells within a dish can exhibit highly correlated patterns of movement, despite being nearly completely dynamically isolated from one another. So, in accord with the classic criticisms of correlation theories of reference (such as those discussed in chapters 12 and 13), neural firing correlation alone is neither a necessary nor sufficient basis for being information about anything. Interpretation requires something more, something dynamic: a self, and a self is something teleodynamic.

A focus on correlative neuronal activity as the critical link between mental and environmental domains has thus served mostly to reinforce dualism and to justify eliminative materialism. When the neurophysiologist making these assessments is excluded from the analysis, there is no homunculus, only simple mechanistic causal influences and correlations. The road to a scientific theory of sentience, which preserves rather than denies its distinctiveness, begins with the recognition of the centrality of its dynamical physical foundation. It is precisely by reconstructing the emergence of the teleodynamics of thought from its neuronal thermodynamic and morphodynamic foundations that we will rediscover the life that computationalism has drained from the science of mind.

FROM ORGANISM TO BRAIN.

Organisms with nervous systems, and particularly those with brains, have evolved to augment and elaborate a basic teleodynamic principle that is at the core of all life. Brains specifically evolved in animate multicelled creatures-animals-because being able to move about and modify the surroundings requires predictive as well as reactive capacities. The evolution of this "anticipatory sentience"-nested within, constituted by, and acting on behalf of the "reactive (or vegetative) sentience" of the organism-has given rise to emergent features that have no precedent. Animal sentience is one of these. As brains have evolved to become more complex, the teleodynamic processes they support have become more convoluted as well, and with this the additional distinctively higher-order mode of human symbolically mediated sentience has emerged. These symbolic abilities provide what might be described as sentience of the abstract.

Despite its familiarity, however, starting with the task of trying to explain first-person human conscious experience, even with all the wonderful new tools of in vivo brain imaging, cellular neuroscience, and human introspection at our fingertips, is daunting. We scarcely have a general theory of how the simplest microscopic brains operate, much less an understanding of why a human brain that is just slightly larger and minimally different in its organization from most other mammal brains is capable of such different ways of thinking. Although it may not address many questions that are of interest to those who seek a solution to the mystery of human consciousness, I propose to take a different approach than is typical of efforts to understand consciousness. I believe we can best approach this mystery, as we have successfully approached other natural phenomena, by analyzing the physical processes involved, but doing so in terms of their emergent dynamics. As with the problem of understanding the concept of self, we must start small and work our way upward. But approaching human subjectivity as an analogue of the primitive ententional properties of brainless organisms offers only a very crude guide. It ignores the fact that brain-based sentience is the culmination of vastly many stages in the evolution of sentience built upon sentience. So although we must build on the analogy of the teleodynamic processes of autogens and simple organisms, as did evolution, in order to avoid assuming what we want to explain, we must also pay attention to the special emergent differences that arise when higher-order teleodynamic processes are built upon lower-order teleodynamic processes.

But can we really understand subjective experience by analyzing the emergent dynamical features of the physical processes involved? Aren't we still outside? Although many philosophers of mind have argued that the so-called third-person and first-person approaches to this problem are fundamentally incompatible, I am not convinced. The power of the emergent dynamic approach is that it forces us to abandon simple material approaches to ententional phenomena, and to take account of both what is present and what is absent in the phenomena we investigate. Although it does not promise to dissolve the interior/exterior distinction implicit in this problem, it may provide something nearly as useful: an account of the process that helps explain why such an inside (first-person) and outside (third-person) distinction emerged in the first place, and with it many of the features we find so enigmatic.

By grounding our initial analysis of ententional phenomena on a minimal exemplar of this emergent shift in dynamics-an autogen-we have been able to demystify many phenomena that have previously seemed outside the realm of natural science. Perhaps starting with an effort to explain the most minimal form of sentience can also yield helpful insights into the dynamical basis of this most personal of all mysteries. At the very least it can bring these seemingly incompatible domains into closer proximity. So let's begin by using the autogen concept, again, this time as a guide for thinking about how we might conceive of a Chinese room-like approach for claims about sentient processes.

This suggests that as a first step we can compare conscious mindful processes to the teleodynamic organization of a molecular autogen and compare the mindless computational processes (such as Searle envisions) to the simple mechanistically organized chemical processes that autogens are supervenient upon. In other words, understanding and responding in Chinese may be different from merely matching Chinese characters, the way autogenesis is different from, say, the growth of a crystal precipitating from a solution. The analogy is admittedly vague at this point, and the Chinese room allegory also besets us with an added human factor that could get in the way.6 But considered in light of emergent dynamics, Searle intends to illuminate a similar distinction: the difference between a mechanistic (homeodynamic) process and an ententional (teleodynamic) process. If, as we have argued, autogenic systems exemplify precisely how ententional processes supervene on and emerge from non-ententional processes, then it should be possible to augment the Chinese room analogy in similar ways to precisely identify what it is missing. To do so would provide a useful metric for determining whether a given brain process is merely computational or also sentient.

Indeed, this analogy already suggests a first step toward resolving a false dichotomy that has often impeded progress in this endeavor: the false dichotomy between computation and conscious thought. This is not to say that conscious thought can be reduced to computation, but rather that computation in some very generic sense is like the thermodynamic chemistry underlying autogenic molecular processes. It is a necessary subvenient dynamic, but it is insufficient to account for the emergent dynamic properties we are interested in. And just as it was necessary to stop trying to identify the teleodynamics of life with molecular dynamics alone, and instead begin to pay attention to the constraints thereby embodied, it will be necessary to stop thinking of the computational mechanics as mapping to the intentional properties of mentality. In this way, we may be able to develop some clear criteria for a Turing test of sentience.

Of course, we will need to do more than just identify the parallels. We need to additionally understand how these teleodynamic features emerge in nervous systems, and what additionally emerges as a result of the fact that these teleodynamic processes are themselves emergent from the lower-level teleodynamics characteristic of the neurons that brains are made of.

In the following chapter, I offer only a very generic sketch of an emergent dynamics approach to this complex problem. Once we begin to understand how this occurs, and why, there will still be enormous challenges ahead: filling in the special details of how this knowledge can be integrated with what we have learned about the anatomy, physiology, chemistry, and signal-processing dynamics of brains. If this approach is even close to correct, however, it will mean that a considerable reassessment of cognitive neuroscience may be required. Not that what we've learned about neurons and synapses will need to change, or that information about sensory, motor, or cognitive regional specialization will be rendered moot. Our growing understanding of the molecular, cellular, connectional, and signal transduction details of brain function is indeed critical. What may need to change is how we think about the relationship between brain function and mental experience, and the physical status of the contents of our thoughts and expectations. What awaits is the possibility of a hitherto unrecognized methodology for studying mind/brain issues that have, up to now, been considered beyond the reach of empirical science.

17.

CONSCIOUSNESS.

Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea how anything material could be conscious.

-JERRY FODOR, 1992.

THE HIERARCHY OF SENTIENCE.

The central claim of this analysis is that sentience is a typical emergent attribute of any teleodynamic system. But the distinct emergent higher-order form of sentience that is found in animals with brains is a form of sentience built upon sentience. So, although there is a hierarchic dependency of higher-order forms of sentience on lower-order forms of sentience, there is no possibility of reducing these higher-order forms (e.g., human consciousness) to lower-order forms (e.g., neuronal sentience, or the vegetative sentience of brainless organisms and free-living cells). This irreducibility arises for the same reason that teleodynamic processes in any form are irreducible to the thermodynamic processes that they depend on. Nevertheless, human consciousness could not exist without these lower levels of sentience serving as a foundation. To the extent that sentience is a function of teleodynamics, it is necessarily level-specific. If teleodynamic processes can emerge at levels above the molecular processes as exemplified in autogenic systems, such as simple single-cell organisms, multicelled plants and animals, and nervous systems (and possibly even at higher levels), then at each level at which teleogenic closure occurs, there will be a form of sentience characteristic to that level.

In the course of evolution, a many-tiered hierarchy of ever more convoluted forms of feeling has emerged, each dependent upon but separate from the form of feeling below. So, despite the material continuity that constitutes a multicelled animal with a brain, at each level that the capacity for sentience emerges, it will be discontinuous from the sentience at lower and evolutionarily previous levels. We should therefore carefully distinguish molecular, cellular, organismal, and mental forms of sentience, even when discussing brain function. Indeed, all these forms of sentience should be operative in parallel in the functioning of complex nervous systems.

A neuron is a single cell, and simpler in many ways than almost any other single-cell eukaryotic organism, such as an amoeba. But despite its dependence on being situated within a body and within a brain, and having its metabolism constantly tweaked by signals impinging on it from hundreds of other neurons, in terms of the broad definition of sentience I have described above, neurons are sentient agents. That doesn't mean that this is the same as, or even fractionally a part of, the emergent sentience of mental processes. The discontinuity created by the dynamical supervenience of mental (whole brain-level) teleodynamics on neuronal (cellular-level) teleodynamics makes these entirely separate realms.

Thus the sentient experience you have while reading these words is not the sum of the sentient responsiveness of the tens of billions of individual neurons involved. The two levels are phenomenally discontinuous, which is to say that a neuron's sentience comprises no fraction of your sentience. This higher-order sentience, which constitutes the mental subjective experience of struggling with these ideas, is constituted by the teleodynamic features emerging from the flux of intercellular signals that neurons give rise to. Neurons contribute to this phenomenon of mental experience by virtue of the way their vegetative sentience (implicit in their individual teleodynamic organization) contributes non-mechanistic interaction characteristics to this higher-order neural network-level teleodynamics. The teleodynamics that constitutes this characteristic form of cellular-level adaptive responsiveness, contributed by each of the billions of neurons involved, is therefore separate and distinct from that of the brain. But since brain-level teleodynamics supervenes on this lower level of sentient activity, it inevitably exhibits distinctive higher-order emergent properties. In this respect, this second-order teleodynamics is analogous to the way that the teleodynamics of interacting organisms within an ecosystem can contribute to higher-order population dynamics, including equilibrating (homeodynamic) and self-organizing (morphodynamic) population effects. Indeed, as we will explore further below, the tendency for population-level morphodynamic processes to emerge in the recursive flow of signals within a vast extended network of interconnected neurons is critical to the generation of mental experience. But the fact that these component interacting units-neurons-themselves are adaptive teleodynamic individuals means that even as these higher-order population dynamics are forming, these components are adapting with respect to them, to fit them or even resist their formation. This tangled hierarchy of causality is responsible for the special higher-order sentient properties (e.g., subjective experience) that brains are capable of producing, which their components (neurons) are not.

In other words, sentience is constituted by the dynamical organization, not the stuff (signals, chemistry) or even the neuronal cellular-level sentience that constitutes the substrate of that dynamics. The teleodynamic processes occurring within each neuron are necessary for the generation of mental experience only insofar as they contribute to the development of a higher-order teleodynamics of global signal processing. The various nested levels of sentience-from molecular to neuronal to mental-are thus mutually inaccessible to one another, and can exhibit quite different properties. Sentience has an autonomous locus at a specific level of dynamics because it is constituted by the self-creative, self-bounding nature of teleogenic individuation. The dynamical reflexivity and constraint closure that characterizes a teleodynamic system, whether constituting intraneuronal processes or the global-signaling dynamics developing within an entire brain, creates an internal/external self/other distinction that is determined by this dynamical closure. Its locus is ultimately something not materially present-a self-creating system of constraints with the capacity to do work to maintain its dynamical continuity-and yet it provides a precise dynamical boundedness.

The sentience at each level is implicit in the capacity to do self-preservative work, as this constitutes the system's sensitivity to non-self influences via an intrinsic tendency to generate a self-sustaining contragrade dynamics. This tendency to generate self-preserving work with respect to such influences is a spontaneous defining characteristic of such reciprocity of constraint creation. Closure and autonomy are thus the very essence of sentience. But they are also the reason that higher-order sentient teleogenic systems can be constituted of lower-order teleogenic systems, level upon level, and yet produce level-specific emergent forms of sentience that are both irreducible and unable to be entirely merged into larger conglomerates.2 It is teleogenic closure that produces sentience but also isolates it, creating the fundamental distinction between self and other, whether at a neuronal level or a mental level.

So, while the lower level of cellular sentience cannot be dispensed with, it is a realm apart from mental experience. There is the world of the neuron and the world of mind, and they are distinct sentient realms. Neuronal sentience provides the ground for the interactions that generate higher-order homeodynamic, morphodynamic, and teleodynamic processes of neural activity. If neurons were not teleodynamically responsive to the activities of other neurons (and thereby also to the extrinsic stimuli affecting sensory cells), there would be no possibility for these higher-order dynamical levels of interaction to emerge, and thus no higher-level sentience; no subjective experience. But with this transition from the realm of individual neuronal signal dynamics to the dynamics that emerges in a brain due to the recursive effects that billions of neuronal signals exert on one another, there is a fundamental emergent discontinuity. Mental sentience is something distinct from neuronal sentience, and yet this nested dependency means that mental sentience is constituted by the dynamics of other sentient interactions. It is a second-order sentience emergent from a base of neuronal sentience, and additionally inherits the constraints of whole organism teleodynamics (and its vegetative sentience). So subjective sentience is fundamentally more complex and convoluted in its teleodynamic organization. It therefore exemplifies emergent teleodynamic properties that are unprecedented at the lower level.

EMOTION AND ENERGY.

An emergent dynamic account of the relationship between neurological function and mental experience differs from all other approaches by virtue of its necessary requirement for specifying a homeodynamic and morphodynamic basis for its teleodynamic (intentional) character. This means that every mental process will inevitably reflect the contributions of these necessary lower-level dynamics. In other words, certain ubiquitous aspects of mental experience should inevitably exhibit organizational features that derive from, and assume certain dynamical properties characteristic of, thermodynamic and morphodynamic processes. To state this more concretely: experience should have clear equilibrium-tending, dissipative, and self-organizing characteristics, besides those that are intentional. These are inseparable dynamical features that literally constitute experience. What do these dynamical features correspond to in our phenomenal experience?

Broadly speaking, this dynamical infrastructure is "emotion" in the most general sense of that word. It is what constitutes the "what it feels like" of subjective experience. Emotion-in the broad sense that I am using it here-is not merely confined to such highly excited states as fear, rage, sexual arousal, love, craving, and so forth. It is present in every experience, even if often highly attenuated, because it is the expression of the necessary dynamic infrastructure of all mental activity. It is the tension that separates self from non-self; the way things are and the way they could be; the very embodiment of the intrinsic incompleteness of subjective experience that constitutes its perpetual becoming. It is a tension that inevitably arises as the incessant shifting course of mental teleodynamics encounters the resistance of the body to respond, and the insistence of bodily needs and drives to derail thought, as well as the resistance of the world to conform to expectation. As a result, it is the mark that distinguishes subjective self from other, and is at the same time the spontaneous tendency to minimize this disequilibrium and difference. In simple terms, it is the mental tension that is created because of the presence of a kind of inertia and momentum associated with the process of generating and modifying mental representations. The term e-motion is in this respect curiously appropriate to the "dynamical feel" of mental experience.

This almost Newtonian nature of emotion is reflected in the way that the metaphors of folk psychology have described this aspect of human subjectivity over the course of history in many different societies. Thus English speakers are "moved" to tears, "driven" to behave in ways we regret, "swept up" by the mood of the crowd, angered to the point that we feel ready to "explode," "under pressure" to perform, "blocked" by our inability to remember, and so forth. And we often let our "pent-up" frustrations "leak out" into our casual conversations, despite our best efforts to "contain" them. Both the motive and resistive aspects of experience are thus commonly expressed in energetic terms.

In the Yogic traditions of India and Tibet, the term kundalini refers to a source of living and spiritual motive force. It is figuratively "coiled" in the base of the spine, like a serpent poised to strike or a spring compressed and ready to expand. In this process, it animates body and spirit. The subjective experience of bodily states has also often been attributed to physical or ephemeral forms of fluid dynamics. In ancient Chinese medicine, this fluid is chi; in the Ayurvedic medicine of India, there were three fluids, the doshas; and in Greek, Roman, and later Islamic medicine, there were four humors (blood, phlegm, yellow bile, and black bile) responsible for one's state of mental and physical health. In all of these traditions, the balance, pressures, and free movement of these fluids were critical to the animation of the body, and their proper balance was presumed to be important to good health and "good humor." The humor theory of Hippocrates, for example, led to a variety of medical practices designed to rebalance the humors that were disturbed by disease or disruptive mental experience. Thus bloodletting was deemed an important way to adjust relative levels of these humors to treat disease.

This fluid dynamical conception of mental and physical animation was naturally reinforced by the ubiquitous correlation of a pounding heart (a pump) with intense emotion, stress, and intense exertion. Both René Descartes and Erasmus Darwin (to mention only two among many) argued that the nervous system likewise animates the body by virtue of differentially pumping fluid into various muscles and organs through microscopic tubes (presumably the nerves). When, in the 1780s, Luigi Galvani discovered that a severed frog leg could be induced to twitch in response to contact with electricity, he considered this energy to be an "animal electricity." And the vitalist notion of a special ineffable fluid of life, or élan vital, persisted even into the twentieth century.

This way of conceiving of the emotions did not disappear with the replacement of vitalism and with the rise of anatomical and physiological knowledge in the nineteenth and early twentieth century. It was famously reincarnated in Freudian psychology as the theory of libido. Though Freud was careful not to identify it with an actual fluid of the body, or even a yet-to-be-discovered material substrate, libido was described in terms that implied that it was something like the nervous energy associated with sexuality. Thus a repressed memory might block the "flow" of libido and cause its flow to be displaced, accumulated, and released to animate inappropriate behaviors. Freud's usage of this hydrodynamic metaphor became interpreted more concretely in the Freudian-inspired theories of Wilhelm Reich, who argued that there was literally a special form of energy, which he called "orgone" energy, that constituted the libido. Although such notions have long been abandoned and discredited with the rise of the neurosciences, there is still a sense in which the pharmacological treatments for mental illness are sometimes conceived of on the analogy of a balance of fluids: that is, neurotransmitter "levels." Thus different forms of mental illness are sometimes described in terms of the relative levels of dopamine, norepinephrine, or serotonin that can be manipulated by drugs that alter their production or interfere with their effects.

This folk psychology of emotion was challenged in the 1960s and 1970s by a group of prominent theorists responsible for ushering in the information age. Among them was Gregory Bateson, who argued that the use of these energetic analogies and metaphors in psychology made a critical error in treating information processes as energetic processes. He argued that the appropriate way to conceive of mental processes was in informational and cybernetic terms.3 Brains are not pumps, and although axons are indeed tubular, and molecules such as neurotransmitters are actively conveyed along their length, they do not contribute to a hydrodynamic process. Nervous signals are propagated ionic potentials, mediated by molecular signals linking cells across tiny synaptic gaps. On the model of a cybernetic control system, he argued that the differences conveyed by neurological signals are organized so that they regulate the release of "collateral energy" generated by metabolism. It is this independently available energy that is responsible for animating the body. Nervous control of the body was thus more accurately modeled cybernetically. This collateral metabolic energy is analogous to the energy generated in a furnace, whose level of energy release is regulated by the much weaker changes in energy of the electrical signals propagated around the control circuit of a thermostat. According to Bateson, the mental world is not constituted by energy and matter, but rather by information. And as was also the view of the architects of cybernetic theory from whom Bateson drew his insights, such as Wiener and Ashby, and biologists such as Warren McCulloch and Mayr, information was conceived of in purely logical terms: in other words, Shannon information. Implicit in this view-which gave rise to the computational perspective in the decades that followed-was the judgment that the folk wisdom expressed in energetic metaphors is misleading.
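
Bateson's furnace-and-thermostat picture is easy to render as a toy control loop. In the sketch below, every number is invented; the point is only the asymmetry it illustrates: the "difference" that the sensor registers carries almost no energy itself, yet it gates the release of a much larger flow of collateral energy from the furnace.

```python
# A minimal cybernetic loop in Bateson's sense: an informational
# difference (temp < setpoint) regulates collateral energy (the
# furnace), which does the actual work of heating.

def thermostat_step(temp, setpoint=20.0, outside=5.0):
    furnace_on = temp < setpoint          # the signal: a bare difference
    heat_in = 5.0 if furnace_on else 0.0  # collateral energy release
    heat_loss = 0.1 * (temp - outside)    # leak to the exterior
    return temp + heat_in - heat_loss, furnace_on

temp = 12.0
for minute in range(10):
    temp, on = thermostat_step(temp)
    print(f"min {minute:2d}  temp={temp:5.1f}  furnace={'on' if on else 'off'}")
```

The loop settles into oscillation around the setpoint: the regulating signal and the regulated energy flow belong to entirely different scales, which is exactly the separation Bateson insisted on.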

By more precisely articulating the ways that thermodynamic, morphodynamic, and teleodynamic processes emerge from, and depend on, one another, however, we have seen that it is this overly simple energy/information dichotomy that is misleading. Information cannot so easily be disentangled from its basis in the capacity to reflect the effects of work (and thus the exchange of energy), and neither can it be simply reduced to it. Energy and information are asymmetrically and hierarchically interdependent dynamical concepts, which are linked by virtue of an intervening level of morphodynamic processes. And by virtue of this dynamical ascent, the capacity to be about something not present also emerges; not as mere signal difference, but as something extrinsic and absent yet potentially relevant to the existence of the teleodynamic (interpretive) processes thereby produced.

It is indeed the case that mental experience cannot be identified with the ebb and flow of some vital fluid, nor can it be identified directly with the buildup and release of energy. But as we've now also discovered by critically deconstructing the computer analogy, it cannot be identified with the signal patterns conveyed from neuron to neuron, either. These signals are generated and analyzed with respect to the teleodynamics of neuronal cell maintenance. They are interpreted with respect to cellular-level sentience. Each neuron is bombarded with signals that constitute its Umwelt. They perturb its metabolic state and force it to adapt in order to reestablish its stable teleodynamic "resting" activity. But, as was noted in the previous chapter, the structure of these neuronal signals does not constitute mental information, any more than the collisions between gas molecules constitute the attractor logic of the second law of thermodynamics.

FIGURE 17.1: The formal differences between computation and cognition (as described by this emergent dynamics approach) are shown in terms of the correspondences between the various physical components and dynamics of these processes (dependencies indicated by arrows). The multiple arrow links depicting cognitive relationships symbolize stochastically driven morphodynamic relationships rather than one-to-one correspondences between structures or states. This intervening form generation dynamic is what most distinguishes the two processes. It enables cognition to autonomously ground its referential and teleological organization, whereas computational processes must have these relationships "assigned" extrinsically, and are thus parasitic on extrinsic teleodynamics (e.g., in the form of programmers and interpreters). Computational "information" is therefore only Shannon information.

As we will explore more fully below, mental information is constituted at a higher, population-dynamic level of signal regularity. As opposed to neuronal information (which can superficially be analyzed in computational terms), mental information is embodied by distributed dynamical attractors. These higher-order, more global dynamical regularities are constituted by the incessantly recirculating and restimulating neural signals within vast networks of interconnected neurons. The attractors form as these recirculating signals damp some and amplify other intrinsic constraints implicit in the current network geometry. Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. Although the contribution of each neuronal response is important, it matters more for how it contributes a local micro bias to the larger dynamic. To repeat: it is no more a determinant of mental content than the collision between two atoms in a gas determines the tendency of the gas to develop toward equilibrium (though the fact that neurons are teleodynamic components rather than simply mechanical components makes this analogy far too simple).
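
The claim that many microstates and many trajectories converge on one global regularity can be illustrated with the simplest attractor network in the connectionist literature, a Hopfield net. The sketch below is only an analogy under stated assumptions (a tiny, static, symmetric toy network, nothing like a living brain): one pattern is stored, and nearly every corrupted initial microstate relaxes into the same attractor.

```python
import random

# A 16-unit Hopfield network storing one pattern. Many different
# corrupted microstates converge to the same global attractor.

random.seed(3)
N = 16
pattern = [random.choice([-1, 1]) for _ in range(N)]
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

def relax(state, sweeps=10):
    for _ in range(sweeps * N):                # asynchronous updates
        i = random.randrange(N)
        h = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1
    return state

hits = 0
for _ in range(100):
    noisy = [s if random.random() > 0.25 else -s for s in pattern]
    hits += relax(noisy) == pattern
print(f"{hits}/100 noisy microstates settled into the stored attractor")
```

The "information" here is not in any single unit's state, nor in any particular update trajectory, but in the basin of attraction that the whole network's connectivity defines.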

This shift in level makes it less clear that we can simply dismiss these folk psychology force-motion analogies. If the medium of mental representation is not mere signal difference, but instead is the large-scale global attractor dynamic produced by an extended interconnected population of neurons, then there may also be global-level homeodynamic properties to be taken into account as well. As we have seen in earlier chapters, these global dynamical regularities will exhibit many features that are also characteristic of force-motion dynamics.

THE THERMODYNAMICS OF THOUGHT.

So what would it mean to understand mental representations in terms of global homeodynamic and morphodynamic attractors exhibited by vast ensembles of signals circulating within vast neuronal networks? Like their non-biological counterparts, these higher-order statistical dynamical effects should have very distinctive properties and requirements.

Consider, for example, morphodynamic attractor formation. It is a consequence of limitations on the rate of constraint dissipation within a system that is being incessantly destabilized. Because of this constant perturbation, higher-order dynamical regularities form as less efficient uncorrelated trajectories of micro constraint dissipation are displaced by more efficient globally correlated trajectories. In neurological terms, incessant destabilization is provided by some surplus of uncorrelated neuronal activity being generated within and circulating throughout a highly reentrant network. Above some threshold level of spontaneous signal generation, local constraints on the distribution of this activity will compound faster than they can dissipate. Thus, pushed above this threshold of incessant intrinsic agitation, the signals circulating within a localized network will inevitably regularize with respect to one another as constraints on signal distribution within the network tend to redistribute. This will produce dynamical attractors which reflect the distribution of biases in the connectional geometry of that network, and thus will be a means for probing and expressing such otherwise diffusely distributed biases.
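
The threshold claim can likewise be caricatured with a toy recurrent rate model; this is an illustrative assumption, not anything specified in the text. Below a critical recurrent gain, circulating activity dies back toward rest; above it, the network sustains ongoing self-generated activity whose structure is shaped by its fixed connection geometry.

    # Illustrative sketch: threshold-dependent regularization of recirculating
    # activity in a random recurrent network. Parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100
    W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # fixed "connectional geometry"

    def settle(gain, steps=500):
        x = rng.normal(0, 0.1, N)               # weak spontaneous activity
        for _ in range(steps):
            x = np.tanh(gain * (W @ x))         # recirculate through the network
        return x

    for gain in (0.5, 1.5):                     # below vs. above the threshold
        x = settle(gain)
        print(f"gain={gain}: mean |activity| after settling = {np.abs(x).mean():.3f}")

Below the threshold the settled activity is essentially zero; above it, the network maintains persistent activity whose particular form is an expression of the biases latent in W.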

But unlike the regularities exhibited by individual neural signals, or the organized volleys of signals conducted along a bundle of axons, or even the signal patterns converging from hundreds of inputs on a single neuron, morphodynamic regularities of activity patterns within a large neural network will take time to form. This is because, analogous to the simple mechanical dynamical effects of material morphodynamics, these regularities are produced by the incremental compounding of constraints as they are recirculated continuously within the network. This recirculation is driven by constant perturbation. Consequently, without persistent above-threshold excitation of a significant fraction of the population of neurons comprising a neural network, higher-order morphodynamic regularities of activity dynamics would not tend to form.

What might this mean for mental representation, assuming that mental content is embodied as population-level dynamical attractors? First, it predicts that the generation of any mental content, whether emerging from memory or induced by sensory input, should take time to develop. Second, it should require something like a persistent metabolic boost to provide a sufficient period of incessant perturbation to build up local signal imbalances and drive the formation of larger-scale attractor regularities. This suggests that there should be something akin to inertia associated with a change of mental content or a shift in attention. It further suggests that self-initiated shifts in cognitive activity will require something analogous to work to generate, and that stimuli from within or without that are capable of interrupting ongoing cognitive activities are likewise doing work that is contragrade to current mental processes. Third, because of the time it takes for the non-linear recursive circulation of these signals to self-organize large-scale network dynamics, mental content should not emerge into awareness in an all-or-none fashion, but should rather differentiate slowly from vague to highly detailed structure. And the level of differentiation achieved should be correlated both with sustained high levels of activation and with the length of time this activation persists. Generating more precise mental content takes both more "effort" and a more sustained "focus" of attention.
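
The first and third of these predictions can be seen in miniature in the same kind of toy attractor network used above: overlap with the eventual attractor grows gradually rather than appearing all at once, and a stronger sustained drive, a crude stand-in for a metabolic boost, produces both faster and fuller differentiation. The drive parameter and all numbers are purely illustrative.

    # Illustrative sketch: gradual "differentiation" of an attractor under
    # sustained drive. The drive term is a hypothetical stand-in for metabolic
    # support; nothing here is a measurement.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 200
    pattern = rng.choice([-1.0, 1.0], size=N)
    W = np.outer(pattern, pattern) / N

    def overlap_trace(drive, steps=30):
        x = rng.normal(0, 0.05, N)                        # weak initial activity
        trace = []
        for _ in range(steps):
            x = np.tanh(W @ x + 0.05 * drive * pattern)   # weak sustained input
            trace.append(abs(float(x @ pattern)) / N)     # degree of differentiation
        return trace

    for drive in (0.5, 2.0):
        trace = overlap_trace(drive)
        print(f"drive={drive}: overlap at steps 5/15/30 =",
              " ".join(f"{trace[t]:.2f}" for t in (4, 14, 29)))

With the weaker drive the overlap creeps upward and plateaus at a lower value; with the stronger drive it rises faster and settles higher, a cartoon of "effort" and "focus" yielding more precise content.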

Because the basis of this process is incessant spontaneous neuronal activity, a constant supply of resources (oxygen and glucose) is its most basic requirement. Without sufficient metabolic resources, neuronal activity levels will be insufficient to generate highly regular morphodynamic attractors, while with too much perturbation (as in simpler thermodynamic dissipative systems), chaotic dynamics may result. This suggests that we should expect some interesting trade-offs.
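
One generic way to exhibit such a trade-off is with the logistic map, used here purely as a dynamical-systems analogy (the book specifies no such model), with its parameter standing in for the level of "metabolic" drive: too little drive and activity collapses; moderate drive yields a stable attractor; excessive drive yields chaotic fluctuation with no settled attractor.

    # Analogy only: the logistic map x -> r*x*(1-x) as a caricature of activity
    # under different drive levels r. All values are illustrative assumptions.
    import numpy as np

    def settled_activity(r, burn=500, sample=100):
        x = 0.3
        for _ in range(burn):                # discard the transient
            x = r * x * (1 - x)
        xs = []
        for _ in range(sample):              # record the settled behavior
            x = r * x * (1 - x)
            xs.append(x)
        return np.mean(xs), np.ptp(xs)       # mean level and fluctuation range

    for r in (0.8, 2.8, 3.9):                # too little / moderate / too much
        mean, spread = settled_activity(r)
        print(f"drive r={r}: mean activity={mean:.3f}, fluctuation={spread:.3f}")

At r=0.8 activity dies to zero, at r=2.8 it settles on a single stable value, and at r=3.9 it fluctuates chaotically, never settling into a stable attractor.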

Larger-scale morphodynamic processes should be more capable of morphodynamic work (affecting other morphodynamic processes developing elsewhere in the brain), but they will also demand sustained metabolic support to develop to a high degree of differentiation. Metabolic resources are limited, however, and neurons themselves are incapable of sustaining high levels of activity for long periods without periodically falling into refractory states. For this reason, we should expect mental focus to tend to shift spontaneously, as certain morphodynamic attractors degrade and dedifferentiate with reduced homeodynamic support and others begin to emerge in their stead. Moreover, a morphodynamic neural activity pattern that is distributed over a wide network involving diverse brain systems will be highly susceptible to the relative distribution of metabolic resources. This means that the differential distribution of metabolic support in the brain will itself play a major role in determining which morphodynamic processes are likely to develop, when, and where. So metabolic support may independently play a critical role in the generation, differentiation, modification, and degradation of mental representations. And the more differentiated the mental content and the more present to mind, so to speak, the more elevated the regional network metabolism and the more organized the attractors of network activity.

How might this thermodynamics of thought be expressed? Consider first the fact that morphodynamic attractors will only tend to form when these resources are continually available and neurons are active above some resting threshold. Morphodynamic attractor formation is an orthograde tendency at that level, but it requires lower-order homeodynamic (and thus thermodynamic) work to drive it. If neural activity levels normally fluctuate around this threshold, however, some degree of local morphodynamic attractor activity will tend to develop spontaneously. This means that minimally differentiated mental representations should constantly arise and fade from awareness, as though from no specific antecedent or source of stimulation. If the morphodynamics is only incipient, unable to develop to a fully differentiated and robust attractor stage, it will produce only what amounts to an undifferentiated embryo of thought or mental imagery. This appears to be the state of unfocused cognition at rest (as in daydreaming), suggesting that the awake, unfocused brain indeed hovers around this threshold. Only if metabolic support or spontaneous neuronal activity falls consistently below this threshold will there be no mental experience, as may be the case for the deepest levels of sleep. Normally, then, network-level neural dynamics in the alert state is not so much at the edge of chaos as at the threshold of morphodynamics.
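
This hovering at the threshold can also be sketched numerically, with every parameter an assumption chosen for illustration: the same toy attractor network, driven only by spontaneous noise with recurrent gain just below the critical value, shows partial overlaps with the stored pattern transiently arising and fading, incipient but never fully differentiated.

    # Illustrative sketch: near-threshold dynamics. Noise repeatedly nudges the
    # network toward its attractor, producing transient, partial "content" that
    # arises and fades without ever fully differentiating.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 200
    pattern = rng.choice([-1.0, 1.0], size=N)
    W = np.outer(pattern, pattern) / N

    x = np.zeros(N)
    overlaps = []
    for _ in range(2000):
        x = np.tanh(0.98 * (W @ x)) + rng.normal(0, 0.3, N)  # just-subcritical gain
        overlaps.append(abs(float(x @ pattern)) / N)

    overlaps = np.array(overlaps)
    print(f"mean overlap={overlaps.mean():.2f}, peak={overlaps.max():.2f}")
    print(f"fraction of time above 0.15: {(overlaps > 0.15).mean():.2f}")

The overlap never approaches full differentiation, but it repeatedly drifts well above its mean and decays back, a crude image of half-formed contents arising and fading in an unfocused, awake state.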

But entering a state of mind focused on certain mnemonically generated content or sensory-motor contingencies means that some morphodynamic processes must be driven to differentiate to the point that a robust attractor forms. Since this requires persistent high metabolic support to maintain the dissipative throughput of signal activity, it poses a sort of chicken-and-egg relationship between regional brain metabolism and the development of mental experience. Well-differentiated mental content requires persistent, widespread, elevated metabolic activation, and it takes elevated spontaneous neuronal activity to differentiate complex mental states. Which comes first? Of course, as the metaphor suggests, each can be precursor or consequence, depending on the context.

If the level of morphodynamic development in a given region is in part a function of the ebb and flow of metabolic resources allocated to that region, adjusting local metabolism up or down extrinsically should also increase or decrease the differentiation and robustness of the corresponding mental representation. This possibility depends on the extent to which individual neuronal acti