“Well, I never heard it before,” said the Mock Turtle; “but it sounds uncommon nonsense.”

~ Lewis Carroll, Alice in Wonderland

Chapter 14

Consciousness, Free Will, and the Law

OUTLINE

Anatomical Orientation

Consciousness

Neurons, Neuronal Groups, and Conscious Experience

The Emergence of the Brain Interpreter in the Human Species

Abandoning the Concept of Free Will

The Law

ON OCTOBER 1, 1993, 12-year-old Polly Klaas (Figure 14.1) had two girlfriends over to her suburban home in Petaluma, California, for a slumber party. At about 10:30 p.m., with her mother and sister asleep down the hall, Polly opened her bedroom door to get her friends’ sleeping bags from the living room. An unknown man with a knife was standing in her doorway. He made the girls lie down on their stomachs, tied them up, put pillowcases over their heads, and kidnapped Polly. Fifteen minutes later, the two friends freed themselves and ran to wake Polly’s mother, who made a frantic 911 call. A massive ground search was launched to find Polly and her abductor—who, it was later found, had left his palm print behind in Polly’s room. Over the course of 2 months, 4,000 volunteers helped with the search.

The year 1993 was early in the history of the Internet. The system was not used for information sharing to the extent that it is today. While local businesses donated thousands of posters and paid for and mailed 54 million flyers, two local residents contacted the police and suggested digitizing Polly’s missing child poster and using the Internet to disseminate the information about her. This approach had never been taken before. Thanks to all these efforts, Polly’s plight became widely publicized and known nationally and internationally. Two months later, a twice-convicted kidnapper with a history of assaults against women, Richard Allen Davis, was arrested on a parole violation. He had been paroled 5 months earlier after serving half of a 16-year sentence for kidnapping, assault, and robbery. He had spent 18 of the previous 21 years in and out of prison. After his release, he had been placed in a halfway house, had a job doing sheet-metal work, was keeping his parole officer appointments, and was passing his drug tests. As soon as he had enough money to buy a car, however, things changed. He stopped showing up at work, disappeared from the halfway house, and was in violation of his parole. When Davis’s prints were found to match the palm print left behind in Polly’s room, he was charged with her abduction. Four days after his arrest, he led police to a place about 50 miles north of Petaluma and showed them Polly’s half-dressed, decomposed body, lying under a blackberry bush and covered with a piece of plywood. Davis admitted to strangling her twice, once with a cloth garrote and again with a rope to be sure she was dead. He was later identified by several residents as having been seen in the park across the street from Polly’s house or in the neighborhood during the 2 months before her abduction.

FIGURE 14.1 Polly Klaas.

Davis was tried and found guilty of the first-degree murder of Polly Klaas with special circumstances, which included robbery, burglary, kidnapping, and a lewd act on a child. This verdict made him eligible for the death sentence in California, which the jury recommended. Polly’s father stated, “It doesn’t bring our daughter back into our lives, but it gets one monster off the streets,” and agreed with the jury, saying that “Richard Allen Davis deserves to die for what he did to my child” (Kennedy, 1996).

Incapacitation, retributive punishment, and rehabilitation are the three choices society has for dealing with criminal behavior. The judge does the sentencing, and in this case, the jury’s recommendation was followed. Richard Allen Davis is currently on California’s death row. When society considers public safety, it is faced with the decision about which perspective those making and enforcing the laws should take: retribution, an approach focused on punishing the individual and bestowing “just deserts,” or consequentialism, a utilitarian approach holding that what is right is what has the best consequences for society.

Polly’s kidnapping and murder were a national and international story that sparked widespread outrage. Not only was a child taken from the supposed safety of her home while her mother was present, but the perpetrator was a violent repeat offender who had been released early from prison and again was free to prey upon innocent victims. Although this practice was common enough, most of the public was unaware of its scope. Following the Polly Klaas case, people demanded a change. Many thought that Davis should not have been paroled, that he was still a threat. They also thought that certain behavior warranted longer incarceration. The response was swift. In 1993, Washington State passed the first three-strikes law, mandating that criminals convicted of serious offenses on three occasions be sentenced to life in prison without the possibility of parole. The next year, California followed suit: 72% of voters supported that state’s rendition of the three-strikes law, which mandated a 25-year-to-life sentence for the third felony conviction. Several states have since enacted similar habitual offender laws designed to counter criminal recidivism by physical incapacitation via imprisonment.

Throughout this book, we have come to see that our essence, who we are and what we do, is the result of our brain processes. We are born with an intricate brain, slowly developing under genetic control, with refinements being made under the influence of the environment and experience. The brain has particular skill sets, with constraints, and a capacity to generalize. All of these traits, which evolved under natural selection, are the foundation for a myriad of distinct cognitive abilities represented in different parts of the brain. We have seen that our brains possess thousands, perhaps millions, of discrete processing centers and networks, commonly referred to as modules, working outside of our conscious awareness and biasing our responses to life’s daily challenges. In short, the brain has distributed systems running in parallel. It also has multiple control systems. What makes some of these brain findings difficult to accept, however, is that we feel unified and in control: We feel that we are calling the shots and can consciously control all our actions. We do not feel at the mercy of multiple systems battling it out in our heads. So what is this unified feeling of consciousness, and how does it come about? The question of what exactly consciousness is and what processes contribute to it remains the holy grail of neuroscience. What are the neural correlates of consciousness? Are we in conscious control or not? Are all animals equally conscious, or are there degrees of consciousness? We begin the chapter by looking at these questions.

As neuroscience comes to an increasingly physicalist understanding of brain processing, some people’s notions about free will are being challenged. This deterministic view of behavior disputes long-standing beliefs about what it means for people to be responsible for their actions. Some scholars assert the extreme view that humans have no conscious control over their behavior, and thus, they are never responsible for any of their actions. These ideas challenge the very foundational rules regulating how we live together in social groups. Yet research has shown that both accountability and what we believe influence our behavior. Can a mental state affect the physical processing of our brain? After we examine the neuroscience of consciousness, we will tackle the issue of free will and personal responsibility. In so doing, we will see if, indeed, our mental states influence our neuronal processes.

Philosopher Gary Watson pointed out that we shape the rules that we decide to live by. From a legal perspective, we are the law because we make the law. Our emotional reactions contribute to the laws we make. If we come to understand that our retributive responses to antisocial behavior are innate and have been honed by evolution, can or should we try to amend or ignore them and not let them affect the laws we create? Or, are these reactions the sculptors of a civilized society? Do we ignore them to our peril? Is accountability what keeps us civilized, and should we be held accountable for our behavior? We close the chapter by looking at these questions.

ANATOMICAL ORIENTATION

The anatomy of consciousness

The cerebral cortex, the thalamus, the brainstem, and the hypothalamus are largely responsible for the conscious mind.


The conscious mind primarily depends on three brain structures: the brainstem, including the hypothalamus; the thalamus; and the cerebral cortex (see the Anatomical Orientation box). When we look at the anatomical regions that contribute to consciousness, it is helpful to distinguish wakefulness from simple awareness and from more complex states. Neurologist Antonio Damasio has done this for us. First, he makes the point that wakefulness is necessary for consciousness (except in dream sleep), but consciousness is not necessary for wakefulness. For example, patients in a vegetative state may be awake, but not conscious. Next, he trims consciousness down to two categories: core consciousness and extended consciousness (Damasio, 1998). Core consciousness (or awareness) is what goes on when the consciousness switch is flipped “on.” The organism is alive, awake, alert, and aware of one moment (now) and one place (here). It is not concerned with the future or the past. Core consciousness is the foundation for building increasingly complex levels of consciousness, which Damasio calls extended consciousness. Extended consciousness provides an organism with an elaborate sense of self. It places the self in individual historic time, includes thoughts of the past and future, and depends on the gradual buildup of an autobiographical self from memories and expected future experiences. Thus, consciousness has nested layers of organizational complexity (Damasio & Meyer, 2008).

The Brainstem

The brain regions needed to modulate wakefulness, and to flip the consciousness “on” switch, are located in the evolutionarily oldest part of the brain, the brainstem. The primary job of brainstem nuclei is homeostatic regulation of the body and brain. This is performed mainly by nuclei in the medulla oblongata along with some input from the pons. Disconnect this portion of the brainstem, and the body dies (and the brain along with it). This is true for all mammals. Above the medulla are the nuclei of the pons and the mesencephalon. Within the pons are the reticular formation and the locus coeruleus (LC). The reticular formation is a heterogeneous collection of nuclei contributing to a number of neural circuits involved with motor control, cardiovascular control, pain modulation, and the filtering out of irrelevant sensory stimuli. Some nuclei influence the entire cortex via direct cortical connections, and some through neurons that comprise the neural circuits of the reticular activating system (RAS). The RAS has extensive connections to the cortex via two pathways. The dorsal pathway courses through the intralaminar nuclei of the thalamus to the cortex, and the ventral pathway zips through the hypothalamus and the basal forebrain and on to the cortex. The RAS is involved with arousal, regulating sleep–wake cycles, and mediating attention. Damage or disruption to the RAS can result in coma. Depending on the location, damage to the pons could result in locked-in syndrome, coma, a vegetative state, or death.

Arousal is also influenced by the outputs of the LC in the pons, which is the main site of norepinephrine production in the brain. The LC has extensive connections throughout the brain and, when active, prevents sleep by activating the cortex. With cell bodies located in the brainstem, it has projections that follow a route similar to that of the RAS up through the thalamus.

From the spinal cord, the brainstem receives afferent neurons carrying pain, interoceptive, somatosensory, and proprioceptive information, as well as vestibular information from the ear and afferent signals from the thalamus, hypothalamus, amygdala, cingulate gyrus, insula, and prefrontal cortex. Thus, information about the state of the organism in its current milieu, along with ongoing changes in the organism’s state as it interacts with objects and the environment, is all mediated via the brainstem.

The Thalamus

The neurons that connect the brainstem with the intralaminar nuclei (ILN) of the thalamus play a key role in core consciousness. The thalamus has two ILN, one on the right side and one on the left. Small and strategically placed bilateral lesions to the ILN in the thalamus turn core consciousness off forever, although a lesion in one alone will not. Likewise, if the neurons connecting the thalamic ILN and the brainstem are severed or blocked, so that the ILN do not receive input signals, core consciousness is lost.

We know from previous chapters that the thalamus is a well-connected structure. As a result, it has many roles relating to consciousness. First, all sensory input, both about the body and the surrounding world (except smell, as we learned in Chapter 5), passes through the thalamus. This brain structure is also important to arousal, processing information from the RAS that arouses the cortex or contributes to sleep. The thalamus also has neuronal connections linking it to specific regions all over the cortex. Those regions send connections straight back to the thalamus, thus forming connection loops. These circuits contribute to consciousness by coordinating activity throughout the cortex. Lesions anywhere from the brainstem up to the cortex can disrupt core consciousness.

The Cerebral Cortex

In concert with the brainstem and thalamus, the cerebral cortex maintains wakefulness and contributes to selective attention. Extended consciousness begins with contributions from the cortex that help generate the core of self. These contributions are records from the memory bank of past activities, emotions, and experiences. Damage to the cortex may result in the loss of a specific ability, but not loss of consciousness itself. We have seen examples of these deficits in previous chapters. For instance, in Chapter 7, we came across patients with unilateral lesions to their parietal cortex: These people were not conscious of half of the space around them; that is, they suffered neglect.


TAKE-HOME MESSAGES


Consciousness

The problem of consciousness, otherwise known as the mind–brain problem, was originally the realm of philosophers. The basic question is, how can a purely physical system (the body and brain) construct conscious intelligence (the mind)? In seemingly typical human fashion, philosophers have adopted dichotomous perspectives: dualism and materialism. Dualism, famously expounded by Descartes, states that mind and brain are two distinct and separate phenomena, and that conscious experience is nonphysical and beyond the scope of the physical sciences. Materialism asserts that mind and body are both physical, and that by understanding the physical workings of the body and brain well enough, an understanding of the mind will follow. Within these philosophies, views differ on the specifics, but each side ignores an inconvenient problem. Dualism tends to ignore biological findings, and materialism overlooks the reality of subjective experience.

Notice that we have been throwing the word consciousness around without having defined it. Unfortunately, this has been a common problem and has led to much confusion in the literature. In both the 1986 and 1995 editions of the International Dictionary of Psychology, the psychologist Stuart Sutherland defined consciousness as follows:

Consciousness The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.

Harvard psychologist Steven Pinker also was confused by the different uses of the word: Some said that only man is conscious; others said that consciousness refers to the ability to recognize oneself in a mirror; some argued that consciousness is a recent invention by man or that it is learned from one’s culture. All these viewpoints provoked him to make this observation:

Something about the topic of consciousness makes people, like the White Queen in Through the Looking Glass, believe six impossible things before breakfast. Could most animals really be unconscious—sleepwalkers, zombies, automata, out cold? Hath not a dog senses, affections, passions? If you prick them, do they not feel pain? And was Moses really unable to taste salt or see red or enjoy sex? Do children learn to become conscious in the same way that they learn to wear baseball caps turned around? People who write about consciousness are not crazy, so they must have something different in mind when they use the word. (Pinker, 1997, p. 133)

In reviewing the work of the linguist Ray Jackendoff of Brandeis University and the philosopher Ned Block at New York University, Pinker pulled together a framework for thinking about the problem of consciousness in his book How the Mind Works (1997). The proposal for ending this consciousness confusion consists of breaking the problem of consciousness into three issues: self-knowledge, access to information, and sentience. Pinker summarized and embellished the three views as follows:

Self-knowledge: Among the long list of people and objects that an intelligent being can have accurate information about is the being itself. As Pinker said, “I can not only feel pain and see red, but think to myself, ‘Hey, here I am, Steve Pinker, feeling pain and seeing red!’” Pinker says that self-knowledge is no more mysterious than any other topic in perception or memory. He does not believe that “navel-gazing” has anything to do with consciousness in the sense of being alive, awake, and aware. It is, however, what most academic discussions have in mind when they banter about consciousness.

Access to information: Access awareness is the ability to report on the content of mental experience without the capacity to report on how the content was built up by all the neurons, neurotransmitters, and so forth, in the nervous system. The nervous system has two modes of information processing: conscious processing and unconscious processing. Conscious processing can be accessed by the systems underlying verbal reports, rational thought, and deliberate decision making and includes the product of vision and the contents of short-term memory. Unconscious processing, which cannot be accessed, includes autonomic (gut-level) responses, the internal operations of vision, language, motor control, and repressed desires or memories (if there are any).

Sentience: Pinker considers sentience to be the most interesting meaning of consciousness. It refers to subjective experience, phenomenal awareness, raw feelings, and the first-person viewpoint—what it is like to be or do something. Sentient experiences are called qualia by philosophers and are the elephant in the room ignored by the materialists. For instance, philosophers are always wondering what another person’s experience is like when they both look at the same color. In a paper spotlighting qualia, philosopher Thomas Nagel famously asked, “What is it like to be a bat?” (1974), which makes the point that if you have to ask, you will never know. Explaining sentience is known as the hard problem of consciousness. Some think it will never be explained.

By breaking the problem of consciousness into these three parts, cognitive neuroscience can be brought to bear on the topic of consciousness. Through the lens of cognitive neuroscience, much can be said about access to information and self-knowledge, but the topic of sentience remains elusive.

FIGURE 14.2 Blindsight.
Weiskrantz and colleagues reported the first case of blindsight in a patient with a lesion in the visual cortex. The hatched areas indicate the preserved areas of vision for the left and right eyes of patient D.B.

Conscious Versus Unconscious Processing and the Access of Information

We have seen throughout this book that the vast majority of mental processes that control and contribute to our conscious experience happen outside of our conscious awareness. An enormous amount of research in cognitive science clearly shows that we are conscious only of the content of our mental life, not what generates the content. For instance, we are aware of the products of mnemonic processing and of perceptual processing, not of what produced those products. Thus, when considering conscious processes, it is also necessary to consider unconscious processes and how the two interact. A statement about conscious processing involves conjunction—putting together awareness of the stimulus with the identity, or the location, or the orientation, or some other feature of the stimulus. A statement about unconscious processing involves disjunction—separating awareness of the stimulus from the features of the stimulus such that even when unaware of the stimulus, participants can still respond to stimulus features at an above-chance level.

When Ned Block originally drew distinctions between sentience and access, he suggested that the phenomenon of blindsight provided an example where one existed without the other. Blindsight, a term coined by Larry Weiskrantz at Oxford University (1974; 1986), refers to the phenomenon that patients suffering a lesion in their visual cortex can respond to visual stimuli presented in the blind part of their visual field (Figure 14.2). Most interestingly, these activities happen outside the realm of consciousness. Patients will deny that they can do a task, yet their performance is clearly above chance. Such patients have access to information but do not experience it.

Weiskrantz believed that subcortical and parallel pathways and centers could now be studied in the human brain. A vast primate literature had already developed on the subject. Monkeys with occipital lesions not only can localize objects in space but also can make color, luminance, orientation, and pattern discriminations. It hardly seemed surprising that humans could use visually presented information not accessible to consciousness. Subcortical networks with interhemispheric connections provided a plausible anatomy on which the behavioral results could rest.

Since blindsight demonstrates vision outside the realm of conscious awareness, this phenomenon has often been invoked as support for the view that perception happens in the absence of sensation, for sensations are presumed to be our experiences of impinging stimuli. Because the primary visual cortex processes sensory inputs, advocates of the secondary pathway view have found it useful to deny the involvement of the primary visual pathway in blindsight. Certainly, it would be easy to argue that perceptual decisions or cognitive activities routinely result from processes outside of conscious awareness. But it would be difficult to argue that such processes do not involve primary sensory systems.

Evidence supports the notion that the primary sensory systems are still involved. Involvement of the damaged primary pathway in blindsight has been demonstrated by Mark Wessinger and Robert Fendrich at Dartmouth College (Fendrich et al., 1992). They investigated this fascinating phenomenon using a dual Purkinje image eye tracker that was augmented with an image stabilizer, allowing for the sustained presentation of information in discrete parts of the visual field (Figure 14.3). Armed with this piece of equipment and with the cooperation of C.L.T., a robust 55-year-old outdoorsman who had suffered a right occipital stroke 6 years before his examination, they began to tease apart the various explanations for blindsight.

FIGURE 14.3 Schematic of the Purkinje image eye tracker.
The eye tracker compensates for a subject’s eye movements by moving the image in the visual field in the same direction as the eyes, thus stabilizing the image on the retina.

Standard perimetry indicated that C.L.T. had a left homonymous hemianopia with lower-quadrant macular sparing. Yet the eye tracker found small regions of residual vision (Figure 14.4). C.L.T.’s scotoma was explored carefully, using high-contrast, retinally stabilized stimuli and an interval, two-alternative, forced-choice procedure. This procedure requires that a stimulus be presented on every trial and that the participant respond on every trial, even though he denies having seen a stimulus. Such a design is more sensitive to subtle influences of the stimulus on the participant’s responses. C.L.T. also indicated his confidence on every trial. The investigators found regions of above-chance performance surrounded by regions of chance performance within C.L.T.’s blind field. Simply stated, they found islands of blindsight.
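
The statistical logic behind these “islands” can be made concrete in a few lines of code. The sketch below (a minimal illustration with hypothetical trial counts, not the investigators’ actual analysis) uses a one-sided binomial test to ask whether detection at a single test location in a two-alternative forced-choice task exceeds the 50% expected from guessing.

```python
# Minimal sketch: is detection at one test location better than chance (50%)
# in a two-alternative forced-choice task? Trial counts here are hypothetical.
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, alpha=0.05):
    """One-sided binomial test of accuracy against chance guessing (p = 0.5)."""
    return binomtest(n_correct, n_trials, p=0.5, alternative='greater').pvalue < alpha

print(above_chance(39, 50))  # True: 78% correct is very unlikely under guessing
print(above_chance(27, 50))  # False: 54% correct is consistent with chance
```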

Magnetic resonance imaging (MRI) reconstructions revealed a lesion that damaged the calcarine cortex, which is consistent with C.L.T.’s clinical blindness. But MRI also demonstrated some spared tissue in the region of the calcarine fissure. We assume that this tissue mediates C.L.T.’s central vision with awareness. Given this, it seems reasonable that similar tissue mediates C.L.T.’s islands of blindsight. More important, both positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) conclusively demonstrated that these regions are metabolically active—these areas are alive and processing information! Thus, the most parsimonious explanation for C.L.T.’s blindsight is that it is directed by spared, albeit severely dysfunctional, remnants of the primary visual pathway rather than by a more general secondary visual system.

Before it can be asserted that blindsight is due to subcortical or extrastriate structures, we first must be extremely careful to rule out the possibility of spared striate cortex. With careful perimetric mapping, it is possible to discover regions of vision within a scotoma that would go undetected with conventional perimetry. Through such discoveries, we can learn more about consciousness.

FIGURE 14.4 Results of stabilized image perimetry in C.L.T.’s left visual hemifield.
Each test location is represented by a circle. The number in a circle represents the percentage of correct detections. The number under the circle indicates the number of trials at each location. White circles are unimpaired detection, green circles are impaired detection that was above the level of chance, and purple circles indicate detection that was no better than chance.

Reports of vision without awareness in other neurological populations can similarly inform us about consciousness. It is commonplace to design demanding perceptual tasks on which both neurological and nonneurological participants routinely report low confidence values but perform at a level above chance. Yet it is unnecessary to propose secondary visual systems to account for such reports, since the primary visual system is intact and fully functional. For example, patients with unilateral neglect (see Chapter 7) as a result of right-hemisphere damage are unable to name stimuli entering their left visual field. The conscious brain cannot access this information. When asked to judge whether two lateralized visual stimuli, one in each visual field, are the same or different (Figure 14.5), however, these same patients can do so. When they are questioned on the nature of the stimuli after a trial, they easily name the stimulus in the right visual field but deny having seen the stimulus in the neglected left field. In short, patients with parietal lobe damage but spared visual cortex can make perceptual judgments outside of conscious awareness. Their failure to consciously access information for comparing the stimuli should not be attributed to processing within a secondary visual system, because their geniculostriate pathway is still intact. They lost the function of a chunk of parietal cortex, and because of that loss, they lost a chunk of conscious awareness.

The Extent of Subconscious Processing

A variety of reports extended these initial observations that information presented in the extinguished visual field can be used for decision making. In fact, quite complex information can be processed outside of conscious awareness (Figure 14.6). In one study of patients with neglect following right-hemisphere damage, a picture of a fruit or an animal was quickly presented to the right visual field. Subsequently, a picture of the same item or of an item in the same category was presented to the left visual field. In another condition, the pictures presented in each field had nothing to do with each other (Volpe et al., 1979). All patients in the study denied that a stimulus had been presented in the left visual field. When the two pictures were related, however, patients responded faster than they did when the pictures were different. The reaction time to the unrelated pictures did not increase. In short, high-level information was being exchanged between processing systems, outside the realm of conscious awareness.
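
The inference in such studies rests on a simple reaction-time comparison. Here is a minimal sketch of that analysis in Python, with invented reaction times rather than Volpe and colleagues’ data: if responses to related pairs are reliably faster, information from the denied left-field stimulus must have been processed.

```python
# Sketch of the reaction-time comparison (invented values, not Volpe et al.'s
# data): reliably faster related-pair responses imply the denied stimulus
# was processed outside of awareness.
from scipy.stats import ttest_ind

related_rt   = [612, 598, 640, 605, 587, 620]   # ms, related picture pairs
unrelated_rt = [671, 702, 688, 655, 690, 664]   # ms, unrelated picture pairs

stat, p = ttest_ind(related_rt, unrelated_rt)
print(f"t = {stat:.2f}, p = {p:.4f}")  # a reliable difference indicates priming
```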

The vast staging for our mental activities happens largely without our monitoring. The stages of this production can be identified in many experimental venues. The study of blindsight and neglect yields important insights. First, it underlines a general feature of human cognition: Many perceptual and cognitive activities can and do go on outside the realm of conscious awareness. We can access information of which we are not sentient. Further, this feature does not necessarily depend on subcortical or secondary processing systems: More than likely, unconscious processes related to cognitive, perceptual, and sensory-motor activities happen at the level of the cortex. To help understand how consciousness and unconsciousness interact within the cortex, it is necessary to investigate both conscious and unconscious processes in the intact, healthy brain.

FIGURE 14.5 The same–different paradigm presented to patients with neglect.
(a, b) The patient is presented with a single image, first to one hemifield, then to the other. The patient subsequently is asked to judge if the images are the same or different, a task that he is able to perform. (c, d) When the images are presented simultaneously to both hemifields, the patient with unilateral neglect is able to determine whether the images are the same or different, but cannot verbalize what image he saw in the extinguished hemifield that enabled him to make his correct comparison and decision.

Richard Nisbett and Lee Ross (1980) at the University of Michigan clearly made this point. In a clever experiment, using the tried-and-true technique of learning word pairs, they first exposed participants to word associations like ocean–moon. The idea is that participants might subsequently say “Tide” when asked to free-associate to the word detergent. That is exactly what they do, but they do not know why. When asked, they might say, “Oh, my mother always used Tide to do the laundry.” As we know from Chapter 4, that was their left-brain interpreter system coming up with an answer from the information that was available to it.

Now, any student will commonly and quickly declare that he is fully aware of how he solves a problem even when he really does not know. Students solve the famous Tower of Hanoi (Figure 14.7) problem all the time. When researchers listen to the running discourse of students articulating what they are doing and why they are doing it, the result can be used to write a computer program to solve the problem. The participant calls on facts known from short- and long-term memory. These events are accessible to consciousness and can be used to build a theory for their action. Yet no one is aware of how the events became established in short- or long-term memory. Problem solving is going on at two different levels, the conscious and the unconscious, but we are only aware of one.
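
As a point of comparison, here is what that consciously articulated strategy looks like once it is actually written down as a program: the standard recursive solution in Python (our illustration, not any particular participant’s protocol). For the three-ring puzzle in Figure 14.7, it prints exactly seven moves; note that the program, like the student’s verbal report, says nothing about how its own instructions came to be stored.

```python
# The explicit, step-by-step strategy turned into a program: the classic
# recursive Tower of Hanoi solution (an illustration, not a transcript of
# any participant's actual verbal protocol).
def hanoi(n, source, target, spare):
    """Move n rings from source to target, never placing a larger ring on a smaller one."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)          # clear the smaller rings out of the way
    print(f"move ring {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)          # restack them on the larger ring

hanoi(3, "A", "C", "B")  # three rings take 2**3 - 1 = 7 moves
```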

FIGURE 14.6 Category discrimination test presented to patients with neglect following right-hemisphere damage.
(a) A picture of an item, such as a cat, was flashed to the left visual field. (b) A picture of the same item or a related item, such as a dog, was presented to the right visual field, and the participant was asked to discriminate the category that the second item belonged to. If the items were related by category, the time needed to categorize the second item was shorter.

FIGURE 14.7 The Tower of Hanoi problem.
The task is to rebuild the rings on another tower without ever putting a larger ring on top of a smaller ring. It can be done in seven steps, and after much practice, students learn the task. After they have solved it, however, their explanations for how they solved it can be quite bizarre.

Cognitive psychologists also have examined the extent and kind of information that can be processed unconsciously. Freud staked out the most complex range, where the unconscious was hot and wet. Deep emotional conflicts are fought, and their resolution slowly makes its way to conscious experience. Other psychologists placed more stringent constraints on what can be processed. Many researchers maintain that only low-level stimuli—like the lines forming the letters of a word, not the word itself—can be processed unconsciously. Over the last century, these matters have been examined time and again; only recently has unconscious processing been examined in a cognitive neuroscience setting.

The classic approach was to use the technique of subliminal perception. Here a picture of a girl, either throwing a cake at someone or presenting the cake in a friendly manner, is flashed quickly. A neutral picture of the girl is presented subsequently, and the participant proves to be biased in judging the girl’s personality based on the subliminal exposures he received (Figure 14.8). Hundreds of such demonstrations have been recounted, although they are not easy to replicate. Many psychologists maintain that elements of the picture are captured subconsciously and that this result is sufficient to bias judgment.

FIGURE 14.8 Testing subliminal perception.
A participant is quickly shown just one picture of a girl, similar to the images in the top row, in such a way that the participant is not consciously aware of the picture’s content. The participant is then shown a neutral picture (bottom row) and is asked to describe the girl’s character. Judgments of the girl’s character have been found to be biased by the previous subthreshold presentation.

Cognitive psychologists have sought to reaffirm the role of unconscious processing through various experimental paradigms. A leader in this effort has been Tony Marcel of Cambridge University (1983a, 1983b). Marcel used a masking paradigm in which the brief presentation of either a blank screen or a word was followed quickly by a masking stimulus of a crosshatch of letters. One of two tasks followed presentation of the masking stimulus. In a detection task, participants merely had to choose whether a word had been presented. On this task, participants responded at a level of chance. They simply could not tell whether a word had been presented. If the task became a lexical decision task, however, the subliminally presented stimulus had effects. Here, following presentation of the masking stimulus, a string of letters was presented and participants had to specify whether the string formed a word. Marcel cleverly manipulated the subthreshold words in such a way that some were related to the word string and some were not. If there had been at least lexical processing of the subthreshold word, related words should elicit faster response times, and this is exactly what Marcel found.

FIGURE 14.9 Picture-to-word priming paradigm.
(a) During the study, either extended and unmasked (top) or brief and masked (bottom) presentations were used. (b) During the test, participants were asked to complete word stems (kan and bic were the word stems presented in this example). Priming performance was identical between extended and brief presentations. (c) Afterward, participants were asked if they remembered seeing the words as pictures. Here performance differed—participants usually remembered seeing the extended presentations but regularly denied having seen the brief presentations.

Since then, investigations of conscious and unconscious processing of pictures and words have been combined successfully into a single cross-form priming paradigm. This paradigm involves presenting pictures for study and word stems for the test (Figure 14.9). Using both extended and brief periods of presentation, the investigators also showed that such picture-to-word priming can occur with or without awareness. In addition to psychophysically setting the brief presentation time at identification threshold, a pattern mask was used to halt conscious processing. Apparently not all processing was halted, however, because priming occurred equally well under both conditions. Given that participants denied seeing the briefly presented stimuli, unconscious processing of the pictures must have primed their word-stem completions. In other words, they were extracting conceptual information from the pictures, even without consciously seeing them. How often does this happen in everyday life? Considering the complexity of the visual world, and how rapidly our eyes look around, briefly fixating from object to object (about 100–200 ms), this situation probably happens quite often! These data further underscore the need to consider both conscious and unconscious processes when developing a theory of consciousness.

Gaining Access to Consciousness

As cognitive neuroscientists make further attempts to understand the links between conscious and unconscious processing, it becomes clear that these phenomena remain elusive. We now know that obtaining evidence of subliminal perception depends on whether subjective or objective criteria set the threshold. When the criteria are subjective (i.e., introspective reports from each subject), priming effects are evident. When criteria are set objectively by requiring a forced choice as to whether a participant saw any visual information, no priming effects are seen. Among other things, these studies point out the gray area between conscious and unconscious. Thresholds clearly vary with the criteria.
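
Signal detection theory gives the objective criterion a concrete form. The sketch below (with illustrative numbers, not data from any particular study) computes d', the standard sensitivity index: when hits and false alarms in a forced-choice task are nearly equal, d' is near zero and the participant is objectively at threshold, whatever his or her introspective report says.

```python
# A sketch of the objective threshold criterion: the signal detection
# sensitivity index d'. The rates below are illustrative only.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(d_prime(0.51, 0.49))  # ~0.05: objectively no sensitivity to the stimulus
print(d_prime(0.80, 0.20))  # ~1.68: genuine sensitivity, even if reported as "unseen"
```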

Pinker (1997) presented an enticing analysis of how evolutionary pressures gave rise to access-consciousness. The general insight has to do with the idea that information has costs and benefits. He argued that at least three dimensions must be considered: the cost of space to store and process it, the cost of time to process and retrieve it, and the cost of resources—energy in the form of glucose—to process it. The point is that any complex organism is made up of matter, which is subject to the laws of thermodynamics, and there are restrictions on the information it accesses. To operate optimally within these constraints, only information relevant to the problem at hand should be allowed into consciousness, which seems to be how the brain is organized.

Access-consciousness has four obvious features that Pinker recounted. First, it is brimming with sensations: the shocking pink sunset, the fragrance of jasmine, the stinging of a stubbed toe. Second, we are able to move information into and out of our awareness and into and out of short-term memory by turning our attentional spotlight on it. Third, this information always comes with salience, some kind of emotional coloring. Finally, there is the “I” that calls the shots on what to do with the information as it comes into the field of awareness.

Jackendoff (1987) argued that for perception, access is limited to the intermediate stages of information processing. Luckily, we do not ponder the elements that go into a percept, only the output. Consider the patient described in Chapter 6, who could not see objects but could see faces, indicating that his face-processing module was intact. When this patient was shown a picture that arranged pieces of vegetables in such a way as to make them look like a face, he immediately said he saw the face but was totally unable to state that the eyes were garlic cloves and the nose a turnip. He had access only to the output of the module.

Concerning attention and its role in access, the work of Anne Treisman (1991) at Princeton University reveals that unconscious parallel processing can go only so far. Treisman proposed a candidate for the border between conscious and unconscious processes. In her famous pop-out experiments that we discussed in Chapter 7, a participant picks a prespecified object from a field of others. The notion is that each point in the visual field is processed for color, shape, and motion, outside of conscious awareness. The attention system then picks up elements and puts them together with other elements to make the desired percept. Treisman showed, for example, that when we are attending to a point in space and processing the color and form of that location, elements at unattended points seem to be floating. We can tell the color and shape, but we make mistakes about what color goes with what shape. Attention is needed to conjoin the results of the separate unconscious processes. The illusory conjunctions of stimulus features are first-glimpse evidence for how the attentional system combines elements into whole percepts.

We have discussed emotional salience in Chapter 10, and we will get to the “I” process in a bit. Before turning to such musings, let’s consider an often overlooked aspect of consciousness: the ability to move from conscious, controlled processing to unconscious, automatic processing. Such “movement” from conscious to unconscious is necessary when we are learning complex motor tasks such as riding a bike or driving a car, as well as for complex cognitive tasks such as verb generation and reading.

At Washington University in St. Louis, Marcus Raichle and Steven Petersen, two pioneers in the brain imaging field, proposed a “scaffolding to storage” framework to account for this movement (Petersen et al., 1998). Initially, according to their framework, we must use conscious processing during practice while developing complex skills (or memories)—this activity can be considered the scaffolding process. During this time, the memory is being consolidated, or the skill is being developed and honed. Once the task is learned, brain activity and brain involvement change. This change can be likened to the removal of the scaffolding: the support structures withdraw, and more permanent structures take over as the task is “stored” for use.

Petersen and Raichle demonstrated this scaffolding-to-storage movement in the awake, behaving human brain. Using PET, they scanned participants performing either a verb generation task, which was compared to simply reading verbs, or a maze-tracing task, which was compared to tracing a square. They clearly demonstrated that early, unlearned, conscious processing uses a much different network of brain regions than does later, learned, unconscious processing (Figure 14.10). They hypothesized that during learning, a scaffolding set of regions is used to handle novel task demands. Following learning, a different set of regions is involved, perhaps regions specific to the storage or representation of the particular skill or memory. Further, once this movement from conscious to unconscious has occurred (once the scaffolding is removed), it is sometimes difficult to reinitiate conscious processing. A classic example is learning to drive with a clutch. Early on, you have to consciously practice the steps of releasing the gas pedal while depressing the clutch, moving the shift lever, and slowly releasing the clutch while applying pressure to the gas pedal again—all without stalling the car. After a few jerky attempts, you know the procedures well: The process has been stored, but it is rather difficult to separate the steps.

Similar processes occur in learning other complex skills. Chris Chabris, a cognitive psychologist at Harvard University, has studied chess players as they progress from the novice to the master level (Chabris & Hamilton, 1992). During lightning chess, masters play many games simultaneously and very fast. Seemingly, they play by intuition as they make move after move after move, and in essence they are playing by intuition—“learned intuition,” that is. They intuitively know, without really knowing how they know, what the next best move is. For novices, such lightning play is not possible. They have to painstakingly examine the pieces and moves one by one (OK, if I move my knight over there, she will take my bishop; no, that won’t work. Let’s see, if I move the rook—no, then she will move her bishop and then I can take her knight... whoops, that will put me in check... hmmm). But after many hours of practice and hard work, as the novices develop into chess masters, they see and react to the chessboard differently. They now begin to view and play the board as a series of groups or clumps of pieces and moves, as opposed to separate pieces with serial moves. Chabris’s research has shown that during early stages of learning, the talking, language-based, left brain is consciously controlling the game. With experience, however, as the different moves and possible groupings are learned, the perceptual, feature-based, right brain takes over.

FIGURE 14.10 Activated areas of the brain change as tasks are practiced.
Based on positron emission tomography (PET) images, these eight panels show that practicing a task results in a shift in which regions of the brain are most active. (a) When confronted with a new verb generation task, areas in the left frontal region, such as the prefrontal cortex, are activated (green areas in leftmost panel). As the task is practiced, blood flow to these areas decreases (as depicted by the fainter color in the adjacent panel). In contrast, the insula is less active during naïve verb generation. With practice, however, activation in the insula increases, suggesting that with practice, activity in the insula replaces activity previously observed in the frontal regions. (b) An analogous shift in activity is observed elsewhere in the brain during a motor learning maze-tracing task. Activity in the premotor and parietal areas seen early in the maze task (red areas in leftmost panel) subsides with practice (fainter red in the adjacent panel) while increases in blood flow are then seen in the primary and supplementary motor areas as a result of practice.

For example, International Grandmaster chess player and two-time U.S. chess champion Patrick Wolff, who at age 20 defeated the world chess champion Garry Kasparov in 25 moves, was given 5 seconds to look at a picture of a chessboard with all the pieces set in a pattern that made chess sense. He was then asked to reproduce it, and he quickly and accurately did so, getting 25 out of 27 pieces in the correct position. Even a good player would place only about five pieces correctly. In a different trial, however, with the same board, the same number of pieces, but pieces in positions that didn’t make chess sense, he got only a few pieces right, just like a person who doesn’t play chess. Wolff’s original accuracy came from his right brain automatically matching up patterns that it had learned from years of playing chess.

Although neuroscientists may know that Wolff’s right-brain pattern perception mechanism is all coded, runs automatically, and is the source of this capacity, he did not. When he was asked about his ability, his left-brain interpreter struggled for an explanation: “You sort of get it by trying to, to understand what’s going on quickly and of course you chunk things, right?... I mean obviously, these pawns, just, but, but it, I mean, you chunk things in a normal way, like I mean one person might think this is sort of a structure, but actually I would think this is more, all the pawns like this....” When asked, the speaking left brain of the master chess player can assure us that it can explain how the moves are made, but it fails miserably to do so—as often happens when you try, for example, to explain how to use a clutch to someone who doesn’t drive a car with a standard transmission.

The transition of controlled, conscious processing to automatic, unconscious processing is analogous to the implementation of a computer program. Early stages require multiple interactions among many brain processes, including consciousness, as the program is written, tested, and prepared for compilation. Once the process is well under way, the program is compiled, tested, recompiled, retested, and so on. Eventually, as the program begins to run and unconscious processing begins to take over, the scaffolding is removed, and the executable file is uploaded for general use.

This theory seems to imply that once conscious processing has effectively allowed us to move a task to the realm of the unconscious, we no longer need conscious processing. This transition would allow us to perform that task unconsciously and allow our limited conscious processing to turn to another task. We could unconsciously ride our bikes and talk at the same time.

One evolutionary goal of consciousness may be to improve the efficiency of unconscious processing. The ability to relegate learned tasks and memories to unconsciousness allows us to devote our limited consciousness resources to recognizing and adapting to changes and novel situations in the environment, thus increasing our chances of survival.

Sentience

Neurologist Antonio Damasio (2011) defines consciousness as a mind state in which the regular flow of mental images (defined as mental patterns in any of the sensory modalities) has been enriched by subjectivity, meaning mental images that represent body states. He suggests that various parts of the body continuously signal the brain and are signaled back by the brain in a perpetual resonant loop. Mental images about the self—that is, the body—are different from other mental images. They are connected to the body, and as such they are “felt.” Because these images are felt, an organism is able to sense that the contents of its thoughts are its own: They are formulated in the perspective of the organism, and the organism can act on those thoughts. This form of self-awareness, however, is not meta self-awareness, or being aware that one is aware of oneself. Sentience does not imply that an organism knows it is sentient.

Neurons, Neuronal Groups, and Conscious Experience

Neuroscientists interested in higher cognitive functions have been extraordinarily innovative in analyzing how the nervous system enables perceptual activities. Recording from single neurons in the visual system, they have tracked the flow of visual information and how it becomes encoded and decoded during a perceptual activity. They have also directly manipulated the information and influenced an animal’s decision processes. One of the leaders in this approach to understanding the mind is William Newsome at Stanford University.

Newsome has studied how neural events in area MT of the monkey cortex, which is actively involved in motion detection, correlate with the actual perceptual event (Newsome et al., 1989). One of his first findings was striking. The animal’s psychophysical capacity to discriminate motion could be predicted from the response pattern of a single neuron (Figure 14.11). In other words, a single neuron in area MT was as sensitive to changes in the visual display as was the monkey.

FIGURE 14.11 Motion discrimination can be predicted by a single-neuron response pattern.
Motion stimuli, with varying levels of coherent motion, were presented to rhesus monkeys trained in a task to discriminate the direction of motion. The monkey’s decision regarding the direction of apparent motion and the responses of 60 single middle temporal visual area (MT) cells (which are selective for direction of motion) were recorded and compared to the stimulus coherence on each trial. On average, individual cells in MT were as sensitive as the entire monkey. In subsequent work, the firing rate of single cells predicted (albeit weakly) the monkey’s choice on a trial-by-trial basis.
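
The neuron-versus-monkey comparison rests on an ROC analysis: given only the cell’s spike counts, how often could an ideal observer judge the direction of motion correctly? The Python sketch below, with invented spike counts, shows the core computation; the area under the ROC curve is the “neurometric” percent correct that is compared with the animal’s psychometric performance.

```python
# ROC logic behind comparing a single MT neuron to the monkey (spike counts
# invented for illustration): the area under the ROC curve estimates the
# probability that an ideal observer, shown one trial of each type, picks
# the true direction of motion from the spike counts alone.
import numpy as np

preferred     = np.array([28, 41, 33, 45, 30, 38])  # spikes, preferred-direction motion
antipreferred = np.array([22, 31, 25, 36, 24, 29])  # spikes, opposite-direction motion

wins = (preferred[:, None] > antipreferred[None, :]).sum()   # pairwise comparisons
ties = (preferred[:, None] == antipreferred[None, :]).sum()  # ties count half
auc = (wins + 0.5 * ties) / (preferred.size * antipreferred.size)
print(f"neurometric percent correct ~ {auc:.2f}")  # compare to the psychometric curve
```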

This finding stirred the research community because it raised a fundamental question about how the brain does its job. Newsome’s observation challenged the common view that the signal averaging that surely goes on in the nervous system eliminated the noise carried by individual neurons. From this view, the decision-making capacity of pooled neurons should be superior to the sensitivity of single neurons. Yet Newsome did not side with those who believe that a single neuron is the source for any one behavioral act. It is well known that killing a single neuron, or even hundreds of them, will not impair an animal’s ability to perform a task, so a single neuron’s behavior must be redundant.

An even more tantalizing finding, which is of particular interest to the study of conscious experience, is that altering the response rate of these same neurons by careful microstimulation can bias the animal’s decisions on a perceptual task. Maximum effects are seen during the interval in which the animal is thinking about the task. Newsome and his colleagues (Salzman et al., 1990; Celebrini & Newsome, 1995), in effect, inserted an artificial signal into the monkey’s nervous system and influenced how it thinks.

Based on this discovery, can the site of the microstimulation be considered the place where the decision is made? Researchers are not convinced that this is the way to think about the problem. Instead, they believe they have tapped into part of a neural loop involved with this particular perceptual discrimination. They argue that stimulation at different sites in the loop creates different subjective perceptual experiences. For example, let’s say that the stimulus was moving upward and the response was as if the stimulus were moving downward. If this were your brain, you might think you saw downward motion if the stimulation occurred early in the loop. If, however, the stimulation occurred late in the loop and merely found you choosing the downward response instead of the upward one, your sensation would be quite different. Why, you might ask yourself, did I do that?

This question raises the issue of the timing of consciousness. When do we become conscious of our thoughts, intentions, and actions? Do we consciously choose to act, and then consciously initiate an act? Or is an act initiated unconsciously, and only afterward do we consciously think we initiated it?

Benjamin Libet (1996), an eminent neuroscientist-philosopher, researched this question for nearly 35 years. In a groundbreaking and often controversial series of experiments, he investigated the neural time factors in conscious and unconscious processing. These experiments are the basis for his backward referral hypothesis. Libet and colleagues (Libet et al., 1979) concluded that awareness of a neural event is delayed approximately 500 milliseconds after the onset of the stimulating event and, more important, that this awareness is referred back in time to the onset of the stimulating event. To put it another way, you think that you were aware of the stimulus from its onset and are unaware of the time gap. Surprisingly, brain activity related to an action increased as much as 300 ms before participants reported the conscious intention to act. Using more sophisticated fMRI techniques, John-Dylan Haynes (Soon et al., 2008) showed that the outcomes of a decision can be encoded in brain activity up to 10 seconds before it enters awareness.

Fortunately, backward referral of our consciousness is not so delayed that we act without thinking. Enough time elapses between the awareness of the intent to act and the actual beginning of the act that we can override inappropriately triggered behavior. This ability to detect and correct errors is what Libet believes is the basis for free will.

FIGURE 14.12 Model of the conflict monitoring system.
Study participants were presented with two letters (S and/or H), one in red and the other in green. They were cued to respond to the red letter (dark black arrows) and asked whether it was an S or an H. Basic task-related components are in pink, and control-related components are in blue. The anterior cingulate cortex responds to conflict between the response units and directs the locus coeruleus (LC), which in turn increases the responsivity of multiple processing units (via the squares). Specifically, selective attention is modulated via the prefrontal cortex (PFC), and motor preparation is modulated via the response units. Patients with damage to control-related components, particularly the PFC, have problems recognizing and correcting their mistakes. The model was suggested by Gehring and Knight (2000).

Whether or not error detection and correction are indeed experimental manifestations of free will, such abilities have been linked to brain regions (Figure 14.12). Not all people can detect and correct errors adequately. In a model piece of brain science linking event-related potentials (ERPs) and patient studies, Robert Knight at the University of California, Berkeley, and William Gehring at the University of Michigan, Ann Arbor, characterized the role of the frontal lobe in checking and correcting errors (Gehring & Knight, 2000). By comparing and contrasting the performance of patients and healthy volunteers on a letter discrimination task, they conclusively demonstrated that the lateral prefrontal cortex was essential for corrective behavior. The task was arranged such that flanking “distracters” often disrupted responses to “targets.” Healthy volunteers showed the expected “corrective” neural activity in the anterior cingulate (see Chapter 12). Patients with lateral prefrontal damage also showed the corrective activity for errors. The patients, however, also showed the same sort of “corrective” activity for non-errors; that is, patients could not distinguish between errors and correct responses. It seems that patients with lateral prefrontal damage no longer have the ability to monitor and integrate their behavior across time. Perhaps they have even lost the ability to learn from their mistakes. It is as if they are trapped in the moment, unable to go back yet unable to decide to go forward. They seem to have lost a wonderful and perhaps uniquely human benefit of consciousness—the ability to escape from the here and now of linear time, or to “time-shift” away.
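The loop sketched in Figure 14.12 can be made concrete in a few lines. What follows is a minimal sketch, not Gehring and Knight’s published model: conflict is scored as the coactivation of two incompatible response units (a measure popularized in later computational treatments of the anterior cingulate), and detected conflict shifts attention toward the cued letter on the next trial. All unit names and parameter values are illustrative.

```python
# Minimal conflict-monitoring loop in the spirit of Figure 14.12.
# Two response units compete: one driven by the cued red letter (target),
# one by the green flanker (distracter). Conflict = coactivation of the two;
# detected conflict raises selective attention (the ACC -> LC -> PFC route).

def run_trial(target_drive, distracter_drive, attention):
    s_unit = attention * target_drive              # task-relevant response
    h_unit = (1.0 - attention) * distracter_drive  # competing response
    conflict = s_unit * h_unit                     # high when both are active
    return s_unit, h_unit, conflict

attention = 0.5                 # start with attention split evenly
for trial in range(1, 6):
    s, h, conflict = run_trial(target_drive=0.8, distracter_drive=0.8,
                               attention=attention)
    print(f"trial {trial}: conflict = {conflict:.3f}, "
          f"margin for correct response = {s - h:+.3f}")
    attention = min(1.0, attention + 2.0 * conflict)  # control boosts focus
```

On the first trial the two responses are fully coactive (the flanker effect); the monitored conflict then drives up selective attention, and the margin favoring the correct response grows on later trials. On this picture, what patients with lateral prefrontal damage appear to have lost is precisely the use of such a signal to adjust behavior across time.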


TAKE-HOME MESSAGES


The Emergence of the Brain Interpreter in the Human Species

The brain’s modular organization has now been well established. The functioning modules do have some kind of physical instantiation, but brain scientists cannot yet specify the exact nature of the neural networks. It is clear that these networks operate mainly outside the realm of awareness, each providing specialized bits of information. Yet, even with the insight that many of our cognitive capacities appear to be automatic, domain-specific operations, we feel that we are in control. Despite knowing that these modular systems are beyond our control and fully capable of producing behaviors, mood changes, and cognitive activity, we think we are a unified conscious agent—an “I” with a past, a present, and a future. With all of this apparently independent activity running in parallel, what allows for the sense of conscious unity we possess?

A private narrative appears to take place inside us all the time. It consists partly of the effort to tie together into a coherent whole the diverse activities of thousands of specialized systems that we have inherited through evolution to handle the challenges presented to us each day from both environmental and social situations. Years of research have confirmed that humans have a specialized process to carry out this interpretive synthesis, and, as we discussed in Chapter 4, it is located in the brain’s left hemisphere. This system, called the interpreter, is most likely cortically based and works largely outside of conscious awareness. The interpreter makes sense of all the internal and external information that is bombarding the brain. Asking how one thing relates to another, looking for cause and effect, it offers up hypotheses, makes order out of the chaos of information, and creates a running narrative. The interpreter is the glue that binds together the thousands of bits of information from all over the cortex into a cause-and-effect, “makes sense” narrative: our personal story. It explains why we do the things we do, and why we feel the way we do. Our dispositions, emotional reactions, and past learned behavior are all fodder for the interpreter. If some action, thought, or emotion doesn’t fit in with the rest of the story, the interpreter will rationalize it (I am a really cool, macho guy with tattoos and a Harley and I got a poodle because... ah, um... my great grandmother was French).

The interpreter, however, can use only the information that it receives. For example, a patient with Capgras’ syndrome will recognize a familiar person but will insist that an identical double or an alien has replaced the person, and they are looking at an imposter. In this syndrome, it appears that the emotional feelings for the familiar person are disconnected from the representation of that person. A patient will be looking at her husband, but she feels no emotion when she sees him. The interpreter has to explain this phenomenon. It is receiving the information from the face identification module (“That’s Jack, my husband”), but it is not receiving any emotional information. The interpreter, seeking cause and effect, comes up with a solution: “It must not really be Jack, because if it really were Jack I’d feel some emotion, so he is an imposter!”

The interpreter is a system of primary importance to the human brain. Interpreting the cause and effect of both internal and external events enables the formation of beliefs, which are mental constructs that free us from simply responding to stimulus–response aspects of everyday life. When a stimulus, such as a pork roast, is placed in front of your dog, he will scarf it down. When you are faced with such a stimulus, however, even if you are hungry you may not partake if you have a belief that it is unhealthy, or that you should not eat animal products, or your religious beliefs forbid it. Your behavior can hinge on a belief.

Looking at the past decades of split-brain research, we find one unalterable fact. Disconnecting the two cerebral hemispheres, an event that finds one half of the cortex no longer interacting in a direct way with the other half, does not typically disrupt the cognitive-verbal intelligence of these patients. The left dominant hemisphere remains the major force in their conscious experience and that force is sustained, it would appear, not by the whole cortex but by specialized circuits within the left hemisphere. In short, the inordinately large human brain does not render its unique contributions simply by being a bigger brain, but by the accumulation of specialized circuits.

We now understand that the brain is a constellation of specialized circuits. We know that beyond early childhood, our sense of being conscious never changes. We know that when we lose function in particular areas of our cortex, we lose awareness of what that area processes. Consciousness is not another system but a felt awareness of the products of processing in various parts of the brain. It reflects the affective component of specialized systems that have evolved to enable human cognitive processes. With an inferential system in place, we have a system that empowers all sorts of mental activity.

Left- and Right-Hemisphere Consciousness

Because of the processing differences between the hemispheres, the quality of consciousness emanating from each hemisphere might be expected to differ radically. Although left-hemisphere consciousness would reflect what we mean by normal conscious experience, right-hemisphere consciousness would vary as a function of the specialized circuits that the right half of our brain possesses. Mind Left, with its complex cognitive machinery, can distinguish between sorrow and pity and appreciate the feelings associated with each state. Mind Right does not have the cognitive apparatus for such distinctions and consequently has a narrower state of awareness. Consider the following examples of reduced capacity in the right hemisphere and the implications they have for consciousness.

Split-brain patients without right-hemisphere language have a limited capacity for responding to patterned stimuli. The capacity ranges from none whatsoever to the ability to make simple matching judgments above the level of chance. Patients with the capacity to make perceptual judgments not involving language were unable to make a simple same–different judgment within the right brain when both the sample and the match were lateralized simultaneously. Thus, when a judgment of sameness was required for two simultaneously presented figures, the right hemisphere failed.

This minimal profile of capacity stands in marked contrast to patients with right-hemisphere language. One patient, J.W., who after his surgery initially was unable to access speech from the right hemisphere, years later developed the ability to understand language and had a rich right-hemisphere lexicon (as assessed by the Peabody Picture Vocabulary Test and other special tests). Patients V.P. and P.S. could understand language and speak from each half of the brain. Would this extra skill give the right hemisphere greater ability to think, to interpret the events of the world?

It turns out that the right hemispheres of both patient groups (those with and without right-hemisphere language) are poor at making simple inferences. When shown two pictures, one after the other (e.g., a picture of a match and a picture of a woodpile), the patient (or the right hemisphere) cannot combine the two elements into a causal relation and choose the proper result (i.e., a picture of a burning woodpile as opposed to a picture of a woodpile and a set of matches). In other testing, simple words are presented serially to the right side of the brain. The task is to infer the causal relation between the two lexical elements and pick the answer from six possible answers in full view of the participant. A typical trial consists of words like pin and finger being flashed to the right hemisphere, and the correct answer is bleed. Even though the patient (right hemisphere) can always find a close lexical associate of the words used, he cannot make the inference that pin and finger should lead to bleed.

In this light, it is hard to imagine that the left and right hemispheres have similar conscious experiences. The right cannot make inferences, so it has limited awareness. It deals mainly with raw experience in an unembellished way. The left hemisphere, though, is constantly—almost reflexively—labeling experiences, making inferences as to cause, and carrying out a host of other cognitive activities. The left hemisphere is busy differentiating the world, whereas the right is simply monitoring it.

Is Consciousness a Uniquely Human Experience?

Humans, chimpanzees, and bonobos have a common ancestor, so it is reasonable to assume that we share many perceptual, behavioral, and cognitive skills. If our conscious state has evolved as a product of our brain’s biology, is it possible that our closest relatives might also possess this mental attribute, or at least a developing form of it?

One way to tackle the question of nonhuman primate consciousness would be to compare different species’ brains to those of humans. Comparing the species on a neurological basis has proven to be difficult, though it has been shown that the human prefrontal cortex is much larger in area than that of other primates. Another approach, instead of comparing pure biological elements, is to focus on the behavioral manifestation of the brain in nonhuman primates. This approach parallels that of developmental psychologists, who study the development of self-awareness and theory of mind (see Chapter 13) in children. It draws from the idea that children develop abilities that outwardly indicate conscious awareness of themselves and their environment.

FIGURE 14.13 Evidence for self-awareness in chimpanzees.
When initially presented with a mirror, chimpanzees react to it as if they are confronting another animal. After 5 to 30 minutes, however, chimpanzees will engage in self-exploratory behaviors, indicating that they know they are indeed viewing themselves.

Trying to design a test to demonstrate self-awareness in animals has proven difficult. In the past, it was approached from two angles: one is mirror self-recognition (MSR), and the other is imitation. Gordon Gallup (1970) designed the MSR test and proposed that if an animal could recognize itself in a mirror, then it implies the presence of a self-concept and self-awareness (Gallup, 1982). Only a few members of a few species can pass the test. It develops in some chimps around puberty (Figure 14.13) and is present to a lesser degree in older chimps (Povinelli et al., 1993). Orangutans also may show MSR, but only the rare gorilla possesses it (Suarez & Gallup, 1981; Swartz, 1997). Children reliably develop MSR by age 2 (Amsterdam, 1972). Gallup’s suggestion that mirror self-recognition implies the presence of a self-concept and self-awareness has come under attack. For instance, Robert Mitchell (1997), a psychologist at Eastern Kentucky University, questioned what degree of self-awareness is demonstrated by recognizing oneself in the mirror. He points out that MSR requires only an awareness of the body, rather than any abstract concept of self. There is no need to invoke more than the matching of sensation to visual perception; people do not require attitudes, values, intentions, emotions, and episodic memory to recognize their own bodies in a mirror. Another problem with the MSR test is that some patients with prosopagnosia, although they have a sense of self, are unable to recognize themselves in a mirror; they think they are seeing someone else. So although the MSR test can indicate a degree of self-awareness, it is of limited value in evaluating just how self-aware an animal is. It does not answer the question of whether an animal is aware of its visible self only, or whether it is also aware of unobservable features.

Imitation provides another approach. If we can imitate another’s actions, then we are capable of distinguishing between our own actions and the other person’s. The ability to imitate is used as evidence for self-recognition in developmental studies of children. Although it has been searched for extensively, scant evidence has been found that other animals imitate. Most of the evidence in primates points to the ability to reproduce the result of an action, not to imitate the action itself (Tennie et al., 2006, 2010).

Another avenue has been the search for evidence of theory of mind, which has been extensive. In 2008, Josep Call and Michael Tomasello from the Max Planck Institute for Evolutionary Anthropology reviewed the research from the 30 years since Premack and Woodruff first asked whether chimpanzees have a theory of mind. Call and Tomasello concluded:

There is solid evidence from several different experimental paradigms that chimpanzees understand the goals and intentions of others, as well as the perception and knowledge of others. Nevertheless, despite several seemingly valid attempts, there is currently no evidence that chimpanzees understand false beliefs. Our conclusion for the moment is, thus, that chimpanzees understand others in terms of a perception–goal psychology, as opposed to a full-fledged, human-like belief–desire psychology.

What chimpanzees do not do is share intentionality (such as their beliefs and desires) with others, perhaps as a result of their different theory-of-mind capacity. On the other hand, children from about 18 months of age do (for a review, see Tomasello, 2005). Tomasello and Malinda Carpenter suggest that this ability to share intentionality is singularly important in children’s early cognitive development and is at the root of human cooperation. Chimpanzees can follow someone else’s gaze, can deceive, engage in group activities, and learn from observing, but they do it all on an individual, competitive basis. Shared intentionality in children transforms gaze following into joint attention, social manipulation into cooperative communication, group activity into collaboration, and social learning into instructed learning. That chimpanzees do not have the same conscious abilities makes perfect sense. They evolved under different conditions than the hominid line. They have always called the tropical forest home and have not had to adapt to many changes. Because they have changed very little since their lineage diverged from the common ancestor shared with humans, they are known as a conservative species. In contrast, many species have come and gone along the hominid lineage between Homo sapiens and the common ancestor. The human ancestors that left the tropical forest had to deal with very different environments when they migrated to woodlands, savanna, and beyond. Faced with adapting to radically different environments and social situations, they, unlike the chimpanzee lineage, underwent many evolutionary changes—one of which may well be shared intentionality.


TAKE-HOME MESSAGES


Abandoning the Concept of Free Will

Even after some visual illusions have been explained to us, we still see them (an example is Roger Shepard’s “Turning the Tables” illusion; see http://www.michaelbach.de/ot/sze_usshepardTables/index.html). Why does this happen? Our visual system contains hardwired adaptations that under standard viewing conditions allow us to view the world accurately. Knowing that we can tweak the interpretation of the visual scene by some artificial manipulations does not prevent our brain from manufacturing the illusion. It happens automatically. The same holds true for the human interpreter. Using its capacity for seeking cause and effect, the interpreter provides the narrative, which creates the illusion of a unified self and, with it, the sense that we have agency and “freely” make decisions about our actions. The illusion of a unified self calling the shots is so powerful that, just as with some visual illusions, no amount of analysis will change the feeling that we are in control, acting willfully and with purpose. Does what we have learned about the deterministic brain mechanisms that control our cognition undermine the concept of a self, freely willing actions? At the personal psychological level, we do not believe that we are pawns in the brain’s chess game. Are we? Are our cherished concepts of free will and personal responsibility an illusion, a sham?

One goal of this section is to challenge the concept of free will, yet to leave the concept of personal responsibility intact. The idea is that a mechanistic concept of how the mind works eliminates the need for the concept of free will. In contrast, responsibility is a property of human social interactions, not a process found in the brain. Thus, no matter how mechanistic and deterministic the views of brain function become, the idea of personal responsibility will remain intact. In what follows, we view brain–mind interactions as a multi-layered system (see Doyle & Csete, 2011) plunked down in another layer, the social world. The laws of the higher social layer, which include personal responsibility, constrain the lower layer (people) that the social layer is made of (Gazzaniga, 2013).

Another goal is to suggest that mental states emerge from stimulus-driven (bottom-up) neural activity that is constrained by goal-directed (top-down) neural activity. That is, a belief can constrain behavior. This view challenges the traditional idea that brain activity precedes conscious thought and that brain-generated beliefs do not constrain brain activity. This concept, that there is bidirectional causation, makes it clear that we must decode and understand the interactions among hierarchical levels (layers) of the brain (Mesulam, 1998) to understand the nature of brain-enabled conscious experience. These interactions are both anatomical (e.g., molecules, genes, cells, ensembles, mini-columns, columns, areas, lobes) and functional (e.g., unimodal, multimodal, and transmodal mental processing). Each brain layer animates the other, just as software animates hardware and vice versa. At the point of interaction between the layers, not in the staging areas within a single layer, is where phenomenal awareness arises—our feeling of free will. The freedom that is represented in a choice not to eat the jelly doughnut comes from an interaction between the mental layer belief (about health and weight) and the neuronal layer reward systems for calorie-laden food. The stimulus-driven pull sometimes loses out to a goal-directed belief in the battle to initiate an action: The mental layer belief can trump the pull to eat the doughnut because of its yummy taste. Yet the top layer was engendered by the bottom layer and does not function alone or without it.

If this concept is correct, then we are not living after the fact; we are living in real time. And there’s more: This view also implies that everything our mechanistic brain generates and changes (such as hypotheses, beliefs, and so forth) as we go about our business can influence later actions. Thus, what we call freedom is actually the gaining of more options that our mechanistic brain can choose from as we relentlessly explore our environment. Taken together, these ideas suggest that the concept of personal responsibility remains intact, and that brain-generated beliefs add further richness to our lives. They can free us from the sense of inevitability that comes with a deterministic view of the world (Gazzaniga, 2013).

Philosophical discussions about free will have gone on at least since the days of ancient Greece. Those philosophers, however, were handicapped by a lack of empirical information about how the brain functions. Today, we have a huge informational advantage over our predecessors that arguably makes past discussions obsolete. In the rest of the chapter, we examine the issue of determinism, free will, and responsibility in light of this modern knowledge.

Determinism and Physics

The idea that we aren’t in control of our actions all sounds like crazy academic talk. Your parents don’t believe it, and neither does the local district attorney. Who the heck came up with it? It all began when Isaac Newton wrote down Galileo’s laws of motion as algebraic equations and realized that the equations also described Kepler’s observations about planetary motion. Newton surmised that all the physical matter of the universe—everything from your chair to the moon—operates according to a set of fixed, knowable laws.

Physics class may not have put you into an existential crisis, but that’s what happened to the people in 17th-century England. If the universe and everything in it follows a set of determined laws, then everything must be determined, including people’s behavior. Determinism is the philosophical belief that all current and future events and actions, including human cognition, decisions, and behavior, are caused by preceding events combined with the laws of nature. The corollary, then, is that every event, action, and so on, can in principle be predicted in advance, if all parameters are known. Newton’s laws also work in reverse, which means that time does not have a direction. So you can also know about something’s past by looking at its present state. Determinists believe that the universe and everything in it are completely governed by causal laws and are predictable.
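The claim that Newton’s laws “work in reverse” can be stated precisely: The second law involves only a second derivative with respect to time, so replacing t with −t leaves the equation unchanged.

```latex
F = m\,\frac{d^2x}{dt^2}
\;\;\xrightarrow{\;t \,\to\, -t\;}\;\;
F = m\,\frac{d^2x}{d(-t)^2} = m\,\frac{d^2x}{dt^2}
```

A trajectory run backward therefore satisfies the same law as one run forward, which is why, on this view, the present state pins down the past as tightly as it pins down the future.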

No one really likes the ramifications of this idea. If the universe and everything in it are following causal laws and are predetermined, then that seems to imply that individuals are not personally responsible for their actions. Sure, cheat on the test; it was preordained at the big bang about 13.7 billion years ago. So what if he raped and killed your daughter—his neurons, which he has no control over, made him do it. Forgive and forget about it. Many scientists and determinists think this is the way things are. The rest of us just don’t believe it. If we were to be logical neuroscientists, however, shouldn’t we?

Well, the physicists who got us into this mess are shaking their heads. In fact, most physicists have given up on determinism. What happened? The conception of the physical universe and the physicist’s confidence in predicting its behavior changed dramatically in the early 1900s with the development of two new branches of physics: chaos theory and quantum mechanics.

Chaos

In 1889, French mathematician and physicist Jules Henri Poincaré gave the determinists pause when he made a major contribution to what had become known as “the three-body problem,” or “n-body problem,” which had been bothering mathematicians since Newton’s time. Newton’s laws, when applied to the motion of planets, were completely deterministic: They implied that if you knew the initial position and velocity of the planets, you could accurately determine their position and velocity in the future (or the past, for that matter). Although this proposal was true for simple astronomical systems with two bodies, it was not true for systems of three or more orbiting bodies with interactions among all three. Everyone at the time realized that measurements were never perfectly accurate, but this hadn’t bothered them much, because they assumed it was a measuring problem: Improve the precision of the initial measurement, and the precision of the predicted answer would improve in equal measure. All they needed was a better measuring device. Poincaré pointed out that no matter how carefully the initial measurement was done, it would never be infinitely precise. It would always contain a small degree of error, and even tiny differences in the initial measurements would produce substantially different results, far out of proportion to what would be expected mathematically. In these types of systems, now known as chaotic systems, this extreme sensitivity to initial conditions is called dynamic instability, or chaos. Poincaré’s findings were forgotten for about a half century. They didn’t see the light of day until they were rediscovered by a mathematician-turned-meteorologist, Edward Lorenz.

Lorenz was developing nonlinear models (models in which outputs are not directly proportional to inputs) to describe how an air current would rise and fall while being heated by the Sun. Having never heard of Poincaré’s systems with extreme sensitivity to initial conditions, he assumed that minute differences in input data were insignificant. He realized, however, that he was wrong. With only minute variations in his input data (initially he had rounded off the decimal 0.506127 to 0.506), his (deterministic) computer program produced wildly different results. Lorenz had rediscovered what is now known as chaos theory. In 1972, he gave a talk about how even tiny uncertainties would eventually overwhelm any calculations and defeat the accuracy of a long-term weather forecast. From this lecture, titled “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?,” came the term “butterfly effect” (O’Connor & Robertson, 2008). The problem with a chaotic system is that even though it is determined purely by mathematical laws, using the laws of physics to make precise long-term predictions is impossible, even in theory. Thus, for practical purposes, a deterministic process can be unpredictable. Chaotic behavior has been observed in many systems, including electrical circuits, population growth, and the dynamics of action potentials in neurons.
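Lorenz’s rounding accident is easy to reproduce. The sketch below uses the logistic map, a textbook chaotic system standing in for his weather model (the map itself is not Lorenz’s model), and truncates the starting value in exactly the spirit of his 0.506127-to-0.506 shortcut:

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_next = r * x * (1 - x), which is chaotic at r = 4.0.
r = 4.0
x_full, x_rounded = 0.506127, 0.506   # Lorenz-style truncation of the input

for step in range(1, 31):
    x_full = r * x_full * (1 - x_full)
    x_rounded = r * x_rounded * (1 - x_rounded)
    if step % 5 == 0:
        gap = abs(x_full - x_rounded)
        print(f"step {step:2d}: {x_full:.6f} vs {x_rounded:.6f} (gap {gap:.6f})")
```

For the first handful of iterations the two trajectories agree to several decimal places; after a dozen or so steps they are unrelated, even though every step is perfectly deterministic.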

Quantum Theory

Why had Poincaré’s work been lost from sight? At the time, most physicists’ attention was not focused on the macro world of planets and hurricanes, but on the micro world of atoms and subatomic particles. Physicists were in a dither because they had found that atoms didn’t obey the so-called universal laws of motion. How could Newton’s laws be fundamental universal laws, if atoms—the stuff objects are made of—didn’t obey the same laws as the objects themselves? As the brilliant and entertaining California Institute of Technology physicist Richard Feynman (1998) once pointed out, exceptions prove the rule... wrong. Newton’s laws must not be universal.

Quantum theory was developed to explain why an electron stays in its orbit, which could not be explained by either Newton’s laws or Maxwell’s laws of classical electromagnetism. In quantum theory, the Schrödinger equation is the equivalent of Newton’s laws (and it is time reversible). The Schrödinger equation has successfully described particles and atoms in molecules, and its insights have led to transistors and lasers. But here’s the rub: The Schrödinger equation cannot predict with certainty where the electron is in its orbit at any one point in time; instead, that location is expressed as a probability. This is because certain pairs of physical properties are related in such a way that both cannot be known precisely at the same time. In the case of the electron in orbit, the paired properties are position and momentum. The theoretical physicist Werner Heisenberg presented this as the uncertainty principle. Physicists, with their deterministic views, don’t like uncertainty, but they have been forced into a different way of thinking. Niels Bohr (1937) wrote, “The renunciation of the ideal of causality in atomic physics... has been forced upon us.” Systems theorist and emeritus professor at the State University of New York at Binghamton Howard Pattee (2001) describes the fundamental problem with causality: Because the microscopic equations of physics are time symmetric and therefore reversible, they cannot support the irreversible concept of causation. Heisenberg went even further when he wrote, “I believe that indeterminism, that is, the nonvalidity of rigorous causality, is necessary” (quoted in Isaacson, 2007, p. 332). Quantum mechanics made it clear to physicists that when considering fundamental matter, they needed to shift from an inherently deterministic to an inherently nondeterministic worldview.
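For reference, the two statements at issue can be written compactly. The time-dependent Schrödinger equation evolves the wave function ψ deterministically (and time-reversibly), while Heisenberg’s uncertainty relation bounds how precisely the paired properties of position and momentum can be known at once:

```latex
i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\psi
\qquad\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
```

The determinism lives in the evolution of ψ; the indeterminism enters when ψ is used to compute the probability of finding the electron at any particular place.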

Physics had stumbled onto the fact that the physical world is organized on more than one level, and each level has its own set of laws. Although the Newtonian laws of classical mechanics were able to explain the behavior of macroscopic systems, such as baseballs and skyscrapers, they were unable to describe the behavior of microscopic systems like atoms and subatomic particles. It seems that when quantum matter aggregates into macroscopic objects, a new system emerges that follows new laws. Thus a nondeterministic process (quantum mechanics) can give rise to things that are predictable (Newtonian laws), which in the three-body problem become unpredictable in a new sense. This view suggests there are different levels of organization, and those different levels have their own laws that can be understood only at the level being examined. Or, is it even more complicated? Do the levels interact, giving rise to yet another abstraction? This brings us to the topic of emergence.

Emergence

A complex system is one composed of many interconnected parts, such that when they self-organize into a single system, the resulting system exhibits one or more properties not obvious from the properties of the individual parts. Examples of complex systems are ant colonies, plant communities such as the chaparral, the brain, the climate, and human social structures. One (the whole) is said to emerge from the other (the individual parts), and the behavior, function, and other properties of the new whole system are different from, or more than, the sum of the parts. Emergence, then, is the arising of a new structure (previously nonexistent), with a new level of organization and new properties, that occurs during the self-organization of a complex system (Goldstein, 1999). It is a phenomenon of collective organization. Thus Newton’s laws are not fundamental but emergent: When quantum matter (which follows quantum laws) aggregates into macroscopic objects, a new level of organization emerges with its own set of laws, Newton’s laws.

The key to understanding emergence is to understand that there are “layers” of organization. For example, consider traffic. One layer of organization is car parts, such as a brake pad and a fan belt, but traffic is another layer of organization, composed of a bunch of cars, human drivers, location, time, weather, and so forth. There are two schools of thought on emergence. The hard deterministic view is that there is only “weak emergence,” in which the new properties arise as a result of interactions at an elemental level and the emergent property is reducible to its individual components. In short, you can predict one level from the next. From the viewpoint of weak emergence, Newton’s laws could be predicted from the laws of quantum mechanics, and vice versa; it’s just that we don’t yet know enough to do so. Or, using our example of looking at car parts, we could predict that the Harbor Freeway in LA, between Wilshire and West 7th Street, on Friday, May 25, at 2:15 p.m., will be (or will not be) bumper to bumper; we just don’t know how to do it yet. In “strong emergence,” on the other hand, the new property is irreducible and is more than the sum of its parts, and the laws of one level cannot be predicted by an underlying fundamental theory or from an understanding of the laws of another level of organization. Thus, from this viewpoint, Newton’s laws could not be predicted from quantum theory, nor could we predict the state of the freeway by looking at car parts. A new set of laws emerges that isn’t predicted from the parts alone. The whole is more than the sum of its parts.
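A toy example helps make the layering tangible. In Conway’s Game of Life, sketched below, the micro level is nothing but a grid of cells obeying one birth-and-survival rule, yet a macro-level object, the glider, emerges and travels across the grid, a behavior stated nowhere in the rule. (Whether such examples count as weak or strong emergence is precisely the dispute just described.)

```python
# Conway's Game of Life: simple local rules, emergent macro-level objects.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for gen in range(5):
    print(f"gen {gen}: {sorted(glider)}")
    glider = step(glider)
# After 4 generations the same five-cell shape reappears shifted by (1, 1):
# the "glider" exists only at the higher level of description.
```

Nothing in the rule mentions motion, yet “a glider moving diagonally” is a perfectly lawful description of what happens, just at a different level of organization than the cell-by-cell rule.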

Physicists don’t like the idea of unpredictable phenomena much, but many (not all) have come to accept that this is the way things are. That is, they accept “strong” emergence. One such physicist was Richard Feynman, who in his 1961 lectures to Caltech freshmen declared:

Yes! Physics has given up. We do not know how to predict what would happen in a given circumstance, and we believe now that it is impossible, that the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our earlier ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it.... So at the present time we must limit ourselves to computing probabilities. We say “at the present time,” but we suspect very strongly that it is something that will be with us forever—that it is impossible to beat that puzzle—that this is the way nature really is. (Feynman et al., 1995, p. 135)

Whether or not nature will always remain unpredictable to us, and whether emergence is weak or strong, most physicists would agree that at different levels of structure, there are different types of organization with unique types of interactions governed by their own laws; and that one emerges from the other. This reality, however, introduces a complicating issue for neuroscience research. The differences in neuronal organization between the human brain and the brains of other animals may result in different emergent properties.

FIGURE 14.14 The pyloric rhythm and pyloric circuit architecture of the spiny lobster.
(a) In the spiny lobster, the stomatogastric ganglion, which has a small number of neurons and a stereotyped motor pattern, produces the pyloric rhythm. The pyloric rhythm has a triphasic motor pattern, with bursts occurring first from the anterior burster (AB) neuron, which is electrically coupled to two pyloric dilator (PD) neurons. The next burst is from a lateral pyloric (LP) neuron, followed by a pyloric (PY) neuron. The recordings are made intracellularly from neurons in the stomatogastric ganglion. (b) A schematic representation of a simplified version of the underlying circuit. All synapses in the circuit are inhibitory. To generate the 20 million model circuits, the strengths of the seven synapses were varied, and five or six different versions of each neuron in the circuit were used.

Emergence is a common phenomenon accepted by many in physics, biology, chemistry, sociology, and even art, but hard determinism reigns in neuroscience. Why? Because neuroscientists look at the evidence, which suggests that the brain functions automatically and that our conscious experience is an after-the-fact experience, and from it they infer that neural processing produces mental states in a deterministic fashion. In their view, mental states, such as a belief, do not affect brain function or processing. Emergence is often seen as a way to sneak the mind in without having to explain how it works. In addition, emergence is inconsistent with experimental science explanations of the brain’s machinations. Emergence is not a mystical ghost in the machine, however; it is a ubiquitous phenomenon in nature. The job of the neuroscientist is to understand the relationship between one level of organization and another, not to deny that those levels exist. Viewing the organization of the brain as multileveled, and those levels as having emergent properties, has far-reaching implications for our understanding of brain function. Describing a property as emergent, however, does not explain that property or how it came to be. Instead, it allows us to identify the appropriate level of inquiry. Indeed, the central focus of modern mind–brain research should be to understand how the levels interact.

Conscious thought may be an emergent property, and concentrating on the firing of neurons might not tell us all we need to know to understand that phenomenon. Neuroscience has assumed that we can derive the macro story from the micro story. Neural reductionists hold that every mental state has a one-to-one relationship with some as yet undiscovered neural state. Can we take from neurophysiology what we know about neurons and neurotransmitters and come up with a deterministic model to predict conscious thoughts or psychology? Brandeis University neuroscientist Eve Marder’s work with spiny lobsters suggests this approach would not work (Prinz et al., 2004).

FIGURE 14.15 100,000 to 200,000 networks with very different cellular and synaptic properties generate the typical pyloric rhythm.
(a, b) Shown are the voltage traces from two model pyloric networks, which are very similar even though they are produced by circuits with very different membrane and synaptic properties. The permeabilities and conductances for various ions differ among the circuits.

Multiple Realizability

The spiny lobster has a simple nervous system. Marder has been studying the neural underpinnings of the motility patterns of the lobster’s gut (Figure 14.14). She has isolated the entire neural network, mapped every neuron and synapse, and modeled the synapse dynamics down to the level of neurotransmitter effects. From a neural reductionist perspective, she should be able to piece together all this information and describe the exact pattern of synapses and neurotransmitters that produces the behavior of the lobster gut. Her laboratory simulated the more than 20 million possible network combinations of synapse strengths and neuron properties for this relatively simple gut nervous system. After simulating all those combinations, Marder found that about 1 % to 2 % of them could lead to the appropriate dynamics that would create the motility pattern observed in nature. Even though that is a small percentage, it still means that this very simple nervous system has 100,000 to 200,000 different tunings that will produce exactly the same gut behavior at any given moment. That is, normal pyloric rhythms were generated by networks with very different cellular and synaptic properties (Figure 14.15). The idea that there are many ways to implement a system to produce one behavior is known as multiple realizability. In a hugely complex system such as the human brain, how many possible tunings might there be for a single behavior? Can single-unit recordings and molecular approaches alone ever reveal what is going on to produce human behavior? This is a profound problem for the reductionist neuroscientist, because Marder’s work shows that analyzing nerve circuits can tell us how the system could work, but not how it actually does work. Neuroscientists will have to figure out how, and at what level, to approach the nervous system to learn the deterministic rules for understanding it. Investigating one level alone, it appears, will not tell us all we need to know to predict how another level operates.
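Marder’s strategy (sweep the parameter space, simulate every configuration, and keep the ones that reproduce the target behavior) can be miniaturized. The sketch below is emphatically not her pyloric model: it sweeps a toy leaky integrate-and-fire neuron and counts how many distinct (input, leak, threshold) tunings yield essentially the same firing rate. The grid sizes, target rate, and tolerance are arbitrary choices for illustration.

```python
# Multiple realizability in miniature: many parameter tunings, one behavior.
# (Takes a few seconds: 512 configurations, each simulated for 1 s.)
import itertools

def firing_rate(i_ext, g_leak, v_th, dt=0.1, t_max=1000.0):
    """Spikes/s of a leaky integrate-and-fire neuron (time in ms)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (i_ext - g_leak * v)    # leaky integration of input current
        if v >= v_th:                     # threshold crossing: spike and reset
            v, spikes = 0.0, spikes + 1
    return spikes / (t_max / 1000.0)

target, tol = 100.0, 10.0   # accept any tuning firing at 100 +/- 10 spikes/s
grid = itertools.product(
    [0.50 + 0.25 * k for k in range(8)],   # input current
    [0.02 + 0.01 * k for k in range(8)],   # leak conductance
    [3.0 + 1.0 * k for k in range(8)])     # spike threshold

matches = [p for p in grid if abs(firing_rate(*p) - target) <= tol]
print(f"{len(matches)} of {8**3} tunings produce the same ~100 Hz output")
for i_ext, g_leak, v_th in matches[:3]:
    print(f"  e.g., i_ext={i_ext:.2f}, g_leak={g_leak:.2f}, v_th={v_th:.1f}")
```

Even this one-neuron caricature typically turns up a few dozen very different tunings that are behaviorally indistinguishable, which is Marder’s point writ small: Observing the behavior does not tell you which implementation produced it.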

Nobel Prize–winning physicist Philip Anderson (1972), in his seminal paper “More Is Different,” reiterated the idea that we can’t get the macro story from the micro story:

The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.

He later admonishes biologists,

The arrogance of the particle physicist and his intensive research may be behind us (the discoverer of the positron said “the rest is chemistry”), but we have yet to recover from that of some molecular biologists, who seem determined to try to reduce everything about the human organism to “only” chemistry, from the common cold and all mental disease to the religious instinct. Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.

Can Mental States Affect Brain Processing?

Let’s pull back from all this theory for the moment and remember what the brain is for. The brain is a decision-making device, guided by experience, that gathers and computes information in real time to inform its decisions. If the brain is a decision-making device and gathers information to inform those decisions, then can a mental state such as a belief, which is the result of some experience or some social interaction, affect or constrain the brain, and by so doing, influence its future mental states and behaviors?

Kathleen Vohs, a psychology professor at the Carlson School of Management in Minnesota, and Jonathan Schooler, a psychology professor at the University of California at Santa Barbara, have shown in a clever experiment that people behave better when they believe they have free will. An earlier survey of people in 36 countries had reported that more than 70 % agreed that their life was in their own hands. Other studies had shown that invoking a sense of personal accountability could change behavior (Harmon-Jones & Mills, 1999; Mueller & Dweck, 1998). Vohs and Schooler set out to test empirically whether people behave better when they believe they are free to act. In their study, college students, before taking a test, were given a series of sentences to think about that had either a deterministic bias, such as “Ultimately, we are biological computers—designed by evolution, built through genetics, and programmed by the environment,” or a free-will bias, such as “I am able to override the genetic and environmental factors that sometimes influence my behavior.” Then the students were given a computerized test. They were told that because of a glitch in the software, the answer to each question would pop up automatically; to prevent this from happening, they were asked to press a particular computer key. Thus it took extra effort not to cheat. What happened? The students who had read the determinist sentences were more likely to cheat than those who had read the sentences about free will. In essence, a mental state affected behavior. Vohs and Schooler (2008) suggested that disbelief in free will produces a subtle cue that exerting effort is futile, thus granting permission not to bother.

People prefer not to bother because bothering, in the form of self-control, requires exertion and depletes energy (Gailliot et al., 2007). Florida State University social psychologists Roy Baumeister and colleagues (2009) found that reading deterministic passages also resulted in more aggressive and less helpful behavior toward others. They suggest that a belief in free will may be crucial for motivating people to control their automatic impulses to act selfishly, and a significant amount of self-control and mental energy are required to override selfish impulses and restrain aggressive impulses. The mental state supporting the idea of voluntary actions had an effect on the subsequent action decision. It seems that not only do we believe we control our actions, but it is good for everyone to believe it. Although the notion that a belief affects behavior seems elementary to the man on the street, it is firmly denied by most neuroscientists. This view implies top-down causation, and in a neural reductionist world, a mental state cannot affect the determinist physical brain. But once again, the physicists have a warning for us. McGill University physicist Mario Bunge reminds us to take a more holistic approach:

[we] should supplement every bottom-up analysis with a top-down analysis, because the whole constrains the parts: just think of the strains in a component of a metallic structure, or the stress in a member of a social system, by virtue of their interactions with other constituents of the same system. (Bunge, 2010, p. 74)

Can a thought constrain the very brain that produced it? Does the whole constrain its parts? The classic puzzle is usually put this way (Figure 14.16): There is a physical state, P1, at time 1, which produces a mental state, M1. Then after a bit of time, now time 2, there is another physical state, P2, which produces another mental state, M2. How do we get from M1 to M2? This is the conundrum. We know that mental states are produced from processes in the brain, so that M1 does not directly generate M2 without involving the brain. If we just go from P1 to P2 and then to M2, then our mental life is doing no work and we are truly just along for the ride. No one really likes that notion. The tough question is, does M1, in some downward-constraining process, guide P2 and thus affect M2? Theoretical biologist David Krakauer at the University of Wisconsin helps us think about this by pointing out that when we program a computer,

we interface with a complex physical system that performs computational work. We do not program at the level of electrons, Micro B, but at a level of a higher effective theory, Macro A (for example, computer programming languages) that is then compiled down, without loss of information, into the microscopic physics. Thus A causes B. Of course, A is physically made from B, and all the steps of the compilation are just B with B physics. But from our perspective, we can view some collective B behavior in terms of A processes.... The deeper point is that without these higher levels, there would be no possibility of communication, as we would have to specify every particle we wish to move in the utterance, rather than have the mind-compiler do the work. (Gazzaniga, 2011, pp. 139–140)
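Krakauer’s compiler analogy can be run, not just read. In the sketch below (ordinary Python, chosen only for familiarity), the function is written at the macro level, his A; asking the interpreter to disassemble it exposes the micro level B of bytecode instructions that actually execute, a level no one programs by hand:

```python
import dis

def greet(name):
    # Level A: a human-meaningful description of what should happen.
    return "hello, " + name

# Level B: the micro-instructions the statement above is compiled down into.
dis.dis(greet)
```

Everything the function does is “just” bytecode, yet the programmer’s intent is expressed, and could only have been expressed, at level A. That is Krakauer’s deeper point about why the higher level is indispensable.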

FIGURE 14.16 Rethinking causality.
Details of this figure are in the text.

From this perspective, to control this teeming, seething system, emergence of another layer is necessary. The overall idea is that we have a variety of hierarchical emerging systems erupting from the level of particle physics to atomic physics to chemistry to biochemistry, to cell biology to physiology, emerging into mental processes. The deep challenge of science is to understand how all these different layers interact.

Howard Pattee (2010) has found a good biological example of upward and downward causation in the genotype-phenotype mapping of gene replication. Genotype-phenotype mapping

requires the gene to describe the sequence of parts forming enzymes, and that description, in turn, requires the enzymes to read the description... In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).

When the brain is viewed as a layered system (see Doyle & Csete, 2011), we realize the reasoning trap we can easily fall into when we consider Libet’s findings (discussed earlier) that neural events associated with a physical response occur way before a person is consciously aware of even wanting to will an act. We are mixing up two organization levels: micro level B with macro level A. We are using macro-level organization principles and applying them to the micro level. What difference does it make if brain activity goes on before we are consciously aware of something? Consciousness is a different level of organization on its own time scale from neural events, and that time scale is current with respect to it. There is no question that we humans enjoy mental states that arise from our underlying neuronal, cell-to-cell interactions. Mental states do not exist without those interactions but, as argued earlier, cannot be defined or understood solely by knowing the cellular interactions. These mental states that emerge from our neural actions, such as beliefs, thoughts, and desires, in turn constrain the very brain activity that gave rise to them. Mental states can and do influence our decisions to act one way or another.

Doyle describes the conundrum of the interaction between the layers as follows:

The standard problem is illustrated with hardware and software; software depends on hardware to work, but is also in some sense more “fundamental” in that it is what delivers function. So what causes what? Nothing is mysterious here, but using the language of “cause” seems to muddle it. We should probably come up with new and appropriate language rather than try to get into some Aristotelian categories.

Understanding this nexus and finding the right language to describe it represents, as Doyle says, “the hardest and most unique problem in science” (Gazzaniga, 2011, p. 107).

The freedom, control, or restraint that is represented in a choice not to eat the jelly doughnut and not to cheat on the test comes from some sort of interaction between a mental-layer belief and the neuronal layer. Neither layer functions without the participation of the other. The course of action taken appears to us as a matter of “choice,” but the fact is, the mental state that was manifested is the result of a particular emergent mental state being selected from numerous possible mental states by the complex and interacting surrounding milieu. This activity is known as symmetry breaking. In symmetry breaking, small fluctuations acting on a system cross a critical point and determine which of several equally likely outcomes will occur. A well-known example is a ball sitting at the top of a symmetrical hill, where any disturbance will cause it to roll off in any direction, thus breaking its symmetry and causing a particular outcome. A well-used example from a biological system is a hungry donkey standing equidistant from two piles of hay. At some point, he heads toward one. The laws of the system are invariant, but the background of the system is not invariant, making the system unpredictable. That mental state is automatic, deterministic, modularized, and driven not by one physical system at any one time but by hundreds, thousands, perhaps millions. Moreover, it is made up of complementary components arising from within and without. That is how the brain works. What is going on is the match between ever-present multiple mental states and the impinging contextual forces it functions within. Our interpreter then claims we freely made a choice.
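The ball-on-a-hilltop example is simple enough to simulate. Below is a minimal sketch: a particle sits at the unstable equilibrium of a symmetric double-well potential, and vanishingly small random nudges decide which well it ends up in. The dynamics and parameter values are illustrative, not a model of any neural system.

```python
import random

def settle(seed, steps=20000, dt=0.01, noise=1e-6):
    """Overdamped motion in the double-well potential V(x) = x**4/4 - x**2/2."""
    random.seed(seed)
    x = 0.0                                    # balanced exactly on the hilltop
    for _ in range(steps):
        x += dt * (x - x**3)                   # deterministic force, -dV/dx
        x += noise * random.gauss(0.0, 1.0)    # tiny symmetric fluctuation
    return x

for seed in range(6):
    print(f"run {seed}: settles near x = {settle(seed):+.2f}")
```

The governing law is perfectly symmetric between the two wells, yet every run commits to one of them, and which one depends on fluctuations far too small to track. That is the sense in which a fully lawful system can still be unpredictable in its particular outcome.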

Humans strive to make better decisions to cope and adapt to the world we live in. That is what our brains are for and what they do. Each brain makes decisions based on experience, innate biases, and much more. Free will, if it means anything, is found in developing more options for our brain to use as it matches choices with the contexts we find ourselves in. As we move through time and space with fresh sensations bombarding us, we are constantly generating new thoughts, ideas, and beliefs. All of these mental states provide a rich array of possible actions for us. The couch potato simply does not have the same array as the explorer. New experience provides the window into more choices, and that is what freedom truly means.

The Layer Beyond the Brain

Viewing the mind–brain system as layered allows us to begin to understand how the system actually works, and how beliefs and mental states play their role and stay part of our determined system. With that understanding comes the insight that layers exist both below the mind–brain layers and above them as well. Indeed, there is a social layer, and in the context of interactions with that layer, we can begin to understand concepts such as personal responsibility and freedom. Mario Bunge (2010) tells us that “we must place the thing of interest in its context instead of treating it as a solitary individual.” The realization that we can’t look at the behavior of just one brain has come slowly to neuroscience and psychology. Asif Ghazanfar at Princeton University, who studies vocalization in both macaques and humans, makes the point that during vocalization, one dynamic relationship involves different parts of the brain, while another plays out with the other animal that is listening: The vocalizations of one monkey modulate the brain processes going on in the other monkey. The same is true for humans. Uri Hasson and his colleagues at Princeton (Stephens et al., 2010) measured the brain activity of a pair of conversing participants with fMRI. They found that the listener’s brain activity mirrored the speaker’s, and sometimes some areas of the brain even showed predictive anticipatory responses; when there were such anticipatory responses, greater understanding resulted. The behavior of one person can affect another person’s behavior. The point is that we now understand that we have to look at the whole picture, not just one brain in isolation.

When it comes to the interplay between brains, having a deterministic brain is a moot point. At this social level of analysis we are up a couple of layers of organization beyond basic brain function. The social layer is where we should place such concepts as following rules and personal responsibility. Being personally responsible is a social rule, not a brain mechanism, and it is found in the space between human brains, in the interactions between people. Accountability makes no sense in a world made up of one brain. When more than one human brain interacts, a new set of rules comes into play, and new properties—such as personal responsibility—begin to emerge.

Just as a mental state can constrain the brain that produces it, so the social group constrains the individuals that shape the type of social group that evolves. For example, among pigtail macaque monkeys, Jessica Flack has found that a few powerful individuals police the activity of the group members (Flack et al., 2005). The presence of such individuals can prevent conflicts from occurring; if that tactic is not fully successful, those individuals can reduce the intensity of conflicts, terminate them, or prevent them from spreading. When the policing macaques are temporarily removed, conflict increases. Their presence also facilitates active socio-positive interactions among group members. A group of macaques could foster either a harmonious, productive society or a divided, insecure grouping of cliques, depending on the organization of its individuals. The presence of a policeman “influences large-scale social organization and facilitates levels of social cohesion and integration that might otherwise be impossible.” Flack concludes, “This means that power structure, by making effective conflict management possible, influences social network structure and therefore feeds back down to the individual level to constrain individual behaviour [italics added].” The same thing happens when you see the highway patrolman in your rear-view mirror coming down the on-ramp: You check your speedometer and slow down. Individual behavior is not solely the product of an isolated deterministic brain; it is affected by the social group.


TAKE-HOME MESSAGES

- The mind–brain system is layered, and a social layer sits above the mind–brain layers; concepts such as personal responsibility and freedom belong to that social layer.
- Personal responsibility is a social rule found in the interactions between people, not a brain mechanism; accountability makes no sense in a world of one brain.
- Just as a mental state can constrain the brain that produces it, the social group constrains individual behavior, as the policing pigtail macaques illustrate.


The Law

As we pointed out at the beginning of the chapter, people, that is, a group of interacting brains, form a society and shape the rules that they decide to live by. Our legal systems elaborate rights and responsibilities and serve as a social mediator of dealings between people. In most modern-day societies, the laws made by these systems are enforced through a set of institutions, as are the consequences of breaking those laws.

Currently, American law holds people responsible for their criminal actions unless they acted under severe duress (a gun pointed at your child’s head, for instance) or have suffered a serious defect in rationality (such as not being able to tell right from wrong). In the United States, the consequences for breaking those laws are based on a system of retributive justice, where a person is held accountable for his crime and is meted out punishment in the form of his “just deserts.”

Is this the way things should be? Do we want to hold the person accountable, or do we want to forgive him because of the determinist dimension of brain function?

Although only about 3 % of criminal cases actually go to trial (most are plea-bargained), neuroscience has an enormous amount to say about the goings-on once we step into the courtroom. It can provide evidence that there is unconscious bias in the judge, jury, prosecutors, and defense attorneys. It can tell us about the reliability (and unreliability) of memory and perception, which has implications for eyewitness testimony (see “How the Brain Works: Eyewitness Testimony”). It can inform us about the reliability (and unreliability) of lie detecting. These factors all contribute to establishing the guilt or innocence of a defendant. Neuroscience can even tell us about our motivations for punishment. Now neuroscience is being asked to determine the presence of diminished responsibility in a defendant, to predict future behavior, and to determine who will respond to what type of treatment. From the viewpoint of this chapter, we are interested in responsibility and motivations for punishment.

Responsibility

The law looks at the brain in this way: The brain is a physical entity, subject to the determined laws of the physical world; the brain determines the mind; therefore, the thoughts and behaviors that arise from the mind are also determined, and the offender could not have done otherwise.

This line of reasoning is used for exculpability.

Judges are the ones to decide what is admissible as evidence. In the past few years, judges, untrained in science, have allowed brain scans to be used as evidence to explain why someone acted in a particular way (and thus to support claims of diminished responsibility). Can these scans actually explain our actions? For the following reasons, neuroscientists are not convinced that they can.

FIGURE 14.17 Individuals use different brain regions to perform the same episodic retrieval task.
Individual maps of axial views for each of the nine participants are shown with the significant activations associated with episodic retrieval that contributed to the group activation map. The most significant voxels for each participant and for the group are circled in red.

  1. A brain scan merely records that in a particular area, if you average together several brains, such and such occurs. For instance, Michael Miller and his colleagues scanned the brains of 20 people, morphed all the separate brain scans into one, and added all the signals onto that averaged morphed brain. The regions where the signals were consistently present indicated that the area could be reliably identified as being active for that task across individuals. On the group map for a recognition memory task in which a participant remembers something seen previously, the average result of 16 participants shows that the left frontal areas are heavily involved in this type of memory task. When you look at the individual maps, however, four out of the first nine participants did not have activation in that area (Figure 14.17; Miller et al., 2002). So how can you apply group patterns to an individual? It is impossible to point to a particular spot on a brain scan and state with 100 % accuracy that a particular thought or behavior arises from activity in that area. A sketch with toy numbers illustrating this group-versus-individual problem follows this list.

HOW THE BRAIN WORKS

Eyewitness Testimony

Every prosecutor in a criminal case knows that an eyewitness account is one of the most compelling types of evidence for establishing guilt. But is this type of testimony to be trusted? Elizabeth Loftus and her colleagues (Loftus & Greene, 1980; Loftus et al., 1978) illustrated the difficulty of relying on the recall of witnesses by showing participants color slides detailing an accident and, in a later test session, asking them what they saw. One of the slides showed a car at an intersection before it turned and hit a pedestrian. Half of the participants viewed a red stop sign; the other half, a red yield sign. Participants then answered questions about the slides: Half were presented with questions referring to the correct sign; the other half were asked questions referring to the incorrect sign. For example, a participant who was shown a yield sign might have been asked, “When the car came to the stop sign, did the driver stop?”

During subsequent recognition tests of whether a certain slide was what they had previously seen, 75% of the participants correctly recognized a previously seen slide if the correct sign had also been mentioned in the questioning session. But when participants previously had been questioned with the wrong sign being mentioned, only 41% correctly recognized the slides as previously seen or not seen. These findings indicate that recollections of an event can be influenced by misleading statements made during questioning.

Misinformation about things as obvious as hair color and the presence of a mustache can lead participants to wrongly identify people they have seen previously. What does this say about the suggestibility of witnesses and the influence of misinformation on recall? Do witnesses really know the correct information but later fail to distinguish between their own memories and information supplied by another person? One line of thinking is that perhaps the information was not encoded initially, and when forced to guess, the participant provides the information given by someone else.

Not just adults are eyewitnesses in court cases. Children often are asked to testify as witnesses. Given that adults with fully developed memories have difficulty recalling what they have seen, how do young children with potentially underdeveloped memory systems behave under the pressures of authorities and courtrooms? This is a controversial issue in situations such as child abuse or sexual abuse in which children are eyewitnesses to crimes against themselves, and may be the only witness.

The question of how well children remember and report things that they have experienced is of special concern when the event may be traumatic. One effective way to study such situations is to use verifiable events that are traumatic for children and involve contact with another person. Physicians sometimes must perform genital examinations on children, which may include painful medical procedures, and the children’s memories of these events can be examined systematically. In one study, half of 72 girls between the ages of 5 and 7 years were given a genital examination as part of necessary medical care, and half were not. Children who received the examination were unlikely to report having been touched in their genital region during free recall or when using anatomically detailed dolls. Only when asked leading questions did most of the girls reveal that they had been examined. The control group made no false reports during free recall or with the dolls, but with leading questions, three children made false reports.

Psychologist Gail Goodman and her colleagues (1994) at the University of California, Davis, emphasized that one of the most important predictors of accurate memory performance is age. Memory performance for traumatic events is significantly worse in children 3 to 4 years old than in older children. Dr. Goodman also noted, however, that other factors influence memory accuracy in children. Such factors include how well the traumatic episode is actually understood, the degree of parental emotional support and communication, and the children’s emotional (positive versus negative) feelings.

The goal of this research is to determine the validity of children’s reports on events—including negative ones such as abuse—that may have happened to them, and how they might invent stories. Of special interest is whether leading questions can induce children to fabricate testimony. We need to know this when interpreting children’s testimony that involves other persons, such as therapists or members of the legal system. From the cognitive neuroscience perspective, it is important to know whether the neural signature of real and false memories might be different (see How the Brain Works: False Memories and the Medial Temporal Lobes on p. 409).


FIGURE 14.18 Regions of activation to an episodic retrieval task vary in the brain of the same individual tested at different times.
Each block contains the brain representation from an individual participant when performing the same word memory task on two separate occasions. The significant activations during the first session are compared to the significant activations for the same individual during the second session. The date when the session took place is noted on the left.

  2. There are also variations in how our brains are connected and how they process information. For instance, when you are asked to name an object that is upside down, you use two processes: one, in the right hemisphere, rotates the object in space; the other, in the left hemisphere, names it. So when viewing an upside-down boat, you first rotate it right-side up in your right hemisphere, then send the rotated image to the left hemisphere, which names the object, and then you say it (“Ah, boat”). Some people are fast at this and some are slow. The fast people use one part of their corpus callosum to transfer the information to their speech center, and the slow people use a totally different part. Using diffusion tensor imaging (DTI), researchers have found anatomical differences that could explain this phenomenon: People vary tremendously in the number of fibers present in different parts of the callosum and in which routes they use to solve this problem (Putnam et al., 2010). Capturing all this variation for or against a particular case in a legal setting may prove impossible.
  3. The mind, emotions, and the way we think constantly change. What is measured in the brain at the time of scanning doesn’t reflect what was happening at the time of a crime. For instance, when Miller and his colleagues brought the participants in the memory experiment back to repeat exactly the same tasks, their brain activity patterns differed (Figure 14.18).
  4. Brains are sensitive to many factors that can alter scans: caffeine, tobacco, alcohol, drugs, fatigue, strategy, menstrual cycle, concomitant disease, nutritional state, and so forth.
  5. Performance is not consistent. People do better or worse at any task from day to day.
  6. Images of the brain are prejudicial. Studies (Weisberg et al., 2008) have shown that when adults read explanations of psychological phenomena, they evaluate the explanations more positively and consider them more important if a brain scan is shown in the material, even when the images have nothing to do with the explanations. In fact, even bad explanations are more readily accepted when a brain scan is present.
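The statistical gap between a group map and any one individual (reason 1 above) is easy to see with toy numbers. The following sketch uses invented effect sizes, not Miller and colleagues’ data, but it reproduces the logic: a one-sample test across nine participants can be statistically reliable even though four of them show essentially no activation in the region.

```python
# Toy demonstration (invented effect sizes, not Miller et al.'s data):
# a group-level effect can be statistically reliable even though
# several individuals show no activation in the region at all.
import math
import statistics

# Per-participant activation effects in one region (arbitrary units);
# participants 1, 4, 6, and 9 show essentially no effect.
effects = [0.0, 1.2, 0.9, 0.1, 1.4, -0.1, 1.1, 1.3, 0.0]

mean = statistics.mean(effects)
sd = statistics.stdev(effects)
t = mean / (sd / math.sqrt(len(effects)))  # one-sample t-test against 0

print(f"group mean = {mean:.2f}, t({len(effects) - 1}) = {t:.2f}")  # t is about 3.1
print("participants with no appreciable effect:",
      sum(1 for e in effects if abs(e) < 0.2), "of", len(effects))  # 4 of 9
```

A judge shown only the group map might conclude that “this area is active during memory retrieval,” yet nearly half of the individuals in this toy sample would be misdescribed by that claim.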

Consider one case where brain scans were admitted as evidence. Simon Pirela had received two death sentences for two separate first-degree murder convictions in 1983. In 2004, twenty-one years later, brain scans were allowed as evidence and convinced one jury in a resentencing hearing that Pirela was not eligible for the death penalty, because he suffered from frontal lobe aberrations that diminished his capacity to function normally. In a separate appeal hearing to vacate the second death sentence, however, exactly the same scans were used to make a different claim. This time the scans were offered as evidence that Pirela was mentally retarded. Combined with testimony from a neuropsychologist, the scans were found “quite convincing” by the appellate judge. The same scans were accepted as evidence for two different diagnoses (Staff working paper, 2004).

When presented with the abnormal brain story, the law makes several false assumptions with no scientific basis. It assumes that an abnormal brain scan is an indicator of abnormal behavior. It does not follow that a person with an abnormal brain scan has abnormal behavior. This was a trap that orthopedic doctors fell into when MRI was first available to help diagnose back pain. It took them a while to realize that an abnormal disc shown on a lumbar MRI is not necessarily a problem. In fact, in a study of 98 participants with no history of back pain, 64 % had abnormal disc protrusions on a lumbar MRI (Jensen et al., 1994). You can’t look at an abnormal MRI scan of the lower back and predict whether the person has pain, just as you can’t look at a brain scan and predict whether the person has abnormal behavior. Another erroneous assumption is that a person with an abnormal brain who does have abnormal behavior is automatically incapable of responsible behavior. Responsibility is not located in the brain. The brain has no area or network for responsibility. As previously noted, the way to think about responsibility is that it is an interaction between people, a social contract.

An abnormal brain does not mean that the person cannot follow rules, although with certain very specific lesions it may. An abnormal brain also does not necessarily mean that the person is more violent. People who have acquired left frontal lobe lesions may act oddly, but their violence rate increases only from the base rate of 3 % to between 11 % and 13 %. A frontal lobe lesion in itself is not a predictor of violent behavior. In the case of an abnormal neurotransmitter disorder such as schizophrenia, there is a higher incidence of arrest for drug-related issues, but there is no higher incidence of violent behavior in people with schizophrenia while they are taking their medication, and only a very small increased incidence in those who are not. They still understand rules and obey them; for instance, they stop at traffic lights and pay cashiers. It is not true that just because you have schizophrenia, your base rate of violent behavior goes up and you are vastly more likely to commit a crime. If the court system concludes that having frontal lobe lesions or schizophrenia can exculpate a person, that decision can result in two possible scenarios: Either anyone with a frontal lobe lesion or schizophrenia has carte blanche for any behavior, or, to take the opposite tack (based on the same reasoning, that they cannot control their behavior), all people with frontal lobe lesions or schizophrenia should be locked up as a preventive measure. So in thinking about these things, we have to be careful that our best intentions aren’t used inappropriately.

In the Simon Pirela case just discussed, the diagnosis of mental retardation was sought because of a 2002 Supreme Court ruling, which declared that executing someone with mental retardation is cruel and unusual punishment and, as such, a violation of the Eighth Amendment of the U.S. Constitution. Justice Scalia, in dissent, summarized the case (Atkins v. Virginia):

After spending the day drinking alcohol and smoking marijuana, petitioner Daryl Renard Atkins and a partner in crime drove to a convenience store, intending to rob a customer. Their victim was Eric Nesbitt, an airman from Langley Air Force Base, whom they abducted, drove to a nearby automated teller machine, and forced to withdraw $200. They then drove him to a deserted area, ignoring his pleas to leave him unharmed. According to the co-conspirator, whose testimony the jury evidently credited, Atkins ordered Nesbitt out of the vehicle and, after he had taken only a few steps, shot him one, two, three, four, five, six, seven, eight times in the thorax, chest, abdomen, arms, and legs.

The jury convicted Atkins of capital murder. At resentencing... the jury heard extensive evidence of petitioner’s alleged mental retardation. A psychologist testified that petitioner was mildly mentally retarded with an IQ of 59, that he was a “slow learne[r],”..., who showed a “lack of success in pretty much every domain of his life,”..., and that he had an “impaired” capacity to appreciate the criminality of his conduct and to conform his conduct to the law,... Petitioner’s family members offered additional evidence in support of his mental retardation claim.... The State contested the evidence of retardation and presented testimony of a psychologist who found “absolutely no evidence other than the IQ score... indicating that [petitioner] was in the least bit mentally retarded” and concluded that petitioner was “of average intelligence, at least.”...

The jury also heard testimony about petitioner’s 16 prior felony convictions for robbery, attempted robbery, abduction, use of a firearm, and maiming. ... The victims of these offenses provided graphic depictions of petitioner’s violent tendencies: He hit one over the head with a beer bottle...; he slapped a gun across another victim’s face, clubbed her in the head with it, knocked her to the ground, and then helped her up, only to shoot her in the stomach, id., ... The jury sentenced petitioner to death. The Supreme Court of Virginia affirmed petitioner’s sentence.

The three main justifications for capital punishment are deterrence, retribution, and incapacitation. Justice Stevens, writing for the majority of the Court, reasoned that two of these justifications, deterrence and retribution, could not be appreciated by a defendant with mental retardation, and that the sentence therefore imposed cruel and unusual punishment. The legal decision was delivered in terms of existing beliefs about the purpose of punishment in the law. It was not based on the science of brain function—namely, whether the defendant, because of his brain abnormality, could or could not form intentions, learn from experience, and the like. The decision also assumes that anyone suffering any degree of “mental retardation” has no capacity for understanding the just deserts for a crime or what society considers right or wrong. Was there any evidence on which to base this assumption? Not from the defendant’s behavior.

In the case just described, the defendant was able to make a plan and to take what was necessary to implement it. He was capable of learning: He had learned to drive a car following the rules of the road; he understood that coercion was necessary to get money from a stranger, and how to apply it; he understood that shooting the victim broke a social rule and should not be done in public or within public hearing; and he was able to inhibit his actions until the group was in an out-of-the-way location. His behavior should more likely have led to the assumption that he could follow rules, form intentions and plans, learn from experience, and inhibit his actions. Further, he had guilty intent, understood that there could be retributive consequences he did not want to undergo, and hence tried to hide his actions. Whether or not he was retarded had no effect on these aspects of brain function.

Guilty—Now What?

However complicated the court system may be, proceedings that arrive at a verdict are the easy part. Most of the defendants who get to trial or plead guilty are the agents of the crime. After a defendant has been found guilty, next comes the sentencing. That is the hard part. What do you do with guilty people who have intentionally planned and committed known, morally wrong actions that harm others? The judge looks at all the mitigating and contributing factors (age, previous criminal record, severity of the crime, negligence versus intention, unforeseeable versus foreseeable harm, etc.), as well as the sentencing guidelines, and then makes a decision. Should the offender be punished? If so, should the goal of punishment be mindful of individual rights based on retribution, mindful of the good of society with reform and deterrence in mind, or mindful of the victim with compensation? This decision is affected by the judge’s own beliefs of justice, which come in three forms: retributive justice, utilitarian justice, and restorative justice.

Retributive justice is geared toward punishing the individual criminal in proportion to the crime that was committed. Thus, its goal is extending just deserts to the criminal: an eye for an eye. The crucial variable is the degree of moral outrage the crime engenders, not the benefits to society resulting from the punishment. Therefore, a person does not get a life sentence for stealing a piece of pizza, nor does anyone get a month’s probation for molesting a child. The punishment is focused solely on what the individual deserves for his crime, and nothing more or less. It appeals to the intuitive sense of fairness whereby every individual is equal and is punished equally. You cannot be punished for crimes you have not committed. No matter who you are, you should receive the same punishment for the same crime. You do not get a harsher sentence because you are or are not famous, because you are black or white or brown. The general welfare of society as a whole is not part of a calculation. Retributive justice is backward looking, and its only concern is to punish the criminal for a past action. It does not punish as a deterrent to others, nor to reform the offender, nor to compensate the victim. These outcomes may result as byproducts, but they are not the goal. It punishes to harm the offender, just as the victim was harmed. When Polly Klaas’s father said, “Richard Allen Davis deserves to die for what he did to my child,” he was speaking from the perspective of retributive justice.

Utilitarian justice (consequentialism), on the other hand, is forward looking. It is concerned with the greater future good for society that may result from punishing an individual offender. There are three types of utilitarian punishment. The first specifically deters the offender (or others, who will learn by example) in the future, perhaps by fines, prison time, or community service. You speed past a school zone and get a ticket for $500. You might think twice about it next time and slow down. The second type of utilitarian justice incapacitates the offender. Incapacitation can be achieved geographically, such as when the British sent their debtors to Australia; by long prison sentences or banishment, which includes disbarment for lawyers and other such licensing losses; or by physical means, such as castration for rapists, capital punishment, severing the hands of thieves, and so forth. This is what Polly’s father had in mind when he said, “It doesn’t bring our daughter back into our lives, but it gets one monster off the streets.” The third type of utilitarian justice is rehabilitation through treatment or education. The method chosen is decided by the probability of recidivism, degree of impulsiveness, criminal record, ethics (can treatment be forced on someone who is unwilling to undergo it?), and so forth, or by prescribed sentencing standards. This is another area where neuroscience will have something to contribute. Prediction of future criminal behavior is pertinent to utilitarian sentencing decisions, whether treatment, probation, involuntary commitment, or detention is chosen. Neural markers could be used, in conjunction with other evidence, to help identify an individual as a psychopath, a sexual predator, or impulsive, and so to predict future behavior. Obviously the reliability of such predictions is important, as is deciding what level of certainty about them is acceptable. Because utilitarian justice punishes in part for future crimes not yet committed, its errors can cut both ways: An unreliable prediction may detain someone who would never have reoffended, or release someone who will.
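Why the reliability of such predictions matters so much can be illustrated with a toy Bayes calculation. The sensitivity and specificity figures below are invented for illustration; the 3 % base rate of violence is the figure cited earlier in this section.

```python
# Toy Bayes calculation: even a fairly accurate neural marker for
# future violence would flag mostly false positives at a low base rate.
# (Sensitivity and specificity are invented for illustration; the 3%
# base rate of violence is the figure cited earlier in the chapter.)

base_rate = 0.03      # proportion who will actually act violently
sensitivity = 0.80    # P(marker positive | will be violent)
specificity = 0.90    # P(marker negative | will not be violent)

true_pos = sensitivity * base_rate
false_pos = (1 - specificity) * (1 - base_rate)
p_violent_given_positive = true_pos / (true_pos + false_pos)

print(f"P(violent | marker positive) = {p_violent_given_positive:.2f}")
# About 0.20: roughly four of every five people flagged would never
# have offended -- a steep price if the marker drives preventive detention.
```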

The goal of utilitarian justice (the greater good) may sound good on paper, but some of its aspects go against our sense of fairness. Utilitarian justice would permit harsher punishment for a famous person or the perpetrator of a highly publicized crime, because the publicity might deter many future crimes and thus benefit society. To increase the deterrence effect, harsh sentences for common, but minor, offenses are also allowed. For instance, some utilitarians advocate prison sentences (not just overnight in the local lockup) for first-time speeding and drunk driving offenses. No doubt you would think twice when you set your cruise control over the speed limit, if being caught resulted in a year in prison. This practice could save more innocent lives than punishing convicted murderers. The most common crime in the United States is shoplifting, which costs retailers about $13 billion to $35 billion a year. The crime has many hidden costs including higher prices for consumers, lost taxes to the community, and extra burden on police and courts. A harsh sentence for shoplifting could deter many who contemplate slipping a lipstick into their pocket and reduce the overall cost of goods for everyone. The extreme utilitarian case can be made that the punished need not even be guilty, just thought to be guilty by the public. This is why some people object to utilitarian justice: It can violate an individual’s rights, and it may not seem “fair.”

In British law, since the Norman invasion of England in 1066, crimes have been considered to be injuries against the state, rather than against an individual. American law has also taken this stance. The victim has no part in the justice system, thus ensuring neutrality in criminal proceedings and avoiding vengeful and unfair retaliation. Restorative justice, however, looks at crimes as having been committed against a person rather than against the state. Restorative justice holds the offender directly accountable to the victim and the affected community. It requires the offender to make things whole again to the extent possible, allows the victim a say in the corrections process, and encourages the community to hold offenders accountable, support victims, and provide opportunities for offenders to reintegrate themselves into the community. Victims of crimes are often enveloped in fear, which adversely affects the rest of their lives. For crimes of lesser magnitude, often a face-to-face, sincere apology and reparation are enough to relieve the victim’s fear and anger. Restorative justice, however, may not be possible for more serious crimes. Would an apology be judged satisfactory by the parents of Polly Klaas or the voters who passed the three-strikes laws?

Born to Judge

When people are asked to label themselves as retributivists or deterrists, their answers vary widely. These individual differences seem to evaporate, however, when people are asked to assign hypothetical punishment for an offense. The vast majority, 97 %, seek out information relevant to a retributive perspective and not to the utilitarian perspective (Carlsmith, 2006). They are highly sensitive to the severity of the offense and ignore the likelihood that the person would offend again. They punish for the harm done (retribution), not for the harm that might be done in the future (deterrence). When asked to punish only from the utilitarian perspective and to ignore retributive factors, people still used the severity of the crime to guide their judgments (Darley et al., 2000). Yet, when asked to allocate resources for catching offenders or preventing crime, they highly supported the utilitarian approach of preventing crime. So although people endorse the utilitarian theory of reducing crime, they don’t want to do it through unjust punishment. They want to give a person what she deserves, but only after she deserves it. People want to be fair (Carlsmith & Darley, 2008).

Where does this retributive sense of fairness come from? We were born with it, along with a sense of reciprocity and punishment. When 2½-year-olds are asked to distribute treats to animated puppets, they will do so evenly (Sloane & Baillargeon, 2010). Even 16-month-old infants have a keen sense of fairness, exhibited by their preference for cartoon characters that divide prizes equally (Geraci & Surian, 2010). We also come wired for reciprocity, but only within our social group. Toddlers expect members of a group to play and share toys and are surprised when it doesn’t happen. They are also surprised, however, when sharing happens between members of different groups. They are not surprised when it does not (He & Baillargeon, 2010). Toddlers recognize moral transgressors and react negatively to them. Children from 1½ to 2 years old help, comfort, and share with a victim of a moral transgression, even in the absence of overt emotional cues. Moral transgressors, on the other hand, incite the infants to protest vocally, and they are less inclined to help, comfort, or share with them (Vaish et al., 2010). Young children also understand intentionality and judge intentional violations of rules as “naughty,” but they do not feel that way about accidental violations (Harris & Nunez, 1996). We feel these urges all the time, and we try to have big theories about them, but we are just born that way.

Knowing about these innate tendencies helps us to understand that although people say they endorse utilitarian policies in the abstract, they invoke retributivist ones in practice (Carlsmith, 2008). This lack of insight leads to fickle legislation. For instance, 72 % of the voters of California enacted the three-strikes law that we spoke of at the beginning of the chapter, thus taking a utilitarian approach. A few years later, when they realized that this could mean an “unfair” life sentence for stealing a piece of pizza and sensed that the law was unfair from a retributivist perspective, voter support dropped to less than 50 %. Because of this highly intuitive just-deserts impulse, it is doubtful that citizens will be willing to allow a purely restorative, no-punishment treatment for serious crimes.

We aren’t the first to be wrestling with the ideas of retributive and utilitarian justice. Aristotle argued that justice based on fair treatment of the individual leads to a fair society. Plato, looking at the big picture, thought fairness to society was of primary importance and that individual cases should be judged in order to achieve that end. These two ways of thinking should remind you of the trolley problem in Chapter 13: the emotional situation and the more abstract situation. Facing the individual offender in a courtroom and deciding whether to punish is an emotional proposition, and it elicits an intuitive emotional reaction: “Throw the book at ’em!” or, “Poor guy, he didn’t mean to do it, let him off easy!” How would you feel if you were sitting across the courtroom from the person who killed or molested your child? In an fMRI study done while participants were judging responsibility and assigning punishment in hypothetical cases, brain regions associated with emotion were activated during the punishment judgment—and the more activity these regions showed, the greater the punishment (Buckholtz et al., 2008). While participants were making third-party legal decisions, however, a region of the right dorsolateral prefrontal cortex was recruited, the same region that is recruited when judgments about punishments are made in the Ultimatum economic game. These researchers suggest that “our modern legal system may have evolved by building on preexisting cognitive mechanisms that support fairness-related behaviors in dyadic interactions.” If an evolutionary link to relations between individuals in socially significant situations (for example, mates) is true, it makes sense that when faced with an individual, we resort to fairness judgments rather than utilitarian justice. Faced with the abstract questions of public policy, however, we leave the emotional reaction behind and can resort to the more abstract utilitarian thinking.

What’s a Judge to Do?

If a judge believes that people are personally responsible for their behavior, then either retributive punishment or restorative justice makes sense. If the judge believes that deterrence is effective, or that punishment can change bad behavior into good, or that some people are irredeemable, then utilitarian punishment makes sense. If the judge has a determinist stance, there is still a retributivist or utilitarian decision to be made. The judge’s focus of concern may be (a) the offender’s individual rights: Because the offender had no control over his determined behavior, he should not be punished (a retributivist attitude) but perhaps should be treated, if possible (though not against his will?); (b) the victim’s rights to restitution, along with any retributive feelings the victim might have; or, taking the utilitarian route, (c) the greater good of society (it may not be the offender’s fault, but get ’em off the streets).

Boalt law professor Sanford Kadish (1987) sums up the stance taken by many hard-core determinists:

To blame a person is to express moral criticism, and if the person’s action does not deserve criticism, blaming him is a kind of falsehood, and is, to the extent the person is injured by being blamed, unjust to him.

In essence, he is saying that criminals are not responsible for the actions committed by their determinist brain; thus, they should not be blamed or punished. What do you do with them? Don’t hold them accountable for their actions and turn them back out on the streets? Forgive and forget? Is forgiveness a viable concept? Is it possible to run a civilized society where forgiveness trumps accountability and punishment?

Crime and No Punishment?

Unlike any other species, we humans have evolved to cooperate on a massive scale with unrelated others. There are now 6.7 billion of us, more than twice the world population of 1950. The amazing thing is that we as a species are becoming less violent and get along rather well, contrary to what you may hear on the evening news (Pinker, 2011). The troublemakers, although still very much a problem, are actually few and far between, perhaps 5 % of the population. Where did this cooperation come from? Our relatives, the chimpanzees, are not known for their cooperation. They cooperate only in certain competitive situations and only with certain individuals. This behavior stands in marked contrast to that of humans, who are largely cooperative. Otherwise, how would an alphabet or a system of numbers have come about, or towns and cities been built? Brian Hare and Michael Tomasello have suggested that the social behavior of chimps is constrained by their temperament and that a different temperament was necessary for the development of more complex forms of social cognition. To develop the level of cooperation necessary to live in very large social groups, humans had to become less aggressive and less competitive than their ancestors. They had to evolve a different temperament. How did this come about?

Taming the Wild Beast

Hare and Tomasello hypothesize that humans may have undergone a self-domestication process in which overly aggressive or despotic others were either ostracized or killed by the group, eliminating them from the gene pool. The individuals who had systems that controlled emotional reactivity, such as aggression, were more successful and reproduced. “It is only after the human temperament evolved that variation in more complex forms of communicative and cooperative behaviors could have been shaped by evolution into the unique forms of cooperative cognition present in our species today” (Hare & Tomasello, 2005). This emotional reactivity hypothesis grew out of work done by a Russian geneticist, Dmitry Belyaev. In 1959, he began domesticating foxes at the Institute of Cytology and Genetics in Novosibirsk, Siberia, using only one criterion for his breeding selection process. He picked the young foxes that came the closest to his outstretched hand: Thus he was selecting for fearless and nonaggressive behavior toward humans (Figure 14.19).

FIGURE 14.19 Aggressive and tame foxes.
(top) Fox displaying aggressive behavior. (bottom) Tame foxes.

After only a few years, the physiological, morphological, and behavioral by-products of this selection process were similar to what is seen in domestic dogs. The female foxes have higher serotonin levels (known to decrease some types of aggressive behavior) and an alteration in the levels of many of the chemicals in the brain that regulate stress and aggressive behavior (Belyaev, 1979). Some of the foxes have floppy ears, upturned tails, and piebald colorations like those seen in border collies (Figure 14.20). In fact, some of the same morphological changes have occurred in domesticated animals of many species (Figure 14.21). The domesticated foxes will also wag their tails and respond as well as domestic dogs to the human communicative gestures of pointing and gazing (Hare et al., 2005).

All these characteristics have been linked to the gene associated with fear inhibition. It seems that sociocognitive evolution has occurred in the experimental foxes as a correlated by-product of selection on systems mediating fear and aggression. Dog domestication is thought to have occurred naturally by a similar process: Wild dogs that were less fearful of humans were the ones that approached their camps, scavenged food, stuck around, and reproduced. Hare and his colleagues (2012) suggest that a similar process has been at work on bonobos. Unlike chimpanzees, their close relatives, bonobos will share food with unfamiliar conspecifics (Hare & Kwetuenda, 2010) and will spontaneously cooperate on a novel instrumental task (for a food reward). Like dogs but unlike chimps, bonobos are responsive to human gaze direction (Herrmann et al., 2010). The researchers suggest that because of their geographical location, bonobos faced less competition among themselves for food than chimpanzees did and may have undergone a similar self-domestication process.

FIGURE 14.20 Morphological markers of domestication in foxes correlate with domesticated behaviors.
Domesticated foxes show an unusually high incidence of floppy ears, shortened legs and tails, curled-up tails, and a star blaze. In Table 14.1, the rates of some common changes are compared. The increased incidence of the “star” depigmentation pattern and doglike tail characteristics was most marked.

Henrike Moll and Michael Tomasello (2007) have suggested that certain aspects of cognition that they consider unique to humans (the cognitive skills of shared goals, joint attention, joint intentions, and cooperative communication) were driven by, or even constituted by, social cooperation. This cooperation is needed to create such things as complex technologies, cultural institutions, and systems of symbols. Unlike any other species, humans cooperate with non-kin. This type of cooperation has been difficult to explain from an evolutionary standpoint, because cooperating individuals incur costs to themselves that benefit non-kin. Such altruistic behavior doesn’t make sense at the individual level. How can it be a strategy for success? Robert Trivers (1971, 2011) was the first to explain how altruistic behavior could be a successful strategy. As Steve Pinker (2012) succinctly puts it,

It can be explained by an expectation of reciprocity and a concern with reputation. People punish those that are most likely to exploit them, choose to interact with partners who are least likely to free-ride, and cooperate and punish more, and free-ride less, when their reputations are on the line.

FIGURE 14.21 Morphological markers of domestication are seen in different animal families and orders. White spotting (star depigmentation) on the head (top row) and floppy ears (bottom row): A: Horse (Equus caballus); B: Cow (Bos taurus); C: Pig (Sus scrofa domestica); D: Sheep (Ovis); E: Dog (Canis familiaris); F: Rabbit (Oryctolagus cuniculus).

TABLE 14.1 Frequency Changes of Morphological Characteristics during “Domestication” of Foxes

Characteristic             Domesticated population    Nondomesticated population    Increase in frequency
                           (animals per 100,000)      (animals per 100,000)         (percent)
Depigmentation (star)      12,400                     710                           +1,646
Brown mottling             450                        86                            +423
Gray hairs                 500                        100                           +400
Floppy ears                230                        170                           +35
Short tail                 140                        2                             +6,900
Tail rolled in circle      9,400                      830                           +1,033

One way cooperation can arise is through the punishment of non-cooperators. Both theoretical models and experimental evidence show that in the absence of punishment, cooperation cannot sustain itself in the presence of free-riders and collapses. Free-riding individuals are those who do not cooperate or contribute but exploit the efforts of others: They incur no costs and produce no benefits. For example, in the Public Goods game, each participant is given a number of tokens. Each decides how many tokens to put, secretly, into a communal pot and keeps the rest. The experimenter totals the communal tokens, multiplies the total by a factor greater than 1 but less than the number of participants, and divides the result evenly among all participants. The optimum strategy for the group is for each person to contribute the maximum, whereas the optimum strategy for the individual is to be a free-rider: Give none, and collect a cut from all the people who donated. Free-riders fare better because they do not pay the cost of contributing. Over multiple rounds, if punishment is not allowed, free-riding takes over and the public contribution dwindles to zero. If punishment is allowed, however, cooperation increases (Figure 14.22; Fehr & Gächter, 1999, 2002). Thus, for cooperation to survive, the free-riders must be punished.
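The incentive structure of the game is easy to verify with arithmetic. Here is a minimal sketch of a single round, with an invented endowment and multiplier (the multiplier just has to be greater than 1 and less than the number of players):

```python
# One round of a Public Goods game: contributions are pooled,
# multiplied, and split evenly; a free-rider keeps his endowment
# AND collects a share of everyone else's contributions.

ENDOWMENT = 20    # tokens given to each player (illustrative value)
MULTIPLIER = 1.6  # must be > 1 and < number of players

def payoffs(contributions):
    """Return each player's payoff for one round."""
    pot = sum(contributions) * MULTIPLIER
    share = pot / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

# Three full cooperators and one free-rider:
print(payoffs([20, 20, 20, 0]))   # [24.0, 24.0, 24.0, 44.0]
```

With four players and a multiplier of 1.6, universal contribution would pay each player 32 tokens, but the lone free-rider walks away with 44 while the cooperators get only 24; unless he can be punished, his strategy spreads.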

The interesting thing is that people will punish at a cost to themselves. In defiance of seemingly rational behavior, we humans engage in punishment even in one-time encounters. For instance, in the Ultimatum economic game, first conducted by Ariel Rubinstein (1982) and repeated in various forms, people will punish non-cooperators at personal cost, even in a one-shot game. In this game, two people are allowed only one turn. One person is given some money, say $20, and must split it with the other player, but he determines the percentage split. The other player decides whether to accept the amount offered. Both players keep whatever amounts are settled upon; if the player who is offered the money refuses the offer, however, neither gets any. In a rational world, the proposer need only offer a penny, and the responder should take any offer, because that is the only way she comes out ahead. That, however, is not how people react. People tend to offer an even split, and responders accept the money only if they think the offer is fair, typically at least $6 to $8 of the $20. Anything less than that, and 80 % of people will consider the deal unfair and refuse. In so doing, however, they incur a loss themselves: Punishing costs the punisher. Why, in one-shot encounters, are people overly generous? And why do people who have received lowball offers punish at a cost to themselves?
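Here is the payoff logic in miniature. The $20 stake and the $6 acceptance threshold come from the figures just described; the hard threshold rule is, of course, a simplification of real responders’ behavior.

```python
# Ultimatum game payoffs: a rejection costs the responder her share,
# but costs the proposer more -- that asymmetric loss is the punishment.

STAKE = 20

def play(offer, accept_threshold=6):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer >= accept_threshold:   # judged "fair enough"
        return STAKE - offer, offer
    return 0, 0                     # rejected: both get nothing

print(play(10))  # (10, 10)  even split, accepted
print(play(7))   # (13, 7)   borderline but accepted
print(play(1))   # (0, 0)    lowball rejected: the responder pays $1
                 #            to inflict a $19 loss on the proposer
```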

FIGURE 14.22 Cooperation is greater when punishment is an option.
(a) Participants have the opportunity to punish the other group members during the first six periods, but not in the second six periods. In (b), the reverse is true: During the first six periods, punishment of other group members is not allowed; during the second six periods, it is. In both cases, cooperation increases when there is an opportunity to punish non-cooperators.

Evolutionary psychologists Andrew Delton and Max Krasnow, along with Leda Cosmides and John Tooby (Delton et al., 2011), explain initial cooperation by pointing out that individuals engaging in one-shot encounters must balance the cost of mistaking a one-shot interaction for a repeated interaction against the far greater cost of mistaking a repeated interaction for a one-shot interaction. You shake your head in wonder at the mechanic who overcharges, mistaking you for a tourist rather than a local resident. Now you will never return to his shop, but worse, you will tell all your friends about his overcharging. He has lost your repeat business and any word-of-mouth referrals, which is costing him much more than the amount that he padded his bill. The researchers used computer simulations to show that when neural decision systems for regulating dyadic reciprocity are selected for, generosity in one-shot encounters is the necessary byproduct under conditions of uncertainty.
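A toy expected-value comparison makes this asymmetry concrete. The numbers and the one-line decision rule below are invented for illustration and are far simpler than the agent-based simulations Delton and colleagues actually ran:

```python
# Toy cost comparison: when unsure whether an interaction will repeat,
# treating it as one-shot (defecting) risks a large loss, while
# treating it as repeated (cooperating) risks only a small one.

COOP_COST = 2      # cost of being generous once for nothing (illustrative)
REPEAT_VALUE = 30  # value of a preserved ongoing relationship (illustrative)
p_repeat = 0.2     # believed chance the interaction will repeat

# Expected loss from each kind of mistake:
loss_if_cooperate = (1 - p_repeat) * COOP_COST   # wasted generosity
loss_if_defect = p_repeat * REPEAT_VALUE         # relationship destroyed

print(f"cooperate: expected loss {loss_if_cooperate:.1f}")  # 1.6
print(f"defect:    expected loss {loss_if_defect:.1f}")     # 6.0
# Even at only a 20% chance of meeting again, defection is the
# costlier error, so generosity in one-shot encounters pays.
```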

What’s going on in this less-than-rational brain? Ernst Fehr and his colleagues (Knoch et al., 2007) used transcranial direct current stimulation to disrupt brain functioning in the prefrontal cortex. They found that when the function of the right dorsolateral prefrontal cortex was disrupted, people would accept lower offers while still judging them to be unfair. Suppression of this area also increased selfish responses to unfair offers, suggesting that this area normally inhibits self-interest (taking any offer) and reduces the impact of selfish urges on decision making. Thus, the right dorsolateral prefrontal cortex plays a key role in implementing fair behavior.

Dominique de Quervain and her colleagues (2004) hypothesized that people derive satisfaction from punishing norm violations. They found evidence for this view by using positron emission tomography (PET). Their participants’ brains underwent a PET scan while they learned about the abuse of trust in a game partner and determined if they were going to punish him and what the punishment would be—either monetarily real or simply symbolic. Real punishment activated the dorsal striatum, which has been implicated in the processing of goal-oriented rewards. Participants with stronger activations were willing to incur greater costs (lose money themselves) in order to punish. Meanwhile, symbolic punishment did not activate the dorsal striatum. We may punish because we are wired to do so, as a result of a selective process that enhanced human survival through cooperation.

Can you have cooperation and accountability without punishment? Clearly, our genome thinks punishment is important. Can we or should we rise above it? If we don’t punish the offenders, will the non-cooperators take over and cooperative society fall apart?

Ultimately, if responsibility is a contract between two people rather than a property of a brain, then determinism has no meaning in this context. Human nature remains constant, but out in the social world, behavior can change. When multiple rounds of the Ultimatum game are played, punishing unfair offers results in a change in behavior: Fair offers increase, resulting in more cooperation. You may want to act selfishly and offer only $1 in the Ultimatum game so you can go home with $19, but you soon learn the hard way that it doesn’t work. One person’s behavior can affect that of another person. When the highway patrolman comes down the on ramp, you see him, check your speedometer, and slow down. Your doctor tells you that if you keep eating the way you do, you will develop diabetes, so you change your diet. You overcharge your customers, and they don’t come back. We have to look at the whole picture, a brain in the midst of and interacting with other brains, not just one brain in isolation.

No matter what their condition, most humans can follow rules. Criminals can follow the rules. They don’t commit their crimes in front of policemen. They are able to inhibit their intentions when the cop walks by. They have made a choice based on their experience. This is what makes us responsible agents, or not—and yes, free.


TAKE-HOME MESSAGES

- American law holds people responsible for their criminal actions unless they acted under severe duress or suffered a serious defect in rationality.
- Brain scans averaged over a group cannot be reliably applied to an individual; scans vary across people and across sessions, and brain images are prejudicial.
- An abnormal brain scan does not predict abnormal behavior, and responsibility is not located in the brain; it is a social contract between people.
- Sentencing reflects retributive, utilitarian, or restorative theories of justice; when judging individuals, people intuitively punish retributively, in proportion to the harm done.
- Large-scale human cooperation is sustained by punishing free-riders, and humans will punish norm violators even at a cost to themselves.


 

Summary

Explaining how the brain enables human conscious experience remains a great mystery. Scientists continue to gain more and more knowledge of how parts of the brain are responsible for mental and perceptual activities. We now know that the brain has thousands, if not millions, of processing units working independently, automatically, and without our being aware. Only a small fraction of what our brain processes makes it to our conscious awareness. Though great advances are being made in studying the content of conscious experience, we have little understanding of its subjective qualities. An aspect of subjective experience is that we feel unified and in control of our actions, not a pawn of thousands of separate processing units. This is the result of a processing system called the interpreter, which takes all the internal and external information that bombards the brain, asks how one thing relates to another, looks for cause and effect, and weaves it all together into a story that makes sense. One of those stories that it weaves is that each person is a single, unified entity in control of her actions.

As we learn more and more about neural processing, we have come to understand that all we think and do results from interactions of cells, matter that is subject to physical laws. This knowledge has led many to take a determinist stance and infer that we are merely along for the ride, with no conscious control over our behavior, and to conclude that no one is responsible for their actions and that antisocial actions should not be punished. The notion that we can understand behavior only by looking at neural processing has been challenged by evidence for multiple realizability. Neuroscientists are beginning to understand that the brain is more complicated than we had originally, naively thought. The brain should be viewed as a complex system with multiple levels of organization, ranging from the level of neurons to mental states to brains interacting with other brains. Determinists mix up their organizational layers, applying to people in social situations the laws that govern the behavior of neurons. This is ground that the physicists covered in the last century and that the geneticists have more recently trod, and we neuroscientists need to embrace the understanding that other disciplines have worked hard to garner. It may be that consciousness is an emergent property. If we don’t start trying to understand the different layers of brain organization and how they interact, we may never get a handle on consciousness. We need to learn from the experiences of our colleagues in the other scientific disciplines; such learning is possible. That is what our brains do. Let’s use them.

Key Terms

access-consciousness (p. 616)

backward referral hypothesis (p. 619)

blindsight (p. 610)

chaotic system (p. 625)

complex system (p. 626)

consciousness (p. 609)

dualism (p. 609)

emergence (p. 626)

free-rider (p. 641)

interpreter (p. 620)

materialism (p. 609)

microstimulation (p. 618)

multiple realizability (p. 628)

qualia (p. 609)

quantum theory (p. 625)

restorative justice (p. 637)

retributive justice (p. 636)

self-knowledge (p. 609)

sentience (p. 609)

subliminal perception (p. 614)

symmetry breaking (p. 630)

utilitarian justice (p. 636)

Thought Questions

  1. Do you think you have free will? If not, do you believe our current justice system is too harsh?
  2. Given that the key premise of dualistic theories of consciousness is that conscious experience is beyond the realm of physical sciences, how can you reconcile this view with scientific investigation? If consciousness is nonphysical, then presumably it cannot be measured. If it cannot be measured, how can it be studied?
  3. Should we toss out both dualism and materialism, ignoring the notion that consciousness is made up of many hierarchical components, and start over? Discuss your answer.
  4. Because blindsight participants have deficits in visual awareness, they are often held up as archetypal cases for consciousness investigations. What is wrong with this approach? Can studying unconsciousness in the damaged brain really tell us anything about consciousness in the intact, healthy brain? Explain your answer.
  5. Can Libet be right? Do we actually live 500 ms in the past? If so, do we really control our actions, or are we just reacting and then interpreting our behavior afterward? How does this fit in with Gazzaniga’s views on the left-brain interpreter that has been demonstrated in split-brain patients (see Chapter 4)?

Suggested Reading

Churchland, P. (1988). Matter and consciousness. Cambridge, MA: MIT Press.

Damasio, A. (2000). The feeling of what happens: Body, emotion, and the making of consciousness. New York: Harcourt Brace.

Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown.

Gazzaniga, M. S. (2011). Who’s in Charge? New York: Harper Collins.

Koch, C. (2012). Consciousness: Confessions of a romantic reductionist. Cambridge, MA: MIT Press.

Libet, B. (1996). Neural processes in the production of conscious experience. In M. Velmans (Ed.), The science of consciousness (pp. 96–117). London: Routledge.

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515–526.