
THE PROBLEM OF CONSCIOUSNESS: MENTAL APPEARANCE AND MENTAL REALITY

by JOSH WEISBERG

A dissertation submitted to the Graduate Faculty in Philosophy in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York 2007

© 2007

JOSH WEISBERG

All Rights Reserved


This manuscript has been read and accepted for the Graduate Faculty in Philosophy in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy.

_________Michael Levin____________
Chair of Examining Committee

________________ Date

_______John D. Greenwood_________
Executive Officer

________________ Date

Supervisory Committee:
David M. Rosenthal
Michael Devitt
Douglas Lackey

THE CITY UNIVERSITY OF NEW YORK


Abstract

THE PROBLEM OF CONSCIOUSNESS: MENTAL APPEARANCE AND MENTAL REALITY

by Josh Weisberg

Adviser: Professor David M. Rosenthal

Consciousness is widely seen as posing a special explanatory problem for science. The problem is rooted in the apparent gulf between consciousness as it appears from the first-person perspective and consciousness as it is characterized in scientific theory. From the first-person perspective it seems that we directly access intrinsic qualities of conscious experience, qualities immediately known, but isolated and indescribable. But scientific theory seems ill-equipped to explain such qualities. That is how things appear from the first-person perspective. However, in this dissertation I argue that we have no reason to accept that these appearances reflect the underlying nature of the conscious mind. I begin by examining attempts by David Chalmers, Joseph Levine, and Ned Block to characterize the problem of consciousness. I argue that all three fail to establish anything more than the claim that consciousness appears to have intrinsic qualities. If we can explain these appearances without endorsing the reality of intrinsic qualities, the way is open to a satisfying materialist theory of consciousness.


The key, I argue, is to provide a feasible model of first-person access. I consider a popular model of first-person access, the "phenomenal concepts" approach, but I argue that this model either fails to explain the appearances or it posits an undischarged mysterious element, undermining the proposed explanation. I then defend a model of first-person access that avoids these pitfalls. I propose that we access our conscious states by way of an automatically applied nonconscious theory. We are unaware of the theory's application, so the access seems direct. Further, we are unaware of the rich relational descriptions the theory employs. It therefore seems to us that we are accessing intrinsic, indescribable qualities. I then present empirical evidence for my model, including the phenomenon of "expert perception" in chess, music, wine tasting, and the appreciation of beauty; and a range of data suggesting that introspection is not a reliable guide to the underlying nature of the conscious mind. I conclude by countering the knowledge argument with a version of the "ability hypothesis," in order to close off any lingering philosophical worries concerning the problem of consciousness.


Acknowledgments

Portions of this dissertation were presented at the Association for the Scientific Study of Consciousness, Towards a Science of Consciousness, The Society for Philosophy and Psychology, The New Jersey Regional Philosophy Association, and the CUNY Graduate Center Cognitive Science Symposium and Discussion Group. My sincere thanks to audiences at those venues for helpful feedback and comments.

This work has benefited from the help and support of a number of people. First, my fellow students and colleagues who offered helpful and constructive criticism throughout: Jared Blank, Gregg Caruso, Jim Hitt, Uriah Kriegel, Pete Mandik, Roblin Meeks, Doug Meehan, Bill Seeley, and Liz Vlahos. Thanks also to a number of professors who gave time and effort in support of this project: Martin Davies, Michael Devitt, John Greenwood, Doug Lackey, Michael Levin, and Barbara Montero. Special thanks to my thesis adviser, David Rosenthal. Without his help and effort throughout my time in graduate school, I would not be the philosopher I am today.

Finally, thanks and love to my friends and family who stuck with me through the long years, especially my wife Ashley Hope. Without her wise counsel and timely motivation this project would never have seen completion.

This dissertation is dedicated to my loving parents, Bob and Judy Weisberg.


THE PROBLEM OF CONSCIOUSNESS: MENTAL APPEARANCE AND MENTAL REALITY

Table of Contents

CHAPTER 1: FIXING THE DATA, AND CHALMERS'S HARD PROBLEM
  1.1 Fixing the Data
    1.1.1 The Need for a Neutral Method
    1.1.2 Does the Commonsense Approach Really Capture the Data?
  1.2 Chalmers's Hard Problem of Consciousness
    1.2.1 A Model of Reductive Explanation
    1.2.2 The Hard Problem
    1.2.3 Criticisms of Chalmers's View

CHAPTER 2: LEVINE AND BLOCK
  2.1 Levine's Explanatory Gap
    2.1.1 Levine's Defense of Materialism
    2.1.2 The Explanatory Gap
  2.2 Block's Phenomenal Consciousness
    2.2.1 Kinds of Consciousness
    2.2.2 Block's Identity Thesis and the Harder Problem of Consciousness
    2.2.3 Block and Levine
  2.3 Conclusion: What's the Problem?

CHAPTER 3: THE APPEARANCES TO BE EXPLAINED AND THE PHENOMENAL CONCEPTS APPROACH
  3.1 What Are the Appearances?
    3.1.1 Three Features of Conscious Experience
    3.1.2 Immediacy
    3.1.3 Independence
    3.1.4 Indescribability
    3.1.5 Are These Really the Appearances that Create the Problem?
  3.2 The Phenomenal Concepts Approach
    3.2.1 What Is the Approach?
    3.2.2 Loar's Recognitional Concepts of Experience
    3.2.3 Papineau's Quotation Model
    3.2.4 Perry's Humean Ideas
  3.3 Criticisms of the P-concepts Approach

CHAPTER 4: A MODEL OF FIRST-PERSON ACCESS
  4.1 Descriptions and the Doctrine of Cartesian Modes of Presentation
    4.1.1 What Are Descriptive Concepts?
    4.1.2 The Doctrine of Cartesian Modes of Presentation
  4.2 Expert Perception
    4.2.1 Chess and Chunking
    4.2.2 Music, Wine, Chick Sexing, and Attractiveness
    4.2.3 Expert Qualia and Dreyfus
  4.3 A Descriptivist Model of First-Person Access
    4.3.1 Conditions of Adequacy
    4.3.2 Characterizing the Model
    4.3.3 Empirical Evidence

CHAPTER 5: EMPIRICAL EVIDENCE, AND THE KNOWLEDGE ARGUMENT
  5.1 Evidence Against the Accuracy of Introspection
    5.1.1 Common Sense and the Accuracy of Introspection
    5.1.2 Accepting the Evidence
    5.1.3 The Empirical Results
    5.1.4 The Effect of the Empirical Results
  5.2 Empirical Evidence for a Descriptivist Model of First-Person Access
    5.2.1 Chess Expertise
    5.2.2 Wine Tasting Expertise
  5.3 The Knowledge Argument

REFERENCES


CHAPTER 1: FIXING THE DATA, AND CHALMERS'S HARD PROBLEM

It is widely accepted that consciousness poses a special explanatory problem for science. In the 17th century, near the dawn of the modern era, John Locke expressed what has become a well-entrenched sentiment concerning the mysterious nature of the conscious mind. He wrote:

'Tis evident that the bulk, figure, and motion of several Bodies about us, produce in us several Sensations, as of Colours, Sounds, Tastes, Smells, Pleasure and Pain, etc. These mechanical Affections of Bodies, having no affinity at all with those Ideas, they produce in us, ... we can have no distinct knowledge of such Operations beyond our Experience; and can reason no otherwise about them, then as effects produced by the appointment of an infinitely Wise Agent, which perfectly surpasses our Comprehensions.1

The quantifiable features of reality (bulk, figure, motion) seem utterly distinct from the conscious ideas we experience. Locke despaired of ever grasping their interconnection. Samuel Johnson later concurred, writing, "Matter can differ from matter only in form, bulk, density, motion and direction of motion: to which of these, however varied or combined, can consciousness be annexed?"2 In the latter part of the 19th century, even as modern science moved from triumph to triumph, the biologist Thomas Huxley wrote:

1 Locke, Essay, IV, iii, 28, 558-559.
2 Quoted in Minsky 1985, 19.


How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nerve tissue, is just as unaccountable as the appearance of Djin when Aladdin rubbed his lamp (quoted in Tye 1995, 15).

While other facets of nature were illuminated, consciousness remained in the dark. The 150 years since Huxley have seen the establishment of scientific psychology and neuroscience, of relativity theory and quantum mechanics, and the advent of biochemistry and molecular genetics. But consciousness apparently remains as perplexing as ever. In a recent work, David Chalmers writes that

Consciousness is the biggest mystery. It may be the last outstanding obstacle to our quest for a scientific understanding of the universe. ... We have good reason to believe that consciousness arises from physical systems such as brains, but we have little idea how it arises or why it exists at all. ... We do not just lack a detailed theory; we are entirely in the dark about how consciousness fits into the natural order (Chalmers 1996, xi).

In a similar vein, Colin McGinn writes

We know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so. It strikes us as miraculous, eerie, even faintly comic. Somehow, we feel, the water of the physical brain is turned into the wine of consciousness, but we draw a total blank on the nature of this conversion (McGinn 1989, 529).

And with characteristic bluntness Jerry Fodor adds, "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious" (Fodor 1992, 5).

What is it that inspires such pessimism about explaining consciousness, despite the impressive scientific progress of the last half-millennium? Perhaps we are too close to the subject matter to see it clearly. It seems that we know our conscious minds differently than we know everything else. Our conscious minds are directly and immediately available to us--nothing seems nearer to us, or better known. Furthermore, consciousness makes up the experiential bedrock of our world. Everything else is known through it, by reasoning from the evidence that it brings. Descartes went so far as to hold that we cease to exist without consciousness. There is seemingly no way to step back and peer at the conscious mind from an objective distance.

But such distance is apparently required by science. Modern science employs quantitative, mechanistic theories whose precision, scope, and predictive power leave little doubt about their primacy in explaining our world. To the extent that a phenomenon is brought under scientific theory, we count it as explained. But consciousness seems to evade the systematic web of science. From the inside, consciousness appears basic, indivisible, and unquantifiable. We grasp consciousness more directly and with a greater feeling of certainty than we grasp anything else. But what we grasp doesn't seem to fit into the orderly world of science.

This is how things appear. But what reason is there to accept this radical dichotomy? Is it really so obvious that consciousness cannot fit into our scientific world view? And can argument and evidence be brought to bear on this issue, or are we simply at the mercy of brute intuition? Recently, a number of researchers have worked to clearly demarcate the problem of consciousness, and to defend the idea that there is more to the problem than mere intuition-mongering. In what follows, I will present and criticize the efforts of David Chalmers, Joseph Levine, and Ned Block, with the aim of distilling out the core of what must be explained by a theory of consciousness. Then I will develop and defend a theory of the access that we have to our conscious states, a theory fully amenable to a scientific explanation of consciousness. But to begin, I will make some important claims concerning how we should go about pinning down the data that a theory of consciousness must explain. This preliminary step is crucial if we are to avoid the temptation either to operationalize away the alleged problem or to cloak the explanatory data in unnecessary mystery.

1.1 Fixing the Data

1.1.1 The Need for a Neutral Method

Many important issues concerning the possibility of an explanation of consciousness turn on how we pick out the phenomenon we are trying to explain. Therefore, it is of utmost importance that we find a neutral way to fix the data a theory of consciousness must explain, one that both accurately captures just what we wish to elucidate and avoids cloaking consciousness in terms that antecedently beg the question against one view or another. There are difficulties inherent in this process, however. On the one hand, we run the risk of defining away our problem, of characterizing consciousness illicitly in terms that aid scientific explanation at the expense of the very features that first attracted our interest. On the other, we are in danger of characterizing consciousness as that which is by nature inexplicable and beyond our intellectual reach. While this may become apparent at the end of our inquiry, building it into the definition of the phenomenon eliminates the possibility of progress at the outset. To paraphrase David Chalmers, we could define "world peace" as a ham sandwich, and it would be much easier to achieve world peace, though perhaps less than satisfying in a geopolitical sense. But by the same token, we could define "ham sandwich" as world peace, and go unnecessarily hungry at lunchtime. The trick, therefore, is to find a way to fix the data in neutral terms, and to avoid prejudicial misdescription of our subject matter.

A neutral method of data fixing has several requirements. One, it must accurately capture the phenomenon in question. Two, it must do so in language that is as neutral as possible, to avoid begging important theoretical questions. And three, it must refrain, as much as possible, from delivering pronouncements on the underlying nature of the phenomenon in question. What we desire is a characterization of that which must be explained, not of the underlying processes or materials that account for the explanandum. The last constraint should be handled with care in order to skirt the problem of operationalizing away the subject in pursuit of scientific theory. But it is also important to keep in mind that what we want is a cataloging of how things appear at the outset of theorizing--how things seem to be before we apply this or that theory to explain the data. This requirement dictates that claims about the underlying nature of the phenomenon should come at the end of a chain of reasoning, not at the beginning. If the subject matter really does defy scientific explanation, this will become apparent in the course of theory-building and empirical research.

Thus, as much as possible, we want a pretheoretic characterization of the data, a description of how things appear prior to explicit theorizing. This is the initial step in scientific theorizing generally. Among other things, Galileo's principle of inertia explains why, pretheoretically, it appears that we and the objects around us are not in rapid motion, despite the fact that the earth is moving rapidly. This requires establishing how things appear at the start of theorizing. In long-established scientific research programs, the data to be explained are often explicitly the product of prior scientific theory. But at the outset (and often in the final analysis as well) we require an explanation of the pretheoretic appearances.

How should we go about pinning down the appearances? Mental state terms owe their meaning to their place in our everyday practices of predicting and explaining each other's (and our own) behavior. We all employ the terms "conscious," "sensation," "thought," etc. in everyday discourse, and we recognize how to apply the terms to ourselves and others. This is where our mental terms get their initial life, and this is where the factors fixing our intuitions about the subject matter are ensconced and codified. Indeed, employing terms like "thought" and "sensation" outside of common usage invokes the very confusion that the proponents of a problem of consciousness warn against. The sort of "bait and switch" they deride arguably involves starting with the folk meaning of a term, and offering a theory of something else. Thus, we should look to our common, everyday usage of mental state terms in order to pin down the data that must be explained.

David Lewis defends the idea that we should look to our commonsense characterization to fix the initial data. He writes,

We have a very extensive shared understanding of how we work mentally. Think of it as a theory: folk psychology... Folk psychology has evolved over thousands of years of close observation of one another. It is not the last word on psychology, but we should be confident that so far as it goes--and it does go far--it is largely right (1994, 298).

Folk psychology has the virtue of being free from abstract or technical language; it is couched in everyday terms which are unlikely to carry heavy-duty metaphysical commitments and presuppositions. For the same reason, it will not present a characterization of the data in language tilted towards a particular scientific view, or in terms operationalized to fit a specific research agenda. Finally, it arguably respects our first-person access to the mind. Folk psychology certainly allows that we know our own minds in a different way from the way we know the minds of others. Further, it allows that we have introspective access to our conscious states, and that this is part of what fixes our commonsense terms for the mind.

That is not to say that folk psychology endorses an infallibility claim about the mind; arguably, this is much stronger than the dictates of common sense (see below). But there is no barrier to including introspective data in a folk characterization of the mental, and to granting that we know our own minds in a distinctive way.

But haven't I already gone back on my claim that we need a pretheoretic characterization of the data? If folk psychology itself is a theory, as Lewis urges, won't its "hidden presuppositions" infect the data as much as any other theory? I will address this point in more detail below, when I consider worries along these lines presented by Paul Churchland. But already a case can be made that this is the most neutral characterization we are going to get. Saying that folk beliefs about a particular domain make up a theory is to say that the beliefs are geared towards explaining a limited domain of facts, and that they provide the background framework that lets us see certain types of claims as explanatory. Saying that Mary went to the fridge because she wanted a beer is explanatory because it falls under the general rule that people who want beer and believe there is beer in the fridge will usually get a beer. The key point is that these are everyday beliefs, and not the product of an abstract philosophical or scientific theory. These beliefs constitute what people think about the mind in everyday contexts.

And what's more, there seems to be no better place to go for the desired neutrality. Even careful attempts to "bracket" off the contributions of folk psychology from the way mental phenomena appear to us require a characterization of just what folk psychology adds, so that we can appropriately cordon off its influence. Further, such a characterization requires a prior theory about how the mind works, in order to justify the claim that we can bracket things in this manner at all. But that is precisely what is at issue here. This sort of justifying explanation will have to say what the mind is, what consciousness is, etc., such that it could be bracketed. And how is that to be done, prior to bracketing? Common sense may involve theory in the mild way sketched by Lewis, but it does not make such substantial claims about the underlying workings of the mind. Thus, it is neutral in the sense we require. In what follows, I will at times use the term "pretheoretic," but it will always refer to our folk-psychological characterization of the mind.

The commonsense approach does have the advantage over pure introspective approaches of being publicly checkable, after a fashion. A pure introspective method is one where a subject simply "peers" inward, and characterizes the data according to what they introspect. However, if there is disagreement over what is introspected, there is no higher court of appeal; we are stuck with the contradictory results, and can't proceed further (witness the disagreements among the early introspective psychologists over "imageless thoughts"). The commonsense approach, on the other hand, requires at least that the characterization fit with what most people would assent to if prompted. Thus, the other members of the folk community provide a check upon the characterization: if the claim is too controversial, it will not make it into the common usage that defines folk psychology.

Furthermore, folk psychology allows for third-person correction of first-person ascriptions, at least in certain cases. For example, it occasionally happens that we may be in a mental state, perhaps a desire or emotional state, and fail to recognize it ourselves. We may in fact deny that we are in that state. But it may be apparent to others that we desire X or are angry about Y, despite our protestations. And we may later come to agree. There are even times when we can be corrected about our self-attributions of sensations, like pain. This is most obvious when we suggest to a frightened child that they are not hurt after a scary fall, despite the presence of blood. As adults, we are open to the suggestions of doctors or dentists (at least on occasion!) when they assure us that a procedure really doesn't hurt, and that instead, in our anxious state, we've misinterpreted a stimulus as painful. This may not happen often, but it is familiar enough, and it certainly does not elicit a sense that such scenarios are impossible or contradictory. Thus, folk psychology disavows Cartesian infallibility and transparency concerning the mind. Again, this is not to rule such claims out of bounds; instead, it suggests that, according to the commonsense approach, such claims require further argument.

Thus, the commonsense approach provides a neutral means of fixing the data, one that captures our "pretheoretic" intuitions concerning the mind, but also allows for a degree of cross-checking and public correction. Further, it delivers mental terms in language that is as independent of theoretical commitment as possible. It is not (at least overtly) weighed down with metaphysical claims and presuppositions, and it does not operationalize terms illicitly in the service of a particular theory.

1.1.2 Does the Commonsense Approach Really Capture the Data?

However, the commonsense approach as I have laid it out is already open to criticism, from more than one direction. The first line of criticism holds that the approach does not take the first-person perspective seriously enough, and thereby fails to properly characterize the conscious mind. The second line of complaint holds that the view privileges the prejudices and confusions of common sense, and thereby limits the possibility of a scientific theory of consciousness. I will address these concerns in turn.

The first worry holds that the commonsense approach puts too much weight on third-person prediction and explanation of behavior. After all, the main role of folk psychology in our lives as social animals is to predict and explain the behaviors of our conspecifics. This accounts for the utility and ubiquity of folk psychology; however, it seems to downplay our first-person access to the "feel" of conscious states. Indeed, one of the main defenders of the approach, David Lewis, argues that folk psychology generally defines mental states in functional terms. Some argue that this leaves no room for consciousness.3

3 See Lewis 1972, 1980. See Block 1978 for the criticism of the view. But see also Lewis 1983a for a response.

Furthermore, the approach has a close affinity with the method prescribed by Daniel Dennett for fixing the data, the "heterophenomenological approach" (Dennett 1991, chapter 4). That view dictates that we consider the reports of subjects as part of the data; further, if we can explain why they make these reports, we have provided an adequate explanation of the phenomenon. They may make the reports because they are accurately describing their mental states. But they may make the reports for other reasons, and therefore we are not committed to accept the veracity of the reports, no matter what. Dennett likens the task of the consciousness researcher to a field anthropologist collecting data on a forest religion: we take the indigenous subjects' reports as canonical on what they take the religion to be, but we withhold judgment on the truth of their reports of gods and spirits. They are certainly correct that this is how their religion is practiced, but it may well turn out that there is no omnipotent forest god, despite the sincere reports and beliefs of the subjects. The situation is the same in the case of the conscious mind. All reports must be independently corroborated before they are accepted as indications of real phenomena.

Objectors to Dennett's view contend that it unfairly begs the question against the existence of phenomenal consciousness (Block 1992; Chalmers, website). If a phenomenon cannot be established without third-person confirmation, it will fail to qualify as a legitimate object of scientific study and its very existence will be called into doubt. Furthermore, the view takes subjects' reports as the data to be explained, rather than taking the reports as evidence for conscious states which are to be explained. Chalmers, in responding to a piece of Dennett's, argues instead for a method of data fixing defended by Max Velmans (1996, 2000). Velmans argues for what Chalmers terms a "critical phenomenology" in which we

accept verbal reports as a prima facie guide to a subject's conscious experience, except where there are specific reasons to doubt their reliability... On this view, we're not interested so much in reports as data to be explained in their own rights (though we might be in part interested in that). Rather, we are interested in them as a (fallible) guide to the first-person data about consciousness that we really want to be explained (Chalmers, website).

A similar complaint is lodged by Charles Siewert against Dennett's position (Siewert 1998). He argues that first-person reports have a "distinctive warrant" which renders them, as it were, innocent until proven guilty. He runs through a number of cases that many take as empirical evidence that we are often wrong about the contents of our minds. He considers work in social psychology championed by Nisbett and others (e.g. Nisbett and Wilson 1977), and argues that it fails to show systematic error concerning the contents of our minds. Rather, it shows errors concerning the antecedent causes of our occurrent beliefs. But this indicates a lack of causal knowledge instead of a lack of knowledge concerning our current mental states. He concludes that

since we have no adequate reason to deny what we pre-epistemologically would judge about our knowledge of our minds, we are entitled to proceed on the assumption that we do have a distinctively first-person knowledge of our mind, and to rely on such knowledge in our inquiries about ourselves, in lieu of the discovery of some compelling reason not to do so (Siewert 1998, 23).

14 Thus, first-person reports require no independent corroboration in the absence of a compelling challenge, contrary to Dennett's approach. For my purposes, it is important to note that both Chalmers and Siewert take first-person reports as fallible, though prima facie acceptable in the absence of good counterevidence. Also, both seem committed to a pretheoretic or "preepistemological" status for our data-fixing claims. This is in line with the commonsense approach, which holds that folk-theoretic claims fix the data. It is clear that by "pretheoretic" and "pre-epistemological," neither theorist wishes to rule out folk psychology in Lewis's sense. But to address their central worry, there is no reason that to deny that reports are evidence for inner states or to accept that they are the sole and exclusive data a theory must explain. Indeed, folk psychology licenses this very evidential link: we take our reports and the reports of other to indicate what mental states we are in. And the folk characterizations that fix the data for a theory of consciousness are characterizations of mental states themselves, not of reports. But arguably the conflict between Dennett's heterophenomenology and the views of Chalmers and Siewert is illusory. Dennett's view also allows that reports are evidence of real states; he simply puts stress upon the idea that such reports are fallible and require some kind of independent means of confirmation if we are to step out of the trap that plagued introspectionist psychology. Later in his 1991 book, Dennett does reject the notion of consciousness defended by Chalmers, Block, and others, and this leads him to hold that all that remains to be explained is subject's reports of such states. But this is an application of the

method that can be challenged. In fact, one might argue that Dennett is too quick to dismiss the possibility of additional evidence for the presence of a more richly characterized phenomenon (see Rosenthal 1995; Akins 1996). Still, even if one establishes that heterophenomenology begs the question against phenomenal consciousness, I have argued that the commonsense approach is free of that bias. Finally, the commonsense method is defended by one of the early defenders of a nonreductive view of consciousness, Frank Jackson.[4] Jackson in his 1998 book and in a work with David Braddon-Mitchell (1996) argues for the commonsense approach to fixing the data, while contending that other approaches that look more directly to the sciences fail to adequately capture what we wish to explain. Such approaches tend to ignore specific difficulties inherent in the subject matter, and operationalize away important phenomena. While there are responses to this claim, I only wish to stress that Jackson did not feel that such a method shortchanged consciousness or the first-person point of view. Indeed, he collaborates with Chalmers on a piece that dovetails well with the overall view (Chalmers and Jackson 2001). And it is arguable that Chalmers himself employs the commonsense approach in the first chapter of his 1996 book, where he characterizes a number of concepts of consciousness and mind.

[4] Jackson has since rejected his anti-reductionist stance, and now holds that reduction is in principle possible, despite his earlier worries. See Chalmers and Jackson 2001; Jackson 2003.

If the method is acceptable to Jackson and Chalmers, it is hard to argue that it is not able to take the phenomenon of consciousness seriously.[5] However, this may appear to open up the commonsense approach to a charge from the opposite direction, that the method overvalues the view of the folk at the expense of a scientific approach to the mind. Why think the folk have any real insight into the nature of the mind, or have anything useful to say about fixing the data for a scientific theory of consciousness? All we get from folk psychology is a collection of loosely related intuitions, intuitions that are shaped by ignorance about the real workings of the mind. Further, the method may seem committed to a priori analyses of mental phenomena, and such analyses have proven a failure at grounding a science of the mind. Indeed, it may appear that the commonsense approach endorses the idea that there are analytic truths about the mind which trump scientific claims. Why think a method apparently committed to these dubious notions has any role to play? Block and Stalnaker (1999) present a multifaceted attack on the idea that a priori conceptual analysis has any role to play in reductive scientific explanation. They argue that the requisite analyses are never actually
[5] There is an additional position on fixing the data which I have bypassed. The phenomenological tradition holds that we must be trained in taking a specialized kind of attitude towards our own mental states; only then can we "bracket" the elements of our naive conception of mind and get at its true nature. Further, this view holds that mind is irreducibly subjective and distinct from the objective. Thus, commonsense would have no real insight into the proper objects of study for a theory of the mind. However, this approach has its troubles. First, it is not clear to me how we justify the ability to bracket. Why not think that the bracketing is theory laden, and that, therefore, the results of the investigation simply show us what we already believed? Further, the view suffers from a great difficulty in cross-checking and confirming its investigations. If the phenomena of study are irreducible in this sense, is there any reliable intersubjective check on theoretical claims? Finally, the attitude or intuition that is taken towards introspection is questionable. Why think our "inner eye" has the ability to see into the nature of the mind at all, particularly in this unfettered manner? My discussion of this issue is prompted by Francisco Varela's interesting 1996 piece "Neurophenomenology: A Methodological Remedy for the Hard Problem."

forthcoming; further, even the sketchiest attempts at place-holders fall prey to counterexamples. In addition, they argue that a number of theoretical constraints which they lump under the heading of "simplicity" demonstrate that empirical considerations are always present in data fixing, even in the simplest cases. The real reason, they argue, that we posit theoretical identities is to simplify our theories and increase their range of prediction and explanation. Further, this shapes our decisions about the extension of the terms in question. We cannot tell which moves will achieve these theoretical goals until we have empirical data. Thus, empirical considerations are always at play in data fixing, undermining the possibility of a priori analyses. But, contrary to the views of some of its proponents, the commonsense approach is not committed to a priori analyses or analytic truths. Instead, the view is committed only to analyses that characterize phenomena relative to the principles and assumptions of folk psychology. Given that our folk beliefs shift over time in the face of new data, there is no reason to believe that a term's meaning is fixed come what may. Even our everyday terms for mental states are open to revision, and therefore are not analytic. While at present it may seem just obvious that a particular change in the use of a term would represent a change in its meaning, we cannot know beforehand how we will react to new empirical data, especially when we take into account the holistic nature of theory confirmation (Quine 1951). We may choose to alter our weighting of the various factors that fix the meaning of a term, in order to preserve some other aspect of our folk theory. It proves to be arbitrary whether

to view this shift as a change in meaning or the falsification of certain empirical beliefs. There is no further source of data to decide this question in a principled manner. Thus, there is no supportable analytic/synthetic distinction, and it cannot be invoked here. The meanings of our folk psychological terms are not knowable independently of experience, and the process of ascertaining those meanings is not disconnected from the rest of what we believe, both about the mind and in other domains. However, this does not rob folk psychology of its role in fixing the data. Instead, we must take into account that we are engaged in an empirical study and that our data is only interpretable against the background of the rest of our beliefs and theories. Still, the relatively stable common knowledge of folk psychology provides the best place to fix the data for a scientific theory of consciousness. However, it may seem that our folk analyses are worked out "in the armchair" and thus are plausibly counted as a priori in the relevant sense. They appear to require no empirical support for justification. To access our folk theory, all we need to do is reflect on our own beliefs, and this requires no outside help. But why think that, just because such a theory is active in our predictions and explanations of behavior, it is evident upon private reflection just what the principles of the theory are? Some may be better than others at unpacking the principles implicit in the folk views, and the process of investigation generates disagreement about the shape of the commonsense view. When differences over interpretation occur, they are reasonably settled by canvassing a variety of subjects, and this is an empirical matter. The justification for such claims comes

from how well they agree with the attitudes and responses of the folk, not from a nonexperiential source.[6] Furthermore, our commonsense theory evolves and shifts under pressure from a variety of sources, including changing cultural fashions and customs, and scientific theories and opinions that get folded back into everyday knowledge. While the rate of alteration may be slow, it does occur over time, undermining the idea that all of our folk views on the mind are accessible a priori. Consider the case of unconscious thought. Locke, in his Essay, holds that nonconscious thought is impossible; indeed, it seems to him almost a contradiction in terms (Locke 1979/1689). In these passages, he is in part appealing to a commonsense use of the terms he has in mind; he presents little in the way of argument, and instead appeals to the seeming absurdity of the claim. This provides an indication that the idea of nonconscious thought may not have fit particularly well with commonsense views in Locke's day. But with the widespread popularity of the works of Sigmund Freud, the idea of nonconscious thoughts and desires driving our behavior became commonplace. Freud's theories, inspired by the goal of a scientific psychology, altered the folk view of the mind. For my purposes, the main point is that these attitudes about the mind fail to display the stability needed for a priori access. Such views are thus learned, and therefore must be investigated from an empirical standpoint. Our justification for the use of these terms traces back to a learned public theory about
[6] In fact, this seems to correspond well to Block's approach to characterizing consciousness. He employs a "pre-theoretic" paradigm, and then considers a variety of thought experiments to sharpen that conception. Finally, he argues that the empirical evidence for his characterization is not conclusive, but provides a number of weak but potential lines of evidence, when taken as a group and viewed in light of general theoretical principles (Block 2001). See chapter 2, below.

whose principles we can be in error. A priori reflection provides no justification for such claims. The commonsense approach therefore is not committed to a priori analyses or analytic truths. Instead, it relies on an empirical investigation of the principles of an acquired public theory, one that is stable enough and accessible enough to be termed common knowledge, but not one that is beyond revision and known independent of experience. In this manner, the considerations that Block and Stalnaker term "simplicity" are a central part of the method. We develop our best interpretations of the folk theory, and in order to confirm our picture, we run it up against folk responses and attitudes. But in adjudicating between rival interpretations, considerations of simplicity and the like will always be at play. Sometimes, this may be the only way to really decide between two views: one view fits in with theoretical and empirical considerations better than the other, despite the fact that both make plausible cases concerning commonsense theory. Still, this may seem to miss the force of the initial criticism. Why think that folk psychology is a useful guide to the mind at all? Paul Churchland (1981) has argued that folk psychology is a relatively weak theory, when held up to the promise of an advanced neuroscientific approach. He argues that over time we will reject folk theory because it will be replaced by a better theoretical approach to the mind. Clumsy folk notions like "belief" and "desire" will fade away in the face of more precise and more informative neuroscientific replacements. Therefore, the practice of mining the folk view for data is of no value. We are

simply making precise terms in a dying theory, like diehard vitalists making clear what they mean by "élan vital." But this criticism is wide of the mark. First, there is an intermediate position between blind acceptance of folk psychology and its elimination. Instead, it is reasonable to assume that the discoveries of a future neuroscience will inform and sharpen our folk conceptions like "belief" and "desire." The mere fact that certain commonsense characterizations are in error does not remove the possibility that they may be revised in the face of new data. As noted with the case of nonconscious mental states, the folk view is certainly revisable as new theoretical information disseminates into the common view. Further, to put the point more directly, we need not think that when the ancients talked about stars, they were referring to something different from what we do, despite a belief that stars were holes in the sky (Rosenthal 1980; Stich 1996). Folk beliefs have a malleable quality in the face of new evidence; this is plausibly the case regarding beliefs about the mind. However, this is not to say that the folk view is inviolate. If there is truly compelling evidence that folk belief is radically incorrect about the mind, such a belief should be rejected. But the prior option of revision must be ruled out first, on the general theoretical principles that one should save the appearances and minimize damage to our preexisting beliefs, if possible. Further, the strong track record of folk psychology in predicting and explaining the actions of ourselves and others supports the idea that the commonsense theory must accurately capture aspects of our mental functioning. If not, what explains its success? So

it is much more likely that a revisionary, rather than an eliminativist, strategy will be appropriate. In addition, if there is no plausible connection between the folk view and a new theoretical claim, lingering open questions and doubts will persist. To the extent that a theory both respects neuroscientific results and explains folk intuitions, it is to be favored. And one cannot forge this connection without a clear idea of the phenomena as viewed from the folk perspective. In the final analysis, we may be compelled to reject closely held folk principles in the face of scientific progress; nothing rules out this possibility beforehand. But we should certainly aim, at this point in inquiry, at a view that both respects and explains folk-theoretic claims and incorporates modern neuroscientific results. Indeed, one wouldn't know how to begin to evaluate the mind, from a neuroscientific or any other point of view, without a clear delineation of our uses of folk terminology. Interpreting an fMRI or a PET scan requires a background theory to underwrite the interpretation. Folk psychology, either explicitly or implicitly, provides that theory. Taken together, all of these considerations suggest that the commonsense approach is the best option for fixing the data and determining the conditions of adequacy on a theory of consciousness. It respects the importance of our commonsense notions in providing the initial meaning of our mentalistic terms, it is in a position to "take phenomenology seriously," and it is open to experimental input and theoretical change. The approach provides a

stable enough platform to characterize consciousness in a non-question-begging way, while leaving the door open to scientific explanation. In the following three sections of the dissertation I will examine the attempts of David Chalmers, Joseph Levine, and Ned Block to capture what is distinctively problematic about consciousness. In light of my commonsense approach to fixing the data, if a characterization deviates from our folk-psychological conception, the burden is on the theorist offering the characterization to support that proposal. If such support is lacking, the claim will be rejected as begging the question. In particular, I will focus on the distinction between the way the conscious mind appears to us from a commonsense point of view, and the conscious mind's underlying nature. As I stressed above, folk psychology does not license any sort of infallible access to the nature of the mind--what we are aware of from a commonsense point of view is simply how things appear to us. This is appropriate: what we desire is a characterization of the everyday appearances. To the extent that a theorist, in fixing the initial data to be explained, moves from these appearances to the underlying reality, substantial argument is required. This, of course, is not to claim that no such argument can be made; rather, it is to highlight the burden of proof regarding claims about the underlying nature of consciousness.

1.2 Chalmers's Hard Problem of Consciousness

1.2.1 A Model of Reductive Explanation

In his 1996 book The Conscious Mind and in a series of related papers (1995, 1999, 2003), David Chalmers argues that consciousness poses a problem different in kind from other scientific challenges. According to Chalmers, consciousness cannot be explained in physical terms, and therefore fails to find location in a materialist ontology. It is a brute and inexplicable feature of the world. In Chalmers's terms, consciousness poses a "hard problem" for science. Chalmers begins with an alleged folk-characterization of consciousness and then argues that, given the nature of our concepts generally, we can conclude that consciousness is explanatorily problematic. In what follows, I will reject Chalmers's concept-based argument and I will question his initial characterization of the data as well. However, at the end of the chapter I will isolate an important element that any theory of consciousness must account for. Though I argue that Chalmers is premature in proclaiming a hard problem of consciousness, his work does help to clarify the burdens on any materialist theory. But to begin I will lay out Chalmers's model of reductive explanation, which is central to his antimaterialist project. Materialism is the ontological thesis that holds that all things are ultimately physical or made up of physical parts. The position matches up well with the scientific worldview: according to the materialist picture, the basic constituents of reality are fixed by our best scientific theory, and given that our best theory deals in basic physical elements, materialism holds that at root, the universe is

physical. To put the point in possible worlds talk, according to materialism, any possible world that is physically just like ours is just like ours simpliciter. Of course, there are numerous entities in the world that are not explicitly mentioned in the physical sciences. To gain legitimacy in a materialist ontology, it must be shown that such entities are ultimately nothing over and above the arrangement of basic physical parts. According to Chalmers (following work by David Lewis and Frank Jackson), this is achieved by showing that statements about the presence of the entity in question are entailed by statements about physical facts, where entailment means that "it is logically impossible for the first [statement] to hold without the second" (Chalmers 1996, 36). Establishing this entailment provides what Chalmers calls a reductive explanation of the entity in question. A reductive explanation demonstrates how the presence of certain physical conditions fully settles questions about the presence of the phenomenon in question. This shows that, ontologically speaking, there is nothing more to the phenomenon than the presence of the relevant physical conditions. If no such entailment can be demonstrated, the phenomenon is not reductively explainable, and thus can't be located in a materialist ontology. It either must be eliminated, or it must, pace materialism, be accepted as a brute feature of reality.[7] According to Chalmers, reductive explanations are available in principle for most macroscopic phenomena. He writes

[7] The term "location" is taken from Jackson 1998. Jackson argues that the "location problem" can only be solved by showing how a complete description of a "macro-level" entity is entailed by the physical description of the world. See also Lewis 1983b, 1994; Horgan 1984; Kim 1998.

For almost every natural phenomenon above the level of microscopic physics, there seems in principle to exist a reductive explanation: that is, an explanation wholly in terms of simpler entities. In these cases, when we are given an appropriate account of lower-level processes, an explanation of the higher-level phenomenon falls out (1996, 42, emphasis in original). Paradigm examples of reductive explanation are the explanation of the properties of water in terms of the properties of H2O molecules and the explanation of heredity in terms of the properties of DNA molecules. These explanations show how complex macro-level phenomena can be explained in terms of simpler entities that ultimately connect with basic physics. Chalmers argues (again in accordance with Lewis and Jackson) that a crucial step in reductive explanation is providing a conceptual analysis of the explanandum. Conceptual analysis supplies the bridge that links facts presented in one vocabulary to facts presented in another. In reductive explanation, we first conceptually analyze the target, and then show how the lower-level facts satisfy the analysis. In this way, we can deduce the existence of the target from a description of the lower-level facts. For example, consider the case of water. Chalmers argues that there is in principle an analysis of water in terms of the way water looks and behaves, in terms of the role, broadly conceived, that it plays in our lives. A rough sketch of this analysis is "the clear, drinkable liquid that fills our lakes and rivers, comes out of the tap, freezes at 0 degrees C, etc." Chalmers offers "the watery role" as a

stand-in for a completed analysis. Chalmers holds that this sort of analysis is available a priori to competent speakers of a language. We arrive at the analysis by considering a variety of possible scenarios and seeing how we would apply the term in question. We do not need to perform any empirical investigation to arrive at the analysis; instead, we reason how the term applies from our armchair. Further, the analysis need not deliver strict necessary and sufficient conditions for the application of the term. Instead, all that's required is a "rough and ready" analysis that delivers the extension of a term in a variety of cases. In fact, Chalmers allows that people would be hard pressed to come up with a workable list to fill out the analysis for "the watery role." But the fact that people make reasonably stable judgments over a range of cases shows that they grasp the term's "intension," according to Chalmers. The intension of a term is a function that delivers a term's extension in various situations. That we can reliably make such judgments is evidence of our grasp of the intension.[8] The analysis delivers a rough and ready set of conditions that allow us to see, a priori, how various lower-level conditions might realize or constitute the target phenomenon. For example, we can reason that, if H2O fills the watery role, then water is H2O. Or, if XYZ fills the watery role, then water is XYZ. This specifies how a phenomenon might be reductively explained. The final step in

[8] Chalmers (and Jackson) hold that a term actually possesses two intensions, one fixed by application at the actual world, the other fixed by considering "other worlds as actual." Chalmers calls these primary and secondary intensions; Jackson calls them A and C intensions. We are here concerned with primary or A intensions, which are knowable a priori, according to both Chalmers and Jackson. See Chalmers 1996, chapter 2; Jackson 2003.

reductive explanation is to discover empirically which lower-level stuff actually fills the analysis in question. As it turns out, H2O fills the watery role. Thus, water is H2O, and water gains acceptance in a materialist ontology by being properly located with respect to physical phenomena. H2O fits into atomic chemistry, and thus can be unpacked in principle in terms of the most basic elements of physics.[9] The completed reductive explanation has the form of a deductive entailment. The water case is reconstructed as follows:

1. Water is whatever fills the watery role (by analysis)
2. H2O fills the watery role (by empirical investigation)
3. Therefore, water is H2O (by transitivity of identity)

It follows on this model that if materialism is true, all statements of macro-level fact must be entailed by statements of what Chalmers calls the "microphysical facts," facts stated in terms of "the fundamental entities and properties of physics, in the language of a completed physics" (Chalmers and Jackson 2001, 2). The only way for entities above the microphysical level to gain legitimacy in a materialist ontology is to be reductively explained in this manner. It can be argued that Chalmers's model does not yield explanations, because deductive arguments are not in and of themselves explanatory.[10]
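The identity step in this reconstruction can be put schematically. (The symbolization below is my gloss, not Chalmers's own notation: ι is the definite-description operator, and Wx abbreviates "x fills the watery role.")

```latex
% A gloss on the water reconstruction; notation added for exposition.
\begin{align*}
\text{water} &= \iota x\, Wx  && \text{(1) conceptual analysis, a priori}\\
\mathrm{H_2O} &= \iota x\, Wx && \text{(2) empirical investigation}\\
\therefore\quad \text{water} &= \mathrm{H_2O} && \text{(3) from (1) and (2), by symmetry and transitivity of identity}
\end{align*}
```

Only premise (2) is empirical; once it is supplied, the identity in (3) follows deductively.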

[9] Throughout this dissertation, I will be referring to this style of explanation as "reduction." I will not address the issue of bridge laws and intertheoretic reduction, because the issue of reductive explanation has largely supplanted intertheoretic reduction as the focus of debate concerning consciousness. See Kim 1998 for a review of these issues, and a defense of a similar model of reductive explanation.
[10] See, for example, criticisms of the deductive-nomological model of explanation, e.g. Achinstein 1981; Salmon 1989.

Chalmers acknowledges the worry, and argues that, although reductive explanations are not "illuminating explanations," they are "mystery-removing explanations" that reduce "the bruteness and arbitrariness of the phenomenon in question to the bruteness and arbitrariness of lower-level processes" (1996, 48-49). In essence, such explanations show how an explanatory target connects with more basic facts in a way that closes off certain ontological questions. We can no longer meaningfully ask how water could be H2O: it just is H2O, because water is characterized as whatever fills the watery role, and H2O actually fills that role.[11] Consequently, according to Chalmers, reductive explanation is a two-step process. The first is an a priori conceptual analysis of the explanandum. The second is an empirical investigation to determine which lower-level entities satisfy the analysis, and therefore realize the phenomenon in question. We are left at the end with a description providing an entailment of the target by the lower-level facts. Chalmers further argues that crucial elements in this process are a priori. First, as noted, the conceptual analysis is a priori. The analysis proceeds by considering various possible scenarios, and determining how a term would be applied. This does not require any knowledge of which world we are actually in. Therefore, the justification for the analysis is not empirical. Secondly, the reductive explanation itself turns on a priori conditionals like "if H2O is the watery stuff, then water is H2O." Chalmers calls these "supervenience conditionals" (1996, 53). Our analysis delivers sets of these conditionals, and they are the crucial linking premises in reductive explanations. Finally, once the
[11] Chalmers does not endorse a model of "illuminating explanation." See Chalmers 1996, 48-49. The explication of this sort of explanation is vexed, and I will not address the issue here.

empirical information is supplied telling us which antecedent of a supervenience conditional is true in this world, the resulting argument is deductive, and requires no further empirical justification. Therefore, Chalmers concludes that we are at times in a position to determine a priori when a phenomenon is not reductively explainable. If we cannot secure the requisite analysis, or if we cannot produce a legitimate deductive argument, then we can conclude that a phenomenon is not reductively explainable. This, according to Chalmers, is the case with consciousness. Chalmers spells out the relationship between the epistemology of reductive explanation and the ontology of materialism by introducing a supervenience framework. Supervenience is a dependence relation between two sets of properties or facts.[12] Roughly, supervenience occurs when one set of facts A fully determines another set of facts B. There are a number of ways to unpack the dependence relations, yielding a variety of types of supervenience.[13] Chalmers argues that for a phenomenon to gain location in a materialist ontology, it must logically supervene on the physical. According to Chalmers, "B-properties supervene logically on A-properties if no two logically possible situations are identical with respect to their A-properties but distinct with respect to their B-properties" (1996, 35, emphasis in original).
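Chalmers's definition can be glossed in standard possible-worlds notation. (The symbolization is mine, not Chalmers's; "situations" are treated as worlds, and A(w) stands for the totality of A-facts obtaining at world w.)

```latex
% B logically supervenes on A iff no two logically possible worlds
% agree on their A-facts while differing on their B-facts.
\forall w_1, w_2 \in W_{\mathrm{log}}\colon\quad A(w_1) = A(w_2) \;\rightarrow\; B(w_1) = B(w_2)
```

Here W_log is the set of logically possible situations in Chalmers's broad, conceivability-constrained sense.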

[12] Chalmers moves back and forth between talk of properties and talk of facts (and sometimes talk of truths, i.e., in Chalmers and Jackson 2001), but at bottom he sees the relationship as holding between properties. "The appeal to facts makes the discussion less awkward, but all talk of facts and their relations can ultimately be cashed out in terms of patterns of co-instantiations of properties..." (1996, 361n2). Arguably, the same ideas can be expressed using talk of predicates instead of properties. See Klagge 1988.
[13] See Kim 1993 for a detailed discussion of the varieties of supervenience.

For Chalmers, a logically possible situation is one that is conceivable, where the constraints on conceivability include not only the rules of formal logic, but also conceptual or meaning constraints. Chalmers informally presents the notion in the following way: One can think of [logical possibility] loosely as possibility in the broadest sense, corresponding roughly to conceivability, quite unconstrained by the laws of our world. It is useful to think of a logically possible world as a world that it would have been in God's power (hypothetically!) to create if he had so chosen. God could not have created a world with male vixens, but he could have created a world with flying telephones. In determining whether it is logically possible that some statement is true, the constraints are largely conceptual. The notion of a male vixen is contradictory, so a male vixen is logically impossible; the notion of a flying telephone is conceptually coherent, if a little out of the ordinary, so a flying telephone is logically possible (1996, 35). If two conceivable situations are the same with respect to the A-properties, but differ with respect to the B-properties, B-properties do not logically supervene on A-properties. If B-properties logically supervene on A-properties, all God has to do to fix the B-properties is fix the A-properties. There is no extra work for her to do. If logical supervenience does not hold, God has additional work to do to fix the B-properties. The precise relationship between logical supervenience and reductive explanation is complex. Chalmers holds that logical supervenience is a

necessary condition for reductive explanation. We evaluate claims of logical supervenience by considering possible situations. If we can conceive of situations that are microphysically identical, yet distinct with respect to the target phenomenon, logical supervenience fails. The presence of this conceivable situation means that the microphysical conditions do not decisively fix the presence of the target. We can still meaningfully ask if the macro-phenomenon is present, even when we know the microphysical conditions. It's thus not entailed by the microphysical facts. If logical supervenience fails, there is no entailment from the microphysical facts to the facts about the target phenomenon. The a priori conditional required for reductive explanation on Chalmers's model is absent. Thus, logical supervenience is necessary for reductive explanation. However, we evaluate claims of logical supervenience by considering if there are residual meaningful ontological questions that remain open even when we are given the microphysical facts. And this just is to consider if a reductive explanation is available in principle for the phenomenon in question. So the order of priority is not clear, and in fact, our intuitions concerning the failure of supervenience seem to track our intuitions about the failure of reductive explanation. With this in mind, Chalmers writes, "in areas where there are epistemological problems, there is an accompanying failure of logical supervenience, and ... conversely, in areas where logical supervenience fails, there are accompanying epistemological problems" (1996, 74, emphasis in original). Chalmers claims to argue from the failure of logical supervenience to

the failure of reductive explanation for consciousness, but the same set of intuitions is in play. In any event, the appearance of a gap between consciousness and the physical blocks both possibilities, on Chalmers's view. It is beside the point whether we view this as a move from the ontology of supervenience to the epistemology of reductive explanation, or vice versa. The same set of intuitions determines our conclusion in either case.
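Chalmers's notion of logical supervenience can be put schematically as follows (this is my formalization of the definition he states in prose, under the assumption that conceivability fixes the space of logically possible worlds):

```latex
% B-properties logically supervene on A-properties iff:
% any two logically possible worlds indiscernible with respect to
% their A-properties are indiscernible with respect to their B-properties.
\forall w_1, w_2 \in W_L \;
  \left( \, w_1 \approx_A w_2 \;\rightarrow\; w_1 \approx_B w_2 \, \right)
```

Here W_L is the set of logically possible (conceivable) worlds, and w1 ≈_A w2 means w1 and w2 are indiscernible with respect to their A-properties. On this schema, a conceivable zombie world would stand in the physical-indiscernibility relation to the actual world while failing the phenomenal-indiscernibility relation, falsifying the conditional and so blocking the supervenience claim.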

1.2.2 The Hard Problem
We are now in a position to detail Chalmers's argument for the "hard problem" of consciousness. The first step involves a conceptual analysis of "consciousness." Chalmers argues that we possess two distinct kinds of concepts that characterize the mind, psychological concepts and phenomenal concepts. "On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does" (1996, 11, emphasis in original). Psychological concepts characterize mental states in functional terms, in terms of perceptual inputs, behavioral outputs, and relations to other mental states. Phenomenal concepts characterize mental states in terms of subjective experience, in terms of what it is like for the subject to be in that state.14 Chalmers grants that many states can be characterized in both ways, but he argues that reflection on hypothetical cases delivers distinct psychological and phenomenal analyses. A typical psychological concept is

14 See Nagel 1974, for the introduction of this locution into contemporary philosophical discussions of consciousness.

"learning."15 Learning can be characterized as a functional process wherein an organism adapts its behavior over time in the face of environmental stimuli. This characterization makes no mention of what it's like for the organism to undergo the process. If an organism adapts its behavior over time in response to stimuli, we would say it learned, even if there is nothing it's like for the organism to undergo this process. A typical phenomenal concept is "pain," understood as the state that feels a particular way, namely painful. If an organism felt that particular painful feeling, we would consider it in pain, even if it lacked the various behavioral responses ordinarily associated with pain. Desire arguably is a mixed state, involving a functional response profile, and a particular compelling feel. Note that "pain" can also receive a psychological characterization, as a state that underwrites aversion and avoidance behavior. Chalmers holds that this does not create a conflict; rather, there are two concepts of pain, both of them valid, and both of them licensed by analysis (Ibid. 16ff). It is the phenomenally characterized mind that presents special problems for explanation. To the extent that a mental state can be psychologically characterized, it can be reductively explained, because psychological characterizations provide the functional analyses necessary for reductive explanation. According to Chalmers, there is no special problem in explaining how a material being could learn, for example. Given that "learning" can be analyzed as "the learning role," all we need to do is discover what physical
15 Here, the quotes pick out a concept as opposed to a term. I will occasionally use quotes to pick out concepts. The usage should be clear from the context.

processes realize the learning role. There is no prima facie problem with this project, and once we find what actually fills the learning role, we have an a priori entailment to the claim that the role-filler is learning. However, Chalmers claims the phenomenal cannot be fully captured in a functional characterization. "The phenomenal element in the concept prevents an analysis in purely functional terms" (1996, 23). This is demonstrated by reflection on cases where the phenomenal feel of a particular mental state is present, but the functional profile is absent. Again, consider pain. We can, according to Chalmers, imagine a state that feels painful to a subject, but never leads to any distinct behavioral responses. Chalmers holds that in such a case, we would consider the state an example of pain, despite its lack of connection to any functional role in the behavior of the organism. But this shows that the intension of "pain" is independent of functional considerations, and thus the phenomenal concept of "pain" is distinct from the psychological concept of "pain," which is characterized in functional terms of avoidance behavior and indication of bodily damage. Chalmers likewise argues that we possess a psychological and a phenomenal concept of consciousness. "Consciousness" in the psychological sense refers to various sorts of awareness and attention that can be given functional role characterizations. "Consciousness" in the phenomenal sense refers generally to states that there is something it is like for the subject to be in, states with a distinct phenomenal feel or character. Chalmers calls the latter

"phenomenal consciousness;" it is phenomenal consciousness that creates the special problem of consciousness.16 Armed with this analysis, Chalmers presents five separate (though arguably related) cases to demonstrate that phenomenal consciousness fails to logically supervene on the microphysical, or even on the functional, makeup of the organism. His first case involves the conceivability of "zombies," creatures physically identical to us that nonetheless lack phenomenal consciousness. Chalmers argues that zombies are conceivable because nothing in the zombie case seems contradictory or incoherent. He writes, I confess that the logical possibility of zombies seems... obvious to me. A zombie is just something physically identical to me, but which has no conscious experience--all is dark inside. While this is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description (1996, 96). If zombies are conceivable, then they are logically possible in Chalmers's sense. If zombies are logically possible, then the phenomenal does not logically supervene on the physical, because fixing the physical facts fails to fix the phenomenal facts. And if it does not logically supervene, it is not reductively explainable. Furthermore, Chalmers argues that parallel considerations show that phenomenal consciousness does not supervene on the chemical, the biological, or even the psychological facts, functionally construed, because fixing

16 Block defines a parallel notion of "phenomenal consciousness," though he uses a different method to characterize the concepts. See chapter 2 below, sections 2.2.1-2.2.3.

those facts does not close off the conceivability of zombies. Thus, phenomenal consciousness is not reductively explainable. Chalmers's second argument focuses on the possibility of the "inverted spectrum." The inverted spectrum is the possibility that two creatures can be physically identical, and yet have systematically inverted phenomenal experiences. Chalmers claims that it is conceivable that there could be physical duplicates, one that experienced phenomenal red when stimulated by an apple while the other experienced phenomenal green, despite the fact that the stimulations and the perceivers were physically identical. If this is conceivable, then by Chalmers's definition, it is logically possible. But, again, if it is logically possible, then logical supervenience fails. Holding the physical facts steady does not entail that the phenomenal facts are the same. And again, Chalmers claims that it is likewise conceivable that the phenomenal can vary despite holding the chemical, biological or the psycho-functional facts fixed. Thus, phenomenal consciousness cannot be reductively explained. The third case is somewhat different from the first two. Chalmers argues that there is an epistemic asymmetry between our knowledge of consciousness and our knowledge of other things, an asymmetry that is absent with reductively explainable phenomena. We know consciousness in an apparently direct and unmediated way. We don't seem to reason to its existence; it's simply given to us. But we do not know the conscious minds of other creatures in this way, and furthermore, we can reasonably wonder if they are conscious at all. The asymmetry is reflected in the so-called problem of other minds. Chalmers writes,

"Even when we know everything physical about other creatures, we do not know for certain that they are conscious, or what their experiences are..." (1996, 102). There is no parallel problem of "other lives" or "other economies." The physical facts close off these possibilities. Chalmers argues that this shows that consciousness does not logically supervene on the physical. If it did, there would be no such asymmetry present; we would be able to infer the presence of other minds from the physical facts. But there is no such logical entailment, logical supervenience fails, and thus consciousness is not reductively explainable. The fourth case involves the so-called "knowledge argument" against physicalism (Nagel 1974; Jackson 1982). The argument features Mary, the color-deprived super-scientist. By hypothesis, Mary knows all the facts of a completed science, but has never seen red. Eventually, she gets her first glimpse of red. The question is, does she learn anything new? Chalmers argues that even though she has all the physical facts, Mary lacks the facts about what it is like to see red. She learns these facts upon her release. Thus, the physical facts do not entail facts about what it is like to see red. But these are just the phenomenal facts about red. So the physical facts do not entail the phenomenal facts, logical supervenience fails, and thus phenomenal consciousness is not reductively explainable. Finally, Chalmers argues that the absence of an analysis that seems in any way amenable to reduction directly undermines logical supervenience. He contends,

For consciousness to be entailed by a set of physical facts, one would need some kind of analysis of the notion of consciousness--the kind of analysis whose satisfaction physical facts could imply--and there is no such analysis to be had (1996, 104). The natural candidate for a reductive analysis is a functional analysis. But Chalmers argues that functional analyses of phenomenal consciousness all miss the phenomenon, and so in effect change the subject. He notes that functional analyses have the implausible effect of dissolving the problem of consciousness. Indeed, if a functional analysis is correct, why did we think there was a problem at all? Further, Chalmers argues that simply adopting a functional analysis for the sake of avoiding the problem is ad hoc. He writes, "One might well define 'world peace' as 'a ham sandwich.' Achieving world peace becomes much easier, but it is a hollow achievement" (1996, 105). He also notes that functional analyses possess a degree of indeterminacy that seems lacking in the phenomenal case. While there may be vague borders for the extension of the functionally analyzable concept "life," for example, Chalmers claims that phenomenal consciousness is plausibly an all or nothing affair. To the extent that he is correct, functional analyses cannot capture the phenomenal. Finally, Chalmers argues that it is unclear what other sort of analysis could provide the needed link to the physical facts. Consciousness may in fact be correlated with some biochemical property in us, but this fact does not underwrite an analysis of "consciousness," nor does it illuminate how the

physical might entail the phenomenal. Thus, consciousness does not logically supervene on the physical, and thus cannot be reductively explained. The failure of phenomenal consciousness to logically supervene on the physical (or the chemical, biological, functional, etc.) leaves us with an explanatory problem. No matter what we find out about the physical and functional basis of the mind, we are still left with an open question: why are these processes accompanied by phenomenal experience? There is an explanatory gap between the physical and functional, and the phenomenal (Levine 1983, 1993, 2001). Many features of the mind succumb to the kind of reductive explanation detailed by Chalmers. Belief, desire, learning, and memory are all arguably amenable to functional analysis and reductive explanation, when considered as psychological concepts. However, phenomenal consciousness can't be so explained. Therefore, Chalmers concludes that consciousness poses a "hard problem" for scientific explanation. And, given the failure of supervenience, it follows that consciousness cannot be located in a materialist ontology.

1.2.3 Criticisms of Chalmers's view17
Chalmers argues that materialism is committed to the claim that all the macro-level facts are a priori entailed by the microphysical facts. Thus, when presented with a possible microphysical description of the world, and armed with analyses of the relevant concepts, we should be able to a priori deduce the macro-level facts about that possible world. But any description in microphysical terms will be unreadable in practice. It will be far too long and complex to be of any use as a guide for us in deriving the relevant facts. Thus, Chalmers holds that such a priori reasoning is idealized: an ideal rational observer, with unlimited time and mental resources, could perform the derivation. This does not require any additional rational abilities; it requires only additional computing power and memory (Chalmers 1996, 68; Chalmers and Jackson 2001, 11). Furthermore, Chalmers holds that the relevant physical description need not be restricted to micro-level language. It can include information about the structure and dynamics of the world at the macroscopic level, at least insofar as this structure and dynamics can be

17 I will not here address Kripke-style counterarguments from a posteriori necessities, nor Chalmers's response employing 2-D modal logic. My criticisms are, I believe, independent of these issues, and apply even if Chalmers is correct about 2-D semantics. To briefly put things into 2-D terms, I shall argue that the meaning of mental state terms like "consciousness" is open to revision, potentially altering the primary intension of those terms. This undermines the argument against reductive explanation, which relies on the primary intension of "consciousness" being stable in the face of empirical results. Further, I shall challenge the particular interpretation of "consciousness" offered by Chalmers. It is not, pace Chalmers, a term possessing the same primary and secondary intension; instead, it functions in much the same way as other functionally-defined terms. There is a large and ever-growing literature on the topic of 2-D modal semantics. See Chalmers 1999, and commentaries; Block and Stalnaker 1999; and the 2004 Philosophical Studies issue on 2-Dimensionalism, edited by Davies and Stoljar, for a start. See also Levin 1995. My thanks to Michael Levin for help in clarifying these points, and for helpful comments throughout.

captured in terms of spatiotemporal structure (position, velocity, shape, etc.) and mass distribution (Chalmers and Jackson 2001, 9). So we may require much less than ideal computing power to figure out the relevant facts. Still, we will be dealing with a massive amount of data in a clumsy and unfamiliar language. Our usual way of handling such data is to organize and systematize it using theory. We are able to weed out the irrelevant details by passing the data through the sieve of theory. We are not Laplacian demons; thus, we require a means of organizing and synthesizing the data. But this opens up a space of uncertainty within Chalmers's model. If theory is required to make the data comprehensible, it is open to question whether we at present possess the proper theory. This may seem at first a minor concern; surely, we can pick out the table-like masses from the data, and so adequately apply our concept "table," as the model requires.18 However, things are not so neat and clean when it comes to consciousness. We do not yet know how to pick out the neural structures that are relevant to our phenomenal states. Why think that the failure to deduce the phenomenal facts is anything more than a lack of adequate theory? We cannot, like the Laplacian demon, simply "read off" the facts from the microphysics. Therefore we require theory to intercede, and it is quite clear that we are in the early stages of theorizing about the brain. When we develop adequate theories, it may become clear that when such-and-such brain events occur, consciousness occurs. To put the point another way,

18 Thanks to Martin Davies for pressing this point.

the phenomenal facts might be a priori entailed by the physical facts, but we just may not know it yet.19 A historical example helps to make the point. In a manner quite similar to Chalmers, British Emergentist C. D. Broad argued that the behavior of chemical compounds could not be deduced from lower-level facts. He writes, If the emergent theory... be true, a mathematical archangel ... could no more predict the behavior of silver or of chlorine or the properties of silver-chloride without having observed samples of those substances than we can at present. And he could no more deduce the rest of the properties of a chemical element or compound from a selection of its properties than we can (quoted from McLaughlin, 1992, 88; emphasis added). Imagine Broad consulting his intuitions about how to apply his term "silver" when given a microphysical description of the world. He would conclude that the properties of silver were not entailed by the description, and thus that silver was irreducible. But in the intervening years, physical and chemical theory have progressed, and we can now organize the data in ways that allow us to see that if the correct physical conditions are present, then silver is present. Perhaps our intuitions about consciousness are like Broad's intuitions about silver. But it may be countered that consciousness is not like silver. It is characterized in a way that cannot be captured in physical theory. As noted,
19 Chalmers refers to this sort of position as "type-C materialism," which holds that we have no idea how to solve the problem of consciousness, though we may in the future (Chalmers 2003, 119-122). In conversation he suggested that my position fits this characterization. But he argues that such a position tends to slide into either a functionalist or a physicalist view, if it is to be a real rival to dualism. I concur, and defend a broadly functionalist view (Chalmers's "type-A" materialism), though I am skeptical of the sharpness of Chalmers's distinctions. See also Chalmers 2003, 123, for an idea of how Chalmers would respond to my position.

Chalmers holds that our concept of phenomenal consciousness is independent of any causal or functional notion. Thus, while Broad may have been unable to see how the silver "role" was filled, he would arguably recognize that silver could be characterized in a way that allowed us to see how physical theory might make it true. It's only that he didn't think physical matter could fill the relevant role. But this claim runs afoul of a central feature of Quine's critique of conceptual analysis (Quine 1951, 1960). Quine argued that we can never be sure that we will not need to revise our concepts in the face of empirical evidence. On discovering that cats are really robots from Mars, to take Putnam's famous example, we could well reason that, after all, cats are not animals. Why think that theoretical advances might not alter our conception of consciousness in similar ways? Chalmers anticipates this sort of objection, and responds by stipulating that all the relevant empirical information can be packed into the antecedent physical description of the situation. We can reason a priori about what revisions will occur given the data, and then apply our concepts. Thus, revisions can be accounted for by the model. (See Chalmers and Jackson 2001, 12.) But my complaint is that even if we pack all the relevant empirical information into the physical description, we still must be able to comprehend it. This, as argued, requires the intercession of theory. If in developing the theory we alter our conception of consciousness, we may well then find the deduction we are looking for. Only if we can be sure that theoretical development won't affect the concepts in question can we be sure beforehand that the relevant deduction fails.

But Quine argues that we have no principled way of guaranteeing that fact, prior to the adoption of a particular theory.20 We do not yet possess the requisite brain theory; therefore, we do not yet know if the physical entails the phenomenal. There is an additional important point concerning theory. In general, we do not simply invent theory in our armchair. We get our hands dirty with experiment and observation, and in this way we craft and elaborate our hypotheses. Further, it is well known that odd and recalcitrant results are the engine of scientific change and breakthrough. To say that we could pack all the empirical information into the relevant conditionals and simply deduce the higher-level facts underestimates the influence that novel empirical results have on our space of concepts. Relevantly, we can see this at work in consciousness studies. The phenomena of blindsight and the odd results of Libet effectively alter the theoretical landscape, and in doing so, potentially alter our concept of consciousness.21 Still, Chalmers might respond that phenomenal consciousness just isn't a functional notion. Sure, if you want to change the subject, you can derive whatever you wish, but that's not the reductive explanation we were looking for. However, even if we ignore my current criticism, pace Chalmers, the nonfunctional analysis is not obvious. Our pretheoretic concept arguably fails to license such an analysis. Indeed, I've found that many undergraduates do not possess qualophile intuitions, and must be taught that there is a hard problem of
20 See, e.g., Quine 1951, 1960; Putnam 1962a, 1962b; Harman 1999.
21 On blindsight, see Weiskrantz 1986, 1997; for Libet's results, see Libet 1985 and commentaries.

consciousness. This suggests that the nonfunctional claim is a theoretical extension of our folk concept, rather than its essential core. Still, when we focus our analysis, we want to know facts about what it's like for a creature to undergo conscious states. Surely this characterization of consciousness delivers the goods in the antimaterialist argument. However, a good case can be made that it does not. Another gloss on how to pick out phenomenally conscious states--one concordant with common sense--holds that phenomenally conscious states are ones that we are conscious of being in.22 To put it another way, if we are in no way conscious of being in a state, that state is not intuitively conscious. Consider the following scenario. I am angry, but not consciously so. I storm around the house bashing into things and grumbling, but when asked, I snarl, "I'm not mad!" Later, my anger becomes conscious, and I see that my interlocutor was correct. I was angry, but the anger was nonconscious.23 Then I became conscious of the anger, and there was something it was like for me to be angry. Most folk will find this a plausible story, and certainly not one that is confused or contradictory. Thus, a reasonable clarification of "there's something it's like for the subject" is "the subject is conscious of being in a state."24 But being conscious of something can plausibly be cashed out in functional terms.

22 See Rosenthal, 1986, 2002a, 2002b; Lycan, 1996, 2001.
23 This is not to say that I'm unconscious when I stomp around the house, or that I'm unaware of the chairs and the rest of the room. Rather, I am unaware of my anger--my anger is nonconscious.
24 See Rosenthal, 2002b, for a detailed defense of this claim.

Thus, if we find the physical conditions that realize this state, we can reason, a priori, from the physical facts to the phenomenal facts. The "conscious of" locution is not the only way to paraphrase "what it's like" talk. Some hold that when we are in certain transparent representational states, there is something it is like to be us.25 Others hold that when a state is accessible by a range of mental systems, the state is conscious.26 Even if such clarifications are seen as alterations in meaning, that does not mean we have thereby "changed the subject" illicitly. It is quite reasonable to paraphrase our folk talk in order to ease its coherence with scientific theory. Indeed, the use of paraphrasing to clarify our concepts is explicitly endorsed by Chalmers's ally and coauthor, Frank Jackson. He allows that we may alter our folk conception in light of theoretical considerations, noting that we do not conclude that nothing is really solid just because the folk conception of solidity required the idea of being everywhere dense (Jackson 1994, 484).27 Chalmers's assertion that he can clearly recognize a change of subject in this context arguably commits him to an analytic/synthetic (a/s) distinction. Quine's attack has arguably made that distinction untenable (Quine 1951, 1960). Chalmers acknowledges that many so-called conceptual truths are actually revisable in the face of sufficient empirical evidence (Chalmers 1996, 55). However, he argues that the supervenience conditionals employed by his model

25 See, e.g., Dretske, 1995; Harman, 1990.
26 Such a claim could be based on Baars' (1988) "global workspace" hypothesis.
27 See also Jackson 1998, 44-46.

for fixing the intensions of terms and determining supervenience claims are immune to this problem. This is because "the facts specified in the antecedent of [these] conditionals effectively include all relevant empirical factors" (1996, 55). This empirical completeness incorporates all the revisionary factors, thus making them available to a priori reasoning. Thus, the conditionals that yield intensions safely deliver analytic claims. But this is just to repeat the argument noted above. And the response is the same. We can't comprehend the relevant antecedent without theory; therefore, we can't be sure beforehand how the concepts will turn out. The conditionals are not immune to revision; therefore, Chalmers hasn't avoided Quine's critique of analyticity. And this is not to simply employ the "ham sandwich" maneuver derided by Chalmers. While it may seem obvious that some changes in the use of a term represent a change of meaning, this is not the case here with the term "consciousness." In the first place, "consciousness" lacks the everyday clarity of "ham sandwich" or even of "world peace." Even though "consciousness" has a commonsense analysis implicit in folk-usage, it is a matter of considerable debate what the correct analysis amounts to. Second, the changes I am proposing (if they are changes) concern whether or not consciousness has the specific definition in terms of "what it's like for the subject" claimed by Chalmers. Is that meant to be analytic? Must states like pains or color experiences (the paradigm cases of phenomenally conscious states, for Chalmers) always determine what it's like for the subject, by definition? Are all the purported cases of nonconscious pains, nonconscious color sensations, etc.

false by definition? This is much more contentious than claiming "world peace" couldn't mean ham sandwich, despite Chalmers's claims to the contrary. And it is central to Chalmers's position that the two be on a par. If not, then "consciousness" may change its meaning in the face of new empirical evidence (not an unlikely scenario, given our paltry understanding of the brain), or it may be that Chalmers's analysis mischaracterizes the folk view, and so requires additional support. But no such support is given--the analysis is meant to be a priori and obvious. Thus, we can conclude that Chalmers's argument against a reductive explanation of consciousness falls short. There is, however, a fallback position that may seem to offer some aid to Chalmers. Hilary Putnam (1962a, 1962b) argues that given an accepted theoretical framework, we can reconstruct a version of the a/s distinction. He argues that within a framework, some sorts of conceptual revisions are literally inconceivable. Thus, the idea that two parallel lines can cross was inconceivable prior to the development of non-Euclidean geometry. Putnam argues that the situation after the development of non-Euclidean geometry and Einstein's empirical theory of relativity plausibly is a case where the seemingly analytic statement "parallel lines never cross" is refuted. But before that time, it was indeed analytic, relative to the theoretical framework of Euclidean geometry and Newtonian physics. This provides us with a theory-relative version of the a/s distinction. Relative to a background framework, there are analytic truths. Chalmers might argue that relative to the background of folk psychology, consciousness is nonfunctional, and that denying this claim constitutes an

alteration of our commonsense concept of consciousness. Since this arguably is the phenomenon we wish to explain, Chalmers is within his rights to charge change of meaning. However, I have already stressed that there are rival interpretations of the folk concept of consciousness. Indeed, the fact, which Chalmers himself notes, that many students fail to share these intuitions indicates that it isn't obvious which interpretation is correct. For now, it is enough to note that Chalmers is arguably committed to this "theory relative" notion of conceptual truth, and that his meaning claims stand or fall relative to the background of theory. But recall that the folk theory does not provide insight into the underlying nature of conscious states; instead, it only catalogs how things appear from the commonsense viewpoint. It follows that at best Chalmers can conclude that, relative to the folk theory, consciousness appears nonfunctional, and, on that basis, we find zombies and inversions conceivable, Mary to be lacking in knowledge, and so on. But given that his argument from the failure of reductive explanation falls short, this is simply a claim about how things folk-psychologically appear. If a materialist theory can explain these appearances, the door remains open to a full-blown materialist explanation of consciousness. To recap my criticisms of Chalmers's view, I argue that in order to "read off" the microphysical facts that fix meaning and supervenience relations, we must employ a theory that allows us to systematize them into comprehensible bits. In the absence of such a theory, we can draw no conclusions about the relevant supervenience conditionals. Further, the need for theory opens up our concepts to the possibility of revision. We can't be sure that theoretical developments won't

alter our concepts in ways that allow the desired deductions to go through. I also challenged Chalmers's analysis of consciousness, noting rival interpretations that also fit well with commonsense intuition. Finally, I challenged Chalmers's use of analyticity in his approach. His counterarguments at best yield a theory-relative notion of analyticity. We are left with the following results. Chalmers is arguably correct that consciousness does not appear functional, from a first-person perspective. While I have argued that he is incorrect to infer that consciousness fails to supervene on the physical, we still must explain this appearance, and the strong intuitions concerning the nature of consciousness that it engenders. Further, Chalmers's model of reductive explanation is flawed to the extent that it employs unreconstructed analyticity and a priority; however, the model arguably can be accommodated if we take into account the theory relativity of such notions. We are left with the challenge of explaining the appearances in a materialistically acceptable way, and of demonstrating the relevant explanatory connections between consciousness and the rest of the physical world. If we do so, we will have answered Chalmers's hard problem. However, there is more than one way to set up the issue. I will look next at the work of Joe Levine, and after that at the work of Ned Block. Both offer characterizations of the problem of consciousness that differ from Chalmers's approach.

CHAPTER 2: LEVINE AND BLOCK

2.1 Levine's Explanatory Gap

2.1.1 Levine's Defense of Materialism

Joseph Levine rejects Chalmers's model of reductive explanation and his antimaterialist metaphysical conclusion. However, he argues for an epistemological problem of consciousness, a problem exemplified by the apparent failure of any convincing materialist explanation. In his 2001 book Purple Haze: The Puzzle of Consciousness and in a number of related papers (1983, 1993), Levine details both his support for materialism and his worry over an "explanatory gap" between materialist theory and our first-person understanding of conscious experience. So if Levine is correct, even if Chalmers's metaphysical hard problem has a solution, there is still a special explanatory problem when it comes to consciousness. In this section, I will lay out Levine's materialist response to Chalmers, and then I will present his formulation of an epistemic problem of consciousness. I will conclude by criticizing Levine's claim and presenting the residual challenge posed by his view. Levine begins his defense against Chalmers by offering the following characterization of materialism:

Only non-mental properties are instantiated in a basic way; all mental properties are instantiated by being realized by the instantiation of other, non-mental properties (2001, 21).


This characterization is crafted to avoid a worry about formulating materialism in terms of our current physics.1 It holds that whatever the basic properties turn out to be, if materialism is true, they won't include mental properties. However, if Chalmers's arguments are correct, then phenomenal consciousness fails to supervene on the physical and must be counted among the basic features of reality. Thus, materialism as characterized by Levine would still be false. However, Levine contends that we have positive reason to believe in materialism, even in the face of the apparent difficulties. Materialism, according to Levine, is the best explanation of the causal efficacy of the mental. He argues as follows. An extremely fruitful assumption of modern physics holds that the physical world is "causally closed"--that is, every physical effect is fully accounted for by a physical cause. But on occasion it seems obvious that the mind causes the body to move. To avoid violating causal closure, the mind too must be physical. Therefore materialism must be true. The alternatives are dualism and epiphenomenalism. Dualism rejects causal closure, but it is saddled with explaining how non-physical substances or properties causally interact with physical ones. Epiphenomenalism respects causal closure by holding that the mind is not causally efficacious. Levine argues that dualism's interaction problem is insurmountable, and that epiphenomenalism is highly counterintuitive. Therefore, he concludes that materialism is justified as the best explanation of mental causation. However, Chalmers's arguments against materialism still must be defused. As noted in chapter 1, one of Chalmers's central arguments against
1 See Montero 1999.

materialism invokes zombies, holding that the mere conceivability of zombies implies their possibility, undermining materialism. Levine focuses his attack on this claim. On Chalmers's model, zombies are conceivable because they are not ruled out a priori by our concepts. Further, our concepts, according to Chalmers, define the space of logical possibility--flying telephones are possible, male vixens are not; our concepts provide a catalog of what is logically possible. Zombies are conceivable, so they are possible. But this entails the failure of logical supervenience--there is a possible world with the same physical facts as our world, but different mental facts--falsifying materialism. Levine challenges the theory of conceptual content underwriting the link between conceivability and possibility in Chalmers's argument. According to Chalmers, our concepts are (in part) constituted by a priori accessible semantic connections.2 It is an a priori accessible feature of our concept "water" that water is a liquid, for example. This underwrites our ability to evaluate would-be supervenience claims. For instance, because we know, by knowing the intension of the term, that water is the clear, potable liquid that fills our rivers and lakes, we can ascertain a priori the truth of the supervenience conditional "if H2O is the clear, potable liquid that fills our rivers and lakes, then water is H2O." Further, it is not conceivable, once we discover that H2O plays the "watery role," that there could be H2O present but no water present. The intension of "water" just is the "watery role," and if we have the role-filler, it is inconceivable that we don't have water, according to Chalmers. A priori accessible semantic information is crucial
2 "In part" because Chalmers holds that the secondary intension of a concept need not be a priori accessible. Primary intensions are what matter for Chalmers's antimaterialist arguments. See Chalmers 1996, chapter 2; 1999.

to Chalmers's model. Levine argues that such information is not available, and so the model fails. Levine argues for a rival account of conceptual content. On his view, concept possession requires standing in the right causal or nomological relation to the concept's referent. It does not require possessing a rich network of semantic information about the concept's referent. On this view, to have the concept "water" is to have a representational state that is appropriately caused by water or that is lawfully connected to the presence of water. Thus, one can possess the concept "water" without knowing that water is a liquid, is potable, is clear, etc. While we may have the belief that water is a liquid, this is not constitutive of concept possession, nor is it information that is a priori accessible on the basis of concept possession alone. We require empirical investigation to establish that water is a liquid. This has serious consequences for Chalmers's arguments. Chalmers's position is premised on the claim that we can distinguish between possible and impossible situations exclusively on the basis of conceivability intuitions. Once we have analyzed "water" and discovered that H2O fills that role, it is inconceivable that a sample of H2O is not a sample of water. But analyzing "consciousness" and positing that it is realized by a physical or functional property does not render zombies inconceivable. Therefore, Chalmers concludes, water logically supervenes on the physical, but consciousness does not. However, Levine argues that zombie-H2O (H2O that is nonetheless not water), like zombies, is in fact conceivable. Both Chalmers and

Levine agree that zombie-H2O is impossible. Thus, its conceivability would break the link between conceivability and possibility needed for Chalmers's model, thereby blocking his antimaterialist conclusion. Levine's theory of content explains the conceivability of zombie-H2O. Given that our mental terms "water" and "H2O" in fact refer to the same stuff, it is impossible that H2O is present and water is not. However, if all that matters for concept possession is causal/nomological relations, we cannot tell simply by a priori reflection that they co-refer. We must investigate the world to determine what causal/nomological connections hold. For all we know before that, "water" may refer to clowns and "H2O" to accordions. In this case, it's conceivable that H2O is present and water is not. But the scenario is not possible. Thus, the link between conceivability and possibility needed for Chalmers's argument is broken, and materialism is saved. Levine's argument in effect expands the space of conceivable situations to include all situations that are not contradictory on the basis of logical form alone. In essence, this is a rejection of the analytic/synthetic distinction, though one rooted in considerations from causal theories of content rather than confirmation holism. But the result is largely the same. Chalmers's model cannot rule out a wide variety of conceivable situations that are nonetheless deemed impossible by all parties involved. Conceivability is not a reliable guide to possibility, so the mere conceivability of zombies (and the inverted spectrum) does not threaten materialism.3

3 Levine's argument thus provides an additional attack on Chalmers's use of a priority and analyticity. However, Levine's position is committed to an atomistic view of content, while my arguments relied on holistic considerations. Further discussion of these important issues concerning content goes beyond the scope of this dissertation.

2.1.2 The Explanatory Gap

Despite his rejection of Chalmers's metaphysical antimaterialist conclusion, Levine argues that consciousness still poses a special epistemic problem for a materialist science of the mind. At the core of Levine's argument are the same conceivability intuitions used by Chalmers against materialism. Levine holds that the persistence of these intuitions, even in the face of viable empirical theory, points to a serious shortcoming with materialism. We have good reason to believe in the truth of materialism, but when presented with materialist theory, we still are left wondering how the physical brain could be phenomenally conscious. In other examples of scientific explanation, once a theory is developed, our antitheoretical intuitions are blocked. But we can still legitimately ask why the brain is conscious, even if we accept materialism. Levine calls this problem the "explanatory gap" (1983, 1993, 2001). Even when presented with an apparently true theory of the physical brain, we are left wondering about consciousness. There is an epistemic gap between physical theory and the phenomenal mind. Levine begins his argument for the presence of an explanatory gap by clarifying what he takes to be required of a good scientific explanation. He writes,

in a good scientific explanation, the explanans either entails the explanandum, or it entails a probability distribution over a range of alternatives, among which the explanandum resides... [W]e achieve
understanding when we can see why, given the information cited in the explanans, the phenomenon cited in the explanandum had to be; or, to put it another way, why the relevant alternatives are ruled out, as inconsistent with the explanans (2001, 74).4

Thus, we ought to be able to deduce the explanandum in a successful explanation, according to Levine. He argues that if we cannot construct the desired deduction, there are three possible explanations. The first is that we haven't fully specified the mechanisms and processes cited in the explanans. The second is that the target phenomenon is stochastic in nature, and the best that can be done is delivering a range of probabilities concerning the explanandum. The third is that there are as yet unknown factors at least partially involved in determining the phenomenon. If we've adequately specified the mechanisms and processes in question, and if we've adjusted for stochastic phenomena, then their description should deductively entail the explanandum, or the third possibility is in effect. But the third possibility is "precisely an admission that we don't have an adequate explanation" (2001, 76). Thus, if the explanandum is not entailed by the explanans, we don't have an adequate explanation.

4 Levine argues that his view of explanation is not only compatible with the deductive-nomological model of Hempel (1965), but that it also fits with Salmon's "ontic" conception of explanation (Salmon 1989). The ontic conception holds that in order to explain a phenomenon we must exhibit the mechanisms that are causally responsible for it. Levine contends that if we produce a description of the appropriate mechanism, we should be able to deduce a description of the phenomenon. That preserves the spirit of the ontic model and maintains the deductive requirement. Some argue against a deductive requirement because deductions are symmetric while explanations are not (Achinstein 1981); others argue that actual deductions are never forthcoming, and that we always employ a variety of pragmatic considerations in successful explanation. I will say more about the requirements on an explanation of consciousness at the beginning of chapter 3. For now, I will accept Levine's claim; my explication and criticisms of his view are independent of this issue.

Levine argues that the conceivability of zombies, the conceivability of the inverted spectrum, and the problem of other minds (for entities whose constitution is very different from our own) all show that the deduction required for a successful explanation of consciousness is lacking. If we possessed the deduction, we could rule out zombies and inversion on that basis. For example, if we knew that conscious experiences of red were identical to C-fiber firings, and we knew that a creature had firing C-fibers, we would know that it was having a conscious experience of red. We could deduce the presence of conscious states in the creature, and this would rule out both zombies and inversions. Likewise, we would be in a position to judge if it had conscious states at all, solving the problem of other minds. However, despite the development of a number of reductive materialist theories and a great improvement in our knowledge of the brain, the problematic cases persist. Zombies and inversion seem as easily imagined as ever, and the problem of other minds still seems pressing. We cannot, it seems, deduce the presence of consciousness from a materialist description of the mind. Therefore, we have not explained consciousness. Still, there is a reasonable explanation for the persistence of these intuitions. A materialist explanation of consciousness entails that certain brain states are identical to certain phenomenal states. The example above posited that red experiences are identical to C-fiber firings. Levine's questions amount to asking, how could this identity be true--what explains it? But ordinarily, identity claims do not have explanations. A thing just is what it is. Mark Twain, for example, is Samuel Clemens. There is no sense to the question, "how could

Mark Twain be Samuel Clemens?" He just is. We may ask why we thought that one thing was really two. But once that story is filled in, the question does not make sense. So the explanatory story that Levine demands need not and could not be given. He is asking how something could be itself, and there is nothing meaningful to say about that.5 Levine counters that when we closely compare identities involving consciousness to other examples, we see that they still fail to eliminate meaningful questions. He canvasses identity claims involving indexicals, demonstratives, and natural kinds, and concurs that in those cases the questions do indeed drop away. However, the fact that meaningful questions remain in the case of consciousness suggests a substantive difference, one indicative of the explanatory gap. In the case of indexicals, it is widely agreed (following Perry) that indexical claims cannot be derived from nonindexical descriptions. However, this need not point to anything of metaphysical interest, and indeed, when the relevant referents are introduced, questions fall away. Consider the claim "I am here now." Once we learn that Josh Weisberg is the current referent of "I," my office is the current referent of "here," and 9:45 is the current referent of "now," there isn't any sense in asking, "But how could I be here now?" given that Josh Weisberg is in my office at 9:45. That's just how these terms work, and once the relevant contextual features are supplied, meaningful questions drop away. The situation is the same with demonstratives. We arguably can't derive their referents from nondemonstrative descriptions, but once the relevant

5 This claim is pressed by Papineau 1995, 2002; Block 2002a, 2002b; and others.

contextual features are filled in, certain questions don't make sense. Following Levine's example, imagine that I point blindly in front of me and say, "I wonder what that is?" I look up and it's a red diskette case. Does it make any sense to ask how it could be that my red diskette case is that? Levine holds that it does not. Once we've determined what lies at the end of the demonstrative, this kind of question lacks sense. More to the point is the case of natural kind identities, like "water = H2O." In arguing against Chalmers, Levine defended a causal/nomological theory of content. The view entails that prior to empirical investigation, concepts are simply uninterpreted strings, mere labels in the language of thought. We must discover how we are hooked to the world in order to determine the referents of our concepts. But once that information is supplied and we discover that two "mental words" refer to the same thing, it makes no sense to ask, "But how could they co-refer?" If the referential links are there, they just co-refer, and that's that. In the case of "water," we learn it refers to a particular stuff in our environment. Then we learn that a chemical term "H2O" refers to the same stuff. We may ask how it is that H2O displays the various features that we already believed were displayed by water. But once the requisite chemical theory is provided, those questions have good answers. We learn that both terms refer to the same stuff, and meaningful questions drop away. These examples are in stark contrast to the case of consciousness, according to Levine. We can still meaningfully ask how C-fiber firings (or specific functional or representational states) could be my conscious experience

of red. We may fully believe that the two terms co-refer. But this does not eliminate the questions. Thus, identity claims involving consciousness differ from ordinary cases of indexicals, demonstratives, and natural kinds. Still, there is a crucial unresolved point. In other cases of natural kinds, prior to the development of the requisite background theory (chemistry, in the water example), meaningful questions of a sort were indeed still open. Before the development of chemical theory, we could reasonably ask why we should think that "water" and "H2O" refer to the same stuff. How could this molecular stuff be transparent, thirst-quenching, etc.? Granted, once the theory was in place, such questions fell away, but before that, there would appear to be a gap between our everyday knowledge of water and the claims of chemical theory. How do we know that this isn't the case with consciousness? Why think that the background theory is in place to block our questions, especially given our relatively scant knowledge of the brain? Levine responds that there is an important difference in the way we access our conscious mental states and the way we access everything else. Given this difference, it is apparent that prospective reductive explanations fail to narrow the gap. Our access to the world by way of concepts like "water," "H2O," "cat," etc., involves what Levine calls "thin modes of presentation" (MOPs). Thin MOPs

merely label a phenomenon/substance in the world... [This] simultaneously explains why water facts are not strictly derivable from the physical facts and also why, nevertheless, requests to explain the identity

of water with H2O, once the relevant physical facts are known, are unintelligible. There isn't enough cognitive substance associated with 'water' to make sense of this request for explanation (2001, 84).

I termed these thin MOPs "uninterpreted strings in the language of thought" above. We cannot a priori derive them from the physical facts because we do not know a priori what they refer to. But once empirical information is supplied, there isn't any room for questions about how two terms could co-refer. If they stand in the right causal/nomological relationships, they co-refer. On the other hand, our awareness of our conscious mental states is by way of "thick modes of presentation." With conscious experience, according to Levine, "We are not just labeling some 'we know not what' with the term 'reddish,' but rather we have a fairly determinate conception of what it is for an experience to be reddish" (2001, 84). Levine calls this sort of conception "substantive and determinate." He writes,

When I think of what it is to be reddish, the reddishness itself is somehow included in the thought; it's present to me. This is what I mean by saying it has a 'substantive' mode of presentation. In fact, it seems the right way to look at it is that reddishness itself is serving as its own mode of presentation. By saying the conception is 'determinate,' I mean that reddishness presents itself as a specific quality, identifiable in its own right, not merely by its relation to other qualities (2001, 8).

Because we possess this special access to our conscious mental states, materialist identity claims fail to close off meaningful questions. The proposed

identities do not answer to our substantive and determinate conception--indeed, we cannot see how they could. Levine calls an identity that leaves open meaningful questions "a gappy identity" (2001, 81ff). If the deduction from the physical facts to an explanatory target runs through a gappy identity, we are left with an explanatory gap. Such is the case with consciousness. Levine's insistence on thick MOPs for conscious mental states underwrites his rejection of a range of materialist explanations of consciousness (2001, chapter 4). He argues that theories which construe conscious qualitative mental properties, or "qualia," as relational fail to do justice to our deep intuition that qualia are intrinsic features of experience. This intuition is underwritten by our substantive and determinate access to qualia--we have seemingly direct access to a feature which apparently defies materialist explanation. Furthermore, so-called "higher-order" theories of consciousness hold that our access to our conscious states is mediated by representation. But according to Levine, the substantive nature of thick MOPs demands that qualia are somehow constitutively involved in our awareness of them, blocking higher-order views. Finally, neuroscientific theories of qualia fall short as well. Neuroscience deals in relational information, for example data concerning neural connections and firings or the flow of information through neural nets. But relational information cannot do justice to our substantive and determinate awareness of the intrinsic features of experience, according to Levine. Since neuroscience arguably deals only in relational information, it cannot close the explanatory gap.

Thus it seems that there is a special explanatory problem of consciousness, even if we accept a materialist metaphysics. Our special access to consciousness undermines the possibility of a relational explanation, and it seems that relational information is all that the materialist has to offer. However, Levine's claim begs a crucial question. Why should we accept that our so-called substantive and determinate MOPs provide us with a fully accurate grasp of the nature of our mental states? Perhaps such states only appear to have intrinsic experiential character, a character that can't be fully explained in a materialist theory. Levine requires a defense of the claim that things are as they appear when we access our conscious states, and further, that there are no hidden workings that explain the nature of consciousness. Otherwise, it is open to his opponent to charge that despite appearances, conscious states really are functional or neurological states, albeit states that we do not recognize as such in introspection. Levine thus seems committed to the Cartesian doctrine that the mind is transparent to its subject, and that the subject is authoritative, or even infallible, about the contents of her mind.6 In keeping with the commonsense approach to fixing the data argued for in chapter 1, claims that go beyond the folk-psychological appearances to the underlying nature of consciousness require argument. It is not enough to say that conscious states appear to have an intrinsic quality, therefore they do.

6 Levine acknowledges that his view commits him to "self-intimating" conscious states, the idea that qualitative character is intrinsically conscious. But he does not argue for this, and holds that it is part of the "paradoxical nature" of the link between our subjective access to our conscious states and their character (2001, 109). He requires support for this claim, and for the stronger claim of infallibility.

However, Levine does not directly argue for such a claim. He comes closest in rejecting the idea that he, himself, might be a zombie, writing,

it is not really even conceivable to me that I might be a zombie. I can rule out this possibility from within... First-person skepticism doesn't even get a foothold because my epistemic situation in some way includes my conscious experience... The qualia are essential components of how, cognitively and epistemically, it is with me (2001, 167).

However, this clearly does not establish transparent access--access where we can "see right through" to the nature of our conscious states--nor does it entail that we are infallible with respect to the underlying reality of mental states. It is still possible that the features we are aware of are relational in nature. Further, Levine explicitly rejects any appeal to privileged access and infallibility or incorrigibility (103, 117, 138ff). This is in keeping with the commonsense view of the mind--we can be in error about what states we are in, and we are not in infallible contact with the underlying nature of the mind. But it is not clear that he can do without such an appeal. In its place, Levine offers the following concerning intuitions about the apparently intrinsic nature of qualia and our determinate and substantive access to them:

On my view, intuition has no special epistemic status; it's not a faculty in its own right, nor are its dictates to be treated as incorrigible. As far as I can see, intuition is just reasonableness. That is, to say that something is intuitively wrong or odd is to say that it strikes one as unreasonable,

implausible. One could be wrong about this, and the basis for this response should always be sought out to the degree possible, but sometimes one just has to rest on the fact that some hypothesis seems blatantly implausible (2001, 103).

But this certainly leaves open the possibility that, despite appearances, qualia are in fact relational, and, therefore, there is no explanatory gap. Our conscious states may appear to have intrinsic features, but that may just be appearances; the reality may be quite different. And our "substantive and determinate" access may only appear to constitutively involve the conscious states we access. Indeed, we may have no insight into the nature of the access. All we can reasonably conclude is that the access appears substantive and determinate and that we appear to access intrinsic features, while the reality may be quite different. In the absence of an argument against this possibility, it is an open question whether there really is an explanatory gap. There is an additional move, however, that might seem to aid Levine. While folk psychology may not license infallible access to the states we are in, it may provide infallible access to how things seem to us in conscious experience. I may be incorrect that I am in some sensory or emotional state, but can I really be wrong that I seem to be in those states? And perhaps this "seeming" is itself an intrinsic property of my experience, thus establishing the presence of a gap-generating feature. But this move does not help. Even if we are infallible in this manner (and I will argue in later chapters that even this is unfounded), all that this provides Levine is that we seem to be in states with intrinsic properties. I

concur, and hold that this appearance requires explanation. But I resist the move from its seeming that we are in states with intrinsic properties to the claim that we are in states with intrinsic properties. Why think the "seemings" themselves are intrinsic? While I may be infallible that I seem to be in pain, how does that establish that I am infallible (or even reliable) about whether the "seemings" themselves are intrinsic or relational? Nothing in folk psychology licenses this claim. Indeed, Armstrong's point seems fully appropriate here: just because we are not aware that the seemings are relational, this does not entail that we are aware of them as nonrelational. In the absence of additional support for this shift, we have no reason to accept it. There is nothing to be gained by Levine in this move of "introspective ascent." To recap, there appears to be an explanatory gap because of the nature of our access to our conscious states. We seem to be aware, in a direct and unmediated manner, of a determinate quality "identifiable in its own right." If we take this access as fully accurate, materialist explanations involving functional or neurological states will inevitably seem to fall short. Such theories will have the task of explaining an intrinsic quality of experience in relational terms. But in the absence of supporting argument, we need not take this access as fully accurate concerning the nature of our conscious states. Perhaps they only appear to have an intrinsic quality, while in fact they do not. Without an argument supporting the accuracy claim, it is an open possibility that our conscious states may be explained in functional or neuroscientific terms. Levine provides no such

argument; therefore, I conclude that there may only appear to be an explanatory gap. We are left with the following task. We must explain, in materialistically acceptable terms, the apparently substantive and determinate access we have to seemingly intrinsic features of conscious experience. This opens the door to a satisfying materialist theory of consciousness--one that fully accounts for the commonsense data while explaining the intuitions that prompt claims of hard problems and explanatory gaps. I will have more to say about the requirements for such a theory of access at the beginning of chapter 3. However, there is another theorist, Ned Block, whose work seems to point to a special explanatory problem of consciousness, one that differs from other scientific conundrums. Further, he rejects both Chalmers's hard problem and Levine's explanatory gap. However, his defense of materialism incurs what he terms "the harder problem of consciousness." I will turn to his work in the next section.

2.2 Block's Phenomenal Consciousness

2.2.1 Kinds of Consciousness

In his 1995 paper "On a Confusion about a Function of Consciousness," Ned Block writes,

The concept of consciousness is a hybrid or better, a mongrel concept: the word "consciousness" connotes a number of different concepts and denotes a number of different phenomena. We reason about "consciousness" using some premises that apply to one of the

phenomena that fall under "consciousness," other premises that apply to other "consciousnesses," and we end up with trouble (Block 1995, 375).

He identifies four different phenomena that are referred to by "consciousness," most importantly one that defies characterization in "cognitive, intentional, or functional terms" (1995, 381). Confusing this kind of consciousness with one of the others leads to the false sense that the mystery of consciousness is easily solvable by the methods of a functionalist cognitive science or neuroscience, which deal in cognitive, intentional, and functional explanation. Thus, if we are to avoid illicitly explaining away the problems of consciousness, we must pay careful attention to Block's theoretical distinctions. The first kind of consciousness Block identifies is "phenomenal consciousness" or "phenomenality" (p-consciousness, for short). P-consciousness is a "pretheoretic" notion characterized by mental states that there is something it is like for the subject to be in. P-consciousness is "what it is like to have an experience. When you enjoy the taste of wine, you are enjoying gustatory phenomenality" (Block 2001, 202). The paradigm cases of p-conscious states are sensations, states with a sensory or qualitative character. Block acknowledges that this is not a particularly informative way of characterizing the phenomenon, but that is to be expected. On his view, p-consciousness cannot be characterized in a noncircular way. We can only state our characterization of p-consciousness in terms of closely synonymous expressions. P-conscious states are thus "experiences" or "states that there is something it is like for the subject to be in." Furthermore, and most crucially, p-consciousness, according to

Block, is distinct from any "cognitive, intentional, or functional" notion, rendering it at least conceptually independent from mental processes picked out in these terms (1995, 381ff). I will detail below Block's attempts to flesh out and defend this characterization, but first I will present the other kinds of consciousness.

The second kind of consciousness is "access consciousness." A state is access conscious "if it is broadcast for free use in reasoning and for direct 'rational' control of action (including reporting)" (Block 2002b). "Broadcasting" here means that the state is actively available to a number of different psychological systems. Access conscious states influence behavior by flexibly interacting with the goals, beliefs, and desires of a creature. States that are access conscious are "globally accessible" in Baars's sense (1988), or achieve "fame in the brain" in Dennett's terminology (1993). Access consciousness is arguably functionally characterizable. If a state plays the right role in the mental life of a creature, then that state is access conscious.

Block also notes two other kinds of consciousness, "self-consciousness" and "monitoring consciousness." Self-consciousness entails the possession of a self-concept and the ability to use this concept in thinking about oneself (1995, 389). Monitoring consciousness (also termed "reflexivity") occurs when a creature becomes aware of one of its own mental states. This occurs when a state is reflected on by a higher-order state of a creature, one that is about another of its mental states. In this way a creature monitors its own mental life (1995, 390). Both these notions plausibly involve cognitive and intentional

processes. We represent ourselves or we represent one of our states when we instantiate one of these kinds of consciousness.

Block's claim that p-consciousness is conceptually distinct underwrites his charge that many theorists illicitly avoid the real explanatory problem of consciousness. If he can make good on that claim, what Block terms a "functionalist" theory of p-consciousness cannot work. It will inevitably miss the target due to the nonfunctional characterization of the phenomenon.7 However, Block argues that this does not obstruct the route to a materialist theory of p-consciousness. Instead, he endorses what he calls a "physicalist" view, on which there is a type-identity between p-conscious states and neural states.8 P-conscious states just are brain states, despite appearances and intuitions to the contrary. Further, there is a decent explanation of our intuitions here: we access our p-conscious states by two distinct conceptual routes. One involves first-person access to the mind; the other involves the theoretical concepts of neuroscience. These concepts refer in very different styles, but there is no bar to claiming they refer to the same thing. This opens the door to the dissolution of Chalmers's hard problem and Levine's gap. However, Block argues that his position is faced with another worry, what he calls "the harder problem of
7 Block includes under this heading the functionalist and representationalist approaches developed by Harman 1990, Dretske 1995, and Tye 1995; the "higher-order" approaches of Armstrong 1968, 1980, Rosenthal 1986, 1997, and Lycan 1987, 1996; the functionalist cum eliminativist approaches of Dennett 1991, Paul Churchland 1981, and Rey 1997; and many others. Any view positing a constitutive connection between p-consciousness and a cognitive, intentional, or functional notion is included.
8 Block's use of the terms "functionalist" and "physicalist" is not completely standard. Some hold that functionalism is a form of physicalism (e.g., Lewis 1994, 291). Others hold that intentionalist views are not functionalist views (e.g., Dretske 1995). And so on. In this section, I will follow Block's usage.

consciousness" (Block, 2002a). Even if we can avoid the problems of Chalmers and Levine, this issue still lurks. But before presenting Block's physicalism and the harder problem it allegedly generates, I will focus on his characterization of p-consciousness.

Block (with Robert Stalnaker) argues against the use of a priori conceptual analyses in characterizing the data for a scientific theory. He contends that such analyses are never actually produced in sufficient detail and that even the roughest sketches are vulnerable to counterexample. Further, Block contends that a priori analyses are not needed to fix the data. Instead, the data can be fixed by pointing to paradigm examples of the target phenomenon. Uncovering the scientific basis of life, for example, requires focusing on the paradigm living things around here and empirically unpacking the features that explain the common underlying nature of those samples. Furthermore, empirical science has the last word about the data under investigation--if scientific practice dictates a revision of our commonsense intuitions, so be it. Armchair considerations have little role in fixing the data or shaping a scientific theory.9 The situation is the same with p-consciousness--we must find ways to "point" to paradigm cases of p-consciousness in order to fix the explanatory data.

9 See Block and Stalnaker 1999. See also Chalmers and Jackson 2001 for a detailed response. I have addressed some of these issues in the previous chapter, but, to recap, Chalmers and Jackson deny the need for explicit analysis in necessary and sufficient terms. They argue that a priori intensions can be reconstructed by reflection on possible cases, and this is enough to fix the analyses. Block and Stalnaker argue against the existence of a priori accessible intensions, and argue that empirical considerations are always required to fix the data. I attempt to split the difference between the views; I accept the need for the sorts of commonsense analyses that Chalmers and Jackson (following Lewis) require, but I deny that there is any role for the a priori in fixing the data, thus acknowledging the empirical considerations of Block and Stalnaker. See also Philosophical Studies, 2004, Davies and Stoljar, eds., on this issue.

However, conscious states are not amenable to standard methods of pointing. One way to get around this problem is to point "via rough synonyms" (1995, 380). We can pick out p-consciousness by employing other terms that refer to it. This, however, is of little value because Block contends that there is no noncircular characterization available. The next form of pointing is to introspect upon the states in question in order to pick out their defining features. Sensations are the prime exemplars of p-conscious states, so we should introspectively reflect on our sensory experience. This perhaps yields the characterization that p-conscious states are states that there is something it is like for the subject to be in. But it does little to support Block's desired distinction--that p-consciousness is not characterizable in cognitive, intentional, or functional terms. Arguably, all introspected cases of sensation play a functional role in subjects' mental lives. They are always involved in providing information about one's environment or body; further, they are the normal precursor of perceptual belief. In addition, a number of theorists argue that sensory experience is always representational. When we introspect upon our sensations, we always seem to be in a position to note what they represent. Representation can plausibly be cashed out in cognitive, intentional, or functional terms. Thus, introspection does not yield the characterization of p-consciousness that Block is after.10

Block then turns to a variety of thought experiments in order to point to p-consciousness. He acknowledges the controversial nature of this method, given

10 See Harman 1990; Tye 1995, 2000. See Harman 1995 for a conceptual (therefore cognitive) view of sensations; Tye 1995, 2000, and Dretske 1995 for intentional views; and Clark 1993 and Rosenthal 1999a, 1999b for a broadly functionalist view (in terms of quality spaces).


that the proper interpretation of the thought experiments themselves is at issue. However, he believes that, despite the controversy, the cases effectively point to a conceptually distinct phenomenon.

Block focuses on Levine's explanatory gap. He writes, "By way of homing in on p-consciousness, it is useful to appeal to what may be a contingent property of it, namely the famous 'explanatory gap'" (1995, 381). He argues that reflecting on possible neuroscientific theories of consciousness leaves open the question of why these neurological processes should be accompanied by this p-conscious quality rather than another, or any quality at all. According to Block, we don't have a clue how such theories might answer these questions. Further, this contrasts with our take on current theories of cognition and representation. There seem to be good working paradigms in those cases, in stark contrast with p-consciousness. Block concludes that the gap points to p-consciousness: "...that's the entity to which the mentioned explanatory gap applies" (1995, 382). But this fails to establish Block's

distinction. What is it we are thereby pointing to? Perhaps it is a complex functional phenomenon for which, as of yet, we have not found an adequate theory. Further, the claim begs the question. Those who deny Block's distinction will also reject the explanatory gap, and for the same reason--they think consciousness can be characterized in cognitive, intentional, or functional terms. To invoke the gap to establish the distinction assumes that this cannot be the case. But that is what is at issue.

Block next offers a number of thought experiments designed to support the distinction directly. He contends that the cases show that p-consciousness is

conceptually distinct from access consciousness, self-consciousness, and monitoring consciousness. Since the other notions are plausibly captured in cognitive, intentional, or functional terms, showing p-consciousness's conceptual independence lends support to the idea that the distinction is correct. In particular, he focuses on the independence of p-consciousness from access.

Block presents the case of the "super blindsighter" to show the possibility of access without p-consciousness. Blindsight is a condition that sometimes occurs when the visual cortex is damaged. Regions of the visual field will seem to the subject to be completely devoid of visual input. However, when prompted to guess at what is being presented in their "blind field," blindsight subjects are well above chance for certain types of stimuli, like motion, orientation, and simple shape. Block asks us to consider a blindsighted subject who can spontaneously prompt himself to guess about what is present in his blind field and then employ this information in perceptual judgments. Thus, the content of the blind field would be access conscious, because it would play the right role in the subject's mental economy--it would be broadcast for use in speech, for example. Still, we wouldn't think that this "super blindsighter" now enjoyed p-conscious visual experience. Rather, according to Block, it is natural to think that he has access to the information without phenomenal awareness. Access and phenomenality can come apart, it seems.

Block further holds that we can conceive of p-consciousness without access in a creature in which the connections between p-conscious states and the rest of the mind have been ablated. Why think the p-conscious states would

wink out of existence? Isn't it at least possible that such states exist unaccessed? In addition, coming to be aware of a background noise that has been going on for some time potentially indicates the presence of p-consciousness without access. The sound was there, but we didn't access it--it didn't have an impact on a wide range of mental processes. According to Block, prior to the access it is plausible to hold that we were in a p-conscious state, a state involving auditory sensory quality. But until we became aware of it, it went unaccessed. These seem to be cases of p-consciousness without access.

But all of these examples assume the distinction that Block wishes to establish. If we believe that p-consciousness is conceptually linked to cognitive, intentional, or functional processes, then we need not accept Block's interpretation of the thought experiments. In the super blindsighter case, if we believe that p-consciousness is constitutively tied to access, then, if the right access relations are really present, it follows that the super blindsighter has p-conscious states. Further, the details of the access matter. Many views reject the claim that the ability to self-cue guesses is sufficient for access. When the super blindsight case is restated with that in mind, Block's conclusion becomes less plausible. There may be some access present, but not the sort of access sufficient for phenomenal experience.11 Finally, it is not clear that the thought experiments indicate anything about our "pretheoretic" notion of consciousness. By the time we come to grasp the subtleties of these imagined cases, we lose our grip on the initial data we wanted to pin down. How do we know what an
11 For a related discussion, see Siewert 1998, and the commentaries on his work on the PSYCHE website, http://psyche.cs.monash.edu.au/symposia/siewert/index.html (Siewert 2004), especially the commentaries by Lycan and Carruthers.

imaginary brain-damaged subject will experience? How are we to tell if ablating an animal's brain leads to this or that experiential result? Our folk concept does not extend out to these cases. It is only if we are already sure that they are distinct in the way that Block claims that we will find his readings appealing. But that goes beyond mere "pointing" and into full-blown theorizing. Thus, if it is to provide a datum that a materialist theory of mind must explain, Block's characterization of p-consciousness requires further support.

So, where can we turn for that support? In his 1995 paper, Block claims that "[t]hough I believe that functionalism about p-consciousness is false, I will be trying not to rely on that view" (1995, 381). But this is the root of the issue. So it is useful to look back on Block's previous rejection of functionalism. In arguing for the failure of functionalism, it may be that Block provides support for the claim that there is a kind of consciousness that defies characterization in cognitive, intentional, or functional terms.

In his 1978 paper "Troubles with Functionalism," Block presents several cases designed to undermine a functionalist theory of "qualia," the qualitative aspects of conscious experience. Qualia are paradigm cases of p-conscious properties. The thought experiments are very much in the mold of those already presented. One involves imagining the nation of China "wired up" by radios in order to instantiate a functional analogue of human psychology. All the inputs, outputs, and intermediate functional states could be mimicked (in principle, it is claimed) by this organized mass. But, Block contends, it is absurd to think the mass itself would thereby be in conscious qualitative states. But functionalism is

committed to this position, rendering it absurd as well. A second, related case focuses on a broader form of functionalism, one only committed to the equation of mind with the actual functional organization of our brains, rather than a more abstract functional-role characterization. This is the case of the "homunculi head," where our neural organization is reproduced by tiny sentient creatures who manipulate the requisite connections and processes. But again, Block holds that it is counterintuitive to believe that such a functional system would possess qualia. The presence of the sentient beings in the middle of the process helps bring out the problems functionalism has in explaining our conscious qualitative states.

However, such cases do no more than restate the intuitions that drive the claims in the 1995 paper. Further, it is again open to the functionalist to argue that such creatures would be p-conscious, despite our intuitions. Such intuitions may simply represent our pretheoretic ignorance about the nature of our conscious minds, rather than the conceptual distinction Block is after. Indeed, neurons are characterized fully in functional terms in neuroscience. However, simple reflection on our neurons fails to reveal how they could instantiate p-consciousness, either. But Block accepts that they do. So why not think that the situation might be the same in the problem cases? Block argues that we know that we are conscious, so we have reason to believe that neurons underwrite p-consciousness in our case. But we have no parallel reason for such a belief with non-brain functional systems. But that misses the point. We may not recognize that what matters for p-consciousness is the functional arrangement of our

neurons, rather than their constitution. Why should we be able to tell this? And if that is the case, then a system that recapitulates that organization will also possess p-consciousness, despite the intuitions.12

For all of my criticisms, however, it may still seem that Block is quite correct that p-consciousness is distinct from any cognitive, intentional, or functional notion. There must be something underwriting the pull of the thought experiments. Isn't that alone enough to motivate the conceptual distinction? It may seem to follow from the definition alone that p-conscious states are distinct. Block characterizes them as states that there is something it is like for the subject to be in, and this maps directly to Chalmers's problematic notion. Perhaps this is enough to establish the independence that Block seeks. If a state is characterized in terms of "what it's like," what does that have to do with cognition, intentionality, or function?

However, as noted in chapter 1, there is more than one way to unpack the "what it's like" locution. One focuses on the front end of the clause, on the something that it's like for the subject. This suggests a property glowing with consciousness, one that by itself carries the problematic feature that cries out for explanation. Perhaps incidentally it is for a subject, but the quality itself floats free from cognition, intentionality, or function. But the other way of unpacking the phrase puts the emphasis on the "for the subject" element. In this case, if the state isn't registered by the subject at all, it is wrong to call it a conscious state.

12 An additional source of support for Block's distinction might be his "inverted Earth" claim from his 1990 paper of that name. But this issue is largely the same. It is open to the representationalist either to embrace Block's conclusion or to argue for a more restricted version of the thesis. See Harman 1990, 1999; Dretske 1995; Tye 1995, 2000.


This second reading of the phrase is stressed by David Rosenthal (2002b). He argues that sensory states (the "something") that are in no way for a subject fail to qualify as conscious states. If a subject is in no way aware of the state she is in, it is not intuitively a conscious state. The presence of the subjective element is what matters for consciousness. Such a reading is in line with our commonsense characterization of consciousness: states that we are in no way aware of are not intuitively conscious states. This seems as good a way as Block's to characterize states that there is something it is like for the subject to be in.

Block acknowledges that there are alternate ways to fix the data, but he argues that his distinction presents a theoretically useful way to interpret empirical results. This, he feels, provides a modicum of empirical support for his position. If it provides a useful way of clarifying and interpreting psychological experiments, the distinction gains legitimacy. Indeed, Block argues that even if his distinction isn't unambiguously supported by empirical results, "we can get many convergent though flawed sources of evidence--[and] so long as the flaws are of different sorts and uncorrelated--we will have a convincing case" (2001, 217, emphasis in original). He offers several examples to make his point. One is the condition Block labels "aerodontalgia." Pilots in World War II flying in unpressurized planes occasionally complained of feeling pain in places where they had undergone dental work, despite the fact that this work was done under anesthetic. It turned out that only work done under general anesthesia produced the recalled pain; work done under local anesthesia failed

to produce the experience. Block hypothesizes that this indicates the presence of phenomenal consciousness (pain) without either access or reflexive awareness. The pain must have been present in order to lay down the pathways activated later in flight. General anesthesia therefore must only block access or reflexivity, rather than blocking the p-conscious pain state altogether. This seems to be a useful way of describing the case, and one that points to distinct phenomena to be studied. The distinction between p-consciousness and cognitive, intentional, and functional notions seems fruitful.

However, rival interpretations can handle the case with equal ease. The advocates of access or reflexivity can argue that in the general anesthesia case, the pains present at the time of the dental work either failed to be accessed or were not the objects of reflexive awareness. And, therefore, the pains were nonconscious--there was nothing it was like for the subject to experience the pains. If access or reflexivity are constitutive of conscious states, then nonconscious sensory states are an accepted possibility. Furthermore, there is good evidence that folk psychology licenses nonconscious sensory states; additionally, they have wide acceptance in empirical work. People regularly speak of a headache that went on all day, though at times the pain wasn't conscious. Also, sensory states that occur very quickly are often classed as subliminal perception. It is accepted usage to call them unconscious sensory states, both in commonsense talk and in scientific research. Block is only in a position to rule out nonconscious states if he has evidence from some additional source that such states are impossible--

otherwise, he is question-begging in the current context. If that source is our commonsense, pretheoretic notion of consciousness, then at best he gets a standoff. Some cases pull the usage one way, some the other, and neither gains the upper hand. Block runs through a number of other empirical cases, from commonsense reflection on the experience of hearing a jackhammer, to priming and perception experiments in psychology. However, the line of response is the same as in the aerodontalgia case. The other side has a reasonable interpretation in terms of nonconscious or partially accessed or reflected-upon states. The cases are a standoff. Thus, both sides gain the same "convergent though flawed sources of evidence." Block's view gains no advantage from empirical data.

At this point we are left only with more general theoretical considerations to separate the views. These include the usual theoretical "virtues" of predictive power, simplicity, scope, familiarity of principle, fruitfulness, etc. (see Quine 1976; Quine and Ullian 1970). But in that case, it is arguably the access or reflexivity theories that gain the upper hand. These theories integrate p-consciousness with the rest of scientific psychology and neuroscience, and they avoid both the explanatory gap and the hard problem. The main challenge for such views is saving, as much as possible, our pretheoretic intuitions concerning consciousness. But this seems a more desirable theoretical situation than being saddled with the hard problem or the gap. And there is an additional consideration that tells against Block's distinction in this context: Block's own theory of consciousness, which embraces the independence of p-consciousness,

leads to what he terms the "harder problem of consciousness." Thus, even if the distinction is granted, substantial explanatory problems are incurred. In the following section, I will present Block's harder problem; however, I will argue that it is not significantly different from Levine's explanatory gap. I conclude that Block's theory fails to explain consciousness, and therefore, that his distinction is not theoretically fruitful. Thus, we have no compelling reason to accept that p-consciousness is conceptually distinct from any cognitive, intentional, or functional notion.

2.2.2 Block's Identity Thesis and the Harder Problem of Consciousness

To review, Block argues against any a priori analysis of p-consciousness in cognitive, intentional, or functional (or any other informative) terms. Further, he contends that p-consciousness is conceptually distinct from any such notion. Still, he holds that physicalism is true, and therefore, that p-consciousness is ultimately a physical phenomenon. He argues that p-conscious states are identical to neural states. The identity is justified on grounds of ontological parsimony and explanatory power. We are thus entitled to posit the identity, even in the absence of any deductive link between the two phenomena, contrary to Chalmers's arguments.13 Furthermore, identity claims do not allow for meaningful explanations. Thus, the lack of an informative explanation of the
13 Levine, like Chalmers, holds that successful explanations must deductively entail their explananda. However, he does not hold that identities are justified deductively, as Lewis, Chalmers, and Jackson argue (and as did Levine in his 1993). Levine, like Block, holds that identities are posited because of their explanatory and ontological value. Once the identity is in place, we should be able to construct the requisite deduction needed for explanation. See Levine 2001, Chapter 3, section 2.

consciousness/neuronal identity is not an objection to the position. It simply follows from the fact that identities are not informative in general.

So, how do we arrive at a scientific understanding of p-consciousness, according to Block? First, the data for the theory is fixed by picking out paradigm cases of p-consciousness--a sensory experience of red, for instance. Then, using brain imaging and other empirical methods, we isolate the "neural correlates" of p-consciousness, the neural states that reliably co-occur with p-conscious states picked out introspectively. We then seek to corroborate such evidence with dissociative disorders and other clinical data. When enough evidence is in hand, we posit an identity claim between the experiential state and the neural correlate, in the interest of explanatory power and ontological parsimony. So, for example, it might be that visual sensations of red are correlated with a certain type of cell firing in areas V1 and V5 of the visual cortex. According to Block, we are justified in claiming that visual sensations of red just are cells of that type firing in the relevant areas. We thereby close off questions of why these two phenomena are always correlated, and reduce the number of distinct entities in our ontology. Finally, all this is achieved without recourse to a priori analysis, contra Chalmers and Jackson.

Block also argues against the legitimacy of apparently open questions used by Levine to support the gap. Identities do not have explanations, according to Block. Asking for an explanation of an identity is just to ask how something could be what it is. Still, we can ask for reasons to believe that we've found a true identity, rather than a mere correlation of distinct entities. And Block

admits that there is considerable mystery about how things that seem as diverse as sensations and neural states could be identical. The solution, according to Block (in agreement with Loar, Papineau, Perry, and others, and even Levine in a qualified form), is to recognize that we can pick out the same thing under more than one concept. Block argues that our first-person access to p-consciousness is mediated by "phenomenal concepts," concepts that achieve their reference by somehow instantiating the very phenomenal states they are about. Because of the involvement of p-consciousness in phenomenal concepts, they seem to pick out something radically different from the referents of theoretical concepts employed in neuroscience. Those concepts are in the descriptive language of scientific theory. Still, this does not preclude the possibility that both concepts share the same referent. Thus, according to Block, we "replace a dualism of properties with a dualism of concepts" (Block, 2002b). This is acceptable from a physicalist perspective, so long as the phenomenal concepts themselves do not require nonphysical elements to do their work.

The dualism of concepts helps dispel the appearance of an explanatory gap, according to Block. The distinctive nature of our first-person access to consciousness explains why we are left with lingering questions. But given the truth of the identity, such questions are illegitimate. While Block acknowledges that the details of the identity are still sketchy (particularly on the neural side), he contends that we have a clear path towards closing the gap. Thus, both the metaphysical hard problem of Chalmers and the epistemological explanatory gap of Levine fall to Block's version of physicalism.

However, there is another problem lurking for Block's brand of physicalism. The "harder problem" of consciousness arises from the tension between several theoretical commitments of the view (Block, 2002b). First is a commitment to what John Perry (2001) calls "antecedent physicalism," the idea that physicalism is the default reasonable position to take concerning the metaphysics of mind (here "physicalism" just means "materialism" in Levine's sense). Second is a belief in "naturalism," a commitment to the methods of science and to the application of empirical methods in theorizing about consciousness. Third is a dedication to "phenomenal realism," which consists in a rejection of an a priori analysis of p-consciousness into cognitive, intentional, or functional terms--this falls out of his characterization of p-consciousness. Phenomenal realists thus hold that consciousness can't be unpacked into the more tractable functionalist notions of mind.14 Finally, Block's view rejects skepticism about "other minds." It is taken to be beyond doubt that other humans have minds with p-conscious states, while rocks do not, though there is no functionalist justification of this claim. Instead, it is rooted in beliefs about both our functional makeup and our flesh-and-blood constitution. Other-minds skepticism is not, according to Block, the basis of the harder problem.

Given these commitments, Block argues that an epistemological worry arises concerning the possibility of alien or robot consciousness. He considers the case of a robot, Star Trek's Data. Data's behavior is similar enough to our

14 Block calls the views that accept such analyses "deflationist" about p-consciousness; such views hold that p-consciousness is ultimately nothing more than one of the better understood facets of the mind in different attire. See Block 2002a.


own to legitimately raise the question of whether or not he has p-conscious states. However, Data is made of stuff utterly unlike our own flesh and blood. This removes any obvious parallel to our neural correlates of p-consciousness. In our own case, we are able to correlate p-consciousness with neural states by employing our first-person access to the phenomenon. But we are not sure if Data is p-conscious at all. Why think the causes of his verbal "reports" are p-conscious states, rather than nonconscious analogues? In our case, first-person access is a crucial step in identifying the neural correlates of p-consciousness. But we cannot access Data's states in this way. Further, any reliable functional mark of p-consciousness is blocked by the endorsement of phenomenal realism. This removes any third-personal way of settling the issue. Thus, when faced with Data, we are at a loss. Is he made of the right stuff to support p-consciousness? And how could we possibly figure this out without being Data?

We are left in the following situation, according to Block. We have reason to think that physicalism and phenomenal realism are true. But our naturalism dictates that we must view the possibility of Data's p-consciousness as an open, empirical question. Yet such a question seems unanswerable in principle. Block admits that this is not a full-blown paradox for the physicalist, but he insists that it presents an uncomfortable epistemic tension in the view. A question that ought to be empirical and open is arguably beyond our means to settle. Yet it seems a fair and reasonable question, one that a theory of consciousness ought to address. This is the "harder problem of consciousness."15

15. Block adds considerable detail to the argument by stressing the epistemic strength of various belief commitments that accompany the view. He distinguishes between reasons for believing, grounds for rational belief, and certainty, and employs these notions throughout. I have simplified the picture in the interest of clarity, and because it will not affect the main thrust of Block's argument or my comments on it. For a detailed reconstruction of Block's argument, see McLaughlin 2003.

2.2.3 Block and Levine

Block and Levine occupy very similar regions of theoretical space on the problem of consciousness. Both hold that materialism is the default metaphysical position on the mind, given its ability to account for the causal efficacy of the mental and to deliver a unified scientific picture. Further, both reject the use of a priori conceptual analysis and argue that conceptual content, including content about consciousness, is fixed using a posteriori methods. Both also agree that phenomenally conscious states are conceptually distinct from functional processes, and that this presents an in-principle obstacle to a broadly construed functionalist theory of consciousness. This creates an apparent explanatory gap between the physical and the phenomenal; indeed, the gap seemingly defines the phenomenon. Finally, both agree that the epistemic gap carries no ontological weight, because the gap can be explained in terms of distinct concepts picking out the same referent. This clears the way for a physicalist identification of p-consciousness and neural states.

However, the two differ over the implications of the explanatory gap for a satisfying theory of consciousness. According to Block, the two-concepts line offers a route to closing the gap. When viewed in proper conjunction with the identity of p-conscious states and brain states, the presence of distinct concepts explains why we seem to have meaningful questions. Once we grasp the

divergent ways we access the mind, our apparently meaningful concerns are exposed as empty. We may still feel the explanatory itch, but that is simply a reflection of how deeply we sense the difference in concepts. But it does not signify meaningful open questions indicative of a real explanatory gap.16

Levine counters that there are meaningful open questions in the consciousness case, due to the substantive and determinate nature of our access to consciousness. Levine, recall, argues that we access our conscious states from the first-person perspective by employing phenomenal concepts with substantive and determinate thick MOPs. Because we have such apparently direct access to what we are referring to, we fail to see how p-conscious states could just be brain states. It is not simply a matter of ignorance; rather, we know p-consciousness in a way that holds open valid questions. This differs greatly from other identity claims, so Levine concludes that identities involving substantive and determinate MOPs on one side and thin MOPs on the other leave meaningful open questions. This signifies a gappy identity, and hence an explanatory gap.

Block in turn rejects Levine's explanation of gappiness. He argues that there are identities involving substantial and determinate MOPs that nonetheless fail to display gappiness. He writes:

Consider that the mode of presentation of a sensation of a color can be the same as that of the color itself. Consider the identity 'Orange = yellowish red'. Both modes of presentation involved in this identity can be as substantive as those in the putatively "gappy" [mind-body] identity..., yet this one is not "gappy" even if some others are. To get an identity in which only one side is substantive, and so is a better analogy to the mind-body case, consider an assertion of 'orange = yellowish red' in which the left hand concept is phenomenal but the right hand concept is discursive (Block, 2002b, xx).

Block thus concludes that Levine's explanation of gappiness fails. There are cases where substantive and determinate MOPs do not result in gappy identities; therefore, a better explanation of the lingering questions in the consciousness case is the extreme distinctiveness of our concepts, rather than a revealing sort of first-person access. The residual questions remain illicit.

But Block's claim is not convincing. The fact that MOPs of color sensations and colors can be the same does not provide the needed contrast. If orange = neural state x, then we have Levine's gappy identity. But if we consider orange = light of 600 nm, where orange is presented with a phenomenal MOP, we get a similar gappy identity. It is one of the central difficulties of color theory to explain how it could be that a wavelength just is a color, given the way colors appear to us. Wavelengths just don't seem to be the right sort of thing to possess the qualitative aspects that define color. In fact this problem has driven some theorists to be dualists, and others to be eliminativists, about color. And it is plausibly because we possess a substantial and determinate MOP of orange, because we have seemingly direct access to the quality itself, that we find these identities mysterious.

16. Cf. Papineau 1995; 2002, chapter 4.

Block's claim concerning the identity of orange and reddish-yellow, where reddish-yellow is taken "discursively," is also unconvincing. What exactly is being identified here? Let us allow that one can possess the purely discursive concept "reddish-yellow" in the absence of any phenomenally grounded knowledge of the color (otherwise, the identity would be non-gappy due to the presence of substantial and determinate MOPs on both sides of the identity). This would plausibly involve knowing about reddish-yellow in relational terms, for example in terms of its location in a relational quality space. Thus, orange = point x in the quality space, the space in between red and yellow picked out in the same fashion. But this is also plausibly a gappy identity. How could the sensation of orange just be a location in an abstract quality space? Such a claim unpacks the sensation in relational terms, or in the cognitive, intentional, or functional terms that Block rejects. It is not open to a "phenomenal realist" to embrace this view. And furthermore, it leaves open exactly the questions that Levine claims are indicative of a gap (see Levine 2001, sections 4.2-4.3). Block has failed to provide a counterexample to Levine's position. Thus, Levine's explanation of gappiness holds, and Block must find a way either to address the remaining meaningful questions or to explain them away. If he cannot, the explanatory gap remains as a worry.

I argued in the section on Levine's gap that what really demands explanation is the appearance of the gap. To put things into Levine's terminology, what demands explanation is our apparently substantial and determinate access to seemingly intrinsic qualities of experience. If we can

provide such an explanation, it is open to us to reject the gap as illusory. In a sense, this is what Block wishes to do with his two-concepts line. Levine also has sympathy with the two-concepts line, but he argues that it fails to close the gap because we do not understand how the presence of the phenomenal state in the phenomenal concept itself can make a cognitive, referential difference. He writes:

It does seem as if the very property of reddishness is somehow present in the concept, making a cognitive contribution that endows the content with genuine substance and explains the gappiness of the identity. But how do we explain that on a physicalist model? How does a property referred to by a mental representation get cognitively incorporated into the representation in the way it seems to with phenomenal concepts and properties? ... What emerges...is that the explanatory gap is intimately connected to the special nature of phenomenal concepts. ...But then we have the problem of providing an explanation in physicalist terms of that very specialness, and we don't seem to have one (2001, 86).

Simply saying that a p-conscious state is in a phenomenal concept is not sufficient to explain how this special mode of access works. And in the absence of a more detailed model, it is fair to say that legitimate open questions remain. Therefore, the gap is still a worry for Block.17

17. I will have much more to say about the "two concepts" model in chapter three, where I present and criticize the approach in detail. Block's comments in his 2002b are, by his own admission, sketchy. He refers favorably to the work of others in that context, particularly the work of Brian Loar. I address Loar's proposal in chapter 3 below.

And that is not a surprise. Levine insists that the issue at the root of Block's harder problem, the puzzle of robot or alien consciousness, is just a facet of the explanatory gap, and not an independent worry. According to Levine,

Another way to see a manifestation of the explanatory gap is in our deep puzzlement over the question of attributing conscious experiences to creatures somewhat different from ourselves... What we lack is a principled basis for determining how to project the attribution of conscious experience. I submit that we lack a principled basis precisely because we do not have an explanation for the presence of conscious experience even in ourselves. We know, perhaps, or have good reason to believe, that its presence is due to something about our physical constitution. But without an explanation of how our physical constitution gives rise to consciousness, we can't use that knowledge as a basis for determining what else has it... The fact that we don't know what to look for is... a manifestation of explanatory ignorance (2001, 77-78).

If he is correct, then arguably there is no substantial difference between the two positions. There is just the explanatory gap, though it manifests in a number of guises. We can see the collapse of the harder problem into the gap by noting that a solution to one entails a solution to the other, and vice versa. First, if we had a solution to the explanatory gap, we would be in a position to extend our concept of consciousness and dissolve the harder problem. A solution to the gap would answer how the physical brain achieves phenomenal consciousness, what it is

about particular neural tissue that makes it a phenomenally conscious state. That kind of understanding would provide a principled basis for the relevant extension. If the range of extension were still a mystery, we would still be in a position to ask what it is about our neural tissue that makes the crucial difference, and therefore the gap would still be open. In the other direction, any solution to the harder problem would entail a solution to the explanatory gap in our own case. We would then be in a position to determine in a principled way whether an arbitrary alien with a radically different constitution possessed phenomenally conscious states. To do that, we would need to know what sort of relevant connection there is between constituting states and phenomenal states. But that would offer the same sort of guidance in our own case, closing the explanatory gap. Thus, the two problems do not come apart. Levine is correct that the problem of robot and alien consciousness is just a manifestation of the explanatory gap, and his position and Block's position do not diverge.

To conclude, Block's view is not significantly distinct, in the final analysis, from Levine's position. Both are physicalists whose positions fail, largely for the same reasons, to close the explanatory gap. Block's view does not present a new, "harder" problem of consciousness; rather, it accentuates a previously identified manifestation of the explanatory gap. The main challenge posed by the gap, as argued in section 2.1.2, is to account for the apparently substantial and determinate access we have to our conscious states. Furthermore, Block's characterization of p-consciousness is not the best way to pick out the

phenomenon, and upon scrutiny, his claims lack support. The way remains open for a characterization of p-consciousness in cognitive, intentional, or functional terms, and thus to a reductive explanation of consciousness. In the concluding section of this chapter, I will lay out the residual explanatory task remaining after our criticisms of Chalmers, Levine, and Block. I will argue that explaining our access is central to opening the route to a satisfying materialist theory of consciousness. However, that explanation of access must also account for the appearance of seemingly intrinsic features of consciousness.

2.3 Conclusion: What's the Problem?

After considering the efforts of Chalmers, Levine, and Block to spell out a special explanatory problem of consciousness, we are left with the following results. At bottom, all three theorists hold that consciousness does not appear to be functional, from the first-person point of view. Worse, it seems to them that there isn't an illuminating connection between consciousness and any relational notion, any notion specifiable in terms of an entity's relations to other things. This appears to cut off consciousness from the sorts of connections ordinarily used by science to pin down and explain a target phenomenon. Further, following Levine, we seem to have direct access to our conscious states, and what's more, we seem to be in contact with intrinsic features of consciousness. We seem to be in direct contact with nonfunctional aspects of the conscious mind. This is how it appears to us, according to these three theorists. However, all three attempted to move beyond these appearances to make substantial

claims about the nature of consciousness. Chalmers argued that the appearances underwrite our phenomenal concept of consciousness, and given the nature of our concepts, this leads to irreducibility and the failure of supervenience. Levine argued that the appearances undermine the possibility of a materialist explanation of consciousness--we can tell from the appearances that the usual type of materialist theory is doomed to failure. And Block argued that the appearances support the presence of a distinct concept of consciousness not characterizable in cognitive, intentional or functional terms. This blocked the possibility of a functionalist theory of consciousness, forcing us to posit an identity claim which ultimately failed to shed light on how it is that the states of a physical brain could be conscious states.

I argued that all three of these claims fall short. Given our commonsense approach to fixing the data, we require successful argument to move from the appearances to the underlying nature of consciousness. Folk psychology alone does not license this move; independent justification is needed. However, under scrutiny, that justification was lacking. Chalmers's model of reductive explanation failed to reliably rule out the possibility of conceptual change in the face of theoretical advance. Levine and Block also failed to provide the needed support for their characterization of a nonfunctional kind of consciousness. At best, the three theorists established that it appears to us that consciousness is nonfunctional.

This opens the door to a materialist explanation of consciousness. If we can explain why it appears to us that consciousness is nonfunctional in a way

that does not entail that consciousness really is nonfunctional, we can then employ the usual methods of theory-building in psychology and neuroscience. If there only appear to be intrinsic features of consciousness, then a relational theory is fully adequate. The apparently intrinsic features can be explained away as a product of our access mechanisms. A satisfying explanation of those mechanisms explains why things appear as they do to common sense, and why we are at times swayed by the intuitions cited by Chalmers, Levine, and Block. But it would not require us to take those appearances as accurate. If such an explanation can be developed, there is no barrier to a fully satisfying materialist explanation of consciousness.

We must therefore pin down the conditions of adequacy on a model of our first-person access. Such a model must explain both the seemingly direct nature of our access, and the seemingly intrinsic features we seem to access directly. In the next chapter, I will provide more detail about what is required of such a model. Then I will present in detail and criticize the "two concepts" approach mentioned in conjunction with Block. This approach has become the standard move in explaining the intuitions that create resistance to materialism. However, I will contend that under scrutiny, the view falls short. In chapter 4, I will present a rival model, one that successfully explains the appearances while remaining firmly within the materialist framework.

CHAPTER 3: THE APPEARANCES TO BE EXPLAINED AND THE PHENOMENAL CONCEPTS APPROACH

3.1 What are the Appearances?

3.1.1 Three Features of Conscious Experience

We can conclude from the work of the first two chapters that a central task for a materialist theory of consciousness is to explain why conscious states appear to have features inexplicable in materialist terms. As we have seen, these appearances underlie attempts to characterize a substantial problem of consciousness, one leading to an explanatory gap or hard problem. The goal of the first part of this chapter is to clarify the nature of these appearances. As argued in chapter 1, claims about the appearances must be grounded in the commonsense approach to fixing the explanatory data. Any claim about seemingly problematic features of consciousness must be rooted in our everyday ways of characterizing the conscious mind. In what follows, I will provide a commonsense characterization of the allegedly troublesome appearances. Then I will present and criticize a popular materialist approach to explaining the data--the so-called "phenomenal concepts" approach.1 Though I will argue that the approach falls short, the investigation will help further clarify just what is required of a materialist explanation of the appearances. Then in chapter 4, I will defend my own explanatory model of

1. Peacocke 1989; Loar 1997; Lycan 1996; Sturgeon 1994; Papineau 1993, 2002; McLaughlin and Hill 1999; Tye 1995, 2000; Perry 2001; Block 2001, 2002a; etc.


these appearances, one that explains the appearances in materialist terms, but avoids the troubles that derailed the phenomenal concepts approach.

The seemingly problematic appearances emerge when we cognitively access--think about, reflect upon--our conscious states from the first-person perspective. Introspection2 allows us to focus on various features of our conscious states. Further, we can speculate, based on the deliverances of introspection, on how these states are connected with the world and how they interact with each other. This, in large part, is the basis of our folk-psychological characterization of the conscious mind; the data a theory of consciousness must explain is pulled from these materials. In particular, we can cognitively access our conscious sensations, like sensations of red or feelings of pain. These states allegedly display especially problematic appearances. Sensations possess "sensory qualities," the mental character that makes a conscious visual experience different from a conscious aural experience or a pain experience.3 It is these sensory qualities, the "redness" of a conscious red sensation, the painfulness of a conscious feeling of pain, the felt warmth of a sensation of heat, that purportedly cause the greatest difficulty for a materialist theory. With that in mind, I will focus on the appearance of sensory conscious experience. If we can characterize and explain these appearances, the way will be cleared to developing a satisfying materialist theory of consciousness.
2. When I use the term "introspection" I simply mean first-person cognitive access to our conscious states. For more on introspection, see Rosenthal 2000; see also Lyons 1986.
3. The term "qualia" is also used to pick out these aspects of experience. However, that term is often taken to pick out intrinsic qualities of experience. Therefore, I will use the more neutral "sensory qualities." As should be clear, I am skeptical of the existence of qualia in the strong sense.

There are three central features that allegedly mark conscious sensory states as problematic. According to Chalmers, Levine, Block, and many others, we seem to have direct access to our conscious states and to our conscious sensory states in particular. There seems to be nothing mediating our cognitive access to our conscious states. I will label this feature "immediacy." Further, conscious sensory qualities appear disconnected from the rest of our mental lives. We can, it is claimed, easily imagine conscious sensory qualities varying independently of other mental processes. I will label this feature "independence." Finally, conscious sensory qualities seem to defy informative description. According to many theorists, no matter how we characterize them, we miss what is most central and important to these qualities. I will label this feature "indescribability."

These three features, combined with certain (generally unstated) inferences about the accuracy of our first-person access to consciousness, lead to the antimaterialist claim that conscious sensory qualities don't just appear disconnected and indescribable, but actually are disconnected and indescribable, in principle. My goal here is to investigate whether conscious states, conscious sensory states in particular, characterized in commonsense terms, really appear the way these theorists claim. I will also consider just how much we can infer from these appearances, if things do in fact appear this way. It is often the case when considering the appearance of conscious states that there is a more moderate folk-psychological reading of these features, one that captures the appearances but does not deliver antimaterialist conclusions, at least not without

substantial further argument. Finding the limits of our commonsense characterization is a crucial step in this debate.

3.1.2 Immediacy

The first important feature to be explained is the seeming immediacy with which we cognitively access our conscious states. When I cognitively access my conscious states from the first-person perspective, I am not aware of any mediating inference or process that lies between the state and my awareness of it. If I consciously look at a red apple, for example, it seems that no inference is required for me to comprehend that I am having a reddish sensory experience. I just know that I am without, as it were, thinking about it. Further, I am unaware of any intermediate process at work when I cognitively access the experience. I do not notice any mechanism that accounts for the awareness, nor do I notice any intervening medium through which I access the experience. It seems that I know of my states in virtue of having them. I apparently do not need to engage in any deliberate process, nor do I notice anything lying between the experience and my awareness of it.

This is different from what occurs in ordinary thought or perception about objects in the world. In perception, for example, it is possible to become aware of the intervening medium--the air or water, say--that lies between me and the object of my awareness. This is not the case in our first-person access to conscious experience. Likewise, in thought, I often must infer from other evidence that the object of my awareness is present. And it is possible to

become aware of my thought process churning away as I come to that conclusion--I can explicitly reconstruct what steps I likely took in getting there. Again, this is not the case in our first-person access to conscious experience. That awareness seems direct and unmediated.

I concur that this is how things seem, from the commonsense point of view. People will generally answer the question "how do you know what you are experiencing?" with a bemused claim that they "just do" in virtue of having the experience. In addition, we generally do not question another's claim about what they are experiencing. We usually take it as given that they know what's going on in their conscious lives and that there is nothing ordinarily threatening such a claim. This, perhaps, rests on the idea that whereas we need to infer what is going on inside them, they know without resorting to inference. I will challenge this explanation of our conversational practices in the next chapter, but for now it is enough to note that this is a common way of characterizing the situation, and that it's not jarring to our ordinary ways of speaking to describe things in this manner. This indicates that it does indeed appear to us, from the first-person perspective characterized folk-psychologically, that we have unmediated cognitive access to our conscious states. Therefore, the appearance of immediate access requires an explanation in any theory of consciousness.

It remains to be seen whether the appearance of immediacy serves to justify claims that we have access to the nature of our conscious states; that is, to whether they are physical or nonphysical, or whether they are independent and indescribable or not. This is a vital move in motivating the antimaterialist

position. It is important to note how such a claim might be defended from a commonsense point of view. One could reason as follows: I am unaware of any mediating inference or process involved in my cognitive access to my conscious states. In ordinary perception and thought about the world, we go wrong when we make an error in inference (when we are misled by the apparent evidence or we reason improperly), when intervening processes go astray (we are tired, confused, drunk, etc.), or when external conditions prevent us from accurate appraisal of the object of our awareness (it's dark, noisy, foggy, etc.). Such intervening elements are not present in cognitive access to consciousness. Therefore, we are accurate in our awareness of our conscious states. From this, it is claimed that if conscious states seem, for example, independent and indescribable, then they really are independent and indescribable. This sort of argument (often unstated) plausibly lies at the root of the claims made by Chalmers, Levine, and Block, surveyed in the preceding chapters. And it does have a commonsense ring to it.

But this claim of special accuracy is not well-supported in folk psychology. It is common enough to find ourselves in situations where we correct others concerning what they are experiencing, and we at times accept such criticisms from others. For example, we may correctly inform a friend that they are jealous, despite their denials. Later, they may come to agree that we were correct. Or we may be told by a doctor that our back hurts, explaining why we are limping. This may come as a surprise; we may be convinced that our feet are hurting, or that we aren't in pain at all. There is nothing incoherent or particularly jarring about these situations

from the commonsense perspective. The appearance of immediacy does not entail accuracy, from a commonsense perspective. Therefore, additional argument or evidence is required to establish the antimaterialist conclusion. But the appearance of immediacy must be explained however one comes down on this issue.4

3.1.3 Independence

The second important feature requiring explanation is the seeming independence possessed by conscious sensory qualities. According to a range of theorists, conscious qualities, like the redness of a red sensation or the painfulness of a pain, seem dissociable in principle from their external causes and from other mental processes. We can easily imagine, it is maintained, everything else that occurs when we perceive red--a red stimulus in good lighting, a clear line of sight, my belief that a red object is present, my various dispositions to behave--but with a different conscious quality occurring, or perhaps no quality at all. This indicates, according to the theorists, that conscious qualities can vary independently from the rest of the world and even from the rest of our mental lives, at least in principle.

There are two distinct issues here. One is how things appear from the first-person perspective, characterized folk-psychologically. Do conscious qualities really appear to be independently variable in this manner? The other is what to make of the alleged easy conceivability of shifting or absent qualities. Why think this shows anything interesting about the nature of conscious states,
4. I will say more in defense of this claim in chapter 5.

or even about their appearance? First, concerning how things appear to us, it is not clear that anything in the appearance of the qualities themselves indicates independence. Qualities are initially accessed as features of objects. Our naïve conception has it that colors, for example, are qualities of mid-sized objects, not that they are qualities of experiences. When we realize that sometimes things are not as they appear, we understand that sensations can occur in the absence of the objects in the world. We come to recognize a distinction between how things appear and how they are, between appearance and reality. But this requires going beyond the way conscious qualities present themselves, and inferring their independence from cases of error and illusion. In addition, in interacting with other people, we may come to wonder if the world appears to them as it appears to us. The puzzlement over how to answer this question may lead one to conclude that there's something odd and isolated about conscious sensations. But again, this is an inference from personal interaction, not something that conscious qualities "wear on their sleeves." Independence isn't part of the appearance of conscious sensory qualities, but a fact inferred about them when we interact with other people and the environment.

But what is the status of this sort of inference? Isn't it a natural move to make, given our makeup and experience? Isn't it, therefore, a part of our folk-psychological conception of consciousness? It does seem natural enough, and many people wonder about spectrum inversion and the like long before they set foot in a philosophy classroom. This indicates that it is plausibly part of our commonsense characterization of consciousness that conscious qualities seem

independent of the world in some sense, and that they are difficult to verify interpersonally. There is a seeming arbitrariness when we think about the connection between conscious qualities and the things they ordinarily attach to in veridical perception. Further, there is a kind of privacy attached to sensation. Whereas there may be a constitutive connection between belief and desire and publicly observable behavior, there is a temptation to think that conscious qualities are defined by how they appear to the subject, by what it feels like for the subject to have them.

I concur that this is how things seem from the commonsense perspective. But this does not entail that this is really how conscious qualities are--that they are really arbitrarily connected to their ordinary objects of perception and the rest of our perceptual processes, or that they really are private and in-principle inaccessible by others. That goes far beyond what is licensed by folk psychology. While many find it intuitive to think that sensations are hard to get at from the outside, they wouldn't think it impossible to do so. This is evident to teachers of introductory courses on the philosophy of mind. Even Chalmers acknowledges that many people fail to grasp the alleged problems posed by conscious sensory states, and have to be taught that there is a substantial worry.5 Many people, that is, find it equally intuitive that a full-blown science of the mind will be able to pin down the nature of sensory qualities. Therefore, we require an argument that leaves the realm of our commonsense view of the mind. Folk psychology is generally silent on the underlying nature of the mind, what it's made of and how it works--it doesn't answer such questions one way or the
5. See Chalmers 1996, chapter 1.

other. While many people may find it intuitive that sensations are hard to get at, they'd be surprised to learn that no method developable in principle by science could ever do so. Folk psychology is neutral regarding this issue, and therefore argument is required to establish this claim.6 We can conclude that a materialist theory of consciousness must explain the ease of the inference to arbitrary and private sensations. It must explain what it is about our access to conscious qualities that makes this move appealing. But what of the easy conceivability of shifted and absent qualities? Does this indicate something further about conscious sensory qualities, something about their underlying nature? Without further argument, it does not. As we saw in the first two chapters, attempts to employ these conceivability intuitions to formulate the problem of consciousness are not convincing. In the cases we considered, there is always an unjustified leap from the conceivability of such scenarios to something stronger. While it is true that a materialist theory of mind cannot allow that conscious qualities vary independently of the mind's functional or physical makeup, such a theory can allow that this is how things appear to us, pretheoretically. This, I am contending, is all the appearance of independence amounts to--that such scenarios are easily imaginable. And I grant that this is a well-entrenched folk-psychological inference from the appearances, one that many find compelling. But, in the absence of further support, such intuitions do not indicate anything about the underlying nature of consciousness. Thus, if a materialist theory can explain why we find the independence inference compelling, and why it remains a strong pull even in the
6. Again, I will have more to say about this issue in chapter 5.

face of scientific explanation, it will have gone a long way to removing the conceptual barriers to a satisfying explanation of consciousness.

3.1.4 Indescribability

The third feature of conscious states requiring explanation is almost by definition the most difficult to characterize. According to many theorists, including both those criticizing materialism and those endorsing the phenomenal concepts approach, conscious qualities are in an important sense indescribable. When we reflect on conscious experience, we become aware of qualities that defy informative description, qualities that cannot, it seems, be captured in illuminating terms. The claim of indescribability is widespread, and generally offered as an obvious pretheoretic "description" of the data. When we cognitively access our conscious states from the first-person perspective, we are aware of qualities that can't be captured in words, or at least can't be captured in words that would be comprehensible to people lacking the experiences themselves. In what follows, I will try to get clearer about what this claim amounts to, and to what extent it is part of our folk psychology. If I attempt to describe the experience of drinking beer to someone who has never had that pleasure, I can use words like "bitter" or "fruity" or "hoppy" to describe my taste experiences. But if I'm asked what bitter tastes like, I am pretty much at a loss. It seems that my words have bottomed out, and I can find nothing more to say beyond the fact that the taste is not sweet or salty or sour. But if the questioner asks what those flavors taste like, I will have nothing left to say. It

seems that we lack a means of communicating the qualities of our most basic sensory experiences. But we seem to have a full understanding of these qualities; we have no trouble recognizing, remembering, or imagining such tastes. Still, it seems that we can't inform another about the real core of the experience--what the taste experience is like for us--unless they themselves have had the same, or very similar, experiences.7 I believe that this is indeed a well-entrenched bit of folk psychology, and its presence and strength require an explanation. But the claim is more slippery than it first appears. For any given object that we know of, there is a limit to how much we can say about it. Take the table I am writing upon. It contains a huge number of atoms, each in some active state, which collectively account for the observable features of the table. It is impossible for me or anyone else to fully describe the table at this level. Thus, even the table is in a sense indescribable, as is everything else we know of. So what is special about sensory qualities? Is it more than just this ordinary kind of indescribability? Obviously, those who endorse this feature of experience think that it is. They contend that in an idealized setting, a thinker with endless time and memory could know all about the table.8 But that is not the case with sensory qualities. There is something about them that no description could capture. However, the reliability of folk-psychological intuition when we move beyond ordinary circumstances is suspect. Folk psychology is geared to deal

7. See Jackson 1982; Chalmers 1996; and Graham and Horgan 2000, for example.

8. At least within the limits set by quantum mechanics. This complication will not be relevant in what follows.

with the ordinary situations encountered by people, not to give clear answers about distant possible cases. The kinds of complex descriptions that may be required to fully capture our sensory qualities are not explicitly available in ordinary circumstances. And, indeed, the cases offered by antimaterialists to establish indescribability involve idealized thinkers, with unlimited time and memory, reflecting on a completed science of the mind. Such scenarios clearly go beyond the ordinary circumstances of folk psychology. Our intuitions about the possibility of fully describing conscious sensations, therefore, do not provide a reliable guide to the limits of scientific theory. That is not to say that we can't provide reason to extend our folk intuitions out to the limit. But, to repeat, this requires argument, not just the invocation of ordinary appearances. I will offer more to shore up this claim in chapter 5 when I address the "knowledge argument" in detail. But for current purposes, it is the appearance of indescribability that requires explanation, not actual indescribability. It may be that the best way to explain this appearance is to hold that it accurately portrays the nature of conscious qualities. But it may not. Prior to attempting to build a theoretical model of conscious sensations and our cognitive access to them, we cannot know which way is correct. In any event, there is certainly a well-entrenched folk intuition that requires explanation here. It can be summed up in the slogan "A blind person can never know what it's like to see color." Most folk will readily assent to this, at least at first glance. And many will maintain it even in the face of countervailing folk-psychological intuitions, intuitions sparked by reflection on

other sorts of cases. It has a strong pull on our commonsense characterization of the conscious mind.

3.1.5 Are These Really the Appearances that Create the Problem?

We are left with three features of conscious experience requiring explanation: immediacy, independence, and indescribability. These three features, I contend, ultimately underwrite the attempts at characterizing a problem of consciousness surveyed in the first two chapters. According to Chalmers, we have a special "phenomenal" concept of consciousness, one picked out in terms of what it's like for the subject in conscious experience. But why should a characterization in terms of what it's like for the subject block a materialist theory of consciousness? Chalmers holds that this kind of concept picks out features varying independently of any functionally or even physically specified process. This entails a failure of location in a materialist ontology--the absence of a functional or physical characterization prevents the possibility of a reductive explanation, and this demonstrates a failure of logical supervenience. At root here is a claim about independence. If sensory qualities are not independent in the way Chalmers claims, his antimaterialist conclusion does not follow. So, if we can give an explanation of the appearance of independence without endorsing actual independence, we can avoid Chalmers's conclusion. Levine argues that we have substantive and determinate first-person access to our conscious states and that this reveals qualities inexplicable in a materialist theory. Here the key issue is indescribability--if we could describe the qualities we access in functional

or physical terms, there would be no gap. But substantive and determinate access reveals indescribable properties; we are left with open questions and failed deductions.9 But again, if we reject the accuracy of first-person access on this point and offer an alternative explanation of the appearance of indescribability, the gap can be avoided. Finally, Block relies on both features to make his point. Indescribability is at the heart of p-consciousness: we can only "point" to it; it can't be informatively described. But the thought experiments employed to establish his conceptual distinction--the conceivability of p-consciousness without a-consciousness; the conceivability of scenarios where representational content shifts, but p-consciousness does not--largely turn on the intuition of independence. We are asked to imagine scenarios where causal role, functional role, or intentional representation is present, but p-consciousness is lacking (or vice versa). It follows that if we can explain these appearances without endorsing actual indescribability and independence, Block's problematic notion can't get off the ground. And (often implicitly) at work in all three attempts to establish a problem of consciousness is the intuition of immediacy. Because of the seemingly direct cognitive access we have to our conscious states, there appears to be no room for error in our first-person conception of consciousness. Or at least there is a presumption that things are accurately accessed in the first-person perspective, barring extraordinary circumstances. This "presumption of innocence" underwrites claims of actual independence and indescribability. If our cognitive access is accurate and we seem to access independent, indescribable qualities
9. See Levine 2001, pp. 96-104.

of consciousness, then we really do so, and a materialist theory must explain how a brain could have such features. But if we can explain the appearance of immediacy without licensing full-blown accuracy, it is open to us to hold that we are unaware of what is really going on behind the scenes in conscious experience. Things may appear immediate, independent, and indescribable, but in reality be dependent, fully describable brain processes, accessed in a mediated fashion. But will an explanation of these appearances really suffice to clear the way for a materialist theory of consciousness? One might argue that this is just to avoid the issue, to solve the problem by sticking one's head in the sand and wishing it weren't there.10 But this is not the case. In trying to discover just what the supposed problem of consciousness is, we've found that all the attempts to characterize the issue run from the presence of odd appearances, accessed from the first-person perspective, to the conclusion that there is a special problem, one different in kind from those previously tackled by science. It follows that if we can explain the appearances without endorsing the real presence of these problematic features, there is no good reason to accept that there's a special puzzle present. That is not to say that we need only explain the beliefs that people have about consciousness, and we are done. Rather, we need to explain why the conscious mind seems the way it does to us in first-person reflection. If this involves belief, so be it, but it is not obvious that this is all that's required. To the extent that we can capture just what the antimaterialist says is present in our commonsense conception of consciousness, we have the best chance of
10. See again chapter 1, section 1.3.

developing a satisfying explanatory theory. So long as nothing is accepted as data that defies explanation by its very characterization, we should be in a position to develop this sort of theory. In part 2 of this chapter, I will present a popular attempt to explain these appearances in materialist terms. This view, termed the "phenomenal concepts" approach to explaining first-person access, invokes directly referential mechanisms to explain our cognitive access to consciousness. The view has been developed and championed by a range of theorists, notably Brian Loar, David Papineau, and John Perry. In what follows, I'll present the approach in general, and then I'll present the specific views of Loar, Papineau, and Perry. I'll close by arguing that despite this detailed explication and defense, the approach ultimately fails to explain, in materialist terms, why conscious states appear as they do from the first-person perspective.

3.2 The Phenomenal Concepts Approach

3.2.1 What is the Approach?

Phenomenal concepts are posited to explain how conscious mental qualities play a cognitive role in our mental lives--how we think about conscious qualities from the first-person perspective. They also allegedly explain, inter alia, in a way consistent with materialism, the intuitions that underwrite antimaterialist arguments against the possibility of an explanation of consciousness. According to the phenomenal concepts (p-concepts) approach, we cognitively access our conscious sensory qualities by using cognitive mechanisms that refer directly.

The idea of direct reference is inspired by work in the philosophy of language. Putnam, Kripke, and others argue that certain terms, among them names, indexicals, and natural kind terms, refer not by way of a cluster of descriptions, but by being causally connected to their referents.11 An important feature of directly referential terms is that the user of the term need not know what the referent of the term in fact is. We might be causally connected to a referent, but not know to what we are causally linked. Later on, perhaps, science may discover what our terms refer to. But we can cognitively interact with the object of reference prior to this discovery; we can think about it, recognize it among other things, and speculate on its nature. In the case of consciousness, this has obvious benefits in avoiding antimaterialist conclusions based on first-person access: we might cognitively interact with and refer to conscious states, but not know that their underlying nature is physical. The conceivability arguments that drive Chalmers's claims, for example, would be disarmed. Furthermore, the approach seems to offer a good explanation of the three features of conscious experience explicated above. Directly referential mechanisms purportedly explain why our access seems immediate: we refer without employing any mediating theory or description. Further, because we refer in the absence of theory or description, conscious states might seem disconnected from other processes: we don't refer to them by referring to anything else, so they don't seem essentially connected to anything else. And conscious qualities seem indescribable because we access them nondescriptively; if no description is involved in conscious access, an explicit
11. Putnam 1975; Kripke 1980; Devitt 1981.

description of the referent might seem to miss crucial features we are aware of. Thus, p-concepts appear to offer a good route both to explaining our cognitive access to our conscious states and to defusing antimaterialist arguments. All versions of the p-concepts approach, as I'll characterize it here, share two important features. One is the employment of a nondescriptive element--a feature of the p-concept that allows it to pick out our conscious states nondescriptively. Different theorists unpack this element in different ways, as we'll see, but all of them agree that somehow p-concepts mentally "point" to their referents rather than describing them. The second feature is that conscious sensory qualities themselves play a constitutive role in p-concepts: they somehow make up or are necessarily involved in p-concepts. Again, there are a variety of ways to unpack this claim, but the central idea is that p-concepts themselves incorporate sensory qualities and use them in the act of reference. A concept picking out our conscious states that fails to incorporate sensory qualities in this way is not a p-concept. There is a preliminary worry that must be addressed before turning to specific examples of the approach. It might be contested that the p-concepts approach is not intended to explain the three features I've presented as the necessary target of a theory of first-person access. This would circumvent my criticisms of the approach, which are rooted in the claim that immediacy, independence, and indescribability must be explained. However, all three of the theorists whose work I will consider in detail--Brian Loar, David Papineau, and John Perry--endorse the claim that we have a distinct conception of conscious

mental states in terms of what it's like for subjects to have them. And this, I've argued above, in effect endorses the presence of the three features. Brian Loar, for example, accepts a characterization in terms of what it's like for the subject both for the sake of argument against antimaterialists and because he thinks it best captures our experience. He writes: Antiphysicalist arguments and intuitions take off from a sound intuition about concepts. Phenomenal concepts are conceptually irreducible in this sense: they neither a priori imply, nor are implied by, physical-functional concepts. Although that is denied by analytic functionalists ..., many other physicalists, including me, find it intuitively appealing (Loar 1997, 597). "Analytic functionalists" contend that our folk-psychological concepts can be fully characterized in terms of functional role, broadly speaking, including our folk-psychological concept of consciousness. The alternative characterization endorsed by Loar therefore defies characterization in those terms--that is, it cannot be unpacked in causal, functional, or intentional terms. The role of "a priori" here is only to stress that it is a matter of figuring out our "pretheoretic" intuitions about our commonsense concepts--there need be no commitment to armchair analyses in terms of metaphysically necessary features.12 But even so, Loar is claiming that we have a concept distinct from causal, functional, or intentional notions. What is left is a concept characterized in terms of the conscious "feel" of sensory states: what it is like for us to have them. But, as argued above, what makes this sort of characterization seemingly problematic for

12. See chapter 1, section 1.2.

the materialist is that the "feel" of a conscious sensory state seems directly accessible, and it seems independent and indescribable. In similar fashion, Papineau endorses a conception of consciousness solely in terms of what it's like for the subject and rejects analytic functional analyses of conscious states in terms of causal roles (2002, 40). He writes: When we use phenomenal concepts, we think of mental properties, not as items in the material world, but in terms of what they are like. Consider what happens when the dentist's drill slips and hits the nerve in your tooth. You can think of this materially, in terms of nerve messages, brain activity, bodily flinching, facial grimaces, and so on. Or you can think of it in terms of what it would be like, of how it would feel if it happened to you (Papineau 2002, 48, emphasis in original). And again, the features of a conception in terms of what it is like for us that create alleged problems are immediacy, independence, and indescribability. What's more, Papineau explicitly endorses Block's "inflationist" characterization of his view, as one committed to a concept of consciousness not analyzable in causal, functional, or intentional terms.13 As we've seen, this sort of concept is underwritten by the three features of appearance I've cataloged. Like Loar and Papineau, Perry rejects the position of analytic functionalists, who claim that our first-person folk-psychological concept of sensory states can be fully characterized in causal, functional, or intentional terms. He holds that we possess a concept that picks out our sensory states in terms of their "subjective

13. Papineau 2002, 49-50.

characters"--again unpacked in terms of what it is like for the subject to have them. He writes, [A] state H... has a causal role, a certain syndrome of typical causes and effects, to use David Lewis's terminology. This is one aspect of H. And the state H also has a certain subjective character. This is another aspect of state H. There is no reason whatsoever apparent to common sense to suppose that the subjective character of H can be identified with this other aspect, H's typical syndrome of causes and effects. There is no reason that I can see that the physicalists should think that this is so (Perry 2001, 40). Perry offers this as a datum to be explained, but he does not say more to support it, except to recommend the reader to a recent work of Block's. He writes that Those who are seriously tempted by the idea that the properties of our brain states of which we are aware when we have an orgasm or taste a chocolate chip cookie are not only properties that have a causal role and serve some function, but simply are the properties of having that role or playing that function, should stop at this point and read "Mental Paint and Mental Latex" [Block 1995b] or its descendent "Mental Paint" [Block 2002] (Perry 2001, 39). But Block's papers simply reiterate the main arguments given in his 1995 and 2001 papers surveyed in chapter 2. Nothing new is on offer, as far as making the case for his p-consciousness. In chapter 2, I argued that Block fails to establish anything beyond the fact that it appears that conscious states cannot

be analyzed in causal, functional, or intentional terms. So long as we explain this appearance--captured in my notions of immediacy, independence, and indescribability--we have addressed what needs explaining. While I believe that Loar, Papineau, and Perry would accept my characterization of what's troublesome about consciousness characterized solely in terms of what it's like for subjects, it's clear that they would go beyond what I am willing to accept. In rejecting so-called analytic functionalism, they take themselves to be endorsing a stronger claim about the appearances. They hold that there are intrinsic features of experience requiring explanation, features that move beyond my claim that it merely seems that states are independent and indescribable. Indeed, they see it as a virtue of the phenomenal concepts approach that it can apparently accept the stronger claim, a claim seemingly more in line with the antimaterialist's. But do they offer any new argument that this is anything more than mere appearance? I cannot find such an argument. In general, the theorists presented here all hold that their claim is intuitive--and thus plausibly coincides with our folk-psychological characterization of the data--and that it is a theoretical advantage to be able to accept as much as one can of one's opponent's position. So we have no new reason to enrich our characterization of the appearances. However, there is still the matter of the alleged theoretical advantage garnered by taking on the data as the antimaterialist sees it. If the tack is successful, won't it provide that much more ammo to shoot down recalcitrant antimaterialist intuitions? It might, if the phenomenal concepts approach can make good on its

promises. However, I will argue that under scrutiny the view falls short, and what's more, it falls short on explaining just what it needs to explain most: the way things appear from the first-person perspective. That is the burden of the following two sections--the first lays out the phenomenal concepts approach in detail and the one after presents my counterarguments. However, I do not concede that by accepting the antimaterialist's characterization, we really are in a better theoretical position. It is a mistake to take on more of an explanatory burden than needed. Accepting an intrinsic, indescribable feature of experience when there is no good reason to do so only canonizes, in the very data to be explained, the intuitions causing all the trouble. If we have a good explanation of why things seemed as they did prior to theorizing, nothing more is required. Accepting the presence of intrinsic features moves beyond the folk-psychological appearances and makes a strong claim about the nature of our conscious states. If one thinks the nature of the mind is fully on display in first-person reflection, this will seem reasonable. But we have already seen that this Cartesian view lacks support, and presents a first-person veto on a theory of consciousness which a materialist should not accept. But in any event, my arguments against the phenomenal concepts approach are independent of this point. The view fails to explain first-person access on its own grounds. This alone is reason to reject it, though it also suggests we should be wary of embracing the antimaterialist's intuitions at face value. This provides us with a first pass at the p-concepts approach. In the next section, I'll present the views of Loar, Papineau, and Perry in some detail,

highlighting how each unpacks the two central elements of the approach. Then, in the final section, I'll present my criticisms of the view.

3.2.2 Loar's Recognitional Concepts of Experience

In his seminal 1990 paper "Phenomenal States" (reprinted with revisions in 1997), Brian Loar elucidates the classic version of the phenomenal concepts approach. He contends that p-concepts are "type demonstratives" formed in response to one's own experience. They refer by picking out a sensation as "that type of sensation." They are demonstratives because they refer nondescriptively by employing the mental equivalent of the term "that"; they are type demonstratives because they can be used to successfully refer on multiple occasions to sensations of the same type. As noted above, Loar contends that these type demonstratives are not conceptually reducible to "physical-functional concepts," concepts that refer to sensations by way of physical or functional description. In addition, p-concepts, according to Loar, possess a special feature that distinguishes them from other examples of type demonstratives. P-concepts, on his view, actively involve, in the process of referring, the sensory states they refer to. He writes, We might say that a phenomenal concept has as its mode of presentation the very phenomenal quality that it picks out. We might also say that phenomenal concepts have "token modes of presentation" that are noncontingently tied to the phenomenal qualities to which those concepts

point: particular cramp feelings and images can focus one's conception of the phenomenal quality of cramp feeling (Loar 1997, 604). That is, sensory qualities serve as their own mode of presentation in p-concepts. Sensory qualities are somehow intimately involved in our cognitive access to sensation; their active presence explains why things seem as they do in first-person access. If we do not possess the requisite sensory qualities, we cannot form and use p-concepts. Because p-concepts refer in this special way, when we compare sensory qualities accessed from the first-person perspective with the same qualities as characterized by physical-functional theory, we feel something is left out. But according to Loar this does not entail that we actually are referring to two separate properties. We have a dualism of concepts, not a dualism of properties. He writes, A phenomenal concept... bears a phenomenological affinity to a phenomenal state that neither state bears to the entertaining of a physical-theoretical concept. When we then bring phenomenal and physical-theoretical concepts together in our philosophical ruminations, those cognitive states are phenomenologically so different that the illusion may be created that their references must be different (Loar 1997, 605). Because they refer directly and constitutively involve the qualities they refer to, p-concepts purportedly account for the three crucial features of conscious experience. Since p-concepts refer directly, subjects are not aware of any mediating inference or process when they cognitively access their conscious

states. We simply "point" to our phenomenal states, and the conscious quality we are aware of is actively involved in the act of pointing. This explains immediacy. Further, because p-concepts refer demonstratively, they need not employ any descriptive element. We refer to a quality as "that quality," not as a quality described in a particular way. This purportedly explains both the appearance of independence and the appearance of indescribability. A quality accessed as "that quality," where we require no other descriptive information to pick it out, will seem from the first-person point of view to be disconnected from anything else in our mental lives. The quality will therefore seem independent. Further, because we refer to the quality as "that quality," we require no descriptive terms for the quality we access. By using a demonstrative like "that" we can refer to as rich or impoverished a quality as we like. And because the quality itself is allegedly involved in the act of reference, we need not redescribe it in any other terms. Its presence alone, when accessed by way of a type demonstrative, is enough for us to be cognitively aware of the quality. It is present in all its indescribable richness and we simply "point" to it. This explains the appearance of indescribability. Adding detail to his model, Loar contends that p-concepts are a special case of a wider class of concepts: so-called "recognitional" concepts. Recognitional concepts are "type demonstratives... grounded in dispositions to classify, by way of perceptual discriminations, certain objects, events, situations" (Loar 1997, 600). We can possess and employ recognitional concepts even if

we have no reliable descriptive information about a referent. Loar provides an example of a recognitional concept to make his position clear: Suppose you go into the California desert and spot a succulent never seen before. You become adept at recognizing instances, and gain a recognitional command of their kind. These dispositions are typically linked with capacities to form images, whose conceptual role seems to be to focus thoughts about an identifiable kind in the absence of currently perceived instances. An image is presumably 'of' a given kind by virtue of both past recognitions and current dispositions (Loar 1997, 600-601). When you have these abilities, you thereby have a recognitional concept of the succulent in question. Recognitional concepts are marked by a number of distinct features. According to Loar, they are independent of any technical or theoretical learning; they need not involve any comparison to recalled instances of the referent; they do not depend on any consciously accessible analysis into component features; and they are "perspectival" in the sense that they may fail to pick out the referent when it is accessed from a new perceptual perspective (Loar 1997, 601). According to Loar, all of these features help to explain why we fail to find compelling the identification of sensations accessed from the first-person perspective and sensations characterized by physical-functional theory. Because we don't consciously apply p-concepts in virtue of learned theory, sample matching, or component analysis, their referents will seem independent and indescribable. Further, because they are constitutively tied to a

perceptual perspective, they will seem subjective and thus distant from perspective-independent concepts employed in scientific theory. All in all, recognitional concepts display features that should lead us to expect antimaterialist intuitions of conceptual irreducibility and independence. But it's clear that recognitional concepts need not imply anything about the ontological status of the referent, even when we later form a theoretical concept that picks out the same object. We do not accept a dualism of succulents; likewise, we should not accept a dualism of conscious sensations.

But there are important differences between the ordinary sort of recognitional concept described by Loar and p-concepts. The key divergence lies in the "phenomenal mode of presentation" of the p-concept. As noted above, Loar contends that sensory qualities are intimately intertwined with p-concepts. Loar also puts this claim in terms of p-concepts possessing "noncontingent" modes of presentation. The key idea is that we pick out sensory qualities in terms of how they feel to us in conscious experience--in terms of what it is like for us to have them. The feel of conscious sensory states is a necessary component of a p-concept. If that feel is lacking, the concept is not a p-concept. Loar notes that the quality need not be generated by perceptual input; self-generated sensory images can also serve as the modes of presentation of p-concepts.

So, according to Loar, p-concepts are a kind of type demonstrative: recognitional concepts. P-concepts are marked by two crucial features. First is the nondescriptive element. A p-concept in effect refers to a sensory quality as

128 "that quality." They are not conceptually reducible to physical-functionaltheoretical concepts, or to any sort of descriptive concept. They refer directly. Second is the role that sensory qualities themselves play in p-concepts. Pconcepts use sensory qualities as their own modes of presentation. Further, they refer by way of noncontingent modes of presentation. They refer by employing the conscious feel of the sensation to which they refer. When taken together, these features allegedly account for our cognitive access to our conscious sensory states, and they inter alia explain the three features of the appearances underwriting the intuitions driving antimaterialist arguments. Because p-concepts employ the very sensory qualities they are about, our access is immediate. And because they do not describe their referents, but instead point to them demonstratively, sensory qualities appear independent and indescribable.

3.2.3 Papineau's Quotation Model
In his 2002 book Thinking About Consciousness, David Papineau presents a model of p-concepts similar to Loar's. He also takes as his starting insight the idea that some terms refer directly, and that this explains why antimaterialist intuitions are so stubborn in the face of materialist theory. Like Loar, he attributes these intuitions to a dualism of concepts, while denying any problematic dualism of properties. And his model of p-concepts invokes both a nondescriptive element and the active referential participation of sensory qualities. However, instead of recognitional concepts, Papineau employs quotation as a suggestive analogy for p-concepts. But the effect, and the final view, is largely the same. In addition,

Papineau stresses issues about identity and explanation that may add support to the position, in a manner reminiscent of Block's discussion presented in chapter 2.

Papineau terms the intuition that underwrites antimaterialist arguments the "intuition of distinctness." He contends that because of the active employment of sensory qualities in first-person access, we have a conception of conscious qualities that seems radically distinct from those qualities as described in scientific theory. This conception is based in what it is like for us to have conscious sensory experiences, which carries with it a commitment to immediacy, independence, and indescribability.

Whereas Loar uses recognitional concepts as a model, Papineau invokes quotation to help flesh out his view. He holds that p-concepts in some sense "bracket" the conscious sensory states they are about, in a manner similar to the way quotes bracket words and allow us to mention, instead of use, them. He writes,

[P]henomenal concepts are compound terms, formed by entering some state of perceptual classification or re-creation into the frame provided by a general experience operator 'the experience: - - -'. For example, we might apply this experience operator to a state of visually classifying something as red, and thereby form a term which refers to the phenomenal experience of seeing something red. Such terms will have a

sort of self-referential structure. Very roughly speaking, we refer to a certain experience by producing an example of it (Papineau 2002, 116).14

The "general experience operator 'the experience: - - -'" works in a manner roughly analogous to quotation. He continues,

The referring term incorporates the things referred to, and thereby forms a compound which refers to that thing. Thus, ordinary quotation marks can be viewed as forming a frame, which, when filled by a word, yields a term for that word. Similarly, my phenomenal concepts involve a frame, which I have represented as 'the experience: - - -'; and, when this frame is filled by an experience, the whole then refers to that experience (Papineau 2002, 117).

Papineau endorses a teleo-biological theory of reference to explain the reference relation. Our concepts refer to the things they have an evolved biological function to refer to. The evolutionary success of our ancestors' employment of these structures pins down what they refer to.

Papineau notes that he views p-concepts as special cases of indexical constructions. However, they differ from other indexical constructions in that we cannot use indexicals alone to refer to experiences. In the first place, a simple indexical construction like 'this feeling' will fail to "specify which aspect of our current overall state of consciousness is being referred to" (Papineau 2002, 123). Thus, more is needed to explain how we refer to the
14 An act of perceptual classification involves categorizing a sensed object under concepts. For example, in perceiving a red apple, we token a red sensation and categorize the sensed object as an apple. The intentional content involved in an act of perceptual classification will not be crucial to explaining the three crucial features of first-person access, so I will ignore this additional feature of Papineau's account in what follows.

specific qualities we cognitively access. And in addition, Papineau holds that p-concepts "can only be formed using exemplars from the thinker's own mind" (Ibid). If this were not the case, according to Papineau, we could form p-concepts by pointing to exemplars in the minds of others, even if we ourselves had never had an experience with the quality at issue. This allegedly makes it too easy for Mary to learn about conscious sensory experiences inside her black-and-white room.15 She could refer to an experience of red as "that experience (the one Dr. Jones is having outside the room)." Again, Papineau claims that the analogy with quotation is instructive: quotation requires that the word quoted be present between the quotation marks. Likewise, p-concepts require the presence of the accessed quality within the mind of the thinker.

What does this requirement of presence within the head of the thinker amount to? It's clear that the sensory qualities themselves must play an active role in p-concepts. And, indeed, like Loar, Papineau endorses the claim that sensory qualities serve as their own mode of presentation in p-concepts (see Papineau 2002, section 4.3). Otherwise, nothing would stop us from forming p-concepts even though we've never had experiences with the quality in question. And this would be to embrace a "deflationist" position on first-person access, one that holds first-person access is not explained by special concepts restricted to those who've had the requisite experiences (like Mary). So the experience itself must be present, or at least a recalled simulacrum of such an experience, one that appropriately resembles experience proper. Further, given the general nature of the 'the experience: - - -' operator, nothing would account for the
15 See chapter 1, section 1.2.2. See also chapter 5, section 5.3.

particular aspects of experience we can cognitively access. We'd be in no better position than someone using a "bare" demonstrative, blindly pointing at something and thereby picking it out. Thus, the sensory quality referred to must be present and active in p-concepts.

An additional feature of Papineau's account bears mentioning. He argues that a materialist ought to hold that conscious sensory states are identical to brain states. This identity is justified on the basis of parsimony and explanatory power. Further, it provides the best explanation of the causal efficacy of the mental, an issue causing problems for the antimaterialist. But if we are asked how it could be that conscious sensory states just are brain states, we need say no more. Sensory states just are brain states; a thing is what it is. This is like asking how Cicero could be Tully. There is no explanation beyond stating the fact that Cicero is identical to Tully. I discussed this issue briefly in chapter 2, when I surveyed Levine's and Block's respective positions. For the moment, I simply note Papineau's invocation of the idea; it will come into play later when I present my criticisms of the p-concepts approach.

We are left with the following model. P-concepts are terms that refer directly, where direct reference simply means "not by description."16 They refer in a manner analogous to the way quotations refer to the words they quote: the word is present inside the quotes, and likewise, the state referred to occurs inside the p-concept. P-concepts have the structure 'the experience: - - -', where an act of perceptual classification, or an act of imaginative re-creation, fills the frame. When we fill a p-concept frame in this way, we thereby refer to some
16 See Papineau 2002, 87, footnote 6.

aspect of our current experience. Reference in general on this view is explained by teleo-biological function. Again, we see that p-concepts possess a nondescriptive element: p-concepts "point" to the experience they bracket. And again, we see that the sensory quality cognitively accessed plays a noncontingent role in the act of reference. Sensory qualities are a necessary component of p-concepts, explaining both their distinctly first-personal nature and how we refer to particular aspects of our experience.

Papineau's view is therefore very close to Loar's; indeed, one might wonder what the real difference could be between recognitional concepts of experience that use sensations as their own modes of presentation and a "general experience operator" that requires the presence of the sensation referred to in its referential operations. In section 3.2.5, I will argue that there is nothing of significance in this distinction, at least when it comes to the objections I raise for the p-concepts approach. But first, I will present Perry's version of the view, which does differ in ways that may matter.

3.2.4 Perry's Humean Ideas Perry's version of p-concepts is somewhat different from that of Loar or Papineau. He does not hold that sensory qualities serve as their own modes of presentation in p-concepts; rather, he holds that p-concepts involve what he terms "Humean ideas" of sensory qualities.17 A Humean idea is a "faint copy" of a sensation. Perry writes,

17 See Perry 2001, chapter 2; see also Perry 2004a, 179f.

Consider my experience-based concept of pain--the sort that people who have had pains and remember them have. At the core of this concept are my memories of pain, acquired from earlier experiences of pain--my Humean idea of pain, that in some uncanny way resemble the sensation itself. There is a lot more to this concept, including the word "pain," and lots of beliefs about what causes pain of various types, what one can do to relieve pain, and so forth. But at the core are these memories of experiences of pain (Perry 2004a, 179).

Humean ideas are phenomenal--there is something it is like for us to cognitively access Humean ideas, whereas, according to Perry, there is nothing it is like for us to access theoretical concepts of mental states, like those invoked in materialist theory. So although he rejects the idea that sensory qualities themselves are incorporated into p-concepts, his view maintains the crucial constitutive phenomenal element indicative of the p-concepts approach.

But what of the nondescriptive element characteristic of the p-concepts approach? Though Perry is widely known for his important treatment of indexical terms, unlike Loar and Papineau he does not appeal to direct reference to model p-concepts. Instead, he appeals to the nature of our conscious states themselves to account for our nondescriptive access. He writes,

Having an experience, that is, merely being in a state that has a subjective character, makes the experience epistemically accessible to us. But this is not because it is causally upstream from our sensations or causally downstream from our intentions (Perry 2001, 48).

He continues, "[A] brain state that is known by inner attention is not known by causing a sensation, but by being one. We can be aware of our sensation because we have it; we are aware of it by attending to it. No intermediary sensation is required" (Perry 2001, 206). And, finally,

...Our experiential concept of pain is direct, in that we do not experience pain by experiencing some other sensation caused by pain; pain does not have appearances from which we infer it as the most likely or typical cause (Perry 2004a, 180).

Sensations like pain are inherently accessible; being in them involves a kind of access to them, an awareness of them. And this awareness does not require any sort of re-representation, either by "inner perception" or by descriptive thought. It is part of their nature that we can cognitively access them simply by attending to them, rather than by representing them.

Perry does not offer a theoretical explanation of this feature of conscious states; rather, he contends that there is no good reason not to hold that it's a physical feature, given the explanatory power and ontological simplicity of materialism. He writes, "I have no account of ["what it's like" properties] at all. I acknowledge that there are such properties, and I argue that we have no good reason to suppose they are not physical" (Perry 2004b, 225).

Humean ideas share this crucial feature with the sensations they resemble: there is something it is like to be in them, and therefore, they are epistemically accessible. This is

an aspect of their "uncanny resemblance" to sensations. So, we do not need to describe our sensations to cognitively access what it's like for us to have them. We token p-concepts containing Humean ideas, and in virtue of their nature, the Humean ideas allow us to think about, in an immediate way, the sensations they resemble. This explains the nondescriptive element indicative of p-concepts.

Perry offers a useful analogy to clarify his model of the mind. He suggests that we can think of cognition as involving "file folders" containing information. These file folders can contain a wide range of information, including Humean ideas derived from sensory experience, commonsense descriptions of causal roles, or "detached" theoretical concepts describing the world in context-independent terms. P-concepts are file folders containing Humean ideas, though, as Perry notes, they usually also contain, among other things, our folk-psychological conceptions of the ordinary causes and effects of mental states. But in every case, in order to be a p-concept, a file folder must contain a "phenomenal core"--that is, a Humean idea. The Humean idea itself is inherently accessible; when it is present in an active file folder, we cognitively access a conscious sensation.

There is another feature of Perry's model that requires attention. Perry invokes a particular sort of intentional content to explain certain epistemological features of p-concepts relevant to disarming antimaterialist arguments. According to Perry, the content of p-concepts is not exhausted by their "subject matter" content, content fully given by laying out the contribution of the concept to a thought's truth conditions. Instead, p-concepts, like a range of other concepts,

possess what Perry terms "reflexive content," content determined by features of the thought itself.18 Many other types of concepts involve reflexive content; the clearest examples are indexicals like "I" or the label that serves to locate a subject on a map. Perry holds that we can know all the subject matter content associated with a thought, but still fail to recognize a priori its reflexive content. We might know descriptively all about a person, and still not know that I am that person. The knowledge I acquire when I learn that I am that person is cashed out in terms of new connections between thoughts. I already knew about a person writing a dissertation on consciousness, living in Brooklyn, etc., and I already knew that "I" refers to me. What I learn is that my concept of the dissertation writer and my concept "I" refer to the same individual. This is not new information about the world; rather, it is new information about my thoughts themselves. Further, it is not new information about the truth-makers of my concepts; rather, it is information about the way my concepts connect. It is new reflexive content.

The notion of reflexive content allegedly helps Perry to explain why there appears to be an ontological gap between the mind as explained in materialist theory and the mind as accessed from the first-person point of view. Even though we pick out the same thing in both cases, we may not know that our phenomenal concepts refer to the same things as our physical-theoretical concepts. But it does not follow that there is an ontological gap between the referents of the concepts. Just as I may fail to recognize that a complex description of me indeed picks out me, or that a description of my location may
18 Perry 2001, 21-22, 103-144.

fail to inform me where I am, I may fail to realize that physical-theoretical concepts pick out the same things as my p-concepts.

Though Perry's view is more complex than the other two, the final effect is largely the same, for present purposes. We are left with the following picture. P-concepts, according to Perry, are mental "file folders" containing a phenomenal core made up of a Humean idea faintly resembling the sensations they are about. P-concepts are not, at their core, descriptive--they pick out their referents by resembling them, and the Humean ideas they contain are by their nature directly accessible. The presence of a Humean idea in an active file folder allows for direct cognitive access to our conscious sensations. The differences between Perry's view and Loar's and Papineau's will not matter much in what follows. What is crucial is that there is a nondescriptive element--that is, that p-concepts refer nondescriptively--and that p-concepts contain a phenomenal element that plays an active role in making us aware of our conscious sensory states.

In the next section, I will present my criticisms of the p-concepts approach. I will argue that the approach fails to explain the three features of first-person access spelled out above; indeed, it fails to explain why anything appears to us at all. At best, the view can only appeal to the inherent nature of conscious states to account for the appearances. This fails to bridge the explanatory gap. And while it may be argued that this doesn't refute materialism, it fails to undermine the root intuitions motivating antimaterialists. If we can find a model that does so, it is to be favored.


3.3 Criticisms of the P-concepts Approach
All of the models of p-concepts surveyed are motivated in part by appeal to the linguistic phenomenon of direct reference. Kripke, Putnam, and others argue that some of our terms refer not by way of a cluster of property-specifying descriptions, but simply by being appropriately caused by their referents. The paradigm cases of directly-referential terms are indexicals, demonstratives, and natural kind terms. Because these terms can be used without knowledge of the underlying nature of the referent, they seem useful for a defense of materialism--a materialist can claim that we directly pick out our conscious states without knowing that their underlying nature is physical. Further, because we cannot determine a priori the reference of our directly referential concepts, we can easily conceive of zombies, inversions, and the like. In addition, we seem to have a good explanation of why there is something that Mary does not know in her black-and-white room. She knows about mental states under theoretical description, but does not know of them by way of directly-referential concepts. Still, this does not entail that there is some special nonphysical information Mary is lacking. She simply cannot move a priori from theoretical descriptions to directly-referential p-concepts. Thus, the knowledge argument is defused.

But do directly-referential terms really offer a good explanation of our first-person access? That is, do they explain how we are aware, in a seemingly immediate manner, of states apparently possessing independent and indescribable qualities? I argued in section 3.1 above that this is the condition of

adequacy for any satisfying materialist explanation of first-person access, one accounting both for the appearances of conscious states and for the antimaterialist intuitions driving the so-called problem of consciousness. I contend that the p-concepts approach fails to meet this condition. A detailed examination of how p-concepts are supposed to explain the appearances shows that the approach falls short.

As we have seen, the three theorists considered above all invoke some sort of directly-referential mechanism to explain first-person access. By this, they explicitly mean that we are not aware of our conscious states by way of descriptive content. But consider how directly-referential terms function to inform us about aspects of the world. Indexicals like "here" and "now" refer to the place or time of utterance. But in order for us to understand the particular use of an uttered indexical, we must be able to ascertain the context of use. We must be able to tell just where here is and when now is. We can do this in one of several ways. The first is by way of perception. I look around and see that my office is the place where "here" was uttered, or that 7:26 PM was when "now" was uttered. In this way, I come to understand what the indexicals refer to. The second is by way of background beliefs that fix the referent. I may know from seeing the Brooklyn Bridge in a movie scene that an actor's utterance of "here" refers to Brooklyn, and that the time is around 1940, because I know that that bridge is in Brooklyn, those clothes are indicative of the 1940s, and that when people talk in movies, their indexicals pick out things in their surroundings. I know, for example, that the utterance of "here" does not refer to the movie theater in 2006.

A third possibility is that we "borrow" the reference from another person's use of the indexical term; that is, we possess mechanisms that allow us to ascertain, even in the absence of the relevant perceptual or doxastic processes, what another person means by their use of the indexical.19

It follows that in order to grasp the referent of an indexical, we must employ perception or background belief to pin down the relevant context of utterance, or we must borrow the reference. If we do not, the indexical will be "blind": it's like saying "here" after being taken to an unknown location while blindfolded. The term will still pick out the location of the utterance, but we will not learn anything; we will have no cognitive access to that location. Therefore, there must be some process that serves to make us aware of the specifics of what our indexical picks out. The mere utterance of the term, or the tokening of the thought, will not make us aware of where we are.

But what gives us cognitive access to the referent in the case of p-concepts? A perception-like process, for one, seems a poor candidate. We do not see our mental states, nor do we have any organ in our brains that appears to work like the sensory transducers involved in perception. And indeed, all three theorists surveyed reject the notion that p-concepts are perceptual in nature; we do not, on the p-concepts view, cognitively access our conscious states by perceiving them.20 In addition, ordinary perception is marked by the presence of sensory quality: if no sensory quality is involved, we wouldn't intuitively think of a

19 See Devitt and Sterelny 1999, 58-59.
20 See especially Perry 2001, 48ff, 206; 2004a, 180.

process as perceptual. But this suggests that in order to cash out the analogy with perception, we must posit higher-order analogs of sensory qualities, qualities that play an analogous role to the role sensory qualities play in ordinary perception. But we are never aware of any such higher-order sensations, nor is it clear what such sensations would be like. Without such an explanation, the appeal to perception to explain how p-concepts make us aware of our conscious sensations is empty.21

A reasonable hypothesis is that the mechanisms of inner awareness work by forming nonperceptual representations of our inner states. That is, they issue in thoughts about our conscious mental states. But this claim also poses problems for the p-concepts approach. If we know our inner states by thinking about them using descriptive thoughts, the central motivation for the p-concepts approach is lost. To accept this proposal is to drop the directly-referential element of p-concepts; the work of inner awareness would then be done by description. So this route is not open to the p-concepts approach. We are left without an explanation of how p-concepts make us aware of our conscious states.

There is one additional possibility that may explain how p-concepts make us aware of our inner states. According to some theorists, subjects can refer indexically--and with directly referential terms generally--without possessing any useful background descriptive beliefs about the referent of the indexical and without perceiving the referent. This phenomenon, known as "reference

21 See Rosenthal 1997, 2002a, 2004 for a detailed defense of this claim, as applied to the higher-order perception model of state consciousness.

143 borrowing" is posited to allow reference fixing by subjects in certain situations. If a friend says, "George was there," I can understand what she meant even if I have no descriptive beliefs about the reference of "there", and even if I have no perceptual access to that location. I can then meaningfully pass along this information to another person, who likewise can comprehend, in this limited manner, what I am talking about. But reference borrowing offers no aid to the friends of p-concepts. Reference borrowing is a public, conversational process--it is because another person conversationally conveys this information to me, and thereby "passes along" the causal link to the indexed location, that I am able to understand what is said. But p-concepts work privately. By hypothesis, their acquisition requires no prior conversation; indeed, they cannot be conveyed in conversation. Mary, in her black and white room, cannot acquire p-concepts by conversing with her captors. However, she can reference borrow all she likes with ordinary indexicals, demonstratives, and natural kind terms. Thus, reference borrowing does not explain her acquisition of p-concepts or the process by which pconcepts make us aware of our conscious states. Finally, reference borrowing, by its nature, provides too "thin" a link to its referent to explain the special "presentational" sort of understanding allegedly afforded by p-concepts. Following Levine's terminology, p-concepts refer in a "substantive and determinate" way--the quality we cognitively access is somehow present in the concept itself. But reference borrowing only affords us a thin causal tag to the referent--it is a thing we take on someone else's word to be referred to, and one

to which we are linked only by a thin causal connection. Indeed, reference borrowing is posited to explain how directly referential terms can be passed along without the sort of understanding that's present with the use of p-concepts. So, although reference borrowing offers an additional route to securing the referent, one that does not involve perception or descriptions, it fails to explain the awareness we have of our inner states.22

Indeed, all three theorists explicitly reject a simple indexical model for the reason just stated: indexicals themselves are too thin to secure reference to the particular qualities of experience we can be aware of. Indexicals are too coarse-grained to make us aware of the particular shade of red of a perceived apple, as opposed to its shiny skin or its round shape. Levine employs this point to criticize the view (as detailed in chapter 2 above). He contends that indexicals are blind--using one is like saying "here" with one's eyes closed. The indexical alone can tell us nothing of the particular mental states we are in, and therefore nothing of their seemingly independent and indescribable qualities. In Levine's terms, they cannot account for the "determinate" nature of our first-person access. And indeed, Papineau, for example, explicitly rejects a simple indexical approach for this very reason.23

Perhaps the p-concepts approach is better served by appealing to demonstratives like "this." Loar and Papineau explicitly appeal to demonstratives in their constructions--Loar holding that p-concepts are a species of "type demonstratives" and Papineau invoking the 'the experience: - - -' operator. But
22 Thanks to Michael Devitt for raising this possibility.
23 Papineau 2002, 123f.

the same worry applies. We ascertain the reference of a demonstrative by way of perception, by employing a battery of background belief, or by reference borrowing. If I point to an object without looking and utter "this," I will not know what I am referring to until I take a look. Likewise, I may understand in a certain context that "this" refers to the book on the table because I know we have been talking about that book, and I understand that terms like "this" track objects previously picked out perceptually or descriptively. But perception is not a viable option, as argued above, and appealing to descriptive content gives up the p-concept game. If descriptions do the referential work, we are left wondering what contribution the demonstrative makes. If perception does the work, we are left without an explanation of the process that makes us aware of our conscious states. We are left with no explanation at all.

And, again, reference borrowing is not an explanatory option. That process requires the transference of the causal chain of reference from one speaker to another. This is not plausible with the inner processes involved in p-concepts. Further, the pragmatic considerations invoked by some theorists to explain demonstrative reference likewise seem ill-equipped to explain our awareness of our conscious states. Those pragmatic considerations involve a speaker intending her audience to notice a salient feature of the relevant surround and her audience recognizing this "implicature." Since p-concepts operate solely for a single introspecting subject, these pragmatic heuristics are not available. Further, we would still require an explanation of how the subject became aware of the relevant feature, even if the implicatures were present--perception is not available to play that role as it is in

public conversation. We are left wondering how p-concepts, construed on the model of demonstratives, make us aware of our inner states.

Finally, we can consider whether natural kind terms might provide the needed explanation. Many hold, following Putnam and Kripke, that natural kind terms are directly referential. "Water," it is claimed, refers to water in virtue of being causally connected to water through a reference-preserving causal chain. And it might be argued that a term like "pain" is a natural kind term--it refers to a type of sensation with a single underlying nature.24 But with natural kind terms like "water," we still require an explanation of how subjects were able to use the term to pick out water independently of knowing its true underlying nature, H2O. One possibility is that we employ a cluster of descriptions, like "it's the clear, drinkable, odorless liquid filling our lakes and rivers, comes out of the tap, quenches thirst, etc." This, in Kripke's terminology, "fixes the reference" of the term. Another possibility is that by being in appropriate causal contact with the referent, either directly ourselves or by way of an appropriate reference-borrowing chain, we fix reference to water. These competing accounts are offered to explain the ordinary "pre-scientific" cognitive access subjects have to water.25

But what explains our pre-scientific access in the case of p-concepts? Again, it seems that the p-concept theorist cannot appeal to descriptions to explain how things appear to the subject. That would allow descriptions to do the cognitive work, and give up the game. And, as before, there isn't a plausible explanation of inner perception, even if the theorists surveyed were to change
24. Though see Lewis 1972, 1994.
25. Kripke 1980.

their tunes and endorse such a move. And as Levine stresses, our first-person concepts do not at all seem like "blank labels" attached to a thing we know not what. We have a "substantive and determinate" conception of conscious states accessed from the first-person perspective. Mere causal connection or reference borrowing fails to explain this sort of access--such links are too thin to account for it. Neither perception nor descriptive thought is available to fill the gap. We have no explanation of why things seem as they do from the first-person perspective, and thus we have no good way to blunt antimaterialist intuitions.

It should be noted that there is a rich and expansive literature on direct reference. In following the literature on consciousness, I have only scratched the surface of these debates. Some of the central issues concern the cognitive and linguistic requirements for reference borrowing, the "understanding" that a subject who has borrowed reference in this way possesses, and the semantic or pragmatic nature of our linguistic behavior involving indexicals, demonstratives, and natural kind terms (and any other terms that might refer directly, either fully or in part). But it seems to me that there is little in this literature to block the arguments I've made against the p-concepts approach. As I've argued, reference borrowing does not seem like the right sort of phenomenon to explain the distinct sort of awareness we appear to have of our inner states. Its public, conversational nature and its cognitive "thinness" make it ill-suited to fill the gap in the p-concepts approach. Further, the debates on the semantic or pragmatic nature of directly referential terms seem orthogonal to present concerns. First, even if the nature of directly referential terms is explained in semantic theory, we

still need an explanation of what plays the role usually afforded to perception or reference borrowing in the p-concepts case. Second, pragmatic concerns do not appear useful in this context. Pragmatics is again a public, conversational affair, and p-concepts possess a kind of privacy that makes them poor candidates for referential aid from pragmatic theory. There is no interlocutor to whom we can attribute intentions and conversational expectations; nor is there an explanatory stand-in for perception, which allows us to coordinate our conversational understanding in public speech. So however this debate goes, there isn't anything to help p-concepts. Finally, if, at the end of the day, demonstratives are somehow reducible to descriptive terms, then there is even less reason to posit a directly referential link for p-concepts. If the best model we have for directly referential terms turns out to be a descriptive model, the link between p-concepts and their referents will seem even more sui generis and in need of explanation. This, of course, is only a brief glance at the debates at issue in the literature on direct reference. Nevertheless I hope it serves to make clear just what is required for the p-concepts approach and the paucity of materials available to it at present.26

But perhaps I am raising the explanatory bar too high for the p-concepts approach. It may be that p-concepts have the function of making us aware of our conscious states--that is their functional role, or what they evolved to do. But this only pushes the questions back a level. We can still ask how it is that p-concepts
26. As mentioned, there is a large literature on these matters. On reference borrowing, see, e.g., Devitt 1981, 2001; Devitt and Sterelny 1999, 58-59; Blackburn 1988. On the debate over semantics and pragmatics, see, e.g., Devitt 2004; Lepore and Ludwig 2000; Reimer 1991; Borg 2000; Neale 1993; etc. Many important papers on these issues can be found in Ludlow 1997 and Martinich 2000. See also Ostertag 1998 for papers on descriptions.

play this functional role. And here parallel worries emerge. Our ordinary ways of being aware of something involve either perception or descriptive thought. If I am aware of a tree, that is because I perceive it or because I am thinking of it as present. But neither option seems a good model for the functioning of p-concepts. Perception fails to provide a workable model. But the other way of ordinarily being aware of our conscious states, by way of thought, doesn't help either. Our ordinary way of being aware of objects in thought is by representing them in descriptive concepts. As stressed repeatedly above, this is to undermine the core idea of the approach. If descriptive content is unavailable, we are back to demonstrative concepts. And this simply brings us back where we started.27

But perhaps the proponent of the p-concepts approach can claim that it is in the nature of conscious states that we are aware of them--nothing more is needed to explain our awareness of them. That is, it's the conscious state itself, or a phenomenal state resembling a sensation, that accounts for our awareness by way of p-concepts. This is how Perry explicitly proceeds, for example. And this move dovetails with the idea, defended by Papineau, that identities do not require explanation. If conscious states are identical to physical states, then asking for an explanation of how this could be so is like asking how Clark Kent could be Superman. He just is, and there's nothing more to say. Perhaps this offers an answer to my criticism. I ask how p-concepts make us aware of our conscious states, and the answer is we are aware of them because p-concepts contain conscious states, which themselves, by their very nature, provide that awareness.
27. The above arguments against reference borrowing apply here as well.

However, this tack leaves legitimate questions, despite what the theorists contend. We can still ask what it is about conscious states that accounts for our awareness of them, for example. This is not asking how Clark Kent could be Superman; rather, it is asking how this individual, whatever we call him, can fly. To say that it is just in the nature of Superman that he flies, or that it's in the nature of conscious states that we're aware of them, leaves us with legitimate questions about the mechanisms involved. And, as it turns out, even proponents of the p-concepts approach, Ned Block in particular, contend that there are legitimate theoretical questions in the offing. This is the root of his "harder problem" presented in chapter 2. Because we do not know what accounts for this awareness, we are left unsure of how to project our concept of consciousness to other creatures. And, as I argued there, this simply amounts to the presence of an explanatory gap. We have no explanation of consciousness, or of the awareness essentially wrapped up with it. Here I am in sympathy both with Block and with Levine, who contends that our awareness of conscious states is just another facet of the explanatory gap.

And there is an additional way to see the worry. Consider the state of chemical theory prior to the advent of quantum chemistry in the middle decades of the twentieth century. At that time, there was no satisfying explanation of why any particular set of perceivable features accompanied a chemical compound. This was the central motivation for C. D. Broad's emergentism, touched on in my discussion of Chalmers's view in chapter 1. If we have no explanation of why NH3 smells the way it does, and we are just told, "It is in the nature of ammonia

to smell that way," emergentism will seem a viable option. But that didn't mean there were no further legitimate questions to ask. On the contrary, this puzzle drove scientists to develop better chemical theory, and it stands as one of the confirming instances of quantum mechanics that it can provide this explanation in the realm of chemistry.

With that in mind, I believe that the p-concepts approach falls short. We are left wondering how p-concepts account for the way things appear to us in first-person access. We have no story about why it is that conscious states seem to appear to us directly, and seem to possess independent and indescribable qualities. The p-concepts approach falls short just at the very point where antimaterialists dig in their heels and claim that no explanation has been given, so materialism has problems with consciousness.

But we can, I believe, do better. In the next chapter, I will propose a model of first-person access that both explains these appearances and provides an explanation of the mechanisms underlying that access--mechanisms fully concordant with materialism. I will argue that, despite claims to the contrary, we in fact access our states descriptively, by way of a theory. But that theory is applied nonconsciously, without the subject being aware that this is what she is doing. Because the theory is applied in this manner, the access seems direct, and the qualities we are aware of seem unconnected to other processes and incompletely described by materialist theory. I will also stress an analogy with the phenomenon of so-called "expert awareness," where experts seem to be able to "just see" things posited by theory. And I will briefly suggest what kinds of

empirical evidence might support the model. Then in chapter 5, I will look at the empirical evidence more closely and argue that a degree of support is indeed forthcoming. I close by returning to Mary in her black-and-white room, to address any remaining worries about the adequacy of the model, and to lay to rest, as much as possible, the antimaterialist intuitions underlying the so-called problem of consciousness.

CHAPTER 4: A MODEL OF FIRST-PERSON ACCESS

4.1 Descriptions and the Doctrine of Cartesian Modes of Presentation

4.1.1 What are Descriptive Concepts?

A materialist theory of first-person access must explain the seeming immediacy of access, and the apparent independence and indescribability of what is accessed. Antimaterialists contend, as do proponents of the p-concepts approach, that these appearances cannot be explained by way of descriptive concepts. To evaluate this claim, we need a clearer characterization of just what descriptive concepts amount to. Here, we can use antimaterialists and proponents of p-concepts as a guide. This will both provide us with a working characterization and help ensure that we avoid charges of subject changing down the road.

To begin, Chalmers holds that we have two distinct sets of mental state concepts: phenomenal concepts and psychological concepts. Psychological concepts, on his view, are descriptive. He writes:

This is the concept of mind as the causal or explanatory basis for behavior. A state is mental in this sense if it plays the right sort of causal role in the explanation of behavior. According to the psychological concept, it matters little whether a mental state has a conscious quality or not. What matters is the role it plays in a cognitive economy. (Chalmers 1996, 11)


Chalmers contends that most concepts resemble psychological concepts in this way. He continues:

For the most interesting phenomena that require explanation, including such phenomena as learning and reproduction, the relevant notions can be analyzed functionally. The core of such notions can be characterized in terms of the performance of some function or functions (where "function" is taken causally rather than teleologically), or in terms of the capacity to perform those functions. (Chalmers 1996, 44, emphasis in original)

Even physical concepts like heat can be cashed out in this way on Chalmers's view, as, roughly, "The kind of thing that expands metals, is caused by fire, leads to a particular sort of sensation, and the like... Heat is a causal-role concept, characterized in terms of what it is typically caused by and of what it typically causes, under appropriate conditions" (Chalmers 1996, 44-45, emphasis in original). Causal-role concepts, as characterized here, are descriptive. They can be cashed out in terms of a description of the typical roles a thing plays, the roles that make the thing what it is.

Loar, Papineau, and others stress the distinction between p-concepts and theoretical concepts, and again, the key to the distinction is the presence or absence of descriptions. Loar contrasts p-concepts with "physical-functional" concepts. Physical-functional concepts are characterized by descriptions, whereas p-concepts are not: "Phenomenal concepts are conceptually independent of physical-functional descriptions and yet pairs of such concepts

may converge on, pick out, the same properties" (Loar 1997, 602). Papineau contrasts p-concepts with "material" concepts, but the point is the same. He writes:

Material concepts are those which pick out conscious properties in the third-personal causal world. Most commonly, these will be role concepts, by which I mean concepts which refer by describing some causal or other role, such as pain's role in mediating between bodily damage and avoidance behavior. (Papineau 2002, 48, emphasis mine)

Again, the crucial difference between these concepts and p-concepts turns on descriptions. Papineau contends that "phenomenal concepts refer directly, and not by way of description. Phenomenal concepts don't pick out their referents by invoking certain further features of those referents, but in their own right, so to speak" (Papineau 2002, 87). He continues in a footnote, "When I talk about 'direct reference,'... I will simply mean reference that is not by description" (ibid., footnote 6).

I will follow these theorists in taking the descriptive concepts at issue to be, broadly speaking, "role concepts," concepts picking out their referents in terms of the performance of a causal/functional role. In more detail, descriptive concepts apply to an object, state, or property when that object, state, or property has enough of the features listed in the description, in this case whether the object, state, or property plays the right causal/functional role as spelled out by our commonsense folk psychology. I will also sometimes speak of a "descriptive theory" in what follows. By that term, I mean a collection of descriptive concepts

that refer to a specific domain, are interdefined in terms of one another, and generally make reference to nonobservable posits, defined in terms of causal/functional role. And, as argued in chapter 1, the evidence about when a concept properly applies is empirical--there is no analytic/synthetic distinction available to pin down the meanings of our descriptive concepts a priori. I will have much more to say about the specifics of the roles that matter below. In what follows, I will argue that all the theorists looked at so far are mistaken in holding that our first-person access cannot be explained in terms of descriptive concepts.

4.1.2 The Doctrine of Cartesian Modes of Presentation

The crucial point at issue is whether or not we can tell, from the first-person perspective, what sort of concepts we are employing in introspection--more to the point, whether these concepts employ descriptive modes of presentation or not. We can label the position held by the antimaterialists and the proponents of p-concepts the "doctrine of Cartesian modes of presentation," or CMoP for short. This position holds that the concepts employed in first-person access possess modes of presentation whose referential mechanisms can be known by way of introspection. All the theorists surveyed embrace CMoP in one way or another. I reject CMoP and contend that for all we know from the first-person perspective, the modes of presentation of the concepts involved in first-person access may be descriptive.

What reasons can be given in favor of CMoP? The main line of support comes from the appearances of first-person access. First, the access seems immediate. This is in contrast to what usually occurs in the ordinary application of descriptive concepts. For example, when we pick out heat by way of a description, like that offered by Chalmers above, we are usually aware that we are picking out the phenomenon by way of the presence of properties given in the description. It is the physical phenomenon possessing the properties of expanding metals, being caused by fire, leading to a particular sort of sensation, and so on. Or consider our concept of "pen." A pen is an object designed for writing in ink or other fluids, employing a nib, etc. If an object matches enough of this description, then it is a pen and we pick it out as one. We don't usually seem to be in direct contact with the "intrinsic nature" of the pen; rather, the object fits our description. This seems in contrast to what goes on in first-person access. There, we do seem to be in immediate cognitive contact with the intrinsic nature of our conscious sensations. The process seems so different that it's reasonable to posit a distinct form of reference.

Likewise, when we access our states from the first-person perspective, the states seem to have independent and indescribable qualities. The appearance of independence--the seeming disconnection of the sensory qualities from other processes--seems to undermine the idea that we are employing descriptions. Importantly, descriptions pick out their referents in relational terms, terms that relate the targeted referent to other events, processes, states, objects, etc. For example, description in terms of functional role places the targeted phenomenon

in a network of relations which fully characterize the object of reference. Beliefs, on this view, are mental states usually caused by perceptions or other beliefs, which interact with our desires to dispose us to act. If a state is so related to the other processes and states listed in the description, it is a belief, full stop. There is no mention of any intrinsic1 features of the state; filling the relationally characterized role is enough. However, it is argued that phenomenal concepts refer to the intrinsic character of conscious states: their "feel," what it is like for the subject to have them. Picking out our conscious states in this fashion delivers seemingly independent features of them, which is just what we want. And how could descriptions do this, given that they refer in explicitly relational, functional-role terms? Given our purported dissatisfaction with relational analyses of qualities accessed from the first-person perspective, it's reasonable to conclude that we do not employ descriptive concepts.

In addition, we seem to cognitively access indescribable properties. When we are aware of the painfulness of pain or the redness of a red sensation, other terms seem to fall short of characterizing the qualities we pick out. But descriptions, clearly, involve describing something in other terms. The failure of such descriptions to informatively convey what we access seems to strongly tell against the presence of descriptive concepts. To sum up, the proponents of
1. By "intrinsic" I will mean "nonrelational property" throughout. It is difficult to pin down a precise characterization of this term. It is often analyzed in terms of potentially "lonely" properties that can exist in otherwise empty possible worlds. But disconnecting our conception of an object, state, event, etc. from everything else in the universe arguably undermines our ability to have reliable intuitions about its nature. In any event, I mean to use the term to distinguish the modes of presentation of descriptive concepts from the modes of presentation of the sorts of phenomenal concepts presented here. On the meaning of "intrinsic" see Weatherson 2006; and Lewis and Langton 1998.

CMoP hold that we pick out our conscious states directly by way of what it's like for us to have them. This seemingly immediate access puts us in cognitive contact with apparently independent and indescribable qualities, and this alone provides good reason to accept CMoP.

But this is not enough to establish CMoP. Recall that any proposal about how to characterize consciousness and first-person access must fit with our commonsense folk psychology. And folk psychology fails to validate CMoP. To reiterate, all that common sense delivers is that it appears to us that our access is direct and that it brings us into cognitive contact with independent, indescribable qualities. It is silent about the underlying mechanisms accounting for our access. Folk psychology is not in the business of delivering the machinery of access--what goes on, as it were, behind the scenes. All that it delivers in this context is how things seem to us from the first-person perspective. And while our access may not seem descriptive, this is not enough to rule out the possibility that it in fact is. Folk psychology gives no argument for CMoP. All the theorists surveyed move illicitly from the appearances to the underlying nature without providing the required evidential or argumentative support. Therefore, I conclude that CMoP is not established, and for all that the appearances show, we may well access our conscious states by way of descriptive concepts.

In addition, there are positive reasons to reject CMoP. It amounts to the dubious Cartesian claim that the mind is transparent to itself, that, to paraphrase Armstrong, all the goods are in the store window, and we can't be mistaken about what's there. There are no plausible natural mechanisms that deliver this sort of

transparent, infallible access, and indeed, few of the theorists surveyed would endorse the claim in this bald form. But that means at best they are suggesting we take the appearances at face value. This may seem a good move if one is an antimaterialist, or if one believes that the p-concepts approach can be made to work. But the previous chapter showed that this is a flawed strategy, in the absence of considerable additional explanatory information. Given the difficulties made clear in my criticisms of p-concepts, we ought to reject CMoP. Its close affinity to dualism should have been a red flag in any event.

But there are more problems with CMoP. For one, there is a range of circumstances in which we are unaware of the underlying processes involved in various perceptual sorting tasks. In these situations, subjects confabulate the intentional states they are in.2 There are empirical cases where we don't know what mental states we are in when we pick things out, despite our claims to the contrary. I will have much more to say about confabulation in chapter 5; for now, it is enough to note that CMoP is under empirical pressure, and we shouldn't accept it in the absence of positive evidence.

In addition, there are more ordinary cases where folks misidentify the mental states responsible for their actions. In cases of repression and denial, we are unaware of the beliefs and desires driving our actions. One does not need to accept Freudian theory to recognize such cases, and they have made their way into our folk psychology for that reason. For example, I may judge that a person is mean and ugly because, outside of my awareness, I am jealous of him and believe he is trying to impress my wife. The unconscious states accounting for
2. Nisbett and Wilson 1977; as noted, I will say much more about these results in chapter 5.

my actions will be inaccessible to me. For those states, CMoP will not hold. I will not be able to tell, for those states, whether they involve descriptive concepts or direct reference or what have you.

One can contend that this isn't the correct way to read these cases. Perhaps such an active feeling of jealousy would have to be conscious. But one can't argue from CMoP to this conclusion without begging the question. And since CMoP closes off a line of behavioral explanation, it is not a neutral and innocent doctrine to take on board.

Finally, there are plausible cases of so-called expert perception which undermine CMoP. Expert perception occurs when an expert is able to perceive, in a seemingly direct manner, more detail and richness, or a wider range of relevant perceptual features, than a novice confronted with the same scene. In chess, for example, experts are said to "just see" the relative weaknesses and strengths of various board positions. But it is often the case that they themselves cannot explicitly relate what features of the board they are picking out. But it's reasonable to assume that they are employing the learned theory of chess-playing they've developed over the course of becoming masters. In that case, they are unaware of the descriptive contents they are employing; instead, it simply seems to them that the strength or weakness is directly accessed, not by way of description. But, despite how things appear to them, we have good reason to say that nonetheless they are picking out things descriptively. I will have much more to say about cases of expert awareness in section 4.2 below, and in chapter 5. For now, it is enough to see that there are empirical pressures to reject CMoP, especially if no positive argument can be offered in its favor.

There is, however, an additional reason one might be drawn to CMoP. This reason is motivated by the work of Saul Kripke on the nature of identity, and it touches on the issue of the modes of presentation involved in first-person access. Kripke (1980) argues that identity claims involving two "rigid designators"--terms picking out the same referent in all possible worlds where the referent exists--are necessary. Accompanying this conclusion, Kripke offers an explanation of why it seems that we can imagine possible scenarios where the identity is false. He contends that in those cases, we are imagining a contingent mode of presentation of the referent together with some other referent. For example, though Kripke claims that it's necessary that heat is identical to mean molecular kinetic energy, we seem to be able to conceive of heat in the absence of mean molecular kinetic energy. But, according to Kripke, what we are imagining is something that feels like heat, even though it isn't. The feel of heat is the contingent feature by which we humans pick it out. But the feel is not a necessary feature and can occur in the absence of the referent. Thus we are not conceiving of a world where heat isn't mean molecular kinetic energy.

But when it comes to the identity of sensations and brain states, this means of "explaining away the appearance of contingency" is not open to us, according to Kripke. He contends that sensations are picked out, from the first-person perspective, by a necessary feature--their conscious feel. Pains, according to Kripke, are states that essentially feel painful. But if that's the case, what are we to make of conceivable scenarios where pain is not accompanied by any brain state? We cannot say that we are imagining a contingent feature of

pain--pains are essentially characterized by their feels. So we must really be imagining pain. But if we can conceive of this situation, and can find no non-pain world to stand in for what we are conceiving, we really must be imagining pain in the absence of brain states. That is, there is a possible world where there are pains and no brain states. But this entails, according to many, the falsity of materialism.

In response, the proponents of the p-concepts approach hold that we do not pick out pain by way of a contingent mode of presentation; rather, we indeed pick it out by way of its feel. But relational descriptions do not, so it seems, refer in this fashion. We can imagine, so it seems, pains that play none of the usual functional roles. So we must refer to pains directly by way of their feel.

But this does not vindicate CMoP. First off, Kripke offers no further support for his claim that we must refer to pains by way of their feels. He simply states that it is so. But there is a range of plausible folk-psychological cases of pains that occur in the absence of any feel, cases of nonconscious pains. As noted previously, we may have a headache all day, though it only intermittently makes its presence known in consciousness. A reasonable way to describe the situation is that we had a headache all day, though we didn't always feel it. This is certainly one way folk characterize the situation; there is nothing incoherent or contradictory in doing so, from a commonsense point of view. Likewise, we may have a pair of new shoes that pinch our feet all day. They may even cause us to limp. But we may be unaware of the pain for much of the day, even as we favor one foot over another. Again, a good way to describe the situation is that our

feet hurt all day, but the pain wasn't always conscious. It is only if we have some prior theoretical view about pains--that they cannot occur nonconsciously--that we will rule out such characterizations. But this goes beyond folk psychology and thus requires an argument. Kripke offers none. Note that I am not saying this is how we must interpret these cases; rather, I'm saying this is a good folk-psychological reading of the cases, and it is one that avoids Kripke's antimaterialist argument. Taken together, those facts put the burden of proof upon Kripke to establish his claim about the essence of mental states. Thus, we are not driven by Kripke to endorse CMoP. Indeed, it's clear that he employs CMoP in order to get his argument started; he offers no new reason to accept the doctrine.

In addition, considerations adduced in my discussion of Chalmers in chapter 1 are relevant here. There, I argued that we have to assume an analytic/synthetic distinction in order to justify an essentialist claim like Kripke's. What's more, I argued that future scientific research might prompt us to alter our concepts of consciousness and sensation so that Kripke's claim will no longer have intuitive bite (to the extent that it does now). In such a case, I imagined that we might have reason to accept one or another proposed descriptive characterization of consciousness, like those offered by Rosenthal, Dretske, or Dennett. Rosenthal holds that conscious states are ones we are aware of. Dretske holds that they are first-order representational states poised to affect our beliefs and desires. Dennett holds that conscious states are globally accessible--they have "fame in the brain." If one of these conceptions becomes our most entrenched folk characterization, then in imagining a being who is

aware of her states, or representing in the proper poised manner, or who has globally accessible states, we are thereby imagining a being with conscious mental states. It will not seem intuitive that such things should come apart--zombies of this kind will seem inconceivable, just as a creature with all the same dispositions to act but lacking my beliefs will seem inconceivable. Kripke's essentialist claim seems at odds with this possibility. He simply states that feel is the essence of pain, and leaves it at that. But in the absence of further argument for this claim, we need not worry about support for CMoP from this direction.3 I conclude that CMoP is not well-supported. In the section after next, I will present a model of first-person access according to which we access our conscious states by way of description. I will argue that we employ a descriptive theory to pick out our conscious states from the first-person perspective, one that we apply automatically, outside of conscious awareness. This offers an account of the three crucial features of first-person access a materialist theory must explain. Because we are unaware of applying the theory, the access seems direct. And because we are unaware of the descriptive concepts we are employing, we will not be aware of the relational information we use to pick out our states. This delivers a plausible explanation of both independence and indescribability. But first, in the next section, I'll present an analogy offered by expert perception, in order to provide an intuitive framework for the model.

3 See also Devitt 1996, chapter 1, for additional reasons to reject CMoP.

4.2 Expert Perception

In this section, I will offer a suggestive model for our first-person access, one that explains the appearances in terms of descriptive referential mechanisms. The model is given by the phenomenon of so-called "expert perception." In expert perception, experts purportedly perceive things that novices do not, and do so in a seemingly direct manner. What's more, I will argue that the way things appear to the experts as they engage in this sort of perception is relevantly like the way things appear to us in first-person access. Experts seem to immediately access independent and indescribable features, at least in some cases. And it's even the case that sometimes, experts cannot provide an informative description of the features they access, despite clear evidence that they do in fact access features amenable to description. This is not always the case, but it occurs enough to provide a suggestive analogy for first-person access. While the examples do require a degree of interpretation and reasonable extension, I believe that the most plausible reading presents a good case of descriptive referential access that nonetheless feels immediate and independent, and has indescribable features. Experts are better than novices at reasoning about and solving problems in their domain of expertise. But in addition, they possess qualitatively different ways of thinking about and perceiving things in that domain. A 2000 study by the National Academy of Sciences Committee on Developments in the Science of Learning, entitled "How People Learn: Brain, Mind, Experience, and School," lists several relevant features characteristic of expert ability. They note that

Experts notice features and meaningful patterns of information that are not noticed by novices, ... experts have acquired a great deal of content knowledge that is organized in ways that reflect a deep understanding of their subject matter, ... experts are able to flexibly retrieve important aspects of their knowledge with little attentional effort, ... [and] though experts know their disciplines thoroughly, this does not guarantee that they are able to teach others (Bransford et al. 2000, 31). It's not just that experts can solve problems that novices cannot; rather, it's that experts can perceive features of the problem that elude novices altogether. Further, this skill is automated to the point where the expert herself will often not be aware of thinking about the problem. Instead, the expert will "just see" the solution or relevant area of concern. Finally, it is often the case that the expert herself cannot convey knowledge of how she does this to others. She will be at a loss to say how it is that she picks out the features she does, and how she meaningfully carves up the problem when she first perceives it. Indeed, at times it will seem mysterious how the expertise is achieved, even to the expert. This is not always the case, but it happens enough of the time to be suggestive for our purposes, as I'll argue below.

4.2.1 Chess and Chunking

A well-researched example of expert perception comes from the domain of chess. De Groot's classic study of the cognition of chess masters, Thought and

Choice in Chess, provides good examples of how experts perceive things differently than novices. Summarizing this idea, he writes We know that increasing experience and knowledge in a specific field (chess, for instance) has the effect that things (properties, etc.) which, at earlier stages, had to be abstracted, or even inferred are apt to be immediately perceived at later stages. To a rather large extent, abstraction is replaced by perception... As an effect of this replacement, a so-called given problem situation is not really given since it is seen differently by an expert than it is perceived by an inexperienced person (De Groot 1978, 33-34). Alexandre Linhares, in a recent paper on chess psychology and chess-playing computers, provides a provocative first-person report of this sort of perception from a chess grand master. Linhares tells us The Cuban World Chess Champion Jose Raul Capablanca once remarked on his personal, subjective experience: "I know at sight what a position contains. What could happen? What is going to happen? You figure it out, I know it!" In another occasion, talking about the numerous possibilities that less-skilled players usually consider on each board position, he bluntly remarked: "I see only one move: The best one." (Linhares 2005, 135). It is this sort of perception--subjectively immediate, yet obviously imbued with a rich background of expert knowledge--that provides a suggestive model for first-person access.

A wide range of studies has demonstrated the speed and skill of perception by chess masters, but some offer more direct evidence of the expanded perceptual range of experts, beyond noting their improved reaction times and recall ability. A study by Reingold and colleagues shows that chess masters are less susceptible to a version of chess "change blindness" where pieces are altered on a computer-screen chessboard. Novices are more likely than masters to report that nothing has changed in such scenarios. To the novice, the board before and after the change will seem the same; the master sees them as different. Further, Reingold and colleagues demonstrated a "Stroop-like" effect for chess masters not present in novices. The Stroop effect occurs in tasks where an automated perceptual cue interferes with an assigned task. For example, trying to say what color ink a word is written in is slowed (or the color is incorrectly identified) when the word itself is a word for a color different from the ink. In a similar manner, chess masters asked to search a board for checks had slower reaction times or answered incorrectly when shown a board with an illegal configuration resembling legal checks. One reading of this result is that the experts' background knowledge set up expectations of certain positions, making it difficult for the masters to see things as they really were. Novices, lacking that knowledge, were not slowed or misled by the illegal positions.4 What accounts for this difference in perceptual ability? In a range of papers, Herbert Simon and his collaborators contend that it is the presence of cognitive structures known as "chunks" that allows the expert to treat complex features as single cognitive units. Chess masters possess as many as 50,000
4 Reingold et al. 2001. I will offer more detail on these studies in chapter 5, section 1.2.

chunks, allowing them to rapidly perceive an enormous range of complex features with automatic ease. These chunks operate behind the scenes, away from the conscious awareness of the experts. Simon terms the automatic process of expert problem solving "intuition." He characterizes it as follows: Experts generate accurate observations or even solutions without thinking more than a few seconds and without appearing to examine the situation closely. Intuition is a leap, and the person taking the leap is generally not aware of how he or she did it. How does intuition work? Long experience leads to chunking so that familiar patterns emerging in a situation immediately suggest a possible move (chess), a possible condition (medical diagnosis), a possible fault (electronic troubleshooting), or a possible risk (finance). Intuition grows out of experience that once called for analytical steps. As experience builds, the expert begins to chunk the information into patterns and bypasses the steps (Prietula and Simon 1989, 122). Chunks are individuated descriptively. They describe complex, abstract relational features of relevant perceptual experiences, and provide a template for new acts of recognition and recall. They act automatically, behind the scenes of conscious awareness, and lead to seemingly immediate access to complex relational features. Further, these features are not experienced as being relational; rather, they just appear "intuitively" to the expert as unified percepts, as intrinsic, independent units. And, as noted by Prietula and Simon, experts may not know how they are doing it--an attempt at describing the process will fail

to fully inform an interlocutor. There will seem to be something indescribable going on.

4.2.2 Music, Wine, Chick Sexing, and Attractiveness

A wide array of other domains of expertise has been studied in cognitive science. Among them are music perception, wine tasting, medical diagnosis, problem solving in physics, chick sexing, and even the "expert" perception of criminals while casing a prospective opportunity for shoplifting.5 I will touch on several of these areas to support my claim that expert perception offers a suggestive model of first-person access. I'll also consider some more everyday, though less rigorously studied, examples of what is arguably expert perception, notably the perception of female attractiveness across cultures. Taken together, the examples make a good case for the possibility of descriptive first-person access. Another well-studied example comes from the psychology of music. Musical experts--mainly trained musicians and composers--possess a range of perceptual abilities lacking in novices. Musical experts can hear patterns and instrumentation missed by novices. For example, trained musicians are much less likely to be taken in by the so-called "octave illusion," where two tones an octave apart are heard as a single tone. What sounds like one tone to novices sounds like two tones an octave apart to experts. Further, expert composers notice and attend to more complex relational features of musical pieces than do

5 On music, see Brochard et al. 2004; Gromko 1993; on wine, see Solomon 1990, 1997; on medical diagnosis, see Schmidt and Boshuizen 1993; on physics problems, see Chi et al. 1981; on shoplifting, see Weaver and Carroll 1985.

novices.6 In a slightly different sort of study, it was shown that subjects growing up exposed to African music could pick out complex rhythms missed by subjects growing up exposed to Western music.7 This indicates the learned nature of musical expertise, as well as the interesting perceptual alterations that such expertise affords. Finally, anecdotal evidence suggests that experts hear the parts of the orchestra, and the various instruments of ensemble music, in ways that outstrip the perceptual abilities of novices. Prior to learning about the various instruments in the orchestra, a novice will fail to perceptually distinguish between oboes and bassoons, for example. Upon training, such differences in the sound become apparent. And the situation is the same for the bass or piano parts in a jazz performance. Until one learns of the differences in the parts, the sound will not seem segmented into those parts. Again, we see that the expertise developed is automatic. Experts do not consciously infer the presence of the complex features they access; rather, they simply hear them. Also, it is plausible that the presence of background descriptive theory does at least part of the work of making such perceptual elements consciously accessible. Students learn music theory, or are explicitly told about the orchestra and its pieces. With this knowledge in mind, students approach examples of music looking for such elements. Further, they are instructed where and when to listen for the relevant features. Over time, the features begin to emerge from the music. The intervention of learned theory clearly has an effect on this process. In the absence of such instruction, the
6 Brennan and Stevens 2002.
7 Eerola et al. 2006.

process would be much slower and more erratic, or may not happen at all. And because the information can be imparted in lessons, it is plausible to hold that it has descriptive content. Recall that a mark of the nondescriptive content at issue is its failure to be meaningfully conveyed--words cannot capture it. Wine-tasting expertise offers another example. Expert wine tasters can accurately pick out a wider range of samples than can novices. Experts can also consistently perceive and group wines by subtle features missed by novices. For example, experts can group wines according to the level of tannin; what's more, they are reliably consistent in their groupings, generally agreeing about which wines are more tannic than others. Novices will fail to note such differences--most wines will just seem bitter to them, and they will fail to differentiate along that dimension. And the same holds for many of the more subtle characteristics for which wine experts are often mocked--fruitiness, oakiness, "a hint of tobacco and pomegranates," etc. Experts are also more susceptible to being misled by their knowledge. They will take up non-gustatory cues unnoticed by novices and will for that reason misclassify wines, to the point of over-rating cheap wine in expensive bottles, or even mistaking reds for whites in certain contexts. This suggests that background theory--again, descriptive content--affects their perception, for good or ill. And, again, such access is seemingly immediate, without the presence of conscious inference, and it leads to perception of seemingly independent and indescribable features. A satisfying description of jamminess in wine, say, may seem to fall short of what the taste

actually seems like to experts. Such a description will seem to leave something out.8 Another example, addressed to some degree in the philosophical literature, is the abilities of expert chick sexers. This case is somewhat different from the preceding ones because explicit theory does not apparently play a useful role in the development of expertise. Chick sexers learn largely by watching experts run through a number of chicks, and then by trying it themselves with the expert looking on and correcting their errors. Expert chick sexers can sex up to an astonishing 10,000 chicks a day, at an accuracy rate of 98%. At the speed required to achieve these numbers, it will usually not be apparent to the experts just what features of the anatomy of the chick are being accessed in order to determine sex. Chick sexers refer to this ability as, again, "intuition," and it may seem mysterious, to the point where the sexers themselves will be unable to account for their own abilities. Still, it is clear what they are seeing which allows them to determine sex. They perceive very subtle variations in the ventral folds of chicks, features which can be given a description in relational terms. If a certain aspect of the ventral fold is raised or bumpy, then the chick is a male. But it will not seem to experts that this is what they are doing. They just "sense" the maleness of a chick, and continue with their sorting. It is not that they can say nothing about what they are looking at--they can tell, for example, whether it's the head or the tail that they're focused on. But in terms of the specific features they access in extremely rapid sorting, experts report that they

8 Solomon 1990, 1997; Morrot et al. 2001. I will address empirical studies of wine-tasting expertise in chapter 5, section 2.

cannot tell what it is they are seeing as they sort the chicks. And even when they slow down and try to explicitly pick that feature out (for instructional purposes, say), they cannot always be sure they've done so accurately. But their overall accuracy in the sorting of chicks remains extremely high nonetheless. The fact that the feature accessed can be given a relational characterization obviates the need to posit a special "intrinsic maleness" detected by the expert chick sexers. Instead, a good explanation of what's going on borrows from Simon's model of chess expertise. Experts develop chunks of theoretical information which allow them to automatically access the relevant features when they are present. Because the theory acts behind the scenes, the experts themselves are unaware of it and have trouble capturing what they are doing in informative terms. But nonetheless, it is still reasonable to posit descriptive, relational information to account for their expertise. The experts may employ such structures without being aware of it. Even though the theory employed by the experts isn't explicitly described and taught, because of the relational features accessed it makes sense to posit relational descriptions to do the referential work.9 A final example comes from more everyday experience. Still, it is arguably a good example of expert perception, and one familiar to all of us. It thereby serves to provide intuitive support for my model. Cross-culturally, men find women with a certain ratio of hips to waist attractive.10 If this "golden ratio" is present, the woman in question will seem attractive to men, whatever their culture. Here, as in the chick-sexing example, it is plausible that because they
9 See Martin 1994; on chick-sexing "intuition," see 224ff.
10 Fan et al. 2004.

are accessing a relationally describable feature--hips-to-waist ratio--they are employing relational descriptions to do so. But this is not at all how it seems to the perceivers in question. They just see that a woman is attractive, that she has the certain "je ne sais quoi" that makes a woman beautiful. And what's more, they will reject the claim that all they are seeing is a particular ratio of hips to waist. Rather, they'll contend that there's some special intrinsic feature of the woman that accounts for her attractiveness, something more than just the relation of her hips to her waist. And the offered description of hips-to-waist ratio will not seem to capture what they are seeing. It will seem to leave something out, something defying description that they know when they see it. But, for all that, it's plausible that all that's going on is the application of automated, nonconscious descriptive concepts. It may not seem that way to the men in question, but it fully explains the data. Again, it is clear that these automatic descriptions are not explicitly learned; indeed, it's not unreasonable to suppose that such referential mechanisms are an innately determined aspect of our psychology. Still, this does not in any way rule against their being descriptive. We simply may be built to automatically detect hips-to-waist ratio, however this process is instantiated in us. And given the relational nature of the stimuli perceived, it's plausible that the underlying referential mechanisms are descriptive. However, it may seem that there is still a crucial difference between cases of expert perception and first-person access. While experts can, with enough prompting, provide some sort of explicit relational description of what they are accessing, that never seems to be the case in first-person access.

While this is perhaps prima facie true (I believe that we can describe much more than we think in our conscious experience), the difference is one of degree, not of kind. First, there are a range of cases where experts cannot make explicit their theorizing. This occurs even in chess, which is especially amenable to description. Still, masters will often not have an explanation of what it was that they saw that made them think their queen was in danger, or that an opponent's king was weak. A complex description of the board will not seem to capture how the master did it, from his point of view (see the Capablanca quote above). Further, in wine tasting, music appreciation, and chick sexing, there are many cases where words do not seem adequate to capture the features at issue, despite the fact that we can accurately pick out the relationally-described feature being accessed. And the case of attractiveness makes this point most salient--even though it is hips-to-waist ratio that men access, that description doesn't seem to capture what they perceive. But we have good theoretical reason to characterize the object of their perception in relational, descriptive terms. This case is closest to the introspective case, so it is not surprising that we have less and less to say about what features we are accessing. These are the things we are built to access, so to speak, and these are the descriptive concepts we learned earliest. Given these developmental facts, it's no surprise that we find ourselves at a loss to describe what we access. But, again, good theory trumps first-person intuition regarding the underlying mechanisms of our access. Expert perception provides

a good model of access, so it is reasonable to extend the descriptivist view to cases of introspection. To summarize, expert perception is plausibly underwritten by descriptive theory.11 The application of this descriptive theory is automatic--it occurs without the experts being consciously aware that they are applying the theory. Because of this, the features that the experts perceive appear immediately; experts are not aware of the referential mechanisms they are employing, so the perception appears direct and unmediated by inference or other processes. Finally, because of the automatic and nonconscious nature of the application of the expert's descriptive theory, the features they thereby access appear both independent and (often) indescribable. Experts simply see the features they access. Because they are unaware of the theory application, the features appear disconnected--they simply appear to the experts. The features seem independent. Further, because experts often cannot reconstruct the theoretical reasoning employed in their perception, explanations of that reasoning will seem incomplete. They will seem to leave out an indescribable, intrinsic core of the perceived feature. It is clear from chess expertise that the theory employed by chess masters is descriptive: there is little temptation to posit intrinsic qualities of the chessboard or of chess masters' perception. The same plausibly holds true for musical expertise and wine tasting. Musicians and music appreciators, as well as wine experts, learn explicit terms for the features they come to automatically access. This explicit training provides terms to "hang" their new perceptions
11 See section 4.1, pages 1-4.

upon. In the absence of such explicit training, the subtle features accessed by experts emerge much more slowly or even not at all. This indicates the facilitating effect of descriptive theory and its employment in expert perception. And even in chick sexing and the appreciation of attractiveness, it is reasonable to hold that nonconscious theory is at work. Chick sexers access relational features: the various relations between subtle aspects of chick anatomy. And the same holds true in the appreciation of physical attractiveness. The ratio of hips to waist is not an intrinsic feature of those judged to be attractive; rather, it is a feature fully amenable to relational description, to which we respond as if it were simple and unanalyzable. If a relational feature is being accessed, it makes

sense to posit a relational description to do so. And while chick sexers and the appreciators of physical attractiveness may reject this characterization, it is only if we embrace CMoP that we would accept their claims as reason to reject the descriptive explanation. I conclude that expert perception offers a good model of descriptions providing seemingly immediate access to seemingly independent and indescribable features. It remains to be seen if we can transfer this idea to first-person access. But before attempting to make good on that proposal, I will briefly consider some contrary readings of expert perception. Perhaps I have misread this evidence and it does not offer suggestive support for my model.

4.2.3 Expert Qualia and Dreyfus

It might be argued that what occurs in expert perception is the creation of new intrinsic qualia. As the expert develops her perceptual expertise, intrinsic qualities of experience that were not present before emerge. For example, it may be that wine tasters, over the course of repeated exposure to new wines, develop qualia which are then accessed directly, either because of the self-presenting nature of qualia, or by way of p-concepts. Likewise, in the perception of physical attractiveness, it may be that subjects possess qualia responsive to the relational feature of hips-to-waist ratio. Then it would follow that it is not descriptive structures that are responsible for the seemingly immediate, independent, indescribable qualities; rather, it would be the presence of new immediately accessible, independent, indescribable qualia--intrinsic qualities of mental states not describable in terms of causal/functional role. However, there seems to be little to recommend this strategy beyond prior adherence to CMoP and the directly accessible qualia it reveals. The more esoteric the example--chess, for instance--the less plausible such a move is. Further, it offers no explanation of how such qualia are formed in response to experience, nor a story about how the presence of explicit descriptive instruction facilitates the process. It seems implausible to posit special "chess qualia" unless one is already sure that qualia must be present. And while it may seem more intuitive in the cases of music, wine, and physical attractiveness, the plausible analogy between these cases and other more obviously theory-mediated cases of expert perception puts pressure on this claim. What else can

be offered in support beyond a pledge of allegiance to CMoP and qualia? Indeed, it is in the nature of such a proposal that no positive evidence can be given beyond the testimony of perceiving subjects. But, as we have seen, this is not licensed by folk psychology--we do not ordinarily take folks to be authoritative about the underlying processes at work in conscious perception. Thus, additional argument or evidence is required if we are to accept such claims. None is forthcoming, so we can continue to operate under the assumption that the descriptive reading of expert perception is correct. An additional challenge to my reading of expert perception comes from the work of Hubert Dreyfus (2005, manuscript). Dreyfus argues that in the development of expertise, no mental representation at all is required in achieving the expert's final competence. Rather, experts develop a set of "motor affordances," abilities to automatically cope with and properly anticipate situations by using an intertwined range of perceptual cues and bodily reactions. If experts do not represent features in their domain of expertise at all, then nonconscious descriptive mental representations can't be mediating their perception. However, Dreyfus's conclusion rests on an overly-strong reading of the term "mental representation." Dreyfus holds that experts are not using mental representations because they no longer engage in conscious application of taught rules and heuristics. Because they act automatically, without any deliberate conscious effort, he concludes that no mental representation is present.

But this is too quick. It is certainly the case that something has changed in the minds of experts. Why couldn't it be that the representations recede from consciousness and become automated in their application? It is only if we hold that such representations must be conscious that we can rule out their presence simply by reflecting on the before and after experiences of trained experts. And the best explanation of the new automatic skills of experts is the presence of such nonconscious representations. The presence of nonconscious descriptive representations explains why new perceptual features emerge, and why those features correspond to those in the theory explicitly learned by the novice. What's more, it's not at all clear how affordances account for the way things appear to the expert. Why would possessing a range of bodily-mediated skills alter the way things look to a subject? It's not that things look the same to the expert; rather, they look crucially different: the relevant features now "pop out" of a scene. Affordances work at all levels of motor coordination. In order to walk through a crowded room, or to avoid stumbling on a rocky beach, we must coordinate subtle movements of our bodies with ever-changing aspects of our environment. But these acts are generally not accompanied by conscious awareness of the features of the environment we are successfully traversing. So we are left wondering what accounts for the way things consciously appear to the expert. And here it is plausible that experts are aware of the features in their domain of expertise by representing them, albeit nonconsciously. Finally, it is not fully clear that Dreyfus's proposal need be in conflict with mine. I am not committed to a theory of mental representation that holds that

representations must be conscious, nor that they must be explicitly accessible to the subject. Instead, I hold that mental representations are characterized by the role they play in a subject's mental economy. The best explanation of a subject's behavior and experience may require positing representations. But in doing so we need not take on board the features of mental representation that Dreyfus finds objectionable. And, in any case, Dreyfus must offer a theory of the changes that occur on the subject's end when expertise is developed. This is likely to require positing some changes and additions to their minds, whether or not we call such things "representations." So long as one avoids an overly-intellectualized version of such states, I do not think Dreyfus and I will be in conflict. And it should be noted that Dreyfus's proposal offers no aid and comfort to friends of qualia and p-concepts (whether reducibly explainable or not). Dreyfus would accuse me of positing too much structure, while qualophiles posit even more. I conclude that the descriptivist reading of expert perception is correct. It offers a way to understand the purportedly difficult features of the appearance of conscious states, while clearing the way for a fully worked-out descriptivist explanation of why things appear as they do. The next task is to sketch just what the nonconscious descriptive theory underwriting first-person access amounts to. It may seem that even if I am correct about expert perception, I still have not discharged the main burden: offering an explanation of how things seem to us in introspection. Here, one might worry, I will not be able to produce intuitively plausible descriptive characterizations of our conscious states, states that we

seem to pick out in terms of what it is like for us to be in them. I now turn to this important task.

4.3 A Descriptivist Model of First-Person Access

4.3.1 Conditions of Adequacy

In this section, I will sketch out a descriptivist model of first-person access, one that explains both how we cognitively access our conscious states and why those states appear as they do. There are several requirements on such a model. First, it must explain how we are aware of our conscious states at all. This requirement, recall, posed considerable problems for the p-concepts approach. Second, it must explain the particular features of conscious states we can be aware of. Third, it must explain why first-person access seems direct and why the features accessed seem independent and indescribable. Fourth, the model ought to be empirically confirmable. In what follows, I'll lay out the particular sorts of descriptive contents the model employs to explain our awareness of our conscious states, and I'll reiterate why the model, following upon the suggestion derived from our examination of expert perception, delivers the right appearances. Finally, I'll suggest possible lines of empirical evidence that might support the view. I'll defer considering whether such evidence is actually available until chapter 5. In more detail, a descriptivist model must provide at least some characterization of the sorts of descriptions employed in first-person access. If it does not, we will not allay the worry that there are special intrinsic, indescribable

cognitively-accessed features of consciousness, features that no descriptivist model can account for. The model must therefore provide an account of relational descriptions for the seemingly intrinsic features of conscious states we can access. Further, the model must account for the features of the appearance of first-person access we've focused on in the preceding several sections. It must explain why first-person access seems immediate, and why we seem to cognitively access independent and indescribable features of consciousness. In addition, the model ought to provide an explanation of the so-called "transparency of introspection" (Harman 1990). Introspection, according to some, does not make us aware of features of our conscious states as such. Rather, it makes us aware of features of perceptual objects, though in a more focused and attentive manner. When we focus upon our experiences, all we access are more features of the objects we perceive. There is no reason to hold, on this view, that we are aware, in the seemingly direct manner of first-person access, of any features properly belonging to our mental states. The transparency of introspection should not be taken at face value. We can be aware of conscious qualities as features of our mental states as such. We can be aware, that is, in a seemingly direct manner, of our mental states and their features. It is only if we hold that this awareness must involve features that seem qualitatively different from those involved in perception that we would conclude that we can never be aware of our mental states and their qualities. But this is to assume that the only model for introspective awareness is a perceptual model, one involving distinct higher-order qualitative features.

Instead, we can embrace a thought-like model of access, one employing a battery of theoretical descriptions. When I pick out my states in this way, I can be aware of them as such, but without there being any difference in the qualities I am aware of. Introspection is transparent--no new qualities emerge--but I am still aware of my mental states as such.12 Still, there is something requiring explanation in the claim of transparency. When we introspect upon our conscious states, we are not thereby aware of new qualities. It does not seem that something new is added to the experience--Harman is quite correct that we simply seem to pick out the same qualities, perhaps with a more attentive focus. Further, there seems to be a very tight fit between the qualities we can be aware of in perception and the qualities we can introspect. Once a subject can introspect, at about 3.5 years of age, when she learns to perceive a new quality she automatically seems to be able to introspect herself perceiving that quality. There does not seem to be an additional learning phase in which she needs to develop the ability to introspect that quality. So even if the transparency thesis is not literally true, there are appearances here requiring explanation. A model of access ought to provide that explanation.

4.3.2 Characterizing the Model

Turning now to the characterization of the model, the phenomenon of expert perception suggests that seemingly direct access to apparently independent and indescribable features can be achieved by the automatic application of a

12. For extensive criticisms of the perceptual model, see Rosenthal 1997, 2002a, 2004. See also Shoemaker 1996.

descriptive theory--a collection of domain-specific, interdefined descriptive concepts, involving nonobservable explanatory posits. Thus, to cash out this analogy, we must sketch out the automatically-applied theory at work in first-person access. The preliminary step in this task is to note the range of similarities and differences between qualities cognitively accessible in introspection. Here, as elsewhere, we need to stick to the commonsense characterization of the appearances provided by folk psychology. First, however, we should note a distinction between qualities perceived as features of perceptible objects and qualities accessed as features of our mental states. People will describe an object as red or green, sweet or sour, rough or smooth. But they will also describe their sensations in these terms. We commonly say we are having a red experience or an experience of a sour taste. Further, we ordinarily recognize that things aren't always as they appear to us; we recognize a commonsense distinction between appearance and reality. To explain what happens when we misperceive or hallucinate, we recognize that there must be features of mental states accounting for our mistaken perceptual beliefs and accounting for how things seem to us in misperception--we need to posit features of mental states to account for how things seemed to us when we formed our incorrect perceptual belief. Thus, we ordinarily recognize that the qualities we seem to perceive are in some sense features of our mental states, and can be accessed as mental qualities. While there may be theoretical reasons to deny the existence of either one of these groups of qualities, our folk psychology plausibly commits to both of them. We can label the two groups "perceptual

qualities" and "mental qualities," respectively. In what follows, I'll pick out perceptual qualities--qualities that perceptual objects seem to have--by using the ordinary quality words. Thus by "red," I shall mean "perceptual red," by "sour," I shall mean "perceptual sour," and so on. To refer to mental qualities, I'll affix the term "mental" to the ordinary quality words: "mental red," "mental sour," and so on. Turning to the descriptive terms themselves, intuitively we can be aware of the various qualitative similarities and differences holding amongst our conscious sensations. We can be aware, in a seemingly direct fashion, that mental red is

more similar to mental purple than to mental green. Likewise, we can recognize that mental sour is more similar to mental bitter than to mental sweet; mental tinny sounds more similar to mental bright than to mental muffled; and mental rough feels more similar to mental sharp than to mental dull. We can even recognize the similarities and differences holding amongst our bodily sensations: sharp pains feel more similar to stabbing pains than to dull pains; tickles feel more like caresses than like scratches, and so on. All of these descriptions of similarity and difference avoid reference to any intrinsic quality of experience; rather, they are specified in comparative terms.13 The various relations of similarity and difference holding among the sensory qualities define what are termed "quality spaces" for those qualities. A quality space can be determined by considering the judgments of subjects: the judgments of
13. In terms of a 3-place similarity relation: X is more similar to Y than to Z; or a 4-place similarity relation: X is more similar to Y than Z is to W. See Clark 1993.

subjects provide the various dimensions along which a quality can differ. In this way, we can form quality spaces of color sensations, tactile sensations, olfactory and taste sensations, and even sensations of shape and size. All the accessible similarities and differences that hold among sensations can be plotted into quality spaces. This, in turn, provides a means of relationally describing each quality: we describe its relative position in the relevant quality space. A sensory quality is mental red as opposed to mental green because of its location in the mental color quality space. Likewise, a taste sensation is mental sour as opposed to mental sweet because of its location in the mental taste quality space. These relational descriptions provide unique descriptive identity conditions for all the qualities we can cognitively access from the first-person perspective.14 Here, then, is a sketch of how we might access conscious qualitative states. We automatically apply, in response to the presence of conscious sensations, a descriptive theory specifying qualities in terms of their location in commonsense mental quality spaces. Let's say we perceive a red apple. We token a perceptual state comprised of the intentional content that a red apple is present and that it possesses a range of qualitative features, including mental red, mental round, mental shiny, and so on. We then focus our attention on our conscious state, and, using the theory--which picks out the qualities of our conscious state in terms of location in various quality spaces, picks out intentional content and mental attitude in terms of causal/functional role, and so on--represent ourselves as being in a state that is about a red apple, possessing

14. See Shoemaker 1975; Rosenthal 1991, 1999a, 1999b, 2005a; Clark 1993; Palmer 1999, for extensive elaboration and defense of these claims.

a quality that is more like mental purple than mental green, more like mental orange than mental yellow, more like mental violet than mental blue, that is "hotter" than mental green sensations, etc. It is also more mental spherical than mental rectangular, more like mental bright than mental dull, and so on. In this way we determine a unique location in several mental quality spaces, including the mental color space, the mental visual shape space, the mental surface quality space, and so on. And because we are representing ourselves by way of descriptions, we have an explanation of why we are thereby aware of that state and its various features. We cognitively access our conscious sensations.15 But one might worry: what of the allegedly intrinsic features of the qualities of apples that we seem to access, the intrinsic qualities that allow us to place the qualities in spaces at all? Indeed, Joseph Levine contends that any relational story of the kind I am offering will leave out this crucial feature. He writes,

It seems that when we start with an intuitively intrinsic property and reanalyze it as a relational property, that process involves an intrinsic residue... Perhaps what seems so problematic about the case of making qualia relational is that there isn't a plausible intrinsic substitute. The intrinsic buck seems to stop here. (Levine 2001, 101)

Here, the lessons learned from the example of expert perception come to the fore. The qualities we access seem intrinsic because we are unaware of the theory we are applying and the nature of the mechanisms involved in its deployment. There seem to be independent, indescribable features of
15. Cf. Rosenthal 1999a, 2005a.

experience because we are unaware of the relational descriptions we are in fact applying. And, as in the case of expert perception, this does not entail that such features are actually there. They appear to be there, perhaps, but this does not reflect the mental reality. The model therefore accounts for the features we want. First, it offers a satisfactory explanation of the mechanisms of awareness--of why we are aware of our conscious states. We are aware of them because we descriptively represent them. And because we have plausible theoretical models of descriptive representation, the view is not saddled with the problems plaguing the p-concepts approach. What's more, we have a good explanation of the features we can access. First-person reflection on conscious sensory qualities reveals qualities that possess a range of qualitative similarities and differences. We are aware that mental red is more like mental purple than mental green. This is explained by the fact that we pick out mental red in just these terms. This is not a descriptive add-on, as in the p-concepts approach; rather, it is part and parcel of how we pick our conscious qualities out. And we have a plausible explanation of the appearance of the three purportedly troublesome features of conscious sensations accessed from the first-person perspective. Because the theory's application is automatic, we are unaware of its presence. Therefore, the access seems immediate. And because we are unaware of the theory's presence, we simply receive the results of the theory's application: we simply "see" mental red, without thereby becoming aware of the processes and theoretical (though nonconscious) inferences we

employ in the act of awareness. The qualities seem independent. And because we are unaware of the descriptions the theory employs, and because simply hearing an explicit characterization of those descriptions doesn't by itself allow for automatic application--that is, being told "more like mental purple than mental blue" does not conjure up a picture of mental red--an explicit characterization seems to leave something out, something that cannot be described in informative terms. If it were so describable, we would, it is thought, be able to instantly know what it's like for folks to see red just by hearing the description. So there must be some indescribable quality of sensation present. But this does not follow. We don't instantly "know what it's like" simply by hearing the theory. But this does not entail the presence of intrinsic qualities; instead, it just means that we don't possess the ability to automatically apply the theory. Still, we may in fact be applying such a theory unbeknownst to us, from the first-person perspective. I'll have more to say about the issue of indescribability and "what it's like" in the final section of chapter 5. For now, it is enough to see how the descriptive theory accounts for the appearances. There is a remaining issue, however. It might be argued that while my model does a good job of explaining how we cognitively access conscious qualities, I've offered no explanation of how it is that we pick out these qualities as conscious. And, indeed, Chalmers and the other theorists surveyed so far hold that we have a special "phenomenal concept" of consciousness, one that defies characterization in relational terms. If my model holds that we pick out conscious sensations, how do we describe them as conscious? If we cannot

provide a plausible relational characterization, there will remain an undischarged intrinsic element at the heart of my model, and the problems will reemerge at this level. But there is a plausible relational characterization at hand, one that explains how we pick out our states as conscious and fits with commonsense folk psychology. Following the work of David Rosenthal, we can hold that conscious states are states we are aware of ourselves as being in. Rosenthal argues that we wouldn't intuitively consider a state we are unaware of being in as conscious.16 And this accurately tracks the way we ordinarily distinguish conscious from nonconscious states. To run through the examples one more time, if I am jealous, but unaware of myself as being jealous, we wouldn't hold that my jealousy is conscious, even if it is the dominant cause of my behavior. And this holds true even in the case of sensory states. I may have a headache all day long, but the headache may fade in and out of consciousness. A reasonable commonsense way to characterize the situation is that I had a headache all day long, but wasn't always aware that I did. And when I wasn't aware of the headache, we'd often say the headache at that time was not conscious. There is nothing particularly jarring or incoherent, from a folk-psychological perspective, about characterizing things this way. And this is good evidence that such a characterization fits with commonsense folk psychology. It's not that this is the only way we have to characterize the situation, but it isn't conceptually inconsistent or incoherent to do so.

16. Rosenthal 1986, 1997, 2002a, 2004, 2005b. Many of these papers are collected in Rosenthal 2005b.
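The relational characterization can be made vivid with a toy sketch. The code below is my own illustration, not Rosenthal's formal machinery; the names (`State`, `Subject`, `targets`) are hypothetical. The point is just that "conscious" is defined purely relationally: a state counts as conscious exactly when some other state of the subject represents it, with no intrinsic "phenomenal" property invoked anywhere.

```python
# A toy sketch of the relational characterization of consciousness:
# a state is conscious just in case the subject is aware of herself as
# being in it -- i.e., some other state of hers represents it.
from dataclasses import dataclass, field

@dataclass(eq=False)
class State:
    content: str
    targets: list = field(default_factory=list)  # states this state is about

@dataclass
class Subject:
    states: list

    def is_conscious(self, s: State) -> bool:
        # Purely relational test: is s the target of some higher-order state?
        return any(s in other.targets for other in self.states if other is not s)

headache = State("pain in head")
jealousy = State("jealous of rival")          # drives behavior, but unnoticed
awareness = State("I am in a headache state", targets=[headache])

me = Subject([headache, jealousy, awareness])
print(me.is_conscious(headache))  # True: I'm aware of myself as having it
print(me.is_conscious(jealousy))  # False: nonconscious jealousy
```

The day-long nonconscious headache is modeled the same way: remove the higher-order `awareness` state for a stretch of the day and the headache persists while ceasing to be conscious.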

And, what's more, this way of characterizing things fits well with our ordinary ways of thinking about introspection. If I'm in no way aware of a state, it isn't available for introspection--there is nothing I notice to introspect in the first place. With this in mind, I suggest that we pick out our conscious states as conscious by picking them out as states we are aware of ourselves as being in. But one might worry that the concept of awareness itself is "phenomenally loaded." Doesn't awareness itself bring with it the idea that we are conscious, and thus we are still left with an intrinsic remainder? Arguably, it does not. Looking at the examples introduced to show that our folk concept of consciousness can be relationally characterized, we can see that folk psychology licenses cases of awareness of something without consciousness. To take Armstrong's shopworn example, I may drive for a long distance, and become lost in thought about philosophy as I do. I may drive for miles without consciously seeing the road, and then I may "snap to" and realize I wasn't conscious of the road for miles. But I didn't drive off the road, and I may well have negotiated a range of traffic obstacles. In this case, we'd ordinarily say, I was in one sense aware of the road, but not consciously. Further, there are a range of cases from ordinary life and from cognitive science that point to the same conclusion. In cases of subliminal perception, I may not consciously see the object flashed on a screen, but it makes sense to say I was aware of it because it altered my subsequent behavior. Or a quickly flashed and then "masked" word may go by outside of conscious experience, and still influence my behavior in disambiguating words or completing word stems.17 We'd ordinarily say that I was aware of the stimuli, but
17. See Marcel 1983a, 1983b.

not consciously. Or again, I may have new shoes that pinch my feet all day, and may even limp to avoid putting pressure on my feet. But the pain itself need not be conscious all day long. Yet to explain my limping behavior, we'd say that I was aware of the pain, just not consciously. Again, there is nothing folk-psychologically incoherent about such a characterization, indicating that our commonsense view of the mind allows for awareness independent of consciousness. Awareness, therefore, does not automatically bring consciousness into the picture. We can thus pick out our conscious states as states we are aware of ourselves as being in without introducing anything intrinsically phenomenal. There is therefore no need to bring in a special intrinsic phenomenal concept of consciousness. We have a good relational characterization present in folk psychology, one that fits with ordinary ways of thinking and talking about consciousness. So, to return to the example above, introspecting on the perception of a red apple involves automatically applying a theory which holds, in effect: "I, myself, am in a state I'm aware of being in, that's about a red apple, and has qualities more like mental purple than mental green, more like mental spherical than mental rectangular, etc." I will now briefly turn to the issue of the so-called "transparency of introspection." This requires a detour through a possible developmental story for the model. The transparency of introspection, to repeat, is not literally true: we can be aware of the qualities of our conscious sensations as such. But it does point to several important features of the way introspection presents itself to us

that should be addressed. We do not seem to become aware of new qualities in introspection; they just seem to be the same ones we are aware of as belonging to objects in perception. And there seems to be a tight connection between what we can perceive and what we can introspect: once we have the general ability to introspect, it seems that when we perceive a new quality, we can also introspect it. This close tie between perception and introspection is explained on my model in the following way. First, as young children, we develop concepts for the qualities we perceive. A plausible story about this process is that we develop concepts that place the qualities in relational quality spaces: red is more similar to purple than to green, and so forth. In this way, we come to be aware of the qualities possessed by objects in the world. Around age 3.5 years, there is a reliable transition that occurs in children. At this age, they pass what in psychology is called the "false belief task." They come to realize that others may not have the same beliefs about the world that they do. This involves developing and applying an appearance/reality distinction. Children come to realize that things may not be as they appear, and that one may believe something on the basis of appearance that turns out not to be the case. This in turn prompts the development of a theory to explain people's (and their own) behavior in cases of perceptual error. They posit internal states with qualities corresponding to the perceived qualities of objects in the world. This explains why things appear as they do in cases of misperception, and why people form incorrect perceptual beliefs. That is, they develop the theory posited by my model, a theory that picks

out their conscious states in terms of their relational features, including features characterized by their location in quality spaces. But they need not develop a whole new range of concepts for the qualities of their mental states. Rather, they co-opt the concepts they've already developed to pick out the qualities of perceived objects. These concepts pick out perceptual qualities by their location in quality spaces, and, due to the automatic application of the theory and the lack of conscious access to the theory itself, it seems to the children that they are directly accessing intrinsic features of objects. In turn, when they co-opt these concepts to apply to mental states, they maintain both the relational quality-space characterization and the automatic application, with its attendant effect on the appearances. They are thereby aware of qualities of their mental states that are characterized in just the same relational space as the qualities of perceptual objects. The mental qualities thus appear the same as the corresponding perceptual qualities. And what's more, because they are unaware at any level of the application of this theory, the qualities accessed in both cases seem to have intrinsic features. Nothing new appears, and thus introspection seems transparent in the sense specified. And whenever they develop a new perceptual concept--one that picks out the qualities of perceptual objects--it is a short and automatic step to co-opting this concept for introspection. So long as they possess a theory that picks out their conscious states, they can slot in the concepts for qualities developed at the level of perception. This explains the tight fit between what we can perceive and what we can introspect.18

18. Cf. Rosenthal 1999b, 2005a.
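The quality-space idea behind this co-opting story can be sketched computationally. The code below is a minimal illustration of my own, not a formal model from the literature; the particular judgments and names are hypothetical. It shows that a handful of 3-place comparative judgments ("X is more similar to Y than to Z") can give each quality a unique relational description, and that the very same descriptions can then be co-opted to pick out the corresponding mental qualities, with no intrinsic labels used anywhere.

```python
# Hypothetical folk similarity judgments of the 3-place form
# "X is more similar to Y than to Z" -- the raw data for a quality space.
JUDGMENTS = {
    ("red", "purple", "green"),    # red is more like purple than like green
    ("red", "orange", "yellow"),
    ("purple", "red", "green"),
    ("green", "yellow", "purple"),
    ("orange", "red", "green"),
    ("yellow", "orange", "purple"),
}

QUALITIES = {"red", "purple", "green", "orange", "yellow"}

def relational_signature(q):
    """A quality's 'location' in the space: the set of comparative
    judgments it figures in. Nothing intrinsic to q is used."""
    return frozenset(j for j in JUDGMENTS if q in j)

signatures = {q: relational_signature(q) for q in QUALITIES}

# Each quality gets a unique relational description in this toy space...
assert len(set(signatures.values())) == len(QUALITIES)

# ...and the same signatures can be co-opted for mental qualities:
# "mental red" is just whatever occupies red's location in the space.
mental_space = {"mental " + q: sig for q, sig in signatures.items()}
print(mental_space["mental red"] == signatures["red"])  # True
```

Note that merely reading off a signature does not "conjure up" the quality, which is the point made above about indescribability: possessing the description and being able to apply it automatically are different abilities.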

This, of course, differs from the positions of Harman, Dretske, and Tye on transparency. These theorists take transparency to be a key justification for a representational theory of consciousness, notably a "first-order" (FOR) version of such theories. FOR holds that a mental state is conscious when it has the right representational content and when it is disposed to interact with our beliefs and desires in the right way. Tye, for example, holds that conscious states are PANIC states: states marked by poised, abstract, nonconceptual, intentional content. For Harman, Dretske, and Tye, transparency removes the need to posit "intrinsic qualities of experience," because the only qualities we ever access in experience are qualities of represented objects, qualities that can be fully accounted for in intentional, representational terms. While I accept that transparency presents an important fact about how things appear to us which any theory must account for, I reject a strong reading of the representationalist claim--that we are never aware of our mental states or of their qualities as such. I hold that we are aware of our mental states as such, by way of a descriptive theory, but not in a way that requires the presence of qualities that appear different from the qualities we access as features of perceived objects. I do accept a weaker reading of the representationalist claim, that we do not access intrinsic qualities of experience. I hold that the qualities only seem intrinsic, and that, I believe, is generally compatible with a representationalist approach. But it is not the strong conclusion that Dretske and Tye, for example, want--one that rules out any "higher-order" theory of consciousness. Though I believe my

model of first-person access is compatible with first-order representationalism, I believe these views have other fatal flaws.19 This concludes my sketch of a descriptivist model of first-person access. We have a grasp of how it makes us aware of our conscious states at all, how it makes us aware, in relational terms, of the qualities we are aware of in first-person access, and how the model accounts for the purportedly problematic features of access. Because we automatically apply the theory, our access seems direct, and the qualities we thereby access seem independent and indescribable. But we need not take the appearance of these features to indicate anything problematic for a satisfying explanation of consciousness. To reiterate, mental appearance need not provide a clear window to mental reality.

4.3.3 Empirical Evidence

A final issue must be addressed in this section. What sort of empirical evidence might be offered in support of the model? If no empirical evidence can either confirm or disconfirm the model, that would be an unhappy result. Everything would then turn on the initial characterizations of mental processes; in a sense, everything would be definitional. There is nothing in principle wrong with this: so long as we have properly justified our characterizations in terms of commonsense folk psychology, the approach is warranted. But there would be little else to sway those who are unsure of the model, or who accept various antimaterialist intuitions. If, on the other hand, we can point to a range of
19. Harman 1990; Dretske 1995; Tye 1995, 2000. For "higher-order" views see Armstrong 1968, 1980; Rosenthal 1986, 1997; Lycan 1987, 1996. For criticisms of FOR see Rosenthal 1997, 2002; Carruthers 2000.

empirical evidence that provides independent reason to accept the model, we will be in a stronger position. And while the interpretation of empirical evidence is itself to some degree hostage to the initial characterizations at work (either tacitly or explicitly), to the extent that the evidence is reasonably interpreted in a manner that supports my model, some measure of support will be attained. It is often the case that if one does not have a particular axe to grind in the consciousness debate, then one will interpret empirical results without recourse to intrinsic qualities and indescribable features of experience. Of course, this could just show that empirical researchers are ignoring important features of experience or illicitly assuming materialist characterizations. But it also may be that the best reading of the results points in that direction. And I will argue that this is indeed the case. But for now, I will only lay out the sorts of things that could empirically confirm my model. I'll leave the task of interpreting actual results to chapter 5. There are two main lines of evidence for my model. The first supports my rejection of the introspectionist methods employed by Chalmers, Levine, Block, and the defenders of the p-concepts approach. This evidence is most famously cataloged in Nisbett and Wilson's 1977 review paper, "Telling More Than We Can Know." I'll go over this evidence in detail, as well as some more recent results in the same vein, in the next chapter. For now, this does not show the empirical confirmability of the model as much as the empirical plausibility of my theoretical ground-clearing for the model. But that may be even more important, given the role of "pretheoretic" intuitions in this domain. The second line of

evidence comes from the expert perception literature. This, I will argue, offers direct empirical support for my model, both by justifying my reading of the suggestive analogy offered by expert perception, and by offering actual evidence of descriptive concepts in first-person access. I will revisit two well-studied areas of expert perception, chess expertise and wine tasting expertise. At this point, one may be tired of experts, but digging into the empirical mud a bit deeper is a useful task, both to clarify just what the data show, and to help make clear the possibilities and pitfalls of empirical research in this area. In any event, there is clearly empirical data available to confirm or refute my model. But, as should be clear from the discussion of expert perception above, a high degree of theoretical stage-setting is necessary to interpret data about first-person access and consciousness. This does not render such data useless, but it does make the evidential situation less direct than one might want. But if we do not attempt to tie our theories to empirical data, we might as well just be making all this up. And that would be an unhappy, though perhaps not unfamiliar, situation for empirically-minded philosophers to be in. With that in mind, I'll turn to the empirical evidence in chapter 5.

CHAPTER 5: EMPIRICAL EVIDENCE AND THE KNOWLEDGE ARGUMENT

In this chapter, I will consider the empirical evidence that can be brought to bear in support of a descriptivist model of first-person access. In addition, I will consider empirical evidence offered in support of my general claim that first-person introspective access is a guide at best to how things appear in conscious experience, but not a guide to the underlying nature of our conscious states. This claim is crucial both to my criticism of Chalmers, Levine, and Block in the first two chapters and to my defense of a descriptivist model of access. If we can tell simply by introspecting that conscious states are nonfunctional or nonphysical, or if we can tell that we are not using descriptive concepts in accessing those states, my view is in trouble. I have already argued that our commonsense folk psychology does not license this kind of access. But it might be thought that this is a rather weak claim, one easily overridden by careful consideration of phenomenology and obvious inference from that phenomenology. I have attempted to show that this is not the case, but it would be to my advantage if additional evidence could be brought to bear. In the first two parts of this chapter, I will show that there is good evidence both that introspection is not a reliable guide to the underlying nature of our conscious states, and that we actually employ a nonconscious descriptive theory when accessing our conscious states from the first-person perspective. I will begin by laying out the necessary theoretical background for considering this evidence. Then I will present empirical results showing that introspection is in


many cases unreliable, and that we at times confabulate what conscious mental states we are in. This, I will argue, puts pressure on the friends of introspection to delineate just when introspection is a good guide to anything about the mind, in particular when it is a good guide to the mechanisms operating behind the scenes of conscious experience. Then I will present data on expert perception, from the literature on chess expertise and wine tasting expertise, to shore up the informal case presented in chapter 4 and to show that the best reading of the data indicates the presence of descriptive concepts. The last part of the chapter returns to philosophical territory. I will consider the so-called "knowledge argument" against materialism, to show how my model deals with this most compelling of antimaterialist intuition pumps, and to assuage any lingering worry that I haven't fully put to rest the intuitions underwriting the attempts to characterize a special explanatory problem of consciousness.

5.1 Evidence Against the Accuracy of Introspection

5.1.1 Common Sense and the Accuracy of Introspection

Before turning to the empirical cases, I want to reiterate that commonsense folk psychology does not license infallible introspective access. If one is committed to the idea that introspective access is infallible, or to the idea that introspective access is epistemically privileged even if there are particular circumstances where it can go wrong, the presumption will be that the empirical evidence is misguided or limited in scope. And what's more, the burden of proof to establish introspective error will be correspondingly high. The challenger of introspection

will have to show that a process ordinarily seen as infallible has gone astray--she will have to challenge an ordinarily unimpeachable witness. It will be relatively easy for defenders of introspection to counter these claims; the standards of proof will be high. But if introspection isn't presumed to be an infallible source of information about the mind, then the evidentiary situation is quite different. Instead, we will be looking for reasons to accept the deliverances of introspection, rather than challenging a well-established witness. The burden of proof will be on those who hold that introspection is useful for learning about the mind. Their task will be to say just when introspection is reliable, and just what sorts of things it can tell us about. The evidence from cognitive science looks very different in this context. These "pretheoretic" presuppositions are fixed by common sense, so it is important to be clear about how folk psychology characterizes introspective access. As stressed in earlier chapters, there is a range of ordinary cases which show we can be wrong introspectively about what's going on in our minds. To run through the cases again, I may be jealous and unaware of my jealousy (and likewise for emotions like anger and happiness, or moods like depression and mania). While one may argue that, for theoretical reasons, these cases are best classed as conscious, this goes beyond our ordinary ways of speaking about things. People at times refer to cases of nonconscious jealousy or anger--they find nothing incoherent or contradictory about doing so. And though this of course isn't the last word on the matter, it shows that folk psychology does not rule such cases out. What's more, we may have a headache all day which only

sometimes makes its presence known in consciousness. Or we may have shoes that pinch all day, causing us to limp. Saying our feet hurt all day, though the pain was not always conscious, is not a bizarre or incoherent way of speaking. And in considering cases from cognitive science, like subliminal perception or masked priming, it is certainly an open possibility that such cases involve nonconscious sensory states. There is nothing definitional ruling this interpretation out. In addition, there are many more moderate cases of introspective error accepted in folk psychology. I may initially take myself to be in one sensory state, and then realize I am in another. Consider Shoemaker's "fraternity hazing" case, where a pledge standing in front of a fire, in sight of red-hot pokers, has a piece of ice placed on his neck.1 At first, the pledge will seem to feel searing heat, and only a moment later will he realize that the sensation is one of cold. Or I may take a sip from a glass of orange juice that I believe to hold milk. Even though I like orange juice, the taste will seem disgusting to me--I will introspect myself as having a disgusting taste sensation. Only a moment later will I recognize that it's orange juice I'm tasting, and then I'll realize that the taste is one I know and like. Introspection, under the influence of expectation, can be in error about what sensations we are experiencing. Again, this is not to say this is the only way to unpack these cases. Rather, it's a viable folk-psychological option, one not ruled out beforehand as incoherent or contradictory. Such cases may be odd, but they are not in direct conflict with commonsense theory.

1 Shoemaker 2003.

What's more, it is widely recognized by friends and enemies of introspection alike that when we are tired, distracted, drunk, on drugs, etc., we make introspective errors, just as we make perceptual or reasoning errors. If I am tired or drunk, I may misjudge my current state and wrongly assume I'm fit to drive. Or I may have forgotten that I took an antihistamine and operate heavy machinery as if I were fully alert. And we can make more specific introspective errors in such cases. I may not notice painful bumps and bruises until the night after a bender. Or I may misjudge how clearly I can see the dart board in a drunken game of darts. Again, this is a plausible folk-psychological interpretation, and it is accepted even by those who hope to delineate a range of introspective cases that are free from the possibility of such errors. Finally, it is not part of folk psychology that we can tell, simply by introspecting, the underlying nature of our conscious states. This certainly goes far beyond the ordinary dictates of common sense. Most folks will say that they have no idea how the processes and mechanisms of the mind work, though they may feel that the process is mysterious. But this is just to say, following Armstrong, that we are not aware of the mind as physical (or functional); it is not to say that we are aware of it as nonphysical (or nonfunctional). This shift isn't licensed by common sense, and it is not part of our folk-psychological conception of the mind. And while many hold views about the soul and the possibility of an afterlife, this is not given in the appearances delivered by introspection. One may have other reasons for holding to these beliefs, but introspection alone does not support them. Generally speaking, people have little idea about the

underlying nature of the mind, and they are willing, in certain cases, to hold that nonconscious processes and mechanisms are responsible for everyday actions. Introspection certainly informs us about how things seem from the first-person perspective, as I've stressed throughout. But how things work behind the scenes is a different matter. In summary, commonsense folk psychology is not committed to the infallibility of introspection. There is a range of cases where, from our everyday perspective, things can go wrong when we introspect. What's more, folk psychology is silent about the underlying nature of the mind, and it does not hold that introspection gives us access to this nature. It does give us access to how things seem to us from the first-person perspective, but how things really are is another matter. There is an additional sense in which introspection might appear infallible. Introspection gives us access to how things seem. Can we be wrong, then, about how things seem to us? If not, then there does seem to be a level of infallible access. And this does, at first glance, seem in accordance with common sense. I might be wrong, perhaps, that I am in pain. But can I really be wrong that I seem to be in pain? How could that be? Indeed, I have already asserted, in chapter 3, section 1, that there is a compelling folk-psychological inference we make concerning the accuracy of our first-person access--there seems no space for error to occur here, no intervening medium or process to go astray. But even if we accept this claim--that we can't be wrong about how things seem to us--what does that buy us in the debates about the nature of

consciousness? All this amounts to is a statement of the appearances to be explained. I have characterized the problem of consciousness in just this way throughout. It is the problem of accounting for a particular group of appearances. But this in no way licenses one to read from how things seem to the underlying mechanisms that account for these "seemings." And that step is what is at issue. To say that I seem to be in a state with indescribable qualities is one thing; saying that I actually am in a state with indescribable qualities is another, even if we are infallible about how things seem to us. So this move does not aid the friend of introspection. Further, I will contend that the experimental results at issue all challenge the link from how things seem in introspection to the underlying mechanisms of mind, even those mechanisms couched in folk-psychological terms. So even if we are to grant this limited infallibility, it seems of little use in defending introspection. But there are reasons to doubt this limited infallibility nonetheless. While it may seem correct to say I can't be wrong that I seem to be in pain, can we say the same for "I can't be wrong that I seem to be in states with irreducibly intrinsic qualities"? Why should we accept this as infallibly given, even at the level of "seemings"? There appears to be a limit to how far we will go, commonsensically, in granting that someone is infallible about how things seem to them. We would want to know what such a person is talking about when they say it seems to them that they are in states with irreducible qualities. And we might well say to them, following Armstrong, "Are you sure that the states simply do not seem extrinsic, rather than positively seeming intrinsic?" And we might

conclude that this person is reading too much into how things seem to them. There is clearly a limit to how far the infallibility of seemings goes. And that indicates that we are open to outside refutation (and confirmation) about how things seem to us, in some cases. But where do we draw this line? In the face of odd empirical results, given that we sometimes question people's authority even on how things seem to them, why not think that we could even be wrong that we seem to be in pain? So the limited infallibility that can be gained in this manner is of no help to the issues at hand, and, what's more, it's not clear we should even allow the limited cases, given that not all seemings are infallible, even in commonsense folk psychology. I conclude that the burden of proof is on the friends of introspection to tell us in what circumstances introspection is reliable, and what sorts of information it delivers in those cases. I am not contending that introspection is never useful or is always in error about the mind; on the contrary, I have relied upon it in fixing the data a theory of consciousness must explain. But if we are to move beyond the appearances, we require justification, and in particular, we require justification for the specific introspective claims made in setting out a problem of consciousness. Obviously, there will be more to say about interpreting the results. In the section after next, I'll present a number of experimental results, beginning with Nisbett and Wilson's classic 1977 paper "Telling More Than We Can Know: Verbal Reports on Mental Processes." I'll also consider some of the criticisms of Nisbett and Wilson's work from the psychological literature. Then I will present

some more recent studies in support of Nisbett and Wilson's initial conclusion. But first, I will show that all of the theorists considered so far in this dissertation accept Nisbett and Wilson's results in one form or another. However, they all hold that the results do not threaten their particular uses of introspection. I'll argue that despite their claims to the contrary, the empirical results do indeed undermine their efforts to set up a special problem of consciousness.

5.1.2 Accepting the Evidence

In this section, I'll argue that Chalmers, Levine, and Block all accept the empirical evidence against the accuracy of introspection. However, they all believe that the evidence can be cordoned off, leaving their particular uses of introspection untouched. I'll begin with Levine and Block, who are explicit about their acceptance of Nisbett and Wilson's results. The evidence is more circumstantial with Chalmers, but a relatively clear case can be made for him as well. In a late section of his 2001 book, Levine notes the empirical evidence against the accuracy of introspection, writing:

I see no reason to doubt any of the vast body of data showing just how wrong we can be about what's going on in our minds; in fact, I argued... that when it comes to essences, or natures, we have no special epistemic access even in the case of our own mental states (Levine 2001, 138).

He adds a footnote to this passage stating, "Nisbett and Wilson (1977) is the locus classicus" (193, fn. 10). Here, Levine sounds a rather skeptical note on the deliverances of introspection, especially when it comes to the underlying nature

of our conscious states. But Levine employs introspection to establish the presence of an explanatory gap. He contends that we have "substantive and determinate access" to qualities not amenable to materialist explanation. Clearly, he does not intend his skepticism of introspective data to extend to this claim. Below, I'll contend that it does; for now, it's enough to note Levine's acceptance of Nisbett and Wilson's work in at least some contexts. Block's naturalistic approach purports to take the results of empirical science seriously; indeed, he holds that the methods of the natural sciences are the only reliable guide to finding out how the world works. His criticism (with Robert Stalnaker) of Chalmers and Jackson's aprioristic approach to metaphysics turns largely on an invocation of such naturalistic principles: Chalmers and Jackson's method fails because it does not take into account considerations of simplicity and the like that drive scientific theory. However, he acknowledges that the methodology he employs to characterize p-consciousness relies on introspection. He notes, "My approach is one that takes introspection seriously. Famously, introspection has its problems (Nisbett and Wilson, 1977...), but it would be foolish to conclude that we can afford to ignore our own experience" (Block 1995, 404). Using introspection, Block concludes that p-conscious states possess qualities that cannot be captured in functional, causal, or intentional terms. He attempts to counter the empirical worries by noting that in the absence of good evidence, a number of poor lines of evidence can sometimes add up to a defensible claim. For now, I will note that Block is alive to the presence of the empirical evidence, and tries to triangulate on reliable data in

the face of the worries. Below, I'll argue that, despite this tack, his approach is threatened by Nisbett and Wilson's results. Chalmers's 1996 book does not explicitly mention Nisbett and Wilson and related empirical findings. However, in a recent article detailing Chalmers's view on phenomenal concepts, he acknowledges that our introspective access is not always accurate. While he does argue for "limited incorrigibility" of beliefs formed by way of certain phenomenal concepts, he notes that

Phenomenal concepts and phenomenal knowledge require not just acquaintance [a special justification-entailing relationship], but acquaintance with the right sort of cognitive background: a cognitive background that minimally involves a certain sort of attention to the phenomenal quality in question, a cognitive act of concept formation, the absence of certain sorts of confusion and other undermining factors (for full justification) and so on (Chalmers 2003b, 251).

In taking note of "certain sorts of confusions," Chalmers recognizes the possibility of introspective error. These "confusions" are brought out by Nisbett-and-Wilson-type cases. And there is another crucial point at which Chalmers acknowledges the empirical pressure on his view of the reliability of first-person access. He holds by definition that "direct phenomenal beliefs," beliefs involving concepts partially constituted by the phenomenal qualities they refer to, cannot be in error. But he allows that we can form "pseudo-direct phenomenal beliefs," beliefs that seem to the subject to involve the requisite phenomenal concepts but do not. He writes:

If there are many pseudo-direct phenomenal beliefs, and if there is nothing cognitively or phenomenologically distinctive about direct phenomenal beliefs by comparison, then direct phenomenal beliefs will simply be distinguished as quasi-direct phenomenal beliefs [beliefs that only seem to involve concepts constituted by the quality picked out] with the right sort of content, and the corrigibility claim will be relatively trivial. On the other hand, if pseudo-direct phenomenal beliefs are rare, or if direct phenomenal beliefs are cognitively or phenomenologically distinctive, then it is more likely that incorrigibility will be non-trivial and carry epistemological significance (Chalmers 2003b, 245-246).

That is, if we can't tell from the first-person perspective when we are using full-blown phenomenal concepts, the incorrigibility of introspection amounts to the claim that when we are using direct phenomenal concepts, we have incorrigible knowledge of our states. But we can never be sure from the first-person perspective that we are using direct phenomenal concepts, rendering the claim epistemologically uninteresting. Chalmers is skeptical of the possibility of justifying a stronger "higher-order" incorrigibility claim, licensing incorrigibility with respect to the concepts used to access our conscious states. So the key questions become: how rare are cases of confusion, and to what extent do those confused cases seem, to the subject, indistinguishable from veridical cases? I will argue below that Nisbett and Wilson's results show that pseudo-direct phenomenal concepts are not rare, and that there is no way for the subject to distinguish those cases from

the first-person perspective. This undermines Chalmers's attempt to locate a secure epistemic venue for introspection, and since introspective access is crucial to his delineation of an alleged folk "phenomenal concept of consciousness," his view is threatened by the empirical results. In conclusion, all three theorists appear open to challenge by the empirical data. It remains to be seen if that challenge can be made good. In the next section, I'll present Nisbett and Wilson's results, and then I'll consider some responses to their claims from the empirical literature. After addressing those concerns, I'll argue that the results do indeed threaten the uses of introspection made by Chalmers, Levine, and Block.

5.1.3 The Empirical Results

The use of introspection as a means of gathering data about the mind has been under assault in psychology since the early part of the twentieth century. However, in a number of areas, subjects' reports on their mental states still played an important role, notably in social psychology. In 1977, Richard Nisbett and Timothy Wilson presented an extensive review article, cataloging a number of studies challenging the accuracy of introspection in even seemingly straightforward circumstances. In addition, they presented data from several experiments crafted to further support their anti-introspectionist claims. The overall impression the results made on many researchers is summed up by philosopher William Lyons:

Although contemporary psychologists, philosophers, brain scientists, and those working in artificial intelligence refer to the "data of introspection" from time to time, a growing body of work in psychology should be unsettling to anyone who still maintains a belief that "introspection" is some kind of private access to occurrent mental or brain states or events, which in turn bestows special status, if not unique reliability, on "introspective evidence" (Lyons 1986, 100).

The focal point of this "growing body of work" is Nisbett and Wilson's (N-W) review. Not everyone, of course, accepts N-W's conclusions, and even those who do (Levine, for example) do not take it to undermine their particular use of introspection. So it is important to see just what N-W are claiming, and to what extent their data support their claims. Further, for my purposes, it is important to see just how far those results might extend into the domain of consciousness studies. N-W argue that their results undermine confidence in the data of introspection. They write:

People often cannot accurately report on the effects of particular stimuli on higher order, inference-based responses. Indeed, sometimes they cannot report on the existence of critical stimuli, sometimes cannot report on the existence of their responses, and sometimes cannot even report that any inferential process of any kind has occurred. The accuracy of subjective reports is so poor as to suggest that any introspective access that may

exist is not sufficient to produce generally correct or reliable reports (Nisbett and Wilson 1977, 233).

This is a strong declaration against the reliability of introspection. However, they moderate their initial claim by stating that their results apply only to introspective access to "higher order mental processes," the mechanisms of the mind issuing in inference-based behavioral responses. They allow, in an aside, that we may have accurate introspective access to the products of these processes, but not to the processes themselves. Still, in a number of experiments, they conclude that subjects confabulate what mental states they are in; that is, they interpret themselves as being in mental states on the basis of theory in order to best account for their behavior. And this, in turn, raises important questions about access even to the products of mental processes. If we confabulate what state we are in when that state is not present, then we will fail to accurately access the product of our higher order mental processes. So a stronger reading of N-W's results follows from their data. N-W survey a wide range of empirical studies from the psychological literature, including studies from research on causal attribution theory, cognitive dissonance, access to change of evaluative or attitudinal state, subliminal processing, problem solving, and the effects of the presence of others on helping behavior. Reviewing over two dozen previously published studies, they conclude, "In the majority of the studies, no significant verbal differences [before and after the observable change in mental state] were found at all" (1977, 235). Further, N-W write that this research "seems to indicate that (a) subjects

sometimes do not report the evaluational and motivational states produced in these experiments; and (b) even when they can report on such states, they may not report that a change has taken place in these states" (1977, 237). They conclude:

The explanations that subjects offer for their behavior in ... attribution experiments are so removed from the processes that investigators presume to have occurred as to give grounds for considerable doubt that there is direct access to these processes. This doubt would remain, it should be noted, even if it were eventually to be shown that processes other than those posited by investigators were responsible for the results of those experiments. Whatever the inferential process, the experimental method makes it clear that something about the manipulated stimuli produces the differential results. Yet subjects do not refer to these critical stimuli in any way in their reports on their cognitive processes (1977, 238-239).

N-W also present several of their own studies designed to tease out further evidence for their conclusion. I will present three studies in more detail below, one done by Latané and Darley on the effects of bystanders on helping behavior, and two of N-W's own studies. This should give the flavor of the research, and provide enough material to evaluate their claims. Latané and Darley present a large number of experiments showing that "in a wide variety of settings... people are increasingly less likely to help others in distress as the number of witnesses or bystanders increases" (Nisbett and

Wilson 1977, 241). For example, the more people who overhear an individual in another room having an (apparent) epileptic seizure, the lower the probability that any given individual will rush to help (Ibid.). Latané and Darley, interested in the fact that subjects seemed utterly unaware of the influence of bystanders, debriefed subjects at the end of the experiments in order to evaluate what the subjects were thinking. They report:

We asked this question every way we knew how: subtly, directly, tactfully, bluntly. Always we got the same answer. Subjects persistently claimed that their behavior was not influenced by the other people present. The denial occurred in the face of results showing that the presence of others did inhibit helping (Latané and Darley 1970, 129; reported in Nisbett and Wilson 1977, 241).

In addition, Latané and Darley described their experiments to other subjects, and asked how they would respond in that sort of situation. The observer subjects all agreed with the test subjects in their appraisal: the presence of bystanders would not influence their helping behavior. The first N-W experiment I'll report is their most famous: the identical sock test. Fifty-two subjects were shown an array of four identical socks laid out side by side and asked to pick out the sock of the highest quality. As it turns out, subjects display a distinct "left-to-right position effect"; that is, they tend, by an almost 4:1 ratio, to choose the sock on the far right as being of the highest quality. When subjects were asked their reasons for choosing the sock they did, no

subject ever spontaneously mentioned the position of the sock in the array as a relevant factor (1977, 243-244). And N-W report that when asked about a possible effect of the position of the article, virtually all subjects denied it, usually with a worried glance at the interviewer suggesting that they felt either that they had misunderstood the question or were dealing with a madman (1977, 244). Subjects offered alternative reasons for their selections--better weave, softer fabric, nicer color, etc. But these were not the reasons they chose the sock; a simple right-side bias explains their behavior. Subjects confabulated the mental states they were in while selecting; according to N-W, this confabulation was based on a publicly acquired theory for explaining behavior (1977, 248-251). The second N-W study deals with erroneous reports about the influence of a person's personality on reactions to his physical characteristics. N-W term this the "halo effect": the influence of the warmth or coldness of a person's personality on a subject's ratings of attractiveness of appearance, speech, and mannerisms. Interestingly, subjects interpret the results in the opposite direction: they hold that the appearances dictate their appraisal of warmth or coldness of personality. Subjects were shown an interview with a professor "who spoke English with a European accent" (1977, 244). Half the subjects saw the professor answering in a "pleasant, agreeable, and enthusiastic way (warm condition)." The other half saw an "autocratic martinet, rigid, intolerant, and distrustful of his students (cold condition)" (Ibid). Subjects then rated the professor's physical appearance, his mannerisms, and his accent. N-W report,

"Most of the subjects who saw the warm version rated the [professor's] appearance, mannerisms, and accent as attractive, while a majority who saw the cold version rated these qualities as irritating" (Ibid.). Subjects in both conditions "strongly denied" any effect of their like or dislike for the teacher on their attractiveness ratings. N-W conclude:

Thus it would appear that these subjects precisely inverted the causal relationship. Their disliking of the [professor] lowered their evaluation of his appearance, his mannerisms, and accent, but subjects denied such an influence and asserted instead that their dislike of these attributes had decreased their liking of him! (1977, 245).

Subjects were unaware of what actually caused their evaluative reactions, and confabulated what went on as they made their appraisals. But is the evidence presented enough to support N-W's claim about the unreliability of introspective access? It is of course very difficult to set up experiments that really pull apart the various processes responsible for subjective experience and the processes responsible for observable behavior. The mind is a black box; we must posit theoretical variables as best we can, and then set up experiments to test those theories. But the difficulty of matching behavior, first-person reports, and theory is very great. There is always slack in these sorts of empirical claims. Still, we can look at the data as a whole, and try to find the best theoretical fit, acknowledging that nothing is knockdown here. And N-W do present results that are surprising, even if one eventually finds a way to fit them into a framework leaving a substantial place for introspection.

They are surprising because we do not expect to be so little aware of what is really driving our behavior. Our commonsense view generally has it that we know why we did what we did, and we know what things in our environment and in our minds are responsible for our actions. N-W's data threatens this picture. But does it really undermine the use of introspection in theorizing about the mind? I contend that the best reading of the data does indeed put substantial pressure on those who would appeal to introspection as a source of data. N-W do seem to present good cases of introspective error. And what's more, perhaps the most important element of N-W's work, from my perspective, is the fact that we at times confabulate what states we are in. This especially should give the friends of introspection pause, because subjects are not aware that they are confabulating. Everything seems fine to them; nothing appears amiss. But if we cannot tell that we are confabulating--that is, if we cannot tell when we are employing a covert theory to explain why we did what we did--how can we be sure that we aren't always applying such a theory? As Chalmers admits, if there are no cognitive or phenomenological signs of "pseudo" acts of introspection, how are we to justify introspection in general, from the inside? But there are substantial challenges to N-W's work, most notably a critical review article published in 1988 by Peter White. I will review White's complaints in what follows. I will contend that either his charges can be effectively met, or the alternative readings he offers provide no real help to those who wish to rely on introspection, particularly concerning the problem of consciousness.

White offers two main criticisms of N-W's work. First, he argues that the distinction between process and result offered by N-W is not sufficiently clear.2 If White is right, the force of N-W's critique of introspection is difficult to gauge. N-W hold that we lack accurate introspective access to higher order processes; they allow that we have access to the products of these processes. But if the distinction cannot be made clear, what are their results about? This is perhaps a fair charge against N-W; they are not particularly clear about this distinction. However, given the evidence for confabulation, there appear to be good cases of inaccurate access to the products of the processes. If I am not in a mental state, and confabulate myself as being in one, then I do not have accurate access to the product of my mental processes. The processes result in a state that accounts for my behavior, yet it seems to me that I'm in a different state altogether. And this is what occurs in confabulation: subjects interpret themselves as being in mental states they are not in, in order to make sense of their own behavior. Thus, even if we are unclear about where to draw the line between process and product, N-W's results still have bite, in particular when we consider the uses of introspection made by the theorists considered in this dissertation. But we can, despite White's worry, draw a workable line between process and product. Folk psychology notes a distinction between occurrent thoughts and perceptions, on the one hand, and whatever it is that accounts for those occurrent thoughts and perceptions on the other. I have been relying on this throughout--folk psychology, I have argued, does not license access to the "underlying
2. White 1988, 15-17.

machinery of the mind." And while this is explicitly a folk distinction, we can employ it when considering N-W's results. And when we do, we find surprising things. We are less aware of what's going on in our minds than we thought. The rough-and-ready distinction between occurrent conscious states and what accounts for the presence of those states allows us to appraise N-W's work. And however things fall out, we are aware of less than we thought prior to learning of their work. But White presents a more substantive worry about N-W's work.3 He holds that it relies on the supposition that if we access a state, we can accurately report on it. This, he contends, is not explicitly argued for in N-W, and if the supposition does not hold, we cannot take a failure to report to indicate a lack of access. But, pace White, this claim is effectively dealt with by many of the studies. The most plausible alternative reasons to doubt verbal reports are a failure of short-term memory (this point is pressed by Ericsson and Simon 1980), or social pressures from experimenters to misreport what we access. The sock study, as well as the "cold professor" study, both make limited demands on short-term memory. The subjects are asked, at the time they are viewing the stimuli, to report on their reasoning. If this is too much for short-term memory to handle, then there is room to doubt almost anything delivered by introspection. And the issue of social pressure is finessed by properly hiding from subjects the goal of the experiments. In fact, the pressure seems to be to answer as accurately as possible--the pressure ought to make the subjects more accurate, if they have reliable introspective access. And extensive post-experiment debriefing with
3. White 1988, 17-22.

subjects provides a good control on honest reporting. If subjects were fudging their answers for some reason, that ought to show up in careful questioning, especially when the researchers make it clear what they want to find out. If the subjects are trying to please the experimenters, why do they continually deny the researchers' theses, even when they are explained to them by the researchers? But in any event, even if N-W are guilty of failing to fully account for these variables, there seems little to be gained by friends of introspection. If memory effects are present even in N-W's studies, how can we rule them out in the sort of introspection that occurs in considering the problem of consciousness? And in general, if verbal reports are not a good guide to what we introspect, how can we trust the introspective reports of Chalmers, Levine, and Block? They hold the thesis that at least in some cases we can accurately report what we introspect. So they need an explanation of why things go wrong in the cases N-W present. And what's more, in the years since N-W's initial paper and White's response, researchers have continued to demonstrate the unreliability of introspection in a number of settings. A wide range of research into subliminal perception, particularly in the masked priming paradigm, shows that even complex perceptual stimuli can be processed outside of awareness. And the phenomenon known as "change blindness" demonstrates that people are surprisingly poor at detecting change in perceived stimuli, even with highly salient stimuli in the center of focal vision.4 To wrap up my review of the empirical evidence challenging the accuracy of introspection, I will briefly mention two more recent studies, one by Bargh and colleagues on the nonconscious
4. Simons and Levin 1997.

influence of automatically processed stimuli, and one by Johansson and colleagues on what they term "choice blindness." Once again, the evidence is not knockdown, but it shows that the more researchers investigate this issue, the more surprising gaps in introspective awareness they discover. And it serves to show that the work started by N-W was not a blind alley; on the contrary, it has continued to provide fruitful data on the mechanisms of the mind. In a series of papers, John Bargh and his collaborators demonstrate a number of interesting scenarios where subjects are unaware of the effect certain stimuli have on their subsequent behavior.5 And, as in the N-W studies, when asked, subjects deny that those stimuli have any impact on how they act. The research concerns the automatic, nonconscious activation of stereotypes used in social situations. For example, subjects were shown a number of words related to stereotypes of the elderly--for example, "Florida," "sentimental," "wrinkle." Bargh and colleagues report, "Participants primed with elderly-related material subsequently behaved in line with the stereotype--specifically, they walked more slowly down the hallway after leaving the experiment" (Bargh and Chartrand 1999, 466). In addition, subjects exposed to elderly-related stereotype words forgot more words from a memorized list than did control subjects. In another study, Bargh and colleagues subliminally presented subjects with the faces of young African-American males. Later, those subjects played a game with others and were rated to be more hostile than a control group. What's more, the subjects themselves rated their partners in the game as more hostile than did nonprimed participants. The researchers note,
5. Bargh et al. 1992, 1996; reported in Bargh and Chartrand 1999.

For the primed participants, their own hostile behavior, nonconsciously driven by their stereotype of African Americans, caused their partners to respond in kind, but the primed participants had no clue as to their own role in producing that hostility (from Bargh, Chen, and Burrows 1996; reported in Bargh and Chartrand 1999, 467). Again, we see stimuli outside of awareness determining action, and, more importantly, subjects lacking introspective access to the processes causing their behavior. These studies do not directly bear on the accuracy of introspection; rather, they show that researchers are continuing to discover features of the mind out of the reach of introspection, features that, prima facie, one would think were accessible. In this way, they add evidential weight to N-W's general anti-introspectionist claim. Finally, I will touch on one more recent study, by Petter Johansson and his collaborators.6 The researchers demonstrate what they term "choice blindness," a failure to detect a quick change of an item deliberately chosen by the subject. Subjects are shown pairs of pictures of women's faces and are asked to select which one they find more attractive. They are then handed the picture they chose, and asked to explain why they made that choice. However, on some trials, the picture they chose was quickly switched with the other picture and then handed to the subject. Subjects noticed a switch in only 13% of the relevant trials. Most of the time, they proceeded as if nothing was amiss and went on to explain why they chose this face--nice smile, interesting earrings, etc.--even though it wasn't the one they picked. The time between the initial choice and the
6. Johansson et al. 2005a; see also Johansson et al. 2005b.

presentation of the switched picture was less than two seconds; subjects had as long as they liked to look at the switched picture and explain their choice. Johansson and collaborators write, In the current experiment, using choice blindness as a wedge, we were able to "get between" the decisions of the participants and the outcomes with which they were presented. This allowed us to show, unequivocally, that normal participants may produce confabulatory reports when asked to describe the reasons behind their choices (Johansson et al. 2005a, 118-119). This experiment is slightly different from the others surveyed. But it is indeed an odd effect, showing again that we aren't always able to access the reasons behind our behavior, and that when we cannot, we often confabulate to fill in the gaps. But from the subjects' point of view, nothing is amiss. It seems to them as if they chose a face for the very reasons they are now reporting. But this is not their original choice, so these reasons were not present seconds before when they selected the other face. This concludes my brief survey of empirical evidence challenging the accuracy of introspection. But how does this data apply to the methodologies of Levine, Block, and Chalmers? Surely, one might argue, these scenarios are special cases, at some distance from careful reflection on the most basic of conscious experiences. Even if we go astray in our reasoning about the causes of complex behavior, or the reasons we chose one sock over another, why think that anything these three theorists have done is under threat? In the next

section, I will argue that the empirical work does indeed put pressure on the friends of introspection, at the very least shifting the burden away from those who doubt the reliability of introspective access and onto those who would employ it in theorizing about the mind.

5.1.4 The Effect of the Empirical Results

Levine, Block, and Chalmers, in one way or another, all accept that there are conditions in which introspection fails to be reliable. But, of course, they feel that however introspection goes astray, their methods hold up. And this may seem to be the case: N-W-type results, it may be argued, are obtained only in abnormal situations, well removed from normal cases of introspection. But, even if this is the case (and I am dubious that it is), N-W's results alter the dialectical situation. When taken in concert with the claim that commonsense folk psychology does not license introspective incorrigibility, the results place the friends of introspection on the defensive. The burden of proof is on them to show us why we should accept their introspective claims. First, recall that everyday folk psychology allows that we can be in error about our own states. There are a range of ordinary cases where a good folk-psychological interpretation holds that people make mistakes about the current contents of their consciousness. And what's more, when the folk hear of the empirical results, one open interpretation is that we make introspective errors. There is nothing incoherent or inconsistent about making such a claim, demonstrating the neutrality of common sense on this issue. Finally, folk

psychology does not license introspective access to the underlying workings of the mind. We may believe that things don't seem functional or physical, but this is quite different from holding that they actively seem nonphysical. We just don't know; this sort of information is not accessible from the first-person perspective. So it is not enough for the friends of introspection to simply appeal to folk psychology and leave it at that. Folk psychology does not deliver the goods. But one might claim that folk psychology is relatively unambiguous about introspection of the active, occurrent mental states we are in, particularly when it comes to conscious sensory experience. And this is the place where the antimaterialists stake their claim. Reflection on the content of conscious sensory experience is the key step in characterizing a "hard problem" of consciousness. But, as I have stressed, even in cases of conscious sensory states we may be in error. Expectations may alter how things seem to us when we drink an unknown liquid or take in a new visual scene. And fear and anxiety can cause us to misinterpret nonpainful stimuli as painful.7 So the jury is still out on these cases. Now bring in the empirical results. Here we see, again and again, that we have access to much less than we think. If these cases were in line with ordinary appraisals, they wouldn't be so surprising when we first encounter them. So, given that folk psychology doesn't deliver a blanket endorsement of the incorrigibility of introspection, the added weight of the empirical cases shifts the burden of proof to those who employ introspection in their theorizing. It isn't

7. See Rosenthal (1997, 2002a, 2005a) on cases of so-called "dental fear," for example.

reliable in many everyday situations, and we keep finding new and surprising ways it goes wrong. Why accept it at all? But if the burden shifts to those who wish to employ introspection, the dialectical situation looks very different. What evidence could be offered, at all, by friends of introspection in support of their claim? If they endorse a strong first-person methodology, friends of introspection have effectively cut off introspection from any supporting empirical data. That claim amounts to: for certain cases, people cannot be wrong about what they are introspecting. But then any purported counterevidence is ruled out by definition. But if they open the door to counterevidence (a requirement of scientific theorizing), how are we to go about evaluating their claim? We could perhaps set up situations where subjects are shown a stimulus and asked to introspect on its features. They will then report on what it is that they introspect and we will appraise their reports for accuracy. But this is to endorse the discredited methodology of early introspectionist psychology. The introspectionists soon ran into intractable arguments about just what accurate introspection revealed. That course is a dead end. Perhaps one could introduce the brain-scanning tools of modern neuroscience to settle the issue. If the right areas light up, then the introspection is accurate. That, in the long run, is likely the way to go. But there is a prior methodological stumbling block. How do we know which lit-up brain areas count as accurate introspection? We need a prior standard to judge what counts as correct or incorrect on a brain scan. And this requires the prior development of theory. Many materialist theories offer clues to how one should look at brain

scans. However, introspectionists like Chalmers and Levine reject those theories before they get to the stage of calibrating brain scanners. So nothing is gained by the neuroscience gambit. We are left wondering how friends of introspection can support their claim. And if the use of introspection in general is unjustified, the claims of Levine, Block, and Chalmers are undermined. Levine holds that we have "substantive and determinate access" to our conscious states, and this reveals qualities that cannot be explained in a materialist theory. But why think Levine is correct about the sort of access he has and the results it delivers? Likewise with Block, who employs introspection to develop his characterization of p-consciousness as irreducible to any causal, functional, or intentional notion. But, again, why should we trust his introspective beliefs? Perhaps he is incorrect, and in fact conscious sensory states can be characterized in those terms. Why think you can tell just by introspecting that they cannot? Block holds that several poor lines of evidence can lead to good evidence--surely, the poor lines of evidence are triangulating on some real feature of the mind. But why think the poor lines of evidence show anything at all? And if introspection is suspect, why think that alternative characterizations of these lines of evidence aren't preferable? And Chalmers acknowledges that if "pseudo-direct phenomenal concepts" are common, his claim of introspective incorrigibility is undermined. But the empirical results seem to show just that--a range of cases where subjects confabulate their mental states and don't realize that this is what they are doing. And what's more, his initial characterization of phenomenal consciousness rests on introspection.

He asks us to reflect on our concept of consciousness and consider a range of purportedly possible scenarios. But what else does he have to go on here but the deliverances of introspection? We consider what it's like for us to see red or to be in pain (introspection), and consider whether a zombie or a spectrum invert is possible. But if introspection is not a reliable source of data, why think this process tells us anything about the nature of the conscious mind? I conclude that the empirical results against introspection, taken in concert with folk psychology's failure to license incorrigibility, undermine the methodology of Chalmers, Levine, and Block. This is not to say that introspection has no role at all to play in theorizing about the mind. On the contrary, I've employed it throughout. But it is limited to telling us how things appear to the conscious subject. And this alone does not deliver the stronger claims needed by the antimaterialists. So despite claims to the contrary, N-W-type results threaten the claims of the theorists considered here. They need to justify their uses of introspection if they hope to spell out a special problem of consciousness in this way.8 In the next section, I'll consider positive evidence for the presence of descriptive concepts in first-person access. The claims defended in this section will again be relevant, though the studies have independent weight. I will look in more detail at some data from the literature on expert perception, in particular
8. And again, no aid and comfort is gained by saying that we can't be wrong that we seem to be in pain, to be seeing red, etc. One, that is not enough to get us to the underlying mechanisms accounting for how things seem, and two, such claims of infallibility do not plausibly extend to claims like "I seem to be in states with indescribable qualities." That goes beyond what is licensed by folk psychology. But even if we were to accept that claim, a materialist theory explaining why things seem that way would have discharged its explanatory burden. See pages 206-208 above.

chess expertise and wine tasting. I'll also briefly counter an alternative reading of that data defended by Michael Tye.

5.2 Empirical Evidence for a Descriptivist Model of First-Person Access

5.2.1 Chess Expertise

In the previous several sections, we've seen that the initial arguments I developed against Chalmers, Levine, and Block have a measure of empirical support, from the work of N-W and others. But is there any positive evidence that can be offered for my model of first-person access? In this section, I'll provide more detail on empirical results from the study of expert perception. I'll argue that, when taken in conjunction with the evidence of the previous sections, this data does provide empirical support for my approach. Once again, however, the data is less than decisive. But all things considered, the data support a descriptivist model. I contend that research on expert perception provides evidence for the presence of descriptive concepts in first-person access. First, it provides evidence that experts perceive a range of perceptual features that novices miss. Second, taking the transparency of introspection into account (as I characterized it in chapter 4), we can conclude that experts can introspect more features of their relevant experiences than novices. My characterization of transparency held that if a person can perceive a feature, then she can introspect herself perceiving that feature as well. This holds for experts as well. Note that those who reject a descriptivist model--proponents of the p-concepts approach, for

example--generally accept the transparency of introspection. So experts can plausibly introspect more features than can novices. Given that the most plausible explanation of experts' ability in general is the presence of nonconscious, automatically applied descriptive theory, we therefore have evidence for the presence of descriptive concepts in introspection. The remaining question becomes, is that all there is to introspection? And this issue largely turns on just how accurate we take first-person access to be in this matter. Here, N-W-type results come into play: we've seen that introspection is suspect in many cases, and unsupported by positive evidence. Thus, we can reject the worry that we've left out some aspect of experience (namely, its intrinsic character), and we can accept the data as supportive of the descriptivist position. Admittedly, this is not a straight run from the empirical results to support of my model, but that is the nature of empirical evidence in this domain. The initial theoretical groundwork must be laid before we can interpret the empirical data. The domain of chess is the best studied area of expertise. Beginning with de Groot's landmark work Thought and Choice in Chess (published in Dutch in 1946), and continuing with Chase and Simon's important model developed in the 1970s, expert skill at chess has been the subject of numerous experiments. The early studies focused especially on the memory capacity of chess masters as compared to less skilled players. De Groot, for example, showed high-level chess players a range of complex board positions and found that they could reproduce them to an accuracy rate of 93% for positions containing up to 25

pieces (de Groot 1978). Chase and Simon (1973) reproduced de Groot's results, but also contrasted the performance of experts with those of less-skilled players. Summarizing these results in a later work, Simon and colleagues write The subject is shown a position from an actual chess game with about 25 pieces on the board for 5 to 10 seconds, and is then required to reproduce the position from memory. A master or grand master can perform this task with about 90 percent accuracy; a weaker player will do well to replace five or six pieces correctly on the board. Next, the experiment is repeated with 25 pieces placed at random on the board instead of in an arrangement from a game. The expert's performance now falls to the level of the novice (Larkin et al. 1980, 1336). A more recent study by Gobet and Simon shows that experts can recall multiple board positions in the same task; that is, experts can hold a number of complex board positions in memory at once and recall them with a high degree of accuracy. Again, each position contained about 25 pieces. In this task, masters, experts, and "class A" players9 briefly viewed board positions on a computer screen (5 seconds), and then had to reproduce them on the screen using a mouse. In one section of the trial, 5 positions were flashed in a row; the task was to reproduce as many of the positions as possible. As expected, masters far outperformed novices on this task. Masters had an accuracy rate of around 50% in the five-board condition, while class A players were accurate to only 20% (Gobet and Simon 1996, 9, experiment 1).

9. Chess players are ranked by Elo rating. Masters rate above 2200, experts above 2000, and the class A players in the study averaged around 1880.

What can we conclude from these memory studies? First, in order to encode a stimulus in memory, it must be perceived. The key features to be recalled must be salient in perception. Otherwise, the proper encoding cannot take place. And, indeed, this is Simon's main contention concerning chess expertise, as summarized by Frantz: Chess masters examining a previously unknown board position taken from an actual game immediately--within 2 s--shift their eyes to the most relevant part of the board. This means that they immediately grasp or "see" the most important relationships on the board. Simon concludes that it is sufficient to state that a chess master's performance is based on a knowledge of chess and an act of (subconscious) pattern recognition (Frantz 2003, 270). So if chess masters can recall far more board positions, they must be able to recognize and encode far more perceptual patterns. But can they also introspectively access these perceived patterns? Given the transparency of introspection, it seems likely that they can. Simply attending to their act of seeing while they are looking at the board would be enough--why think that they cannot do this? Indeed, in considering the next move, it may often be the case that masters are focused inward, checking their thoughts for error, rather than out at the board. Of course, this does not entail that they would thereby recognize how they picked out the salient patterns, or be aware of the nonconscious rules and memory structures at work in the act of perception. Rather, they would be aware of a perceived board position, in which they could "just see" that certain elements

were important or suggestive of good future moves or reminiscent of famous gambits. These features would seem as intrinsic to the introspected perception as to the perceived board--as Capablanca remarked, "I know at sight what a position contains. What could happen? What is going to happen? You figure it out, I know it!" (Linhares 2005, 135). But why think that the mechanisms posited to explain chess expertise are at all suggestive of what occurs in first-person access? Here we can consider the mechanisms posited by the various theorists at work in the expert perception literature. While there may be disanalogies between chess expertise and introspective access, if a successful descriptivist model exists for expertise, that will at least suggest that descriptions could do the work of access. And to the extent that expert perception shares the three important features of access noted in chapters 3 and 4, the analogy is strengthened. The best-developed view is Simon's "chunking" hypothesis, which holds that experts group perceived patterns together into a single usable unit, thereby expanding the range of material that can be handled in reasoning and working memory. Charness and colleagues summarize Simon's theory as follows: The fundamental unit of their... theory is the chunk, which is composed of a group of pieces related by type, color, or role (e.g., attacker-defender, etc.). Through extensive study and practice, players build up structures in [long-term memory] of pieces that are frequently encountered together, along with information about their relations to one another, to the board, and to the position as a whole. Expert players then use this knowledge in

[long-term memory] to encode and manipulate more chess-related information in a given mental operation than do less skilled players, who utilize smaller information units or chunks (Reingold et al. 2001b, 504). The encoded information is stored in an abstracted form; chunks are not specific to individual games. Instead they are more general representations of positions, made salient by repeated experience in play and knowledge of chess theory. What's more, the chunked representations involved in chess expertise can be fully characterized in relational descriptions. Nothing is left out of an explicit characterization of the chunks, and any alleged intrinsic features of the representations are irrelevant to their use in skilled play. There is no theoretical temptation to posit awareness of intrinsic features to account for chess expertise, and, indeed, Simon's theory operates in the domain of information-processing cognitive science. An important element of support for Simon's view is a computer model of expert chess play. If intrinsic features were seen as crucial to the process, there would be substantial questions concerning the completeness of the computer simulations. But that is not how the models are challenged in the literature; rather, researchers offer contrary evidence taken from studies of reaction times and memory loads. Simon's theory clearly qualifies as a descriptive model of chess expertise, as I've developed the idea here. And rival theories of chess expertise do as well. Simon's initial hypothesis held that chunks worked by being activated in short-term "working" memory. Because they serve to bring together a range of information in a single processing unit, chunks were hypothesized to allow one to get around the

famous 7+/-2 limit on working memory. By packing more information in chunks, one can expand the range of tasks one could address without violating the limits of working memory. However, Ericsson and Kintsch (1995) propose instead that expert performance actively involves long-term memory. They posit a "long-term working memory" structure, where relevant information is accessed by way of "retrieval cues" active in short-term working memory. They offered evidence of the role of long-term memory in a number of expert-related memory tasks. Simon eventually amended his model, in collaboration with Gobet, to include larger "templates," recursive, modular representations that could take chunks as arguments (Gobet and Simon 1996). For my purposes, the details of this debate are not important. What matters is that neither view involves the active participation in expert skill of the intrinsic qualities of representations. All are explicitly worked out in an information-processing framework, and proponents of both views employ computer models to support their views. This is not conclusive: it is open to the anti-descriptivist to hold that something fundamental is missing from these explanations of expert performance. But as it stands, research into expert perception in chess falls clearly into the descriptivist camp. In addition to data taken from memory studies, research on chess expertise employs data from eye tracking and looking times. This allows us to consider more directly the perceptual abilities of experts--no detour is required through memory. Reingold, Charness, and colleagues demonstrated, in a "check detection task," that chess experts have a larger "visual span" when looking at chess positions; that is, they fixate less on specific pieces in the center of focal

vision, but access pieces from outside of foveal vision without as many shifts of focus (Reingold et al. 2001a). As further evidence of expanded visual span, the researchers tested experts and novices on a chess "change blindness" task. In a "splatter paradigm" design, viewed pieces were quickly changed while an occluder flashed on the screen. Novices were less likely to notice that any pieces changed. Reingold, Charness, and colleagues attribute this difference to the expanded visual span of experts. Because experts automatically take in more information, they are more likely to notice a change in piece position. In addition, the researchers noted "in the check-detection task, experts made fewer fixations than less-skilled players and placed a greater proportion of fixations between individual pieces, rather than on individual pieces" (Reingold et al. 2001a, 54). They hypothesize that experts encode more information about the relations of pieces, accounting for their focus. This also lends support to the idea that the chunks employed by experts encode relational, rather than intrinsic, information about chess pieces. Reingold, Charness, and colleagues conclude Our study extends the findings [of Chase and Simon] by showing that experts have an advantage in extracting perceptual information in an individual fixation. For check detection, a task that is well defined and for which positional uncertainty is minimized, the expert extracts the necessary interpiece relations from both foveal and parafoveal regions. The larger visual span of experts in this task results in fewer fixations per

trial, and a greater proportion of fixations between, rather than on, individual pieces (Ibid.). As in the previous studies on memory, we see evidence of automatically applied descriptive concepts in chess expertise--the presence of background descriptive information best accounts for looking behavior and relative immunity to change blindness. What's more, we can again plausibly extend these claims to introspection. If experts can perceive the important relations between pieces and can better note changes in a change blindness test, then given the close tie between perception and introspection afforded by transparency, we can conclude that these features are introspectable as elements of their current experiences. This suggests that the acquisition of descriptive theory alters what one can introspect. But perhaps the evidence offered by complex models of chess expertise is just too distant from the seemingly basic operations of first-person access. Chess expertise involves a learned intellectual skill, while introspection is just given to us. And chess is well-modeled by computers because it is a rule-based game, one that is well constrained and clearly described. But the sorts of features we can become aware of in introspection--the rich red of an experienced sunset or the slicing pain of a paper cut--just don't seem capturable in this way. So whatever my purported evidence may show, it surely can't be the whole story about first-person access. Chess expertise is too isolated and too intellectualized a skill to serve as a model for introspection.

I think that this worry results largely from putting too much faith in what introspection can reveal to us about the operations of our minds. Chess expertise is useful because it involves a well-defined problem space, but for all that, it still plausibly sheds light on first-person access. But there are areas of expertise closer to home, lying in between chess expertise and the introspection of paper cuts. Wine tasting is a domain closely linked with our basic sensory and introspective capacities, and it has been empirically studied. It offers a more intuitive and suggestive body of evidence for a descriptivist model of first-person access. I'll examine that evidence in the next section.

5.2.2 Wine Tasting Expertise

Generally speaking, when a person first tastes wine, one wine pretty much seems like another; there is little more than a basic "winey" taste, and the occasional awareness that Manischewitz is sweeter than most. But as one develops an appreciation for wines, the experience begins to change. Subtle differences that used to go by unnoticed become salient. One eventually comes to differentiate wine along several gustatory dimensions, over time sorting wines into the classes drawn by experts. Or at least that is an intuitive picture. But some feel that the entire practice of wine tasting is one big, pretentious put-on. The highfalutin terminology of wine tasters is widely mocked; the language of the wine snob in Thurber's famous cartoon conveys the idea: "It's a naive domestic burgundy without any breeding, but I think you'll be amused by its presumption." Indeed, one may wonder if there's anything at all to expert wine tasters and their

involuted vocabulary. A recent study conducted in France found that experts had trouble telling a white from a red wine in certain circumstances (Brochet 2002), leading the Times of London to lament, "Wine 'experts' know no more than the rest of us."10 Fortunately for my purposes, there is data supporting the claim that wine tasters, like other experts mentioned here, perceive things differently from novices. And what's more, even the studies showing the susceptibility of expert wine tasters to error are useful. They help shed light on the plausible mechanisms of wine tasting expertise. Turning to the data, in a survey article, Angus Hughson and Robert Boakes catalog a number of findings that shed light on the nature of wine-tasting expertise (Hughson and Boakes 2001). First of all, they report that experts and novices do not significantly differ in their very basic absolute thresholds for chemosensory stimuli. But they report that "wine experts have been found consistently superior to novices . . . in supra-threshold discrimination," discriminations between detected stimuli (2001, 104). In addition, psychologist Gregg Solomon reports that experts are able to outperform novices on a discrimination task called the "triangle test" (Solomon 1990, Experiment 3). Subjects are given three glasses of wine, two of which contain the same wine. The task is to identify the odd wine of the three. Solomon's test used four "cheap white Bordeaux, all very bland and virtually identical in visual appearance" (1990, 509). Novices performed "virtually at the level of chance" while "experts . . . did perform significantly better than chance" (Ibid.).

10 Downey 2002.

Furthermore, Hughson and Boakes write that "in a matching task . . . wine experts show higher accuracy than novices in selecting from a set of alternatives the wine that matches a sample given a short time before" (104). They also report Solomon's suggestion that experts are better at the matching task because of "more consistent use of verbal descriptors" (104). Hughson and Boakes conclude, "Wine expertise may rely on a combination of conceptual knowledge and enhanced olfactory sensitivity through expectation" (106). Furthermore, they "conclude that [wine experts] are able to discriminate between and match sets of wines that novices find almost indiscriminable" (107). Solomon goes further, writing, "The picture that emerges of novice tasters from these studies is that they can appreciate and convey, in information-processing terms, not much more than a two-bit wine world" (Solomon 1990, 512). He concludes: The studies have shown experts to be better at matching the descriptions of wines written by other experts to the wines about which they were written, to perform better than novices on a psychophysical test of wine discrimination, and to agree significantly at ranking wines on the dimensions of tannin, balance, and sweetness, where novices could only rank the wines for sweetness (1990, 514). Again, we see that experts can differentiate perceptual stimuli that novices cannot. But what accounts for this ability? Here, the presence of sophisticated vocabulary is important. Subjects learn new words for the tastes and aromas distinctive of a particular wine. At first, this may not help; the wine may still seem like all the others. But over time, the presence of a verbal label helps to make

salient what is distinguishing about the wine. If no new vocabulary is acquired, the process of differentiation and categorization is much more difficult--the words tell us what to "look" for. Word learning indicates the development of concepts, and if the concepts can be learned from a book or in a class, the concepts must be descriptive. Nondescriptive concepts cannot be conveyed in this manner, by definition.11 Indeed, it is hard to know how to account for the facilitating effect of vocabulary learning on a nondescriptive model. Of course, nondescriptivists like Papineau can embrace the idea that descriptive content does some of the work of making us aware of our conscious sensations. But we are left wondering what is left for the nondescriptive concepts to do. And what's more, the allegedly embarrassing results of Brochet concerning the susceptibility of wine tasters to error also suggest the presence of descriptive concepts in wine tasting expertise. In one part of Brochet's study, wine tasters were given white wine that had been colored red by a tasteless, odorless dye. This wine was consistently described as "rich," "mellow," "oaky," etc.--words associated with red wines. Then the same wine, without the dye, was presented. This time, the tasters used white wine vocabulary: "crisp," "flowery," and so on. In a similar test, Brochet put the same white wine in two bottles, one with a fancy "grand cru" label, the other with a cheap table wine label. Experts described the taste of the wine in the fancy bottle as "mature," "rich," and "balanced," while the table wine was described in more negative terms.12 Clearly, our gustatory experience of wine is not encapsulated, and is open to
11 See chapter 3, section 3.4.
12 Brochet 2002. See also Brochet and Dubourdieu 2001; Morrot et al. 2001.

cross-modal influence and expectation effects. Other studies have verified that color influences taste judgment (Pangborn et al. 1963), and Solomon notes that "experts who incorrectly classified a pinot gris as a chardonnay . . . ascribed chardonnay features to the sample" (Solomon 1997, 53). Solomon concludes his 1997 paper by stating, "What these studies . . . suggest is that wine expertise, however it is acquired, involves a conceptual change" (56). One might worry that this sort of error undermines the validity of the entire process of expert evaluation of wine, but instead, it plausibly sheds light on the mechanisms at work in wine tasting expertise. Experts' access to the taste of the wine is mediated by theoretical information. A key component of that information is the visual appearance of the wine. Visual stimuli tend to dominate other stimuli, especially those of olfaction and taste. Because of this, when experts see the color of the wine, it triggers expectations as to the taste and aroma of the wine. Further, knowledge of labels, vineyards, and other background information prompts experts to alter their judgments on wines. If they were accessing these sensations by way of referentially-direct p-concepts, it is unclear how they would make this kind of error. This kind of error also provides more evidence for the common occurrence of Chalmers's "pseudo-direct phenomenal beliefs," those mental states potentially trivializing his incorrigibility claim. Once again, we can conclude that because experts can perceive a wider range of features than novices, given the transparency of introspection, they can introspect their experiences of these perceptions. Indeed, much of wine tasting plausibly amounts to introspection on gustatory sensations. But if acquired

descriptive theory underwrites experts' perceptual abilities, it's plausible that it underwrites their introspective abilities as well. The qualities appear the same to us in perception and introspection--that is the force of the transparency claim. If descriptive theory does the work in one place, why not in the other? The remaining worry is that something is left out of the description. But this claim requires a justification of introspective access in the face of N-W-type results. I conclude that the data on wine tasting expertise supports a descriptivist model of first-person access. There is one final worry to address concerning the wine tasting data. Michael Tye, in defending his first-order representational (FOR) theory of consciousness, argues that wine tasting does not offer evidence of a descriptivist model of first-person access. Instead, he holds that the activation of descriptive concepts causes first-order sensory states, which are themselves inherently accessible. These states are conscious, and can be accessed by way of referentially direct p-concepts. At no point, according to Tye, do we get evidence of descriptive concepts accounting for our first-person access to what it is like for us to taste wines. The only thing that we can conclude is that descriptive concepts are implicated in the causal chain leading to awareness of the wine experience, not that they account for our cognitive access to what it's like for us.13 But Tye's claim appears ad hoc; it is motivated only by a desire to save the FOR theory of consciousness, not for independent reasons. What's more, N-W demonstrate plausible cases of confabulation in first-person access. Such cases seem indistinguishable to the subject from nonconfabulatory cases. What reason do we have to believe that this cannot occur in wine tasting cases, or that when it does, sensory states must be present? The main reason available is that it seems that we are in direct contact with our sensory states, and that this contact is not mediated by description. This, recall, is CMoP--the doctrine that we can tell introspectively what sort of referential mechanisms are involved in first-person access. But, as we saw in chapter 3, there is no plausible evidence in support of the doctrine, and it has troubling consequences for the materialist in any event. In addition, CMoP is undermined by N-W's results, which are widely taken to demonstrate a lack of access to mental processes, even if one backs off the stronger reading I've defended. CMoP licenses access to the processes that account for our first-person access, and so it is plausibly undermined by N-W's claims. Or at the very least, Tye (and other supporters of CMoP) owe us an explanation of why this is not so. Finally, Tye's claim requires an effective theory of p-concepts, one that explains how such concepts make us aware of our conscious states. Tye endorses a view very much like Loar's and Papineau's, and as we have seen, those views failed ultimately to explain the mechanisms of awareness involved in p-concepts. The same holds for Tye's position, mutatis mutandis. I contend that his challenge to the wine tasting data falls short, and we can conclude that such data does in fact offer support for a descriptivist model of access.

13 Tye 1995, 114-115; Tye 2000, 60-61.

This concludes my survey of the empirical data. The empirical results challenging the accuracy of introspection do indeed threaten the methodologies of Chalmers, Levine, and Block, and they offer independent support for my critique of their views, and for my general conclusion that the heart of the so-called problem of consciousness turns on explaining how things appear to us in first-person access. I also find that the empirical data from the study of expert perception provides a degree of support for a descriptivist model of access. And even though there is some theoretical slack in interpreting these results, I believe that expert perception is best explained by the presence of descriptive concepts, and that this claim can be extended to cover introspection as well. I've concluded my survey of the problem of consciousness, and my defense of a solution (or dissolution) to that problem. But one may still feel that I've somehow missed the point or changed the subject. Chalmers, Levine, and Block were talking about the rich experience of a summer sunset or the throbbing pain of a toothache. I've offered chess experts, wine tasters, and chick sexers. Could we really be talking about the same thing? Have I redefined "world peace" as a ham sandwich, in order to feel better about the geopolitical situation? To try to put these worries to rest once and for all, I will return to the philosophical intuitions that got us started. Perhaps the most effective anti-materialist intuition pump is Jackson's knowledge argument, featuring Mary, the famous color-deprived super scientist.14 And it might seem that my view has particular troubles with Mary. I hold that first-person access involves a descriptive theory. But Mary, ex hypothesi, knows all the descriptions. So she can learn my theory, and thus, it seems, know what it's like to see red even in her black-and-white room. In the final section of this chapter (and of this dissertation), I'll show how my view deals with Mary, with the hope (perhaps forlorn!) of putting the issue to rest.

14 Jackson 1983. For the ever-growing literature on Mary, see, for example, Jackson, Ludlow, Nagasawa, and Stoljar 2004.

5.3 The Knowledge Argument

Mary is a super scientist raised in a black-and-white environment. She has never seen red before. However, she has access to a completed science, one that covers everything from particle physics to psychology. In particular she has access to a complete materialist theory of the conscious mind. She knows all the physical facts, including those dealing with conscious color experience. And given her prodigious memory and analytic skills, she is able to absorb and understand this material, and to recall it at need. Now imagine she is let out of her room for the first time and happens upon a red rose. For the first time, she has a conscious experience of red, in all its glory. But when she has this experience, does she learn something new? She already knew all the physical facts about what would occur: she knows how the particles in her brain would react, how the neurons would fire, how the causal mechanisms of her psychology would operate. But doesn't she learn for the first time what it is like to see red? Surely, she didn't know this before. But she knew all the physical facts. So what she learns must be something other than a physical fact. Many physicalists hold that it is important to assert that there is a sense in which Mary learns something new when she sees red for the first time,

something she couldn't have known in her room. This is one of the central motivations behind the p-concepts approach. Its proponents hold that Mary learns an old fact under new concepts, and what's more, she couldn't have acquired this concept in any way but by having an experience of red. So there seems to be a good sense in which Mary learns something new, but not in a way that violates physicalism. However, we've seen that the p-concepts view has internal problems. But the descriptivist view seems to face the opposite problem: it is a workable explanatory theory of first-person access, but not one that explains what Mary learns. My model holds that first-person access involves the automatic application of a nonconscious descriptive theory. But Mary could learn this theory in her room. So what does she learn when she emerges? It seems she already knows all there is to know. This may seem a highly counterintuitive result. Surely, it's claimed, Mary doesn't know what it's like to see red. I seem to be forced to deny this "obvious" truth. But there is more for the descriptivist to say here. First, there is a big question as to just what "knowing what it's like" means. It may mean gaining some propositional knowledge, some knowledge of fact about the world. In that case, I contend that Mary indeed already knows what it's like to see red. But perhaps "knowing what it's like" means acquiring something nonpropositional, not a fact at all. Perhaps it means acquiring some know-how or ability. If that is the meaning, then I agree that Mary does not know what it's like to see red in her room, but this sort of lack is clearly no threat to physicalism. Whichever way we go, here is my response to the knowledge argument. Mary knows all the facts in

her room. She knows that what it's like for one to see red is to see a quality fully characterized by relationally-specified location information in a commonsense quality space. She knows that mental red is more like mental purple than mental green, that it is "hotter" than mental blue, that it is the color of roses and fire trucks and tomatoes and Joseph Levine's computer diskette case. She knows everything there is to know about mental red, because she knows all the scientific facts and she knows the complete, explicit version of the nonconscious, automatically applied theory that folks use to access their mental red states from the first-person perspective. But there is indeed something she lacks in her room, something she can only plausibly gain upon emerging. She lacks the ability to automatically apply the theory. That ability requires the presence of red sensations as a trigger. So the requisite trigger is not present until she sees some red. Then she will be able to automatically apply the theory, and come to directly access her conscious red sensation.15 But surely, one might argue, at this point, she learns something new, something she wasn't expecting. But that is not the case. Mary isn't surprised. She notes that she is directly accessing a state with a quality more like mental purple than mental green, "hotter" than mental blue, that is the color sensation caused by perceptions of the usual red things, and so on. And she also notes that she can't help but think that there's more to it than that, that this relational

15 Daniel Dennett (2005) argues that Mary could even acquire the ability in her black-and-white room--she is, after all, a super scientist. Why couldn't she figure out how to trigger the ability in her room? I am sympathetic with this claim, and I have no argument to rule it out. However, I'll stick with my reading in order to assuage those who are holding out for a special before-and-after difference in Mary. Thanks to Daniel Dennett for pressing this point in discussion at ASSC 9, Pasadena, June 2006.

description has somehow "left out what it's like." But she knows quite well that this is an illusion brought about by the nature of the mechanisms of first-person access, by the automated process and the workings of the mind she cannot directly access. She is impressed by the strength of this illusion--it is as compelling as, if not more so than, the Müller-Lyer illusion or the "devil's pitchfork." But she is not surprised. It's just as the completed science said things would be. My defense against the knowledge argument holds that Mary already knows all the facts, but lacks an ability, a bit of know-how. It thus qualifies as a version of the so-called "ability hypothesis," defended by David Lewis and independently by Laurence Nemirow.16 Lewis holds that Mary lacks the ability to recognize, to remember, and to imagine the colors she has not experienced. These abilities she gains upon being released and seeing red for the first time. Lewis is not explicit about the cognitive mechanisms underwriting these abilities, but so long as Mary gains no new knowledge, it does not matter. In fact, near the end of his 1988 paper "What Experience Teaches," which details Lewis's ability hypothesis, Lewis notes: "If the causal basis for those abilities turns out also to be a special kind of representation of some sort of information, so be it. We need only deny that it represents a special kind of information about a special subject matter. Apart from that, it's up for grabs what, if anything, it may represent" (1988, 290). I believe that my view is fully compatible with Lewis's approach, and what's more, it serves to flesh out some of the details available to defenders of the ability
16 Lewis 1988; Nemirow 1990.

hypothesis. This proves useful in countering some standing objections to Lewis's view. In what follows, I'll briefly present several objections to the ability hypothesis and offer my replies, to show that, with my clarification, Lewis's position is viable. I'll present three objections to the ability hypothesis, offered by Brian Loar, William Lycan, and Michael Tye, respectively. Brian Loar notes that Mary, upon leaving her room and seeing red for the first time, can think, "If red looks like this, then I really don't like red." Loar holds that while the simpler statement "Red looks like this" might express know-how, the more complex judgment with the demonstrative embedded in a conditional cannot express know-how. That use seems to introduce a meaningful predicate. Thus, Mary learns more than just know-how; she learns the concept she employs in the conditional (Loar 1997, 607). This objection is straightforwardly handled on the descriptivist model. The term "this" in the conditional (and in its uses elsewhere, including Loar's simpler sentence) can be read not as a demonstrative mentally pointing to its referent, but as an abbreviation standing in for the chunk of nonconscious theory activated when we access mental red. That piece of theory will contain information about the location of the quality in its quality space, its connection to everyday objects, and so on. For normal people, the chunk of theory that "this" stands in for is never made explicit, but Mary has explicit knowledge of that piece of theory. Thus, the conditional with the embedded reference to mental red reads, "If red looks like [the quality more like mental purple than mental green, "hotter" than mental blue, usually caused by perceptions of fire trucks, etc. ... that I am now

automatically accessing], then I don't like red." The judgment involves a legitimate predicate, one in part characterized by the exercised ability. But there are more materials available to account for the embedded judgment than the mere presence of the abilities to recognize, remember, and imagine. Once we take into account the richer resources provided by the automatic application of nonconscious theory, Loar's objection can be met. This also provides us with a general template for forming an abbreviated term "this" on the descriptivist model without invoking mental demonstratives and the theoretical difficulties they bring with them. William Lycan, in his 1996 book Consciousness and Experience, offers ten separate arguments against the ability hypothesis. To some extent Lycan's arguments recapitulate the argument from Loar just discussed. So for now, I will only focus on Lycan's first argument, from meaning and syntax. Lycan notes that ordinary "knowing wh-" constructions (knowing where, knowing what, etc.) are closely related to "that" clauses. For example, John knows where the store is in virtue of knowing that the store is on Main Street. Or John knows what's on the table in virtue of knowing that the book is on the table. Lycan contends that if we follow this standard way of looking at "knowing wh-" claims, Mary knowing what it's like to see red ought to be true in virtue of Mary knowing that it is like Q to see red. But this appears to be a piece of propositional knowledge, and thus this reading cannot be endorsed by defenders of the ability hypothesis, according to Lycan. Further, he contends that there is no principled alternative to reading the

"knowing what it's like" phrase as entailing the presence of propositional knowledge (Lycan 1996, 92-94). However, by appealing to elements of the nonconscious theory, we can provide an analysis consistent with Lycan's point about syntax. Let us accept that when Mary knows what it's like to see red, she thereby knows that it is like Q to see red. The fact Q that she knows is the fact specified by the relevant chunk of the nonconscious theory: that it is like [automatically accessing a quality more like mental purple than mental green, etc...] to see red. This is not a fact that Mary was unaware of before leaving her room, incidentally, but for the rest of us, it will be a fact we learn by way of experience, because unlike Mary, this is how we develop our knowledge of the theory. Thus, there is "nearby" propositional knowledge at hand that satisfies the syntactic point Lycan calls attention to. Note that I am not sure that he is correct in this analysis; it may be that the phrase "what it's like" really does behave in a novel way in perceptual contexts. My point is that the defender of the ability hypothesis can remain safely neutral on this issue. The third objection I'll address is from Michael Tye's 2000 book Consciousness, Color, and Content. In a chapter on Lewis's ability hypothesis, Tye counters a number of arguments against Lewis, but then goes on to offer his own, new objections. One is particularly relevant to my use of descriptive theory to account for first-person access, and though Tye aims it at Lewis's characterization of the abilities, it seemingly hits my characterization even more directly. Tye notes that we can be aware of very fine-grained differences in colors,

even when we cannot reliably recognize those colors individually. For example, we can see the difference between shades of red21 and red22, but we cannot reliably judge which sample is red21 when we see them alone. From this fact, Tye contends that we therefore do not possess concepts for red21 or red22--concept possession is in part constituted by an ability to recognize examples falling in the concept's extension. But we can know what it's like to see red21 in the experience where we compare it to red22. Against Lewis, Tye contends that we therefore can know what it's like to see red21 without possessing the ability to recognize, remember, or imagine red21. Against my view, it is sufficient to note the lack of a concept of red21, even though we can know what it's like to see red21. In either case, there seems to be a problem for the ability hypothesis. However, this is to underestimate the conceptual resources available in the nonconscious theory. So long as we have a well-defined quality space, we can locate the fine-grained shades like red21 by using comparative concepts. For example, red21 is the quality slightly darker than fire engine red, slightly lighter than crimson, and so on. We need not have a name for the shade; on the contrary, we only need to be able to place it in relation to more stable locations in the quality space. And what's more, this makes sense of the fact that we can pick out more fine-grained qualities in comparative tasks than in recognition or recall tasks. We can more easily ascertain that two samples fall in different places along a dimension of the quality space than we can recognize or recall an absolute location in the space on the basis of a single sample. Indeed, this is what we should expect to see if we are indeed picking out qualities by way of

location in a relational quality space. It is easier to judge the relative differences between qualities than it is to locate a quality by itself in the space. But in any event, Tye has not shown that the ability hypothesis cannot account for what we can access from the first-person perspective. Given that the theory can employ comparative concepts, there is no worry about knowing what it's like in the absence of descriptive concepts.17 This concludes my limited foray into the knowledge argument. Mary can know all the facts in her room, but she lacks the ability to automatically apply her theoretical knowledge to her own states. This requires experience. But Mary will not be surprised at what she is aware of when she acquires the ability. She may be impressed by the stubbornness of the illusion that there are intrinsic qualities of experience left out of her relational theory. But she will know better. Despite how things seem, she knows that the mental appearances are not always the best guide to mental reality, especially when it comes to the problem of consciousness.

17 See Rosenthal 1999b, 2005a, for a detailed defense of these claims.

REFERENCES

Achinstein, Peter (1981), "Can There be a Model of Explanation," Theory and Decision, Vol. 13, No. 3.

Akins, Kathleen (1996), "Lost the plot? Reconstructing Dennett's multiple drafts theory of consciousness," Mind and Language 11: 1-43.

Armstrong, David M. (1968), A Materialist Theory of the Mind, New York: Humanities Press.

Armstrong, David M. (1980), The Nature of Mind and Other Essays, Ithaca: Cornell University Press.

Baars, Bernard (1988), A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press.

Bargh, John A., Chaiken, S., Govender, R., and Pratto, F. (1992), "The generality of the automatic attitude activation effect," Journal of Personality and Social Psychology, 62, 893-912.

Bargh, John A. and Chartrand, Tanya L. (1999), "The Unbearable Automaticity of Being," American Psychologist, 54, 7, 462-479.

Bargh, John A., Chen, M. and Burrows, L. (1996), "Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action," Journal of Personality and Social Psychology, 71, 230-244.

Baron-Cohen, Simon (1997), Mindblindness: An Essay on Autism and Theory of Mind, Cambridge, MA: MIT Press.

Blackburn, Thomas (1988), "The Elusiveness of Reference," Midwest Studies in Philosophy, Volume XII: Realism and Anti-Realism, Peter A. French, Theodore E. Uehling Jr., and Howard K. Wettstein, eds., Minneapolis: University of Minnesota Press: 179-94.

Block, Ned (1978), "Troubles with Functionalism," in Block, N. (ed.), Readings in the Philosophy of Psychology, (Vol. 1), Cambridge, MA: Harvard University Press.

Block, Ned (1990), "Inverted Earth," in J. Tomberlin (ed.), Philosophical Perspectives: 4, Action Theory and the Philosophy of Mind, Atascadero, CA: Ridgeview Publishing Co.

Block, Ned (1992), "Begging the Question Against Phenomenal Consciousness," Behavioral and Brain Sciences 15, 205-206.


Block, Ned (1995), "On a Confusion about a Function of Consciousness," Behavioral and Brain Sciences 18, 2, reprinted in Block, N., Flanagan, O., and Güzeldere, G. (eds.), (1997), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press, 375-416.

Block, Ned (2001), "Paradox and Cross Purposes in Recent Work on Consciousness," Cognition, Volume 79, Issues 1-2, accessed at http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/Elsevier.html.

Block, Ned (2002a), "The Harder Problem of Consciousness," The Journal of Philosophy XCIX, No. 8, August, 391-425.

Block, Ned (2002b), "Consciousness, Philosophical Issues about," Encyclopedia of Cognitive Science, accessed at http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/ecs.pdf.

Block, Ned and Stalnaker, Robert (1999), "Conceptual Analysis, Dualism, and the Explanatory Gap," Philosophical Review 108: 1-46.

Borg, Emma (2000), "Complex Demonstratives," Philosophical Studies, 97, 2, 229-249.

Braddon-Mitchell, David, and Jackson, Frank (1996), Philosophy of Mind and Cognition, London: Blackwell Publishers.

Bransford, John D., Brown, Ann L., and Cocking, Rodney R. (eds.) (2000), How People Learn: Brain, Mind, Experience, and School, Washington, D.C.: National Academies Press.

Brennan, David and Stevens, Catherine (2002), "Specialist Musical Training and the Octave Illusion: Analytical Listening and Veridical Perception by Pipe Organists," Acta Psychologica, 109, 301-314.

Brochard, Renaud, Dufour, André, and Després, Olivier (2004), "Effects of Musical Expertise on Visuospatial Abilities: Evidence from Reaction Times and Mental Imagery," Brain and Cognition, 54, 103-109.

Brochet, Frédéric (2002), "Tasting: Chemical Object Representation in the Field of Consciousness," application presented for the grand prix of the Académie Amorim, accessed at http://www.academie-amorim.com.

Brochet, F. and Dubourdieu, D. (2001), "Wine Descriptive Language Supports Cognitive Specificity of Chemical Senses," Brain and Language, 77, 2, 187-196.

Carruthers, Peter (2000), Phenomenal Consciousness: A Naturalistic Theory, Cambridge: Cambridge University Press.
Chalmers, David J. (1995), "Facing Up to the Problem of Consciousness," Journal of Consciousness Studies, 2: 200-219.
Chalmers, David J. (1996), The Conscious Mind: In Search of a Fundamental Theory, Oxford and New York: Oxford University Press.
Chalmers, David J. (1999), "Materialism and the Metaphysics of Modality," Philosophy and Phenomenological Research, 59: 473-96.
Chalmers, David J. (2003a), "Consciousness and its Place in Nature," in Stich, S. and Warfield, T. (eds.), The Blackwell Guide to Philosophy of Mind, Oxford: Blackwell Publishing.
Chalmers, David J. (2003b), "The Content and Epistemology of Phenomenal Belief," in Smith, Q. and Jokic, A. (eds.), Consciousness: New Philosophical Essays, Oxford: Clarendon Press.
Chalmers, David J. (website), "Response to Dennett," http://jamaica.u.arizona.edu/~chalmers/responses.html#dennett2.
Chalmers, David J. and Jackson, Frank (2001), "Conceptual Analysis and Reductive Explanation," Philosophical Review 110: 315-361; page numbers from the revised version accessed at http://www.u.arizona.edu/~chalmers/papers/analysis.html.
Chase, W. G. and Simon, H. A. (1973), "Perception in chess," Cognitive Psychology, 4, 55-81.
Chi, M. T., Feltovich, P. J., and Glaser, R. (1981), "Categorization and representation of physics problems by experts and novices," Cognitive Science, 5, 121-152.
Churchland, Paul (1981), "Eliminative Materialism and the Propositional Attitudes," Journal of Philosophy, 78, No. 2.
Clark, Austen (1993), Sensory Qualities, Oxford: Clarendon Press.
Davies, Martin and Stoljar, Daniel (2004), Issue on Two-Dimensional Semantics, Philosophical Studies, 118.
De Groot, Adriaan (1978), Thought and Choice in Chess, The Hague: Mouton Publishers.

Dennett, Daniel C. (1991), Consciousness Explained, Boston: Little, Brown.
Dennett, Daniel C. (1993), "The Message is: There is no Medium," Philosophy and Phenomenological Research, LIII, 4, 919-931.
Dennett, Daniel C. (2004), "What Robo-Mary Knows," in Jackson, F. et al. (eds.), There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument, Cambridge, MA: Bradford/MIT.
Devitt, Michael (1981), Designation, New York: Columbia University Press.
Devitt, Michael (1996), Coming to Our Senses, Cambridge: Cambridge University Press.
Devitt, Michael (2001), "A Shocking Idea About Meaning," Revue Internationale de Philosophie 208, 449-72.
Devitt, Michael (2004), "The Case for Referential Descriptions," in Bezuidenhout, A. and Reimer, M. (eds.), Descriptions and Beyond, Oxford: Oxford University Press.
Devitt, Michael and Sterelny, Kim (1999), Language and Reality: An Introduction to the Philosophy of Language, 2nd edition, Cambridge, MA: The MIT Press.
Downey, Roger (2002), "Wine Snob Scandal," Seattle Weekly News, February 21-27.
Dretske, Fred (1995), Naturalizing the Mind, Cambridge, MA: MIT Press.
Dreyfus, Hubert L. (2005), "Overcoming the Myth of the Mental: How Philosophers can Profit from the Phenomenology of Everyday Expertise," Proceedings of the APA, 79, 2.
Dreyfus, Hubert L. (manuscript), "A Phenomenology of Skill Acquisition as the Basis for a Merleau-Pontian Non-representationalist Cognitive Science," accessed at http://socrates.berkeley.edu/~hdreyfus/pdf/MerleauPontySkillCogSci.pdf.
Eerola, Tuomas, Himberg, Tommi, Toiviainen, Petri, and Louhivuori (2006), "Perceived complexity of western and African folk melodies by western and African listeners," Psychology of Music, 34, 3, 337-371.
Ericsson, K. A. and Kintsch, W. (1995), "Long-term working memory," Psychological Review, 102, 211-245.

Fan, J., Liu, F., Wu, J., and Dai, W. (2004), "Visual perception of female physical attractiveness," Proceedings of the Royal Society of London, 271, 347-352.
Fodor, Jerry A. (1992), "Can there be a science of mind?" TLS, No. 4657, July 3, 1992.
Frantz, Roger (2003), "Herbert Simon: Artificial intelligence as a framework for understanding intuition," Journal of Economic Psychology 24, 265-277.
Gobet, F. and Simon, H. (1996), "Templates in chess memory: A mechanism for recalling several boards," Cognitive Psychology 31, 1-40.
Gopnik, Alison (1993), "How We Know Our Minds: The Illusion of First-Person Knowledge of Intentionality," Behavioral and Brain Sciences 16, 1-14.
Graham, George and Horgan, Terry (2000), "Mary Mary, Quite Contrary," Philosophical Studies 99, S, 59-87.
Gromko, Joyce E. (1993), "Perceptual Differences between Expert and Novice Music Listeners: A Multi-Dimensional Scaling Analysis," Psychology of Music, 21, 34-47.
Harman, Gilbert (1990), "The Intrinsic Quality of Experience," in Tomberlin, J. (ed.), Philosophical Perspectives 4: Action Theory and the Philosophy of Mind, Atascadero, CA: Ridgeview Publishing Co.
Harman, Gilbert (1999), Reasoning, Meaning, and Mind, Oxford: Clarendon Press.
Hempel, Carl G. (1965), Aspects of Scientific Explanation, New York: Free Press.
Horgan, Terry (1984), "Supervenience and Cosmic Hermeneutics," Southern Journal of Philosophy, suppl., 22: 19-38.
Hughson, Angus L. and Boakes, Robert A. (2001), "Perceptual and Cognitive Aspects of Wine Expertise," Australian Journal of Psychology, Vol. 53, No. 2, 103-108.
Jackson, Frank (1982), "Epiphenomenal Qualia," Philosophical Quarterly 32, 127-136.
Jackson, Frank (1994), "Finding the Mind in the Natural World," reprinted in Block, N., Flanagan, O., and Güzeldere, G. (eds.) (1997), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press, 483-492.


Jackson, Frank (1998), From Metaphysics to Ethics: A Defence of Conceptual Analysis, Oxford: Clarendon.
Jackson, Frank (2003), "Mind and Illusion," in O'Hear, A. (ed.), Minds and Persons, Royal Institute of Philosophy Supplement: 53, Cambridge: Cambridge University Press, 251-71.
Jackson, Frank, Ludlow, Peter, Nagasawa, Yujin, and Stoljar, Daniel (eds.) (2004), There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument, Cambridge, MA: Bradford/MIT.
Johansson, Petter, Hall, Lars, Sikström, Sverker, and Olsson, Andreas (2005a), "Failure to Detect Mismatches Between Intention and Outcome in a Simple Decision Task," Science, 310, 116-119.
Johansson, Petter, Hall, Lars, Sikström, Sverker, and Olsson, Andreas (2005b), "Supporting Materials," accessed at http://www.sciencemag.org/cgi/content/abstract/310/5745/116.
Kim, Jaegwon (1993), Supervenience and Mind: Selected Philosophical Essays, Cambridge: Cambridge University Press.
Kim, Jaegwon (1998), Mind in a Physical World, Cambridge, MA: The MIT Press.
Klagge, James (1988), "Supervenience: Ontological and Ascriptive," Australasian Journal of Philosophy 66, 461-470.
Kripke, Saul (1980), Naming and Necessity, Cambridge, MA: Harvard University Press.
Larkin, J., McDermott, J., Simon, D. P., and Simon, H. A. (1980), "Expert and novice performance in solving physics problems," Science, 208 (4450), 1335-1342.
Levin, Michael (1995), "Tortuous Dualism," Journal of Philosophy, Vol. 92, No. 6 (June), 314-323.
Levine, Joseph (1983), "Materialism and Qualia: The Explanatory Gap," Pacific Philosophical Quarterly 64, 354-361.
Levine, Joseph (1993), "On Leaving Out What It's Like," in Davies, M. and Humphreys, G. W. (eds.), Consciousness, Oxford: Blackwell Publishing.
Levine, Joseph (2001), Purple Haze: The Puzzle of Consciousness, Oxford and New York: Oxford University Press.


Lewis, David (1972), "Psychophysical and Theoretical Identifications," Australasian Journal of Philosophy, L, 3 (December), 249-258, reprinted in Lewis, D., Papers in Metaphysics and Epistemology, Cambridge: Cambridge University Press, 248-261.
Lewis, David (1980), "Mad Pain and Martian Pain," in Block, N. (ed.), Readings in the Philosophy of Psychology, Vol. 1, Cambridge, MA: Harvard University Press.
Lewis, David (1983a), "Postscript to 'Mad Pain and Martian Pain,'" in Philosophical Papers, Vol. 1, Cambridge: Cambridge University Press.
Lewis, David (1983b), "New Work for a Theory of Universals," Australasian Journal of Philosophy 61: 343-377.
Lewis, David (1988), "What Experience Teaches," Proceedings of the Russellian Society of the University of Sydney, reprinted in Lycan, W. (ed.), Mind and Cognition, Oxford: Blackwell, 1990, 499-519.
Lewis, David (1994), "Reduction of Mind," reprinted in Lewis, D., Papers in Metaphysics and Epistemology, Cambridge: Cambridge University Press, 291-324.
Lewis, David (1995), "Should a Materialist Believe in Qualia?" Australasian Journal of Philosophy 73: 140-144.
Lewis, David and Langton, Rae (1998), "Defining 'Intrinsic,'" Philosophy and Phenomenological Research, 58, 333-345.
Libet, Benjamin (1985), "Unconscious cerebral initiative and the role of conscious will in voluntary action," Behavioral and Brain Sciences, 8, 4 (December), 529-566.
Linhares, Alexandre (2005), "An Active Symbols Theory of Chess Intuition," Minds and Machines, 15, 131-181.
Loar, Brian (1997), "Phenomenal States," in Block, N., Flanagan, O., and Güzeldere, G. (eds.), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press.
Locke, John (1979/1689), An Essay concerning Human Understanding, Oxford: Clarendon Press.
Lepore, Ernest and Ludwig, Kirk (2000), "The semantics and pragmatics of complex demonstratives," Mind 109 (434), 199-240.


Ludlow, Peter (ed.) (1997), Readings in the Philosophy of Language, Cambridge, MA: MIT Press.
Lycan, William G. (1987), Consciousness, Cambridge, MA: The MIT Press.
Lycan, William G. (1996), Consciousness and Experience, Cambridge, MA: MIT Press/Bradford Books.
Lycan, William G. (2001), "A simple argument for the higher-order representation theory of consciousness," Analysis 61, 1: 3-4.
Lyons, William (1986), The Disappearance of Introspection, Cambridge, MA: Bradford/MIT.
Marcel, Anthony J. (1983a), "Conscious and unconscious perception: Experiments on visual masking and word recognition," Cognitive Psychology 15, 197-237.
Marcel, Anthony J. (1983b), "Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes," Cognitive Psychology 15, 238-300.
Martin, R. D. (1994), The Specialist Chick Sexer, Melbourne: Bernal Publishing.
Martinich, A. P. (ed.) (2000), The Philosophy of Language, New York: Oxford University Press.
McGinn, Colin (1989), "Can We Solve the Mind-Body Problem?" Mind, 98 (391), reprinted in Block, N., Flanagan, O., and Güzeldere, G. (eds.) (1997), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press, 529-542.
McLaughlin, Brian P. (1992), "The Rise and Fall of British Emergentism," in Beckermann, A., Kim, J., and Flohr, H. (eds.), Emergence or Reduction?, De Gruyter, 49-93.
McLaughlin, Brian P. and Hill, Christopher S. (1999), "There are Fewer Things in Reality than are Dreamt of in Chalmers's Philosophy," Philosophy and Phenomenological Research, Vol. LIX, 2, 445-454.
McLaughlin, Brian P. (2003), "A Naturalist-Phenomenal Realist Response to Block's Harder Problem," Philosophical Issues, 13: 163-204.
Minsky, Marvin (1985), The Society of Mind, New York: Simon and Schuster.

Montero, Barbara (1999), "The Body Problem," Noûs 33: 183-200.
Morrot, Gil, Brochet, Frédéric, and Dubourdieu, Denis (2001), "The Color of Odors," Brain and Language, 79, 309-320.
Nagel, Thomas (1974), "What is it Like to be a Bat?" Philosophical Review 83, 435-445.
Neale, Stephen (1993), Descriptions, Cambridge, MA: Bradford/MIT.
Nemirow, Laurence (1990), "Physicalism and the Cognitive Role of Acquaintance," in Lycan, W. (ed.), Mind and Cognition, Oxford: Blackwell, 1990, 490-499.
Nisbett, Richard and Wilson, Timothy (1977), "Telling More than we Can Know: Verbal Reports on Mental Processes," Psychological Review LXXXIV, 3 (May): 231-259.
Ostertag, Gary (ed.) (1998), Definite Descriptions: A Reader, Cambridge, MA: The MIT Press.
Palmer, Steven (1999), "Color, Consciousness, and the Isomorphism Constraint," Behavioral and Brain Sciences 22, 6 (December).
Pangborn, R., Berg, H., and Hansen, B. (1963), "The influence of color on discrimination of sweetness in dry table-wine," American Journal of Psychology, 76, 492-495.
Papineau, David (1993), "Physicalism, consciousness and the antipathetic fallacy," Australasian Journal of Philosophy, Volume 71, Number 2 (June), 169-183.
Papineau, David (1995), "The Anti-pathetic Fallacy and the Boundaries of Consciousness," in Metzinger, T. (ed.), Conscious Experience, Exeter, UK: Imprint Academic.
Papineau, David (2002), Thinking About Consciousness, Oxford: Clarendon.
Peacocke, Christopher (1989), "Perceptual content," in Almog, J., Perry, J., and Wettstein, H. (eds.), Themes from Kaplan, Oxford: Oxford University Press, 297-329.
Perry, John (2001), Knowledge, Possibility, and Consciousness, Cambridge, MA: MIT Press.

Perry, John (2004a), "Précis of Knowledge, Possibility, and Consciousness," Philosophy and Phenomenological Research Vol. LXVIII, No. 1 (January), 172-181.
Perry, John (2004b), "Replies," Philosophy and Phenomenological Research Vol. LXVIII, No. 1 (January), 207-229.
Prietula, Michael J. and Simon, Herbert A. (1989), "The Experts in Your Midst," Harvard Business Review, Jan./Feb., 120-124.
Putnam, Hilary (1962a), "It Ain't Necessarily So," Journal of Philosophy, 59: 658-71.
Putnam, Hilary (1962b), "The Analytic and the Synthetic," in Feigl, H. and Maxwell, G. (eds.), Scientific Explanation, Space, and Time: Minnesota Studies in the Philosophy of Science, Vol. 3, Minneapolis: University of Minnesota Press, 358-397.
Putnam, Hilary (1975), "The meaning of 'meaning,'" in Gunderson, K. (ed.), Language, Mind and Knowledge, Minnesota Studies in the Philosophy of Science, VII, Minneapolis: University of Minnesota Press.
Quine, Willard Van Orman (1951), "Two Dogmas of Empiricism," Philosophical Review, 60: 20-43.
Quine, Willard Van Orman (1960), Word and Object, Cambridge, MA: MIT Press.
Quine, Willard Van Orman (1976), "Posits and Reality," in The Ways of Paradox and Other Essays, Cambridge, MA: Harvard University Press.
Quine, Willard Van Orman and Ullian, J. S. (1970), The Web of Belief, New York: Random House.
Reimer, Marga (1991), "Do Demonstrations Have Semantic Significance?" Analysis, Vol. 51, No. 4 (October), 177-183.
Rey, Georges (1997), "A Question About Consciousness," in Block, N., Flanagan, O., and Güzeldere, G. (eds.), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press, 461-482.
Reingold, Eyal, Charness, Neil, Pomplun, Marc, and Stampe, David M. (2001a), "Visual Span in Expert Chess Players: Evidence from Eye Movements," Psychological Science, Vol. 12, 1, 48-55.

Reingold, E. M., Charness, N., Schultetus, R. S., and Stampe, D. M. (2001b), "Perceptual automaticity in expert chess players: Parallel encoding of chess relations," Psychonomic Bulletin & Review, 8, 3, 504-510.
Rosenthal, David M. (1980), "Keeping Matter in Mind," Midwest Studies in Philosophy, V, 295-322.
Rosenthal, David M. (1986), "Two Concepts of Consciousness," Philosophical Studies 49, 3 (May), 329-359.
Rosenthal, David M. (1991), "The Independence of Consciousness and Sensory Qualities," in Villanueva, E. (ed.), Consciousness: Philosophical Issues, 1, Atascadero, CA: Ridgeview Publishing Company.
Rosenthal, David M. (1995), "Multiple Drafts and Facts of the Matter," in Metzinger, T. (ed.), Conscious Experience, Exeter, UK: Imprint Academic, 275-290.
Rosenthal, David M. (1997), "A Theory of Consciousness," in Block, N., Flanagan, O., and Güzeldere, G. (eds.), The Nature of Consciousness: Philosophical Debates, Cambridge, MA: The MIT Press, 729-753.
Rosenthal, David M. (1999a), "Sensory Quality and the Relocation Story," Philosophical Topics, 26, 1 and 2 (Fall and Spring), 321-350.
Rosenthal, David M. (1999b), "The Colors and Shapes of Visual Experiences," in Fisette, D. (ed.), Consciousness and Intentionality: Models and Modalities of Attribution, Dordrecht: Kluwer Academic Press, 95-118.
Rosenthal, David M. (2000), "Introspection and Self-Interpretation," Philosophical Topics, 28, 2 (Fall), 201-233.
Rosenthal, David M. (2002a), "Explaining Consciousness," in Chalmers, D. (ed.), Philosophy of Mind: Contemporary Readings, Oxford: Oxford University Press.
Rosenthal, David M. (2002b), "How Many Kinds of Consciousness?" Consciousness and Cognition 11, 653-665.
Rosenthal, David M. (2004), "Varieties of Higher-Order Theory," in Gennaro, R. J. (ed.), Higher-Order Theories of Consciousness, Amsterdam and Philadelphia: John Benjamins Publishers, 19-44.
Rosenthal, David M. (2005a), "Sensory Qualities, Consciousness, and Perception," in Rosenthal (2005b), 175-228.

Rosenthal, David M. (2005b), Consciousness and Mind, Oxford: Clarendon Press.
Salmon, Wesley C. (1989), "Four Decades of Scientific Explanation," in Kitcher, P. and Salmon, W. (eds.), Minnesota Studies in the Philosophy of Science, XIII: Scientific Explanation, Minneapolis: University of Minnesota Press.
Schmidt, Henk and Boshuizen, Henny (1993), "On Acquiring Expertise in Medicine," Educational Psychology Review, 5, 3, 205-221.
Shoemaker, Sydney (1975), "Functionalism and Qualia," Philosophical Studies 27: 291-315.
Shoemaker, Sydney (1996), The First-Person Perspective and Other Essays, Cambridge: Cambridge University Press.
Shoemaker, Sydney (2003), Identity, Cause, and Mind, 2nd edition, New York: Oxford University Press.
Siewert, Charles (1998), The Significance of Consciousness, Princeton: Princeton University Press.
Siewert, Charles (2004), Symposium on The Significance of Consciousness, http://psyche.cs.monash.edu.au/symposia/siewert/index.html.
Simons, Daniel J. and Levin, David T. (1997), "Change blindness," Trends in Cognitive Sciences, 1, 7: 261-267.
Solomon, Gregg (1990), "Psychology of novice and expert wine talk," American Journal of Psychology, 105, 495-517.
Solomon, Gregg (1997), "Conceptual change and wine expertise," The Journal of the Learning Sciences, 6, 41-60.
Stich, Stephen P. (1996), Deconstructing the Mind, New York: Oxford University Press.
Sturgeon, Scott (1994), "The Epistemic View of Subjectivity," Journal of Philosophy, Vol. 91, No. 5, 221-235.
Tye, Michael (1995), Ten Problems of Consciousness, Cambridge, MA: The MIT Press.
Tye, Michael (2000), Color, Content, and Consciousness, Cambridge, MA: MIT Press.


Varela, Francisco (1996), "Neurophenomenology: A Methodological Remedy for the Hard Problem," Journal of Consciousness Studies, "Special Issues on the Hard Problems," Shear, J. (ed.) (June).
Velmans, Max (1996), "Consciousness and the Causal Paradox," Behavioral and Brain Sciences 19, 3, 537-542.
Velmans, Max (2000), Understanding Consciousness, London: Routledge/Psychology Press/Taylor & Francis.
Weatherson, Brian (2006), "Intrinsic vs. Extrinsic Properties," Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/intrinsic-extrinsic/.
Weaver, Francis M. and Carroll, John S. (1985), "Crime Perceptions in a Natural Setting by Expert and Novice Shoplifters," Social Psychology Quarterly, 48, 4, 349-359.
Weiskrantz, Larry (1986), Blindsight, Oxford: Oxford University Press.
Weiskrantz, Larry (1997), Consciousness Lost and Found: A Neuropsychological Exploration, Oxford: Oxford University Press.
White, Peter A. (1988), "Knowing more about what we can tell: 'Introspective access' and causal report accuracy 10 years later," British Journal of Psychology, 79, 13-45.
