Vagueness and the Metaphysics of Consciousness

 

Michael V. Antony

Department of Philosophy

University of Haifa

Haifa, Israel

 

(appears in Philosophical Studies 128(3), 2006, 515-538)

 

 

Abstract. An argument is offered for this conditional: If our current concept conscious state is sharp rather than vague, and also correct (at least in respect of its sharpness), then common versions of familiar metaphysical theories of consciousness are false—namely versions of the identity theory, functionalism, and dualism that appeal to complex physical or functional properties in identification, realization, or correlation. Reasons are also given for taking seriously the claim that our current concept conscious state is sharp. The paper ends by surveying the theoretical options left open by the concept’s sharpness and the truth of the conditional argued for in the paper.

 

 

I wish to argue for the truth of a conditional which on a first pass can be expressed as follows: If our concept conscious state[1] is sharp rather than vague, then common versions of the identity theory, functionalism, and dualism are false. Clearly such a conditional, even if true, will be of limited interest unless there is good reason to think that its antecedent is also true. In Section 1, accordingly, I attempt to arrive at a more accurate statement of the conditional, and then show why and in what sense the sharpness of conscious state must be taken seriously. That done, in Section 2 I present the arguments for the conditional, and in Section 3 I consider how proponents of the metaphysical theories of consciousness addressed in those arguments might respond.

 

1. Is our concept conscious state sharp?

 

1.1 Intuitions and concepts

 

Intuitively, our concept conscious state seems to many to be sharp rather than vague. Understanding a vague concept as one that has possible borderline cases[2], the intuition is that there can be no borderline conscious states—that every state is either clearly conscious or clearly not conscious. The intuition extends naturally to our concept conscious creature if we understand creatures to be conscious if and only if they can enjoy conscious states. That is because a borderline conscious creature would have to have borderline conscious states, but if there can be no such states there can be no such creatures.

 

            The intuition that conscious state is sharp can be elicited in most people if two mistakes are avoided. First, one must not confuse faint, hazy, incoherent, or fleeting experiences with borderline experiences, since the former have characteristic, rich phenomenologies, and so are clearly conscious. Second, many philosophers speak of experiential elements that can in some sense go unnoticed, perhaps due to a lack of attention, and it may be tempting to view such elements as borderline conscious.[3] On reflection, however, there seem to be only two ways of imagining them: either as having phenomenal features, being “like something,” just that we are in some sense unaware of those features; or as not having them (though they may possess structurally similar non-phenomenal features). Since no third possibility seems conceivable, these appear not to be genuine borderline conscious states.

 

            There are three main ways of responding to the claim that our concept conscious state is intuitively sharp. First, one can take the intuition to accurately reflect the nature of the concept, and so deny the possibility of borderline conscious states. Second, one can insist that borderline conscious states are possible, the intuition to the contrary notwithstanding. Anyone taking this line must treat the intuition of conscious state’s sharpness as in some sense illusory. Third, one can deny finding the concept intuitively sharp.

 

            Although the second option—that of acknowledging the intuition of sharpness but insisting that the concept is vague nonetheless—is the most common of the three, it suffers from an internal tension. That is because it is natural to suppose that our intuitions about the applicability of our concepts to hypothetical cases, as well as our expressed judgments about such cases, are in large part caused by those concepts. It is because my concepts red and less-than-seven are represented in my brain in the ways they are that I intuitively find red to be vague and less-than-seven sharp. The psychological natures of those concepts largely determine and explain my dispositions to judge that some colors are borderline red, but no numbers are borderline cases of being less than seven.[4] On this picture—which I assume to be on the whole correct—one’s (mentally represented) concept is vague if and only if (normally) one has the appropriate corresponding intuitions and dispositions regarding hypothetical cases. It follows that one cannot plausibly hold both that one’s concept conscious state is intuitively sharp and also that that same concept is vague.[5]

 

            One can, however, distinguish between our current concept conscious state and some future version of the concept, claiming that while our current concept is indeed sharp, a future development of it will be vague. This occurred with the concept life. At earlier stages in its history the concept was sharp: borderline living creatures were inconceivable. However, the concept developed with the advent of modern biology, and it now appears to be vague (viruses, for example, are often plausibly suggested as borderline cases).[6] Similarly, it might be thought that our current, relatively primitive concept conscious state must undergo developmental change before it can correctly represent its subject matter, one such change being from sharpness to vagueness. In this way, one can admit that our current concept is sharp in accordance with our intuitions, but maintain that the concept is also in a sense vague since a future, more correct version of it will be vague. (A sense is thus provided in which our intuitions about conscious state’s sharpness can be seen as illusory.) I believe that only in some such way can sense be made of the second option discussed above.

 

            The third way of responding to the claim that conscious state is intuitively sharp is to deny having the intuition. No doubt there are concepts some call ‘the concept conscious state’ that certain individuals find not to be intuitively sharp. Such concepts, however, typically reflect a prior commitment to a materialist theory that invokes complex physical or functional states or properties. In contrast, the concept conscious state that I have in mind is that familiar concept through which most of us think of consciousness when we grasp the traditional mind-body problem. It is the concept employed when, for example, performing Descartes’ dreaming or evil-genius thought experiments, one attends to what it is that remains if there is no external world. Prima facie, that concept is pre-theoretical with respect to materialism, dualism, and idealism, at least in the sense that it is neutral among those theories (which is why each such theory can be of consciousness, so conceived). Now I deny that one can fail to find that concept conscious state intuitively sharp, so I reject the third option under consideration.[7] That is because I believe that that concept is in fact sharp, and that sharp concepts (normally) cause in us appropriate corresponding intuitions and judgments (see above). Though I have argued at length elsewhere that that concept is sharp (see Antony 2004), I cannot rehearse the argument here. Instead, I offer a short argument that points to the same conclusion. The argument concerns in the first instance our concept conscious creature, but is extendable to conscious state.

 

            Those who believe that conscious creature is vague often point to various types of creatures (fish, worms, insects, etc.) as possible borderline cases. Judgments that such cases are borderline, however, are not plausibly determined by one’s concept conscious creature—at least not in the way in which a judgment that some color is borderline red is determined by the psychological nature of one’s concept red. To see that, notice that regardless of where one believes the boundary between conscious and nonconscious creatures to lie, one can entertain a full range of possibilities: one can imagine with Descartes that only humans are conscious, or that the boundary lies closer to fish, or that plants or even everything is conscious (panpsychism). Our concept conscious creature itself rules out none of those possibilities. In contrast, any uncontroversially vague concept like red or child determines for us one or more specific borderline regions to which anyone competent with the concepts is sensitive; and the way in which such concepts are mentally represented excludes our even entertaining the possibility of borderline cases lying well outside those regions—such as, for example, black being a borderline case of red, or a 70-year-old human being (literally) a borderline case of child. Since conscious creature determines for us no such borderline region, it appears not to be genuinely vague.[8] But then neither can conscious state be vague. For if borderline conscious states were conceivable, so would be creatures with such states, and consequently so would be borderline conscious creatures.

 

            It appears that none of the three ways of responding to the claim that our current concept conscious state is intuitively sharp allow one to maintain that that concept is vague. At a minimum, therefore, the possibility that it is sharp must be taken very seriously indeed. This conclusion motivates arguing for a conditional whose antecedent contains the proposition that our current concept conscious state is sharp, and whose consequent states that most common metaphysical theories of consciousness are false.

 

1.2 Preview

 

Here is a more accurate statement of the conditional to be defended in Section 2: If our current concept conscious state is sharp, and if that concept is correct (at least in respect of its sharpness), then common versions of the identity theory, functionalism, and dualism are false, or at least should be rejected. The common versions, we shall see, are those that appeal to complex physical or functional states in identification, realization, or correlation—that is, states that are sufficiently complex to ensure that their associated physical or functional concepts are vague.

 

            The above conditional may seem obviously true to many, and so not in need of defense. (Many would wish to infer by modus tollens from the conditional and the truth of one of the theories referred to in the consequent that at least one of the antecedent’s conjuncts is false.) However, the argument is worth developing in detail, since matters are more complex than they might seem. The three metaphysical theories of consciousness, for example, require somewhat different arguments; and those arguments also vary depending on which theory of vagueness one favors. For reliable conclusions, therefore, the details need to be worked out.

 

            Assuming the truth of the conditional is established by the arguments of Section 2, there are four ways to respond. The first is to deny that our current concept conscious state is sharp. Having already addressed that, I shall not pursue it further in this paper. The second response involves accepting the truth of the antecedent, and inferring that the theories referred to in the consequent are false. From that it follows that any true theory of consciousness must appeal to properties whose concepts are sharp, perhaps properties of fundamental physics, for example. The third and fourth responses deny in different ways that our current, sharp concept is correct. One can hold that it is incorrect because it is empty or lacks reference, that is, one can adopt eliminativism; or one can maintain that some future, correct version of the concept will be vague. These last three responses will be discussed briefly in Section 3.

 

2. The arguments

 

In this section I lay out the arguments for the truth of the conditional stated above. The arguments assume the sharpness and correctness of our current concept conscious state, and attempt to derive the falsity of the metaphysical theories of consciousness referred to in the consequent.

 

            Since the arguments against the identity theory, functionalism, and dualism differ depending on which theory of vagueness one favors, I begin with a brief description of three main views of vagueness, each of which shall be considered when discussing each theory of consciousness. First, there is the view that borderline cases are a purely semantic phenomenon: we do not know whether to count certain piles of sand as heaps because the semantics of our language (/concepts) determines no answer for such piles. Indeed, there is nothing to know, no fact of the matter. Following Sainsbury (1994) I call this view ‘the semantic view’. The second view also holds that the semantics of our language does not determine whether vague terms apply to borderline cases, but in addition it explains that by claiming that the world itself (objects, properties, etc.) is vague or indeterminate. Call this the ‘ontological view’. The third view is the epistemic view, according to which our inability to classify borderline cases is due to our ignorance. There is no semantic or ontological indeterminacy: for every borderline case there are facts about whether or not it satisfies the vague expression. We simply do not know what those facts are, and perhaps could not know them.[9]

 

2.1 The identity theory

 

Consider now some implications of the sharpness and correctness of our current concept conscious state for the type-identity theory. Identity theorists focus on determinate types of conscious states, C1, C2,…, Cn (pain, orange after-image, etc.), and neurophysiological states, N1, N2,…,Nn (c-fiber firings, activity in visual area V1, etc.), and claim that C1 = N1, C2 = N2,…, Cn = Nn.[10] They typically do not explain what it is in general to be in a conscious state (call that property ‘C’, which is short for ‘conscious state’). However, identity theorists must believe there is some neurophysiological story to be told about what distinguishes conscious from non-conscious states. That story, in effect, will ascribe a single property N to all conscious states, which the identity theorist will identify with C. There are various possibilities for what N might be: a property common to each of N1, N2,…,Nn; a disjunction of N1, N2,…,Nn; a disjunction of properties more general than N1, N2,…,Nn but less general than N; and so forth. For our purposes it does not matter how exactly the story goes, so long as some N is identified with C.

 

            I shall argue that no matter which view of vagueness one favors, the identity theory is false given that conscious state (or ‘conscious state’) is both sharp and correct. [11] A further assumption I need is that the neurophysiological concept N (or expression ‘N’) is vague. I can assume that because no matter what N is, due to the complexity of neurophysiological properties, borderline Ns can in principle always be reached from clear Ns through gradual changes in the neurophysiological structures or properties composing Ns.[12] Since we do not yet know what N will be like, I cannot offer a specific illustration. However, the point can be seen by focusing on the vagueness of the concept neuron. Neurons are highly complex structures, with diverse components which perform sophisticated micro-functions. Anyone minimally familiar with such details can convince oneself that by gradually removing atoms (or other sufficiently small parts) from such neuronal components, one will eventually reach borderline cases for concepts of at least many of those components (and their properties), and as a result borderline cases of neuron as well—structures that are neither clearly neurons nor clearly not neurons.[13] By gradually replacing neurons and neuronal properties that compose a token N with borderline neurons and properties, it is virtually certain that, regardless of what N is, a borderline N will eventually be reached. N does not determine for us a definite point at which a brain undergoing such changes switches from being in N to not being in N.

 

            Consider now the following argument:

 

(I1) If N=C, then ‘N’ has borderline cases if and only if ‘C’ has borderline cases.[14]

(I2) There are borderline cases of ‘N’ but no borderline cases of ‘C’.

----------------

(I3) N≠C

 

My strategy is to show that the identity theory is false (N≠C), given the sharpness of ‘C’, the vagueness of ‘N’, and the correctness of both ‘C’ and ‘N’. Clearly the above argument is valid, and I2 follows from the assumptions that ‘C’ is sharp and ‘N’ is vague. Only I1 need be addressed in any detail, therefore. I shall argue that I1 is true on both the semantic and ontological views of vagueness, given the correctness of both ‘C’ and ‘N’. The epistemic view will require separate treatment, because I1 may be false on that view, even if ‘C’ and ‘N’ are correct. That is because on the epistemic view N has a sharp conceptual boundary in spite of having borderline cases, so it might be that N=C (if N’s sharp conceptual boundary matches C’s) even though only ‘N’ and not ‘C’ has borderline cases.
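The validity of the I1–I3 argument can be checked mechanically. Here is a minimal propositional sketch in Lean 4, where the names `bN`, `bC`, and `idNC` are illustrative stand-ins for the claims ‘N has borderline cases’, ‘C has borderline cases’, and ‘N=C’:

```lean
-- Propositional sketch of I1–I3 (names are illustrative only).
-- bN   : "'N' has borderline cases"
-- bC   : "'C' has borderline cases"
-- idNC : "N = C"
example (bN bC idNC : Prop)
    (I1 : idNC → (bN ↔ bC))    -- premise I1
    (I2 : bN ∧ ¬bC) :          -- premise I2
    ¬idNC :=                   -- conclusion I3: N ≠ C
  fun h => I2.2 ((I1 h).mp I2.1)
```

The sketch records only the argument's propositional form; the philosophical work, as the text notes, lies in defending premise I1.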

 

            That the “assumption of correctness” for N and C (or ‘N’ and ‘C’) is required for I1 is easily seen. For if either N or C were incorrect, one could be vague and the other sharp, even though N and C are the same property (N=C). Consider again the concept life. Assume that the property of being alive (“having life”) is identical to some complex biological/functional property B, and that the correct concepts B and life are both vague. Imagine now an incorrect, sharp version of life from an earlier stage in the concept’s history, and consider an analog of I1 using that earlier concept life, and B. We thus have the following: life is sharp, B is vague, but it is of course still true that the property of being alive is identical to B. Absent the assumption of correctness, therefore, the analog of I1 for the properties of being alive and B is false. So the same holds for I1 itself.

 

            To show that I1 is true on the semantic and ontological views I employ the following schema:

 

(Schema Φ): ‘Φ’ has borderline cases if and only if there is an x such that there is no fact of the matter whether x has the property Φ.

 

Two points of clarification regarding Schema Φ are required. First, I rely on an intuitive understanding of ‘no fact of the matter’. Although the notion may be explicable in terms of some notion of indeterminacy, there are difficulties involved in providing the account. However, some such account is required if there is to be an alternative to the epistemic view of vagueness.[15] I shall simply assume that my argument goes through no matter which account is correct. Second, because I understand borderline cases as entailing blurred boundaries, as being consistent with higher-order vagueness, etc., the right-hand side of the biconditional, to be accurate, must include those conditions. Since they can be added to all of the relevant premises in the following argument (I5, I6, I7) without changing the conclusion, I exclude them for ease of exposition.

 

            I shall argue presently that both semantic and ontological theorists accept instances of Schema Φ, given that ‘Φ’ is correct. First, however, notice how I1 can be proved if we help ourselves to instances of Schema Φ.

 

(I4) N=C (assumption)

(I5) ‘N’ has borderline cases if and only if there is an x such that there is no fact of the matter whether x has the property N. (Schema Φ)

(I6) There is an x such that there is no fact of the matter whether x has the property N if and only if there is an x such that there is no fact of the matter whether x has the property C. (I4 and Leibniz’s law)

(I7) ‘C’ has borderline cases if and only if there is an x such that there is no fact of the matter whether x has the property C. (Schema Φ)

(I8) ‘N’ has borderline cases if and only if ‘C’ has borderline cases. (I5 – I7)

--------------------

(I1) If N=C, then ‘N’ has borderline cases if and only if ‘C’ has borderline cases. (I4 – I8)
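The step from I5–I7 to I8 is a simple composition of biconditionals, which can likewise be verified mechanically. A Lean 4 sketch, again with illustrative proposition names (`nfN`, `nfC` abbreviate the ‘no fact of the matter’ claims for N and C):

```lean
-- Propositional sketch of the I5–I8 chain (names are illustrative only).
-- bN, bC  : borderline-case claims for 'N' and 'C'
-- nfN, nfC: "there is an x such that there is no fact of the matter
--            whether x has N (resp. C)"
example (bN bC nfN nfC : Prop)
    (I5 : bN ↔ nfN)            -- Schema Φ applied to 'N'
    (I6 : nfN ↔ nfC)           -- from I4 (N = C) and Leibniz's law
    (I7 : bC ↔ nfC) :          -- Schema Φ applied to 'C'
    bN ↔ bC :=                 -- conclusion I8
  I5.trans (I6.trans I7.symm)
```

Discharging the assumption I4 then yields the conditional I1.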

 

            Here is why semantic and ontological theorists are committed to instances of Schema Φ, given the assumption of correctness. Semantic theorists hold that ‘Φ’ has borderline cases if and only if (roughly) there is an x such that the semantics of ‘Φ’ does not determine whether or not ‘Φ’ applies to x. But semantic theorists move freely between such metalinguistic claims about the applicability of predicates and corresponding object-language claims. For example, rather than saying that the semantics of ‘bald’ does not determine whether the term applies to John, the semantic theorist will often say that it is indeterminate whether John is bald, or there is no fact of the matter whether he is. Moreover, as competent speakers of English, semantic theorists know that the meaning of the word ‘property’ is such that ‘x is P’ and ‘x has the property P’ are for the most part interchangeable (independently of one’s metaphysics of properties). So the semantic theorist is committed to Schema Φ.[16] Notice, by the way, that the legitimacy of such moves from metalinguistic to object-level claims assumes the correctness of ‘Φ’. For if ‘Φ’ were incorrectly sharp (as life was), for example, the semantics of ‘Φ’ would determine, for each x, whether or not ‘Φ’ applies to x—even though in truth there may be many xs such that there is no fact of the matter whether or not they have the property Φ.

 

            To see that ontological theorists also accept instances of Schema Φ, it is enough to note that, given the correctness of ‘Φ’, they will accept all instances of the following argument schema, which entails Schema Φ: (i) ‘Φ’ has borderline cases if and only if Φ is a vague property; (ii) Φ is a vague property if and only if there is an x such that there is no fact of the matter whether x is Φ.[17] Since both semantic and ontological theorists accept instances of Schema Φ, the argument for I1 goes through, and the type-identity theory is seen to be false on both views, given the sharpness of C, the vagueness of N, and the correctness of both.[18]

 

            According to the epistemic view, concepts for neurophysiological states have precise boundaries in spite of being vague. Though there is nothing from the foregoing arguments to conclusively rule out identifying neurophysiological states with conscious states if the epistemic view is true, there are at least two problems for the identity theory on that view, given that N is vague, C is sharp, and both are correct. First, since on the epistemic view our ignorance may be irremediable in principle, we may never know where the precise boundaries of neurophysiological concepts lie. However, that threatens to severely limit the science of consciousness, since we may never know which of indefinitely many distinct borderline Ns involve consciousness and which do not, or understand why some do while others do not. Of course, matters may be that way, but there is at present little reason to think they are independently of the epistemic view. That aside, if the epistemic view is true, it may still be possible to adopt a metaphysical theory of consciousness that does not place a priori constraints on science (which ceteris paribus is preferable, of course). However, that means identifying consciousness not with neurophysiological or other complex properties, but with properties whose concepts are sharp—possibly those of fundamental physics, for example.

 

            A second problem is more serious. If the epistemic view is true, N’s sharp boundary must be such that, for each token N, the difference between being N and not-N is describable at the level of fundamental physics. For example, where N is a pattern of activity across thousands of neurons, removing (say) a single quark from an atom in a dopamine molecule in a synaptic vesicle in the terminal button of an axonal branch of a neuron, could bring it about that that pattern of activity is no longer an instance of N. However, for any such low-level physical change, how could it justifiably be claimed that conscious state’s sharp boundary just happens to match that change rather than one of the countless other possibilities within N’s borderline region; or, indeed, one of the sharp points associated with concepts from other levels of description such as neuropsychology or biochemistry? Since on the epistemic view such sharp cutoff points are determined by the meanings of the terms from the relevant vocabularies (see e.g. Williamson 1994)—which vocabularies are largely independent of one another—such a cross-level match-up would be a coincidence of staggering proportions.[19] That is not to say there is no such match, just that there is not the slightest reason to believe there is. Under such circumstances it seems most rational simply to identify conscious states with states of fundamental physics, rather than neurophysiology (or neuropsychology, or biochemistry, etc.) and hope we can eventually discover which low-level physical states are identical to conscious states. I take this as sufficient reason to reject the neurophysiological identity theory given the epistemic view.

 

2.2 Functionalism

 

According to functionalism, a state is the type it is in virtue of its functional role within a structure of interrelated states, inputs and outputs. Functionalism thus identifies mental state types with functional types.[20] Mental state tokens are the tokens in systems that realize the functional roles by instantiating appropriate first-order properties.[21] Unlike identity theorists, functionalists typically do explain what being in a conscious state in general (C) is: all conscious states share a certain functional role, such as being introspectable, reportable, caused by certain sensory stimuli, etc. The property of having that functional role (call it ‘F’) functionalists identify with C.

 

            We can argue against functionalism as we did with the identity theory:

 

(F1) If F=C, then ‘F’ has borderline cases if and only if ‘C’ has borderline cases.

(F2) There are borderline cases of ‘F’ but no borderline cases of ‘C’.

----------------

(F3) F≠C

 

The argument for F1 parallels that for I1; we need only substitute everywhere ‘F’ for ‘N’. As with I1, F1 holds only for the semantic and ontological views, so the epistemic view must be treated separately. Given our assumption that ‘C’ is sharp, defending F2 requires showing only that there are borderline Fs.

 

            There are various ways one might do that. One is to focus on concepts for the inputs and outputs in the functional definition of F. Regardless of whether inputs are states of the environment, sensory systems, or the sensory cortex; and regardless of whether outputs are states of the motor cortex, bodily motions, or behaviors; the concepts for most if not all such inputs and outputs will be vague. Such vagueness, however, will infect F, since F is defined in terms of such concepts. A second way of showing there are borderline Fs exploits the fact that many functionalist theories require that causal relations be realized only ceteris paribus, or in normal or ideal conditions, etc., to account for cases of malfunction, system damage, etc. Such concepts, however, leave much room for vagueness. A third option concerns Lewis’s (1972) theory, according to which a mental state is the type it is if most of the type’s proprietary relations are realized. However, ‘most’ is notoriously vague.

 

            A fourth way of showing that there are borderline Fs focuses on the realization level. We said that systems realize functional states by instantiating first-order properties. Whenever such properties are sufficiently complex, (correct) concepts expressing them will be vague. So imagine a system that realizes a functionalist theory of consciousness by instantiating neurophysiological properties. Suppose the system is in N, and that N realizes functional state F (=C). The system is thus also in F. Now assume N and F are correct. By gradually removing atoms from the brain we can generate a borderline case of N (see above). It will then be unclear whether that brain-state bears the causal relations (actual and counterfactual) to inputs, outputs and other neurophysiological states that N did, so it will be unclear whether the system realizes F.[22] We will thus have a borderline case of F as well. We are thus committed to this: If a property P realizes F, then a borderline case of P is a borderline case of F. It follows that there will be many borderline cases of F (or ‘F’). This completes my defense of F2. If conscious state is sharp, and C and F are correct, functionalism is false given either the semantic or ontological views of vagueness.

 

            Since on the epistemic view neurophysiological concepts have precise boundaries, where a neurophysiological state N realizes F (= C), a system in N could in principle realize C. As with the identity theory, however, there is no reason to believe that the precise point at which a token state changes from N to not-N is where it would change from C to not-C. Why not at one of the countless other points within N’s borderline region? But suppose, miraculously, it is at that point where the change occurs. Are we also to believe that the precise points associated with all other types of possible physical realizations (silicon, the Chinese nation, alien brains, etc.) also coincide perfectly with the change from C to not-C? That is incredible. A further worry, as with the identity theory, is that insufficiently motivated, a priori limitations are placed on the science of consciousness. Given the epistemic view, accordingly, there is at least as much reason to reject functionalism as there was the identity theory.

 

2.3 Dualism

 

            Though my primary focus will be on property dualism, I shall also comment on substance dualism. I assume throughout this section that C is sharp, N is vague, and both are correct. According to property dualism, conscious-state types C1, C2,…, Cn are properties of neurophysiological-state types N1, N2,…,Nn. This sets up a correlation between conscious and neurophysiological types.[23] Like identity theorists, property dualists say little about the general property of being in a conscious state (C). However, C must be a property of some general neurophysiological state or other, since only certain neurophysiological states have conscious properties. If we call that neurophysiological state ‘N’, C is a property of N. I shall now argue that property dualism fails given either the semantic or ontological views of vagueness.

 

            Consider a token state N that is gradually transformed (by atom-removal) through a region of borderline Ns to not-Ns. C will be a property of the clear Ns. Given that C is sharp, as the neurophysiological state is altered, C will at some point switch abruptly to not-C. The trouble is that no matter when that happens, the proposed correlation between C and N will be false. If it occurs in the region of clear Ns, or after the region of clear not-Ns has been entered, the proposal is obviously false. But if the switch occurs in N’s borderline region, the correlation tells us nothing about when or why it occurs, since nothing changes with respect to N-hood at the point of the switch. The proposed correlation is thus at best inexact. But why settle for that? Why not examine levels lower than neurophysiology for physical changes that correlate more precisely with the change from C to not-C? Since any reasonable property dualism will posit laws linking the phenomenal and physical realms,[24] the real correlation is likely to be found at some such lower level—from which it follows that the proposed correlation between C and N is false. This line of reasoning holds for both the semantic and ontological views of vagueness.

 

            Similar difficulties confront parallelism, which like property dualism posits correlations between C1, C2,…, Cn and N1, N2,…,Nn. If we assume that C and N must also be correlated, the same argument as above goes through. With interactionist substance dualism, to the extent that conscious activity can be independent of brain activity, there will be fewer type-type correlations than with property dualism. But there must always be some states that are correlated. Something, after all, must distinguish those physical structures that non-physical minds “latch onto” (e.g., brains) from those they do not (pianos). Call that property ‘N’. If a token N is gradually altered through a region of borderline Ns to not-Ns, at some point the token will switch abruptly from being “soul-compatible” to “soul-incompatible,” and it will be clear for reasons already rehearsed that N is not the property souls “latch onto.” The version of interactionist substance dualism under discussion will thus be false. This argument also applies given either the semantic or ontological views.

 

            With the epistemic view, correlations between C and N are in principle possible, but as with the identity theory and functionalism, there is not the slightest reason to think C’s sharp boundary would match N’s. In addition, the science of consciousness would be unduly restricted. It is thus preferable to seek correlations between C and states of fundamental physics and reject the above forms of property and substance dualism.

 

            We have now shown this: Assuming our current concept conscious state is both sharp and correct, where complex physical or functional properties are appealed to, the identity theory, functionalism, and dualism are all false, or at least ought to be rejected, regardless of which theory of vagueness one favors.[25]

 

3. Responses

 

Given the sharpness of our current concept conscious state, and the truth of the conditional argued for above, consider again these three ways of responding: (1) accept that our current concept is correct, and infer that the theories of consciousness referred to in the consequent are false, concluding finally that any true theory of consciousness must appeal to properties whose concepts are sharp, for example those of fundamental physics; (2) maintain that there can be no correct concept conscious state, that is, adopt eliminativism; (3) suggest that some future, correct development of the concept conscious state will be vague. I discuss these in turn.

 

3.1. Fundamental physics

 

For those convinced that our current concept conscious state is sharp and correct, since there appears to be no viable functionalist option, the appropriate response to the above arguments is to investigate versions of the identity theory or dualism that appeal to physical properties whose concepts are sharp (if such properties exist).[26] In seeking physical properties whose concepts are sharp, the obvious place to look is of course fundamental physics. Notice, however, that if the nature of consciousness resides at that level, the likelihood that panpsychism is true appears to increase dramatically.

 

            Some may think that since fundamental physics is quantum physics, and quantum phenomena are indeterminate, the world at that level is vague. Though the issue is too complex to enter into here, I believe that quantum indeterminacies are very different from any that may be associated with vagueness. Be that as it may, if conscious state is sharp and correct, but the physical world is nowhere sharp, then the three theories of mind are worse off than I have suggested. Not only are all materialist theories of consciousness false, but dualism also becomes incomprehensible once one asks what “parts” of a vague physical world sharp conscious states link up with. Here, even those most repelled by eliminativism regarding consciousness might think twice. Recall, however, that most semantic and epistemic theorists believe that the physical world is sharp; they should thus infer that materialist and dualist theories must employ sharp physical concepts—unless they wish to adopt eliminativism or hold out for a future, vague concept conscious state.

 

3.2. Eliminativism

 

Some convinced that our current concept conscious state is sharp may be tempted toward eliminativism about consciousness. Such individuals are likely to be convinced of the central role of complex physical or functional properties in human psychology, as well as the lack of any psychological significance for fundamental physics. They would also likely hold that any transformation of our sharp concept conscious state into a vague concept appropriate for representing complex material properties would necessarily be a case not of conceptual development but rather one of conceptual switching, a change of subject.[27] Since on this view any correct concept would have to be vague, but no vague concept could be conscious state, eliminativism follows.

 

            I have no argument against eliminativism; for all I know it may be true. However, I believe we are currently in no position to know that, and have insufficient reason to believe it. After all, uncovering correlations between conscious states and states of fundamental physics could still swing things decisively against eliminativism. Alternatively, our current sharp concept could develop into a vague concept while retaining its identity. Those points aside, intuitions about the existence of phenomenal experience are so powerful that any defense of eliminativism, to be convincing, would have to provide a way of clearly understanding why we falsely believe conscious states exist. It is fair to say that no eliminativist has yet come close to doing that.

 

3.3 A future concept will be vague

 

Anyone who believes that a future, more correct version of conscious state will be vague is also likely to believe, with the eliminativist, that complex physical or functional properties are central in human psychology, and fundamental physics has little or no psychological significance. Such an individual would differ from the eliminativist in holding that our current sharp concept could survive transformation into a vague concept, as occurred with life. The idea would be one of a future, correct materialist theory of consciousness that makes essential appeal to complex physical or functional states and processes. Call such a theory c-materialist (‘c’ for ‘complex’). This option of holding out for a future, correct, vague concept conscious state is likely to be the most common materialist response to the above arguments.

 

            In evaluating this response, consider what reason we have now to think there will be a future, correct c-materialist theory of consciousness. If we are honest, I believe we must acknowledge that there is not much to go on. Not only is materialism not yet established as the correct metaphysics of consciousness (the concern here is with knowledge, not materialism’s popularity), but within the materialist camp, c-materialism has not been demonstrated as superior to theories that appeal to fundamental physics.

 

            There is, it must be granted, some reason for optimism due to initial progress in constructing theories of neurophysiological and functional states and processes that are associated with consciousness. However, we at present have no reason to believe that success in that project will translate into success in providing a true c-materialist theory of consciousness—that is, of the phenomenon represented (perhaps inaccurately) by our current concept conscious state. For, again, fundamental physics (or dualism) may be key to the correct theory of consciousness. Alternatively, continued success in the c-materialist project might push us toward eliminativism. Because the details are not yet in, and we cannot yet know which transformations would be required of conscious state for it to become suitably vague, we can have no way of judging whether we would be faced with a case of conceptual development or a change of subject. The c-materialist enterprise is still too undeveloped for informed judgment on this matter.

 

            A final point. Suppose eliminativism is false. Then assuming our current concept conscious state is sharp, the fact that it is provides some (defeasible) reason for thinking that c-materialism is false regarding consciousness. At the very least, the burden of proof is on the c-materialist to provide a theory of consciousness sufficiently persuasive to show why we must, and how we can, transform our current, sharp concept conscious state into a concept that is vague. (Consider what was required of biology to establish that the correct concept life is vague.) In the absence of such a demonstration, it is most rational to judge the mere promise of a c-materialist theory of consciousness by how well the theory (or theory-sketch) corresponds to our pre-theoretical concept conscious state, and not the other way around. The point is not to urge that the c-materialist enterprise be abandoned. It is just to say that at present we lack any very good reason to believe that the correct, future concept conscious state will be vague. Theories of consciousness that appeal to properties whose concepts are sharp should thus be investigated more intensively than they have been to date.[28]

 

References

 

Antony, M. (1998): ‘On the Temporal Boundaries of Simple Experiences’, Proceedings of the 20th World Congress of Philosophy.  <http://www.bu.edu/wcp/Papers/Mind/MindAnto.htm>

Antony, M. (2001a): ‘Is “Consciousness” Ambiguous?’, Journal of Consciousness Studies 8(2), 19-44.

Antony, M. (2001b): ‘Conceiving Simple Experiences’, The Journal of Mind and Behavior 22(3), 263-286.

Antony, M. (2004): ‘Are Our Concepts Conscious State and Conscious Creature Vague?’, unpublished manuscript, Department of Philosophy, University of Haifa.

Chalmers, D. (1996): The Conscious Mind. New York: Oxford University Press.

Churchland, P. and Sejnowski, T. (1992): The Computational Brain. Cambridge, MA: MIT Press.

Davidson, D. (1970): ‘Mental Events’, in L. Foster and J. W. Swanson (eds.), Experience and Theory. Amherst: University of Massachusetts Press.

Field, H. (2003): ‘No Fact of the Matter’, Australasian Journal of Philosophy 81(4), 457-480.

Horwich, P. (2000): ‘The Sharpness of Vague Terms’, Philosophical Topics 28(1), 83-92.

Keefe, R. (2000): Theories of Vagueness. Cambridge: Cambridge University Press.

Keefe, R. and Smith, P. (1997): ‘Introduction: Theories of Vagueness’, in R. Keefe and P. Smith (eds.) Vagueness: A Reader. Cambridge MA: MIT Press.

Lewis, D. (1972): ‘Psychophysical and Theoretical Identifications’, Australasian Journal of Philosophy 50, 249-258.

Lockwood, M. (1989): Mind, Brain and the Quantum. Oxford: Basil Blackwell.

Nelkin, N. (1995): ‘The Dissociation of Phenomenal States From Apperception’, in T. Metzinger (ed.), Conscious Experience. Paderborn: Schoningh.

Papineau, D. (2002): Thinking About Consciousness. Oxford: Oxford University Press.

Ramsey, W., Stich, S., and Garon, J. (1990): ‘Connectionism, Eliminativism, and the Future of Folk Psychology’, Philosophical Perspectives 4, 499-533.

Sainsbury, M. (1994): ‘Why the World Cannot be Vague’, The Southern Journal of Philosophy 33 (supplement), 63-81.

Sainsbury, M. (1995): Paradoxes (2nd ed.). Cambridge: Cambridge University Press.

Schiffer, S. (2003): The Things We Mean. Oxford: Oxford University Press.

Unger, P. (1988): ‘Conscious Beings in a Gradual World’, Midwest Studies in Philosophy 12, 287-333.

Williamson, T. (1994): Vagueness. London: Routledge.

Williamson, T. (1997): ‘Reply to commentators’, Philosophy and Phenomenological Research 57(4), 945-953.

Woodfield, A. (1993): ‘Do Your Concepts Develop?’, in C. Hookway and D. Peterson (eds.), Philosophy and Cognitive Science. Cambridge: Cambridge University Press.

Wright, C. (1975): ‘On the Coherence of Vague Predicates’, Synthese 30, 325-365.

 



[1] Two terminological points: First, I use boldface type to name concepts, sometimes in the sense (roughly) of an abstract constituent of a proposition, and sometimes in the sense of a mental representation (which I assume here always expresses a corresponding abstract concept). When I speak of concepts as mental representations, I make that explicit or the context entails it; otherwise it can be assumed that I have abstract concepts in mind. Second, in speaking of the concept conscious state, nothing hangs on my use of the term ‘state’: ‘process’, ‘event’, etc. could have been substituted.

[2] Unlike Wright (1975), I use ‘borderline case’ so as to entail blurred or fuzzy conceptual boundaries.

[3] See, e.g., Lockwood 1989, Nelkin 1995, Papineau 2002, among many others. I have referred to this phenomenon as consciousness without awareness or consciousness-a (Antony 2001b).

[4] I speak loosely here of judgments about borderline cases, so as to include behaviors which do not necessarily involve employment of the concept borderline case, for example: ‘cases where we hesitate in judging either way or deny both judgments or disagree with each other or change our mind over time’ (Keefe 2000, 43-44). What is important is just that such behavior is displayed in and around the regions of the concepts’ boundaries—which regions I assume not to be sharply defined.

[5] Williamson (1997, 945 ff.) describes a hypothetical “opinionated macho community” in which everyone confidently applies ‘bald’ or its negation to each case, even though there is considerable disagreement among speakers, and within the same speaker at different times. Though arguably the concept is intuitively sharp for each member of the community, Williamson maintains that ‘bald’ is vague nonetheless. I myself am doubtful (see also Horwich 2000, 91). But, in any event, since none of our terms are used in that way, the example poses no challenge to the claim in the text.

[6] Though there are delicate questions here about whether the same concept can endure through conceptual change, or whether such changes must be construed as introducing new concepts, so far as I can tell such matters need not be addressed here. (At most, certain claims may require reformulation.) See Woodfield 1993 for some related discussion concerning conceptual development within individuals.

[7] In accordance with the common view that the word ‘consciousness’ is multiply ambiguous within the consciousness literature, some readers may doubt that there is a single concept conscious state in terms of which most of us grasp the mind-body problem. I think this common view is highly questionable (see Antony 2001a), but anyone who disagrees can take the arguments below to concern the concept phenomenally conscious state and theories of phenomenal consciousness.

[8] Speculations about insects and fish, etc. being borderline cases must thus be explained without appeal to the structure of our concept conscious state. Any such explanation will likely invoke a prior commitment to a materialist theory of consciousness which entails the existence of such borderline cases due to the vagueness of the relevant material concepts.

[9] For overviews of the views see, e.g., Sainsbury 1995 and Keefe and Smith 1997.

[10] Throughout Section 2 my arguments apply to any broadly physical states that are sufficiently complex to ensure the possibility of borderline cases for their associated concepts. Though I run the arguments with neurophysiological states, complex physical states above or below the level of neurophysiology (neuropsychology, biochemistry, etc.) could easily be substituted.

[11] As will become evident presently, I focus in this section more on the vagueness of linguistic expressions than on that of concepts, so as to better conform to the vagueness literature when discussing theories of vagueness. Nevertheless the arguments could have been run entirely with concepts. An unfortunate result of my doing so, however, is that I must speak somewhat awkwardly of the (in)correctness of linguistic expressions, in addition to that of concepts. Such talk must simply be interpreted as being about the concepts expressed by such linguistic expressions.

[12] The sheer number of neurons and synapses in the cortex—10^5 and 10^9 respectively, per cubic millimeter (Churchland and Sejnowski 1992, 37)—helps provide a sense of the complexities involved.

[13] Cf. Unger 1988. For an overview of basic neuroanatomy see Churchland and Sejnowski 1992 or any introductory neuroscience text.

[14] In I1 and I2 (and throughout the paper) what is really at issue are possible borderline cases, since the mere possibility of borderline cases suffices for vagueness.

[15] See Field 2003.

[16] Notice also that we are concerned with semantic theorists who hold the type-identity theory, which cannot even be formulated without presupposing types or properties in some sense—if only in Schiffer’s (2003) “pleonastic” sense.

[17] The possibility of incorrect concepts (in respect of their sharpness or vagueness) seems to pose a serious challenge for the ontological theory. Part of the picture behind that theory is that vagueness within our language is (partly) explained by the existence of worldly vagueness. But if the vagueness of incorrect concepts can vary independently of whatever vagueness there is in the world—and it seems it can; think again of life—it seems that that conceptual vagueness will have to be explained without appeal to worldly vagueness. But then why not explain all vagueness in that way?

[18] Here are two further arguments for the same conclusions. First the semantic view. For reductio, assume N=C. And assume that C is sharp and N is vague. If N=C, then N and C refer to the same property. (Again, type-identity theorists must acknowledge properties in some sense.) Since C has no borderline cases, any object o to which C applies clearly has that property. But if N expresses the same property, and o clearly has it, how could o possibly be a borderline case of N? It seems it cannot. So N can have no borderline cases either, which contradicts N’s vagueness. So N≠C. Now to the ontological view. If C is sharp and N is vague, the properties C and N will also be sharp and vague, respectively. Since a vague property (whatever that is) cannot be identical to a sharp (i.e., non-vague) property, it follows that N≠C. Notice that these arguments work equally well against functionalism if ‘F’ is substituted everywhere for ‘N’.

[19] Note that there may be a general worry here for the epistemic view regarding all cases of inter-level theoretical reduction, since arguably there will never be any reason to expect sharp conceptual boundaries to match up across levels.

[20] That is not so of Lewis’s (1972) version of functionalism, which combines functionalism with the type-identity theory. However, for that reason it inherits the problems with type-identity already discussed.

[21] So one common version of the token-identity theory will be addressed here. Arguably Davidson’s (1970) version is irrelevant, since it concerns mental states with the sort of full-blown intentionality characteristic of propositional attitudes (to which, e.g., the Principle of Charity applies), and it is doubtful whether conscious state is a concept of a state of that sort.

[22] Presumably what would have happened counterfactually regarding a state N is determined in part by laws involving the property N. Since it is unclear whether such laws apply to borderline cases of N, it is unclear whether what would have happened counterfactually regarding a state N would also have happened regarding a borderline case of N.

[23] The correlation need not be between conscious and physical types, however. For Chalmers (1996), e.g., it is between conscious and functional types. Though I do not discuss Chalmers’s version of property dualism in the text, my arguments apply to it. One need only keep in mind that concepts for functional states have borderline cases whenever concepts for their first-order realizers do (see Section 2.2).

[24] Cf. Chalmers 1996.

[25] It is worth noticing that what really does the work in the arguments from Section 2 is the claim that N or F have borderline cases which C lacks. Strictly speaking, that does not require that C be sharp, just differently vague from N or F. In other words, all that is really needed are “vagueness mismatches.” Accordingly, even if conscious state is vague, there will still likely be problems for the identity theory, functionalism, and dualism, since there is no a priori reason to expect the characters of the blurred boundaries of C and N or F—their “vagueness profiles,” call them—to match. So even if conscious state is vague, theorists who appeal to complex physical or functional states must take up the burden of showing that the vagueness profiles of the relevant concepts match. (Cf. note 19.)

[26] There is also an idealist option, which I ignore.

[27] Although in real cases of conceptual change we have little trouble deciding whether a case is one of conceptual development or conceptual replacement (eliminativism), we know almost nothing about how we arrive at such decisions. For some discussion, see Ramsey, Stich, and Garon 1990.

[28] A similar conclusion is reached from different arguments in Antony 1998. My thanks to three reviewers for some very helpful comments.