SWIF Philosophy of Mind, 08 April 2001 http://www.swif.uniba.it/lei/mind/forums/carruthers3.htm


Reply to Seager

Peter Carruthers

Department of Philosophy
University of Maryland, College Park.

William Seager gives a generous assessment of my recent book (2000), for which I am grateful. But he also develops an alleged counter-example to my dispositionalist higher-order thought (HOT) theory of phenomenal consciousness. I shall concentrate on this in my response, commenting briefly on his criticisms of my earlier ‘reflexive thinking’ account at the end.

Seager develops his counter-example in stages. First, he envisages what he calls a ‘neural meddler’, which would interfere in someone’s brain-processes in such a way as to block the availability of first-order perceptual contents to higher-order thought. Second, he imagines that our understanding of neural processing in the brain has advanced to such an extent that it is possible to predict in advance which first-order perceptual contents will actually become targeted by higher-order thought, and which will not. Then third, he supposes that the neural meddler might be so arranged that it only blocks the availability to higher-order thought of those perceptual contents which are not actually going to give rise to such thought anyway.

The upshot is that we can envisage two people -- Bill and Peter, say -- one of whom has such a modified neural meddler in his brain and the other of whom does not. They can be neurological ‘twins’ and enjoy identical neural histories, as well as undergoing identical sequences of first-order perceptual contents and actual higher-order thoughts. But because many of Bill’s first-order percepts are unavailable to higher-order thought (blocked by the neural meddler, which actually remains inoperative), whereas the corresponding percepts in Peter’s case remain so available, there will then be large differences between them in respect of phenomenal consciousness. Or so, at least, dispositional HOT theory is supposed to entail. And this is held to be highly counter-intuitive.

I have a number of responses to this example, which will be presented in order of increasing concessiveness.

1 Imaginary examples

The first response is to reject the example as being merely imaginary, and hence as irrelevant to the assessment of a naturalistic reductive explanation. Dispositionalist HOT theory is not, of course, intended as a conceptual analysis of our common-sense notion of phenomenal consciousness, nor as any sort of explication of our folk-theory of such consciousness. It is, rather, intended as a reductive explanation of phenomenal consciousness, parallel in kind to the reductive explanation of (one form of) heat in terms of mean molecular kinetic energy. Such explanations are not intended to hold good in all conceivable circumstances. Rather, they are intended to provide us with an account of the actual constitution of the property being explained, which will be valid only across those possible worlds where the laws of nature remain constant (i.e. the same as in the actual world).

Now, who knows whether it is naturally possible to meddle with the availability of first-order perceptual contents to higher-order thought without also having an impact on the nature of those contents, or on the higher-order thoughts entertained, or both? And who knows whether neural processing is well enough behaved to be predictable, even in principle? (It may turn out to be indeterministic; or it may turn out to be chaotic, with large differences resulting from undetectably-small differences in initial conditions.) In so far as we don’t know these things, we don’t know whether Seager’s example is naturally possible, either. And if it isn’t naturally possible, then it isn’t the right kind of case to provide a counter-example to dispositionalist HOT theory.

Stronger still, we have at least some reason to suspect that Seager’s example is not naturally possible. In order for Bill and Peter to remain neural duplicates while we predict in complete detail the higher-order thoughts which Bill will entertain, and while the readiness-potential of the neural meddler is adjusted accordingly, it wouldn’t be enough just to control and match their respective environments. We would also have to predict how their relations with those environments will change in response to their own actions (e.g. by turning their eyes, or by moving across the room). And we would also have to predict the sequences of thought which will occur to them, presumably influenced by all kinds of detail in their past experiences and in their background beliefs, as well as by current perceptual input. So there may actually be the same sort of deep impossibility here which Dennett (1991) argues to exist for familiar ‘brain-in-a-vat’ scenarios.

Moreover, all these predictions will have to be made while controlling for, and predicting, any effects which the changing state of the neural meddler might have on the rest of the brain -- changes which will take place in response to the very predictions we are presently trying to make. So there may be the same kind of principled difficulty in making accurate predictions here as occurs also in the social sciences, where the social effects of any prediction we make (effects which are partly dependent on our own future decisions) are amongst the things which we have to try to take into account when making a prediction (MacIntyre, 1981).

2 Unreliable intuitions

Suppose we allow that Seager’s example is naturally possible, however. Still, on any account, the example is a fantastic one, requiring us to consider a situation lying wholly outside the orbit of our ordinary thoughts and beliefs. And there is good reason to think that once we leave that orbit our common-sense intuitions cease to have any reliability whatever. (Remembering, of course, that these intuitions are merely beliefs about the circumstances in which phenomenal consciousness would or would not be present, arrived at in the absence of any well-established theory of its underlying nature. We are doing science here, not semantics.)

Travel back in time just twenty years, and you will find that most people have a powerful intuition that visual perception is an intrinsically-conscious process, for example. Yet now, after the discovery of such phenomena as blindsight (Weiskrantz, 1986), and following the development of a two-systems theory of vision (Milner and Goodale, 1995), there are few educated people who would be prepared to make such a claim. Similarly, there are many people who continue to have the intuition that the unwelcomeness of a sensation of pain is intrinsic to its very nature. But in fact there is good evidence that the felt quality distinctive of pain sensations, on the one hand, and the awfulness of pain, on the other, can dissociate from one another in rare cases (Dennett, 1978; Ramachandran and Blakeslee, 1998), for physiological reasons which are now quite well understood (Young, 1986).

Our intuitions are, of course, grounded in the familiar and every-day. They reflect the beliefs which we have formed from experience and/or testimony concerning usual cases. Now we are asked to consider a highly unusual case, whose structure falls well beyond the range of our ordinary experience and belief. Our background beliefs, formed in the context of the ordinary, may well combine in such a way as to deliver an implication in respect of the unusual case we are imagining. This implication will have the force of an intuition. When we imagine the unusual case, and ask ourselves what we are inclined to think about it, our minds may well throw up an answer for us. But there is no reason at all to think that this answer will have been reliably generated, nor that it is likely to be true. The scientific truth about highly unusual cases needs to be settled by careful theoretical enquiry, not by appeals to intuition.

3 Kinds of disposition

Seager thinks that his neural meddler, by blocking access between first-order percepts and higher-order thought, will make it the case that the former are no longer disposed to give rise to the latter -- and so will no longer be phenomenally conscious according to a dispositionalist HOT account. He may be right. But as Seager is aware, there are a number of different kinds of disposition. I think it is an open question which kind is involved here.

Suppose we take a green emerald and lock it away in an indestructible light-excluding box. Now (in one sense) the emerald is no longer disposed to reflect green light. The surrounding box has blocked the exercise of that capacity, in something like the way in which the neural meddler blocks the exercise of the capacity to give rise to higher-order thought. But is the emerald still green, as it sits there in the box? Surely so. The fact of being locked permanently in the box hasn’t altered or done away with its color. Yet color, of course, is a dispositional property. So here is a case where the blocking of the exercise of a dispositional property leaves that property present and intact.

Perhaps the same is true in connection with the neural meddler. Although it blocks the access-relation between perception and higher-order thought, it may be that the disposition of the former to give rise to the latter remains present and intact. In which case Bill and Peter can be entirely alike in respect of phenomenal consciousness, according to a dispositionalist HOT account. Whether or not this is so will turn, I think, on as-yet-unresolved issues in semantic theory, as I shall discuss next.

4 The incompleteness of consumer semantics

What Seager doesn’t mention in his discussion of dispositionalist HOT theory is that much of the crucial work is done by appeal to some or other version of consumer semantics. The thought is this: if intentional content depends, in part, on the powers of the ‘consumer systems’ which use or draw inferences from it, then first-order analog perceptual contents can be rendered AT THE SAME TIME as higher-order, by virtue of their availability to a HOT-wielding consumer system. Each experience with the analog content ‘red’, for example, will at the same time have the analog content ‘seems red’ or ‘experience of red’. And it is this which confers on those experiences -- categorically -- a dimension of ‘seeming’ or ‘subjectivity’, and constitutes them as phenomenally conscious.

Now, consumer semantics embraces a number of different varieties of theory, of which there are two main sorts -- teleosemantics, on the one hand, and various forms of inferential role semantics, on the other. (For the former, see Millikan, 1984, 1986, 1989; and Papineau, 1987, 1993. For the latter, see Loar, 1981, 1982; McGinn, 1982, 1989; Block, 1986; and Peacocke, 1986, 1992.) In the present state of scientific knowledge, it is quite unclear which of these will ultimately prove to be the victor; and their respective implications for Seager’s example may not be clear either.

Actually, it does appear that on a teleosemantic account the higher-order analog content of Bill’s experiences will be left unaffected, in Seager’s example. Teleosemantics claims that intentional content is to be explicated in terms of biological and/or derived proper function. And if we ask what the proper function is, of those of Bill’s experiences which are blocked from giving rise to higher-order thoughts by the presence of the neural meddler, the answer will be: unchanged. These states still have the proper function of feeding into thoughts about the perceived environment, and also of feeding into higher-order thoughts about themselves, despite the presence of the neural meddler. And so they will continue to have the very same first-order and higher-order analog contents which they would possess if the neural meddler weren’t there. Which means, on my dispositionalist HOT account, that they remain phenomenally conscious, despite their de facto inaccessibility to higher-order thought.

In my 2000 I operated within the framework of an inferential-role version of consumer semantics, as opposed to a teleosemantic one -- for convenience only, I thought. But there are a number of issues on which the two approaches may actually generate different answers. As recent work by Sarah Clegg (2001) has brought home to me, for example, a teleosemantic version of dispositionalist HOT theory may imply that the experiences of even new-born infants are phenomenally conscious, despite the infants’ lack of capacity to entertain higher-order thoughts. For their experiences may still have the FUNCTION of feeding into higher-order thought, and will become available to such thought in the course of normal development. In which case those percepts may already possess higher-order analog content, on a teleosemantic account, even though the infants are not presently disposed to draw higher-order inferences from those experiences.

What should an inferential-role semanticist say about Seager’s example? Does a device which blocks the normal inferential role of a state thereby deprive that state of its intentional content? The answer to this question is not clear. Consider a propositional variant of Seager’s example: we invent a ‘conjunctive-inference neural meddler’. This device can disable a subject’s capacity to draw basic inferences from a conjunctive belief. Subjects in whom this device is operative will no longer be disposed to deduce either ‘P’ or ‘Q’ from beliefs of the form ‘P & Q’ -- those inferences will be blocked. (Even more elaborately, in line with Seager’s example, we could imagine that the device only becomes operative in those cases where we can predict that neither the inference ‘P’ nor the inference ‘Q’ will actually be drawn anyway.)

Supposing, then, that some version of the Mentalese hypothesis is true, are the belief-like states which have the syntactic form ‘P & Q’ in these cases conjunctive ones? Do they still possess conjunctive intentional content? I don’t know the answer to this question. And I suspect that inferential role semantics is not yet well enough developed to fix a determinate answer. What Seager provides, from this perspective, is an intriguing question whose answer will have to wait on future developments in semantic theory. But it is not a question which raises any direct problem for dispositionalist HOT theory, as such.

As is familiar from science, it is possible for one property to be successfully reductively explained in terms of others, even though those others are, as yet, imperfectly understood. Heat in gases was successfully reductively explained by statistical mechanics, even though in the early stages of the development of the latter, molecules of gas were thought to be like little bouncing billiard balls, with no relevant differences in shape or internal structure. That, I claim, is the kind of position we are now in with respect to phenomenal consciousness. We have a successful reductive explanation of the latter in terms of intentional content (provided by dispositionalist HOT theory), while much scientific work remains to be done to elucidate and explain the nature of intentional content in turn.

5 Seager on reflexive thinking theory

Although I don’t think Seager has discovered anything which need trouble dispositionalist HOT theory, he does have a nice argument ‘by vicious descent’ against my earlier reflexive thinking account (1996). Indeed, I’m quite glad that I don’t now have to defend myself against that argument! However, Seager also says that this argument undermines one of the claims made in my 2000, because of my continued commitment to the view that the reflexive thinking account ‘may be an accurate description of the [human] mind as it actually is’. But this is a mistake (or a misunderstanding).

I do think that our phenomenally conscious states are not only available to HOT, but are also normally available to conscious HOT, where these HOTs will be phenomenally conscious in turn in the same sort of way. For we can formulate our HOTs linguistically, in imaged natural language sentences (in ‘inner speech’), which will be phenomenally conscious in the same way, and for the same reason, that any other sensory images are. For these imaged sentences, too, are available to HOT.

No regress gets started by such claims, because there is no REQUIREMENT that the consciousness-determining HOTs should be conscious ones. Rather, at each level what constitutes a percept or an imaged sentence as phenomenally conscious is its availability to HOT simpliciter. And given that there is nothing in the account to force us to take the next step up in the regress each time, there can be no danger of a regress by vicious descent here, either.



Block, N. 1986. Advertisement for a semantics for psychology. Midwest Studies in Philosophy, 10.

Carruthers, P. 1996. Language, Thought and Consciousness: an essay in philosophical psychology. Cambridge University Press.

Carruthers, P. 2000. Phenomenal Consciousness: a naturalistic theory. Cambridge University Press.

Clegg, S. 2001. PhD Proposal, University of Sheffield.

Dennett, D. 1978. Why you can’t make a computer that feels pain. In his Brainstorms, Harvester Press.

Dennett, D. 1991. Consciousness Explained. Penguin Press.

Loar, B. 1981. Mind and Meaning. Cambridge University Press.

Loar, B. 1982. Conceptual role and truth-conditions. Notre Dame Journal of Formal Logic, 23.

MacIntyre, A. 1981. After Virtue. Duckworth.

McGinn, C. 1982. The structure of content. In A. Woodfield, ed., Thought and Object, Oxford University Press.

McGinn, C. 1989. Mental Content. Blackwell.

Millikan, R. 1984. Language, Thought, and Other Biological Categories. MIT Press.

Millikan, R. 1986. Thoughts without laws: cognitive science with content. Philosophical Review, 95.

Millikan, R. 1989. Biosemantics. Journal of Philosophy, 86.

Milner, D. and Goodale, M. 1995. The Visual Brain in Action. Oxford University Press.

Papineau, D. 1987. Reality and Representation. Blackwell.

Papineau, D. 1993. Philosophical Naturalism. Blackwell.

Peacocke, C. 1986. Thoughts. Blackwell.

Peacocke, C. 1992. A Study of Concepts. MIT Press.

Ramachandran, V. and Blakeslee, S. 1998. Phantoms in the Brain. Fourth Estate.

Weiskrantz, L. 1986. Blindsight. Oxford University Press.

Young, J. 1986. Philosophy and the Brain. Oxford University Press.

© Peter Carruthers