SWIF Philosophy of Mind, 16 March 2001 http://www.swif.uniba.it/lei/mind/forums/carruthers.htm
Reply to Colin Allen
Peter Carruthers
Department of Philosophy
University of Sheffield, Sheffield, UK.
Allen focuses on some of the evolutionary / cognitive-engineering arguments which I deployed in my 2000, chapters 8 and 11. These were supposed to support my sort of dispositionalist higher-order thought (HOT) theory against theories of 'inner sense', which maintain that we have a set of inner scanners charged with producing higher-order experiences (HOEs) of our first-order perceptual, imagistic, and emotional states. (Note that inner-sense theory might more accurately be called 'second-order sense theory'. For pain is physically 'inner', of course, but the outputs of pain-perception will need to be scanned too, according to inner-sense theory - producing HOEs of their contents - in order to become phenomenally conscious.)
Allen makes two allegations against me. First, that the evolutionary explanation provided for dispositionalist HOT theory is itself much weaker than I think it is. (So even if there isn't a plausible explanation of the evolution of inner sense, as I claim, still the two theories are near enough level-pegging.) And second, that there are possibilities which I have overlooked for the early evolution of a faculty of inner sense in a variety of creatures besides ourselves. (In which case our common-sense intuition that such creatures are phenomenally conscious may be salvageable.) I shall take these allegations in turn. On closer examination, neither is very powerful.
1 Evolution, HOEs and HOTs
The alleged weakness in the dispositionalist HOT theorist's account of how phenomenal consciousness may have evolved is that it cannot explain the full range of phenomenally conscious states which we enjoy. Since the evolutionary account of our capacity for HOT which lies at the heart of my approach has to do with the benefits of psychological interpretation or 'mind-reading', Allen concedes that the availability of visual and auditory contents to HOT is plausible. But he argues that there is little plausibility in the view that taste, smell, and pain-contents would play a significant role in other-interpretation. So, on a dispositionalist HOT approach, we have no evolutionary account of the availability of such contents to HOT, and hence no evolutionary account of their phenomenal consciousness.
Allen misconstrues the import of the passage he quotes from page 231 of my 2000, however. That passage wasn't - or wasn't primarily - about explaining why the mind-reading faculty needs access to different kinds of perceptual content. (I confess that the phrase 'full range of perceptual representations' may have been misleading.) Rather, it was about explaining why the mind-reading faculty needs access to perceptual contents, rather than running off mere beliefs as inputs. It was taken for granted (having been argued briefly two paragraphs previously, and in much more detail in chapter 11:1.3) that the perceptual outputs from different sensory modalities were already available (prior to the evolution of a capacity for HOT) to conceptualizing and practical reasoning (or 'executive function') systems. My task was just to explain why the evolving mind-reading system would have been set up with access to this set of outputs, rather than as a mere inference system operating on beliefs.
What Allen sees as questions for the evolution of a dispositionalist HOT architecture - namely, a demand to explain, in connection with each distinct sense-modality, why the mind-reading system would need to have evolved access to the outputs of that modality - I see rather as questions for the prior evolution of the first-order conceptualizing and planning systems which we share with many other species of animal. And I think that, in outline, the answer would go something like this. Information from many different sense-modalities can be an important factor in object-recognition. Obviously, visual information will often be relevant. But so will sound, touch, smell and taste - what something sounds like, feels like, smells like and tastes like can be crucial factors in identifying it. On cognitive-engineering grounds, then, we should expect that information of each of these sorts should be simultaneously present to the mechanisms charged with conceptualization of the world. Admittedly, this doesn't yet explain why temperature-information and pain-information should give rise to phenomenal consciousness in ourselves, since these factors rarely if ever play a part in object-recognition. But they do play an important part in planning. In deciding whether to continue holding a hot object, say, or whether to continue pushing one's hand into a bees' hive to reach the honey, experiences of heat and pain will be important considerations, with obvious adaptive advantages.
The two-stage account of the evolution of phenomenal consciousness presented in my 2000, then, was this. First, there was the evolution of first-order conceptualizing and planning systems, feeding off the outputs of all those sensory systems which are relevant to the execution of these functions. I envisage this as resulting in a single functionally-defined short-term memory store (the 'E' - for 'experience' - box in Figure 11.1). This would contain analog outputs from all of these perceptual systems, held in such a way that the processes of conceptualization and planning could feed off any aspect of the contents of that store, depending on relevance to current goals, context, background beliefs and so on. This is an architecture which we probably share with all other mammals, at least. Then second, somewhere in the ape and/or hominid lineage, there was selection for a number of conceptual modules or quasi-modules - for folk-physics, folk-biology, and (crucially for my purposes) folk-psychology or mind-reading - each of which was so set up that it could draw on the contents of the already-existing E-store. And it is the resulting availability of these first-order analog contents to the HOTs generated by the mind-reading system which renders them higher-order, and hence phenomenally conscious, according to my account.
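The functional layout just described can be made vivid with a toy sketch. The following Python fragment is purely illustrative: the names EBox, conceptualize, plan and available_to_hot are inventions for this sketch, not part of the theory or of Figure 11.1. The point it captures is only that all first-order consumer systems read one shared perceptual store, and that the later-evolving mind-reading system simply co-opts that same store wholesale, rather than being wired to a hand-picked subset of contents.

```python
from dataclasses import dataclass, field

@dataclass
class PerceptualContent:
    """A first-order analog content output by some sensory modality."""
    modality: str   # e.g. 'vision', 'pain', 'temperature'
    content: str    # e.g. 'a striped animal in the bushes'

@dataclass
class EBox:
    """Toy short-term memory store ('E' box) holding the outputs of
    all perceptual systems in a single integrated store."""
    contents: list = field(default_factory=list)

    def deposit(self, c: PerceptualContent) -> None:
        self.contents.append(c)

# Stage 1: first-order consumers both read the SAME store, so indexical
# reference ('that') stays constant across conceptualization and planning.
def conceptualize(store: EBox) -> list:
    return [f"that is {c.content}" for c in store.contents]

def plan(store: EBox) -> list:
    return [f"act on {c.content}" for c in store.contents]

# Stage 2: a later-evolving mind-reading module is simply given access to
# the pre-existing store. On the dispositionalist account it is this
# availability to HOT - not any actual scanning - that renders the
# first-order contents phenomenally conscious.
def available_to_hot(store: EBox) -> list:
    return list(store.contents)   # the whole store, not a chosen subset

e_box = EBox()
e_box.deposit(PerceptualContent("vision", "a striped animal in the bushes"))
e_box.deposit(PerceptualContent("pain", "a sharp sting in the hand"))

# Every content deposited for first-order use is thereby available to HOT.
assert len(available_to_hot(e_box)) == len(e_box.contents)
```

The design choice the sketch dramatizes is the one argued for in the text: once a single integrated store exists to serve conceptualization and planning, the cheapest evolutionary route is for the mind-reading system to inherit access to all of it, pain- and temperature-contents included.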
Admittedly, further questions can now be raised in the spirit of Allen's first allegation. It can be asked why there should have been just one first-order E-box, drawn on for purposes of both conceptualization and planning; and why the evolving mind-reading system should have been set up to feed off the entire set of contents of the E-box, rather than just the sub-set which might be directly relevant for purposes of other-interpretation.
One answer to the first of these questions is that a single perceptual short-term memory system is simpler - and less costly to build and maintain - than two. Another is that conceptualization and planning are intimately linked. Indeed, conceptualization is largely for planning. The point of conceptualizing the world into kinds is to facilitate inductive generalization and learning of various sorts which can generate information to guide good plans. And while the planning process itself will characteristically be conducted, at least partly, in conceptual terms, it is also often highly indexical - as can be the outputs of the conceptualization process. Conceptualization of an animal in the bushes may lead to the indexical judgment, 'That is a tiger'. From previous learning it is known that tigers are dangerous, leading to the indexical intention, 'I need to get away from that'. Here it will be important that the conceptualizing and practical reasoning systems should both have access to the same set of perceptual contents, so that the referent of 'that' can be held constant through the whole process.
But now, why would the evolving mind-reading system have been set up to feed off the entire contents of the E-box, rather than just a sub-set? One answer is that evolution characteristically builds on and co-opts whatever structures are already in place. Since an integrated first-order perceptual memory system was already in existence, feeding the processes of first-order conceptualization and planning, it is only to be expected that an evolving mind-reading system, needing access to perceptual contents, would have been set up to draw on this perceptual memory system. A further answer is that mind-reading and planning (or 'executive function') are closely linked, too. Planning and reasoning in humans are routinely meta-representational in character (Carruthers, 1996; Perner, 1998). Indeed, it can be debated whether autism, for example, is fundamentally a pathology of mind-reading or of executive function (Russell, 1996). It is only to be expected, then, that the mind-reading system should draw on the same range of perceptual contents which are needed for practical reasoning - and these will include heat and pain.
From this perspective, then, Allen's further questions - concerning synthetic experiences of flavor (as opposed to distinct experiences of taste and smell), and why detection of pheromones should remain unconscious - are not (as he construes them) questions for the second (mind-reading) stage of the above evolutionary account. The task is not that of explaining why the demands of mind-reading should lead to integrated experiences of flavor, on the one hand, and why it shouldn't lead to experiences of pheromones (as opposed to their emotional effects), on the other. For I assume that these matters had already been determined prior to and/or independently of the evolution of the mind-reading faculty. Rather, the task is to explain why the demands of object recognition and/or planning should lead to integrated flavor-contents, and why those same demands shouldn't require the availability of pheromone-information. These are good questions, to which I might begin to construct speculative answers. But they are not questions which bear especially on theories of phenomenal consciousness. For they are faced equally by first-order theories of the sort espoused by Tye (1995) and Dretske (1995); by inner-sense theories of the sort espoused by Armstrong (1968) and Lycan (1996), and defended here by Allen; by actualist HOT theories of the kind espoused by Rosenthal (1986); as well as by my own dispositionalist HOT theory. They are questions for all of us. The lack of easy answers to these questions does not show dispositionalist HOT theory to be weak (on this score at least) in relation to these other theories.
2 HOEs and learning
Allen's first allegation is therefore based largely on a misunderstanding (or at least an unsympathetic reading) of my views. His second allegation is that I don't take seriously enough the idea that higher-order experiences (HOEs) might be needed to underpin the kinds of sophisticated learning of which many other creatures besides ourselves seem capable. So there may be a plausible evolutionary story to be told by an inner-sense theorist, which would at the same time warrant our common-sense intuition that these same creatures undergo experiences which are phenomenally conscious.
Allen makes the point that learning in humans is often mediated by pain-judgments (for example, as to whether suffering a certain level of pain was worth the benefits), and/or by discriminations of different pain intensities. Then if animals, too, can benefit from such learning, we would have good reason to think that they are phenomenally conscious. By implication, Allen must think that such judgments and discriminations are higher-order ones, since he is here supposed to be defending a form of HOE theory.
What Allen misses is that my 2000 endorses a conception of pains as analogous to secondary qualities in other sense-modalities - a conception which I take over from Tye (1995), and which I regard as essential to the plausibility of dispositionalist HOT theory. On this account, pains themselves are the first-order properties which form the intentional content of pain perceptions, just as red is the first-order property which forms the intentional content of one form of color vision. And notice that different shades and intensities of red can be perceptually represented without the need for higher-order experiences of any sort, and that behaviors grounded in color-judgments can similarly occur without the need for any form of higher-order representation. In the same way then, I maintain, discriminations of pain intensity can be entirely first-order; and judgments of pain-intensity and pain-evaluation are similarly first-order. There is no reason at all why creatures incapable of any higher-order representation shouldn't be capable of discriminating pain intensities (any more than there is reason why they shouldn't discriminate color intensities), or why they shouldn't be capable of thoughts like, 'That pain was worth that reward'. On my view, there is nothing higher-order in such thoughts, any more than there is anything higher-order in thoughts like, 'That red is deeper than that one'.
Of course it is true that in the human case such discriminations and such judgments will be accompanied by phenomenal consciousness. And it is also true, as Allen points out, that the human capacity for explicit judgments of the temporal ordering of events will be accompanied by phenomenal consciousness. But it is quite another matter to claim that these judgments are made possible by phenomenal consciousness. And on the contrary, these coincidences are readily explained on the dispositionalist HOT approach. For the perceptual contents which ground sophisticated learning and which ground explicit judgments (the contents of the E-box) are also, in the human case, available to HOT, hence rendering them phenomenally conscious (and hence transforming the 'E' box into a 'C' - for 'conscious' - box).
There are views of phenomenal consciousness according to which these discriminations (whether of color or of pain) will be phenomenally conscious ones, of course. (For example, any first-order representationalist view - in the manner of Tye and Dretske - will imply as much.) What Allen loses sight of is that his task at this point is to defend HOE theory. For evolutionary considerations are only supposed (by me) to help discriminate amongst the competing higher-order theories (inner-sense theory, actualist HOT theory, dispositionalist HOT theory), not between dispositionalist HOT theory and accounts of other types. On the contrary, one of the obvious strengths of first-order representationalism is the plausible evolutionary story that it can tell. (Its weakness is its inability to handle the distinction between conscious and non-conscious experience. See my 2000, chapter 6.) So what Allen really needs, in particular, are some examples of learning which require genuinely higher-order experiences in order to succeed, not just examples of learning which are accompanied by phenomenal consciousness in human beings. He doesn't come close to providing them.
The challenge to inner-sense theorists laid down in my 2000 remains, then - to provide some account of what HOEs might be doing for us in cognition, which doesn't presuppose a capacity for HOT, and which is evolutionarily important enough to explain the investment needed to build and maintain the required set of second-order sense-organs. No one with whom I have debated and discussed these issues over the years has been able to come up with a half-way plausible proposal. And neither does Allen. This doesn't prove, of course, that some function for HOEs mightn't yet be thought of; nor does it show that we mightn't yet find independent evidence for the existence of a set of second-order sense-organs. But it does provide good reason (provided that one is in the market for a higher-order account of phenomenal consciousness at all) to prefer dispositionalist HOT theory to inner-sense theory. For this gets us all of the benefits of HOEs for free, without the need to appeal to any set of inner scanners. (On this, see my 2000, chapter 9.)
References
Armstrong, D. 1968. A Materialist Theory of the Mind. Routledge.
Carruthers, P. 1996. Autism as mind-blindness: an elaboration and partial defence. In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge University Press.
Carruthers, P. 2000. Phenomenal Consciousness: a naturalistic theory. Cambridge University Press.
Dretske, F. 1995. Naturalizing the Mind. MIT Press.
Lycan, W. 1996. Consciousness and Experience. MIT Press.
Perner, J. 1998. The meta-intentional nature of executive functions and theory of mind. In P. Carruthers and J. Boucher (eds.), Language and Thought. Cambridge University Press.
Rosenthal, D. 1986. Two concepts of consciousness. Philosophical Studies, 49.
Russell, J. 1996. Agency: its role in mental development. Lawrence Erlbaum.
Tye, M. 1995. Ten Problems of Consciousness. MIT Press.
© Peter Carruthers