Approaches to Abstraction
Institute for Interdisciplinary Studies
Ottawa, ON, Canada K1S 5B6
1. Introductory Remarks
This is a rich and notably diverse set of papers. Abstraction is an excellent subject for cognitive science because there is a genuinely interdisciplinary interest in it. Philosophers have been propounding (empiricism) and rejecting (practically everyone else) abstractionism as a theory of the origins of universals, etc., since at least the time of Locke. In linguistics, the postulation of highly abstract, innate structures at the heart of syntax and maybe phonology is as old as the Chomskian revolution. Indeed, the idea has even crept into semantic theory, in the form of Jerry Fodor's hypothesis of a Language of Thought (1975), the hypothesis that even our conceptual apparatus is coded in our genes. Abstracting and abstractions are of course central to much of AI, which is disposed to see the abstract as a compositional assembly of simpler units, an idea also at the heart of the old logical atomist and logical empiricist models of language and knowledge. And cognitive psychology has a huge interest in the whole range of issues to do with the abstract and is predisposed to view abstracting and abstractions on the model of its own method, statistical generalization over instances of a pattern. (This method of finding abstract patterns is probably the closest thing to classical induction to be found in science today.)
The papers before us reflect some of these interdisciplinary interests. Lehtinen and Ohlsson(1) make a persuasive case for the important idea that abstract concepts could not be gained by any process of abstracting commonalities out of particular cases; we must already have the concept in question to recognize the common feature. Stern and Staub explore the role of level and/or degree of abstractness in the presentation of mathematical ideas as a factor in the rate of mathematical learning. Perkins explores the role of what he calls epistemic games in the generation of abstract knowledge. Epistemic games are cognitive structures midway between highly abstract rules such as syllogisms and highly situated, one-off problem-solving. Margolis makes a case for the role of what he calls negative knowledge in creativity--knowledge of how to escape deeply entrenched lines of thought, another attempt to find a via media between tightly situated and very abstract, general cognitive strategies. Clancey mounts an argument that conceiving of cognition, including abstract cognition, on the model of AI programmes mistakes a part for the whole and misconceives the part. The part he has in mind is cognition in language and the whole is cognition as a whole. His view is that much of cognition does not take place in language, and that the part of cognition that is in language is nothing like an AI programme. Finally, Halford, Wilson, and Phillips argue that abstract cognitive processes deal essentially in relationships, in contrast to more primitive processes that can get by with associations. This allows greater flexibility and the application of abstractions to abstractions. They also explore the heavier processing loads imposed by relational cognition.
Before we examine some of the individual analyses, it is worth remarking on the interdisciplinary
breadth of the collection. Lehtinen and Ohlsson, though steeped in empirical cognitive
psychology, reflect one of the dominant strands of anti-empiricist philosophical thinking. Halford,
Wilson, and Phillips show one of the ways in which the AI community is interested in the
abstract. Stern and Staub deal with the abstract in the context of empirical developmental
psychology. Clancey raises issues not just about abstract cognition but cognition as a whole that
are on the boundary between psychology and philosophy, as, in a very different way, does
Perkins, and also Margolis. So a commentator has his work cut out for him.
2. The Notion of the Abstract
Terminology is always a problem when discussing abstraction, so let us create some provisional terms and note a few distinctions. First let us distinguish between
1. the abstract as a state,
an example of which would be an abstract concept, and
2. abstracting as an activity,
an example of which would be abstracting something common to a group of particulars. This distinction allows us to articulate an idea central to many of the papers before us: that acquiring abstractions is not necessarily or not only via processes of abstracting as traditionally conceived.
The notion that some concepts, objects, etc., are abstract has a history that goes back to Plato's Forms and even beyond. Plato had one of the richest ontologies of the abstract. He postulated a domain of entities and properties that is immaterial and imperceptible and where every instance is a perfect exemplar of its kind of thing. Frege also accepted a realm of abstract entities, though one less floridly populated than Plato's. Locke postulated perhaps the most generous entry conditions: merely to be a property--that is to say, something that a number of particulars could share--is enough to make something an abstraction. At the opposite end of the scale, Reichenbach had perhaps the tightest entry conditions and one of the most parsimonious ontologies. For him, even theoretical entities are not truly abstract objects. Since they are held to exist as objects, he called them illata. The only true abstracta for him are `theory-bound entities', that is to say, entities whose whole existence consists in their playing a role in a calculation or a science--in an epistemic game, to use Perkins' term. A centre of gravity is an example. All objects with mass have a centre of gravity; but it would be a mistake to ask what a centre of gravity is made of, or what other properties, besides being the centre of the distribution of mass in a solid, it has. Recently Quine and other naturalists and behaviourists about language have created ontologies that are even more parsimonious.
What does it mean to say that something is abstract? Stern and Staub discuss this question. About all that seems to hold the concept of the abstract together are two things:
1. To be abstract, an entity, property, etc., must be something other than (or at least more than) a discrete, spatially-bounded, temporally limited but continuous material object.
2. Becoming aware of something abstract requires more than perception of particulars, at minimum some cognitive activity of selection and comparison (see Ohlsson 1993).
If these two tests are an adequate characterization of the abstract, then Locke was right to set the entry conditions very low: all properties do indeed count as abstract because we need more than perception of a particular to be aware of a property, aware of it, at any rate, as a property. Similarly, the abstract will have a generous ontology, because a huge and motley population of entities, states, relations, and processes satisfy condition (1). In particular, on this test all relations turn out to be abstract; and Clancey, for example, does indeed put being relational at the heart of what is required for something to be abstract (p. 23).
In addition to abstract objects, properties, and other states in the world, there is also abstract cognition: abstract concepts, abstract conceptions, etc. Clancey uses conceptualizations as a general term for this aspect of the abstract. Most of the papers before us focus on the abstract in cognition. Roughly, abstract conceptualizations are simply conceptualizations of the abstract: abstract entities, abstract properties, and so on.
In their uses of the term and their theories of how abstract conceptualizations are generated, the
papers before us display some of the same diversity that we find throughout the history of the
subject. Many of the papers are broadly in the dominant tradition of theorizing about abstraction
from Plato to the present; Clancey, in arguing that a lot of conceptualization is neither linguistic
nor based on anything like rules in a computer, rejects this tradition. In the characterizations I
have just given of the abstract as a state and of abstract conceptualizations, I have tried to leave
room for his claims that a lot of abstract objects, relations, etc., are not linguistic entities and a lot
of abstract conceptualization is not done in language.
3. Degrees of Abstractness
If the abstract is either non-material, non-perceptible objects, relations, processes, etc., or conceptualizations of same, can there be degrees or levels of abstraction? Such a notion plays a central role in Stern and Staub's work on the acquisition of mathematical concepts. They put the notion to work in a number of different contexts:
(a) the notion of a rational number containing fractions or decimals is more abstract than the notion of a natural number based on the idea of counting and therefore on whole units;
(b) the notion of a number system based on 0 is more abstract than the notion of a number system based on 1;
(c) comparison of sets as a basis for arithmetic is more abstract than increase or decrease of units; and,
(d) the use of `=' in algebra to signify a relation between the two sides of an equation is more abstract than the use of the equal sign in arithmetic to indicate that some manipulation is to be performed on what is to the left of the sign.
It is not too easy to specify this notion of degree of abstractness precisely. Intuitively, we connect the idea of the more and less abstract to some notion of distance from experience, but this notion is not very precise. In what sense exactly is the role of the equal sign in algebra more distant from experience than its role in arithmetic? One way to make the idea more precise would be to define degree of abstractness in terms of some notion of the extent to which an entity or relationship is straightforwardly exemplified in objects that can be perceived, or the extent to which a concept is about such an object. Thus, the notion of countable units can be mapped onto perceived objects very readily and very fully (though probably not completely). The notion of there being an infinity of units between 0 and 1 can be mapped onto anything we see only to a far smaller degree. The notion of a number system beginning with 1 can be mapped onto what we see very readily. The notion of 0 by contrast is never instantiated in experience. And so on.
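The contrast in (d) can be made concrete in code. Here is a toy Python sketch of my own, not drawn from Stern and Staub: the arithmetic reading of `=' treats it as an instruction to compute something from the left-hand side, while the algebraic reading treats it as a relation that a candidate value may or may not satisfy.

```python
# Arithmetic reading of '=': '3 + 4 =' is a prompt to carry out a
# manipulation on what stands to the left of the sign.
result = 3 + 4  # the 'answer' the sign points at: 7

# Algebraic reading of '=': a symmetric relation between two sides.
# Nothing is computed left-to-right; the equation is a condition
# that a candidate value either satisfies or fails to satisfy.
def satisfies(x):
    return 2 * x + 1 == x + 4

# Solving the equation means searching for values that stand in
# the relation, here over a small range of integers.
solutions = [x for x in range(-10, 11) if satisfies(x)]  # [3]
```

The second reading is plainly the more abstract on the criterion just proposed: the relation holds or fails independently of any manipulation one could perceive being carried out.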
Another promising route would be to appeal to the order in which concepts have to be acquired. For example, we must have the notion of a cardinal number to acquire the notion of a natural number, but not vice-versa. And we must have the notion of a whole number to acquire the notion of a fraction or decimal, but not vice-versa.
I will not pursue this question further. Like so much to do with the abstract, the notion of degree
of abstractness readily lends itself to vagueness, but we have done enough to show that it can be
given a fairly precise sense. That is all we need for present purposes.
4. Three Kinds of Abstracting Activity
Let us turn now from the abstract as a property of objects or concepts to abstracting as an activity. As I did with the abstract as a state, I will try to leave room for non-linguistic kinds of abstracting activity, too, so as not to beg the case against Clancey. We can distinguish at least three kinds of abstracting activity. I will call the first abstracting away:
1. Abstracting away is the activity of letting the bulk of the information about something fall away, so that only a few of a thing's features remain.
We might call the second abstracting out:
2. Abstracting out is the activity of finding commonalities in a jumble of differences running across a number of particulars. Another term for it, one favoured by Lehtinen and Ohlsson, is generalization, the activity of generalizing over particulars to find commonalities. It is the activity central to the simple-minded inductivist picture of scientific method. In classical empiricism, all generation of abstract concepts, everything from simple universals such as colour terms to numerical concepts, scientific terms, the logical constants, and so on was thought to proceed by way of generalization.
The difference between (1) and (2) is that (1) need not involve identification of commonalities--it need be no more than a stripping away of particularities to select other particularities--whereas in (2) identification of commonalities is always involved.
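The two activities can be sketched in a few lines of Python (a toy model with invented feature sets, offered only to fix the distinction, not as anyone's theory of concept formation):

```python
from functools import reduce

def abstract_away(thing, keep):
    # Abstracting away: let the bulk of the information fall away,
    # retaining only a selected few of the thing's features.
    return {f for f in thing if f in keep}

def abstract_out(things):
    # Abstracting out: generalize over the particulars, keeping
    # only the commonalities that run across all of them.
    return reduce(set.intersection, things)

particulars = [
    {"red", "round", "small", "sweet"},   # a cherry, say
    {"red", "round", "large", "sweet"},   # an apple
    {"red", "round", "small", "sour"},    # a currant
]

abstract_out(particulars)                       # {'red', 'round'}
abstract_away(particulars[0], {"red", "sweet"}) # {'red', 'sweet'}
```

Note that `abstract_away` makes no comparison across cases at all, whereas `abstract_out` is nothing but such a comparison--which is the point of the distinction.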
Let us now construct a grabbag category to distinguish both of these from other kinds of abstracting activity. I will call this third, grabbag category building the abstract:
3. Building the abstract is simply a term for all the ways of identifying or creating something
abstract other than abstracting away and abstracting out. Its value is that it allows us to avoid the
danger of confusing the abstract as a state with abstracting away or abstracting out as activities.
This brings us to Lehtinen and Ohlsson's paper.
5. Abstracting or Identifying?
As I sketched earlier, Lehtinen and Ohlsson argue that abstract concepts could not be gained by any process of abstracting commonalities out of particular cases. The reason is that we must already have the concept supposedly gained by such abstracting out to recognize the common feature being abstracted. Thus, physicists could not have developed the concept of a quark by finding an interesting new regularity running across a number of elementary particle experiments because they had to have the concept of a quark in order to recognize the regularity in the first place. The old empiricist idea that we acquire our abstract concepts via activities of abstracting gets things exactly backwards.
Lehtinen and Ohlsson's strongest argument in defense of this view is the actual case histories of scientific discovery that they cite. As they point out, it would be very difficult (they say impossible) to spot the Newtonian properties shared by ordinary middle-sized objects and solar systems without Newtonian concepts. Similarly, it would be (at least) very difficult to learn that objects all accelerate at the same rate in a vacuum without the concepts at the heart of the law of acceleration; what we actually see are objects not accelerating at the same rate, e.g., feathers and cannonballs. And so on. The point might be put this way: like many concepts (the concept of a triangle, for example), most scientific concepts are idealizations. Idealizations suppose the absence of all the confounding particularities that are always present in things as we actually experience them; ideal situations never occur. Thus, the only way to spot the kinds of regularities central to science is to use the concepts or laws for them. Without a concept or law, we would not be able to spot the regularity.
This case against simple abstractionism and inductivism is extremely strong. It is also quite old, going back at least as far as Kant. It was Kant who first formulated the idea that nature answers only the questions that we put to it (1787, B xiii). Of course, Kant wanted to go much farther. He wanted to insist that to have any experience of any kind whatsoever, even the most concrete experience of individual spatio-temporal objects, we must already have a rich storehouse of concepts, what he called the categories. We do not need to follow him this far to see that there is clearly something in the more limited Kantian point that Lehtinen and Ohlsson make, the point that to recognize the kind of regularities distinctive to science, we must already possess the concept or law for them.
Now the question becomes, How then do we acquire new abstract concepts, laws, etc.? For Lehtinen and Ohlsson, the answer is fairly straightforward: by assembling them out of simpler abstract concepts and laws.
To create an abstraction is to compose or assemble some existing abstractions into a larger, more complex abstraction. Following Piaget, we will ... refer to this as a process of coordination [p. 14].
Here I am less sympathetic. Certainly I agree with the general point that abstractions are "the result of constructive operations on the part of the knower" (p. 11), not generalizations over objects of knowledge. But the question is, what sort of constructive operations?(2)
Lehtinen and Ohlsson adopt their coordination view of the creation of abstract concepts, laws, etc., because they believe that "the only source of abstractions is the current stock of abstractions" (p. 14). Here one might offer two suggestions:
1. Perhaps we can generate new abstractions using our current stock of abstractions in more ways than by simple composition or assembly.
2. Perhaps there is something to generating new abstractions that is more than exploiting current abstractions.
Let us look at each thought in turn.
6. Assembling vs. Conceptual Restructuring
The first thing to note about a new abstraction is that it typically cannot be reduced to earlier abstractions. The concept of an electron cannot be reduced to any previous concept. The concept of a quark cannot be reduced to any earlier particle concept. The concept of the quantum cannot be reduced to classical electro-dynamics. The concept of superposition cannot be reduced to any previous concept of position. And so on. There are two ways in which such reductions could be performed. One is semantic reduction: show that, in some useful sense of `meaning', the later term has the same meaning as some package of earlier terms. No one thinks that that is remotely possible for new scientific terms. The other is extensional reduction: show that the new term covers the same range of cases as some earlier term or assembly of earlier terms. Generally, most philosophers of science now believe, extensional reductions are impossible for new abstract terms, too. New terms generally cross-classify with old ones.
If so, there is something more to a new abstract term than is contained in any old terms, including any composite or assembly of them. How can we account for this additional element?
Stern and Staub suggest one way: conceptual restructuring. It is not easy to see what characterizes conceptual restructuring in general, but Stern and Staub offer some clear and evocative examples. Take the transition from the idea of natural numbers to the idea of rational numbers. To make this transition, a child has to give up a number of intuitively powerful ideas: that every number has a successor; that there exists a smallest number; that all numbers lying between two numbers can be enumerated; and so on. An even simpler example is the concept of 0: here a child has to give up the idea that the smallest possible number of any kind of thing is one, and learn that there can also be none.
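The restructuring involved in the move to the rationals can be exhibited directly. Here is a small Python sketch of my own: between any two distinct rationals there is always another, so both "every number has a successor" and "the numbers lying between two numbers can be enumerated" fail.

```python
from fractions import Fraction

def midpoint(a, b):
    # Between any two distinct rationals lies a third: their midpoint.
    return (a + b) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = midpoint(a, b)   # Fraction(5, 12)

# However often we repeat the move, we never exhaust the interval;
# there is no 'next' rational after a, which is precisely the
# intuition the child must give up.
x = a
for _ in range(10):
    x = midpoint(x, b)
    assert a < x < b
```

The `Fraction` type is exact, so the demonstration is not an artifact of floating-point rounding: each iteration genuinely produces a new rational strictly inside the interval.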
In each of these cases of acquiring new number concepts, then, the child does more than just
assemble old number concepts in a new way. They acquire something that is different from and
goes beyond anything they had before. Unfortunately, this observation, well motivated though it
is, merely invites another question: what is this `something more' like? To approach an answer to this further
question, let us turn to (2): perhaps there is something to generating new abstractions that is more
than exploiting current abstractions.
7. New Abstractions are New Ways of Acting
The suggestion I want to introduce, a suggestion that would fill out both Lehtinen and Ohlsson's and Stern and Staub's pictures, starts from Wittgenstein's idea that, in a large range of cases, the meaning of a word is its use (1953, 43): find out what job we do with a word and we will have found out what it means. It is easy to extend this idea to the thought that what a new concept does is to allow us to do a new job. Consider Wittgenstein's own homely example that begins with the concepts of a block, slab, etc. With them, we can point to, name, blocks and slabs. Add a concept of the natural numbers. Now we can also count them. Next add colour concepts. This allows a further new activity, grouping by surface similarities. And so on. Each new kind of concept allows us to perform a new kind of action.
Moreover, and this is the important point for present purposes, each of these activities is, as we might put it, sui generis. That is to say, none of them can be composed of or decomposed into assemblies of any of the others. That would be like trying to reduce the language of music to the rules of arithmetic. Such reductions cannot be done. Wittgenstein's way of putting this point, a way picked up by Perkins (p. 9), is to say that with each kind of concept we can play a distinctive language game, and each new language game opens up a new form of life for us.
Here is not the place to give a full account of Wittgenstein's picture of words and how they work. One more aspect of Wittgenstein's picture, however, can help us to see more precisely exactly what needs to be added to Lehtinen and Ohlsson's and Stern and Staub's two accounts. As Wittgenstein insisted over and over in different ways, uses don't arise out of thin air; to acquire a new use requires at minimum (1) that there be situations that exemplify the new concept, serve as clear or paradigm cases of what the word names, and (2) that we can group additional cases, indeed an indefinitely large number of additional cases, with the exemplar cases as similar to it. Rosch (1978) and her colleagues call the constellation of features that allow such assimilation of new cases to an exemplar or paradigm case its prototype; starting from this notion, they have developed Wittgenstein's basic insights into a serious research programme.
However, and this is crucial, similarities are not carved into nature. Anything can be judged similar to anything else and dissimilar to anything else, depending on the properties with respect to which the comparison is being made; whether A is similar to B depends entirely on what feature(s) one has in mind. Since the feature(s) one has in mind depend(s) in turn on what one's interest in A and B is, and one's interest in A and B determines what actions one wants to take with respect to them, whether A is similar to B with respect to some feature, say F, depends entirely on what job the concept of F does for us.
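The relativity of similarity to the feature under comparison can be put in a few lines (a toy illustration of my own, not drawn from any of the papers):

```python
def similar(a, b, features):
    # Similarity is always similarity *with respect to* some
    # feature(s); the same pair can come out alike or unlike
    # depending on which features our interests pick out.
    return all(a[f] == b[f] for f in features)

whale = {"habitat": "sea",  "taxon": "mammal"}
shark = {"habitat": "sea",  "taxon": "fish"}
cow   = {"habitat": "land", "taxon": "mammal"}

similar(whale, shark, ["habitat"])  # True: grouped by habitat
similar(whale, cow,   ["taxon"])    # True: grouped by taxonomy
similar(whale, shark, ["taxon"])    # False
```

Nothing in the three dictionaries settles which grouping is the right one; only the job we want the comparison to do settles that, which is the point at issue.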
On this picture, to acquire a new concept is to acquire a new ability. It is to be able to do something we could not do before: construct a number system starting with 0; see sand and the solar system as governed by the same laws; identify and describe superpositions; and so on. To illustrate what is meant here, consider the diagrams of the degrees of relationship at the centre of Halford, Wilson, and Phillips' paper. When I first encountered these diagrams, I had hardly any idea what they meant. To find out, I had to find out what the authors were using them to do. Once I understood the role they play in the paper, I knew what they meant.
Now, a new skill such as this cannot be decomposed into packages of old skills. If so, the new skill is more than any package of old skills. Likewise, learning to play music is more than a composition of old skills of finger movement, counting, etc.
This understanding of the relation of new abstract concepts to old ones points to a connection between Lehtinen and Ohlsson's and Stern and Staub's claims about abstraction and Perkins' notion of an epistemic game: one way to put the points I have been making would be to say that to acquire a new abstract concept, law, etc., is to acquire a new epistemic game. A new epistemic game is a new way of interacting cognitively with the world. It cannot be reduced to any old game or package of old games. Each new epistemic game contains something not just new but also sui generis.
Lehtinen and Ohlsson's picture of the acquisition of new abstractions reminds one a bit of the logical atomist picture of language: that the more complex is simply an assembly of the less complex. Perhaps one of the things that lead theorists into this picture nowadays is the way abstract concepts have to be generated in AI. In computers, the more complex always is indeed some combination of the less complex, ultimately binary bits, and has to be; there is nothing else for the more complex to be made of. But people are not limited to mere composition in this way. Unlike computers, people interact with the world, and the world contains utterly new similarity-classes and thus new ways to live and think, what Wittgenstein called new forms of life, without limit. This is what gives people what computers do not have, the capacity to create and acquire concepts that are genuinely new. As Perkins emphasizes, new similarities can be utterly dissimilar to ones we already have, as dissimilar as descriptive concepts are from explanation concepts (which themselves vary widely one from another), explanation concepts from normative concepts (concepts of justification, theoretical and practical goodness or adequacy), normative concepts from mathematical concepts, mathematical concepts from musical concepts, .... and so on, without end. (Description, explanation and the normative are closely related to Perkins' three basic kinds of epistemic game, characterization, explanation and justification.)
I will close this section with a comment on Margolis's paper. His central idea is that creativity,
the identification of new abstract patterns, requires what he calls doubly negative
knowledge--the ability to breach the frames within which an area of knowledge is articulated. This
seems correct. Now ask, what would escape from entrenched patterns of thought look like on the
Lehtinen/Ohlsson model? It may not over-simplify too much to say that on their model, genuine
escapes of this sort are not possible; anything that looked like an escape from old patterns would
merely be a reassembly of old patterns. Of course, our repertoire of old patterns, old abstract
concepts and laws and moves, is huge and the combinations possible by assembling them in new
ways are virtually boundless, so the limitation imposed by this observation might not be too
stringent. Nevertheless, there does seem to be something to new concepts, like new styles in art,
that is different from anything that has gone before.
8. Linguistic and Non-linguistic Cognition
The idea that being connected to the world gives us possibilities for new concepts not available to standard computers brings us to Clancey. As I said earlier, his argument can perhaps be summarized as follows: conceiving of natural cognition, including abstract natural cognition, on the model of an AI programme mistakes a part for the whole and misconceives even that part. The part is cognition that uses language and it is taken to be, or to be a model of, the whole. The whole is natural cognition as a whole. Clancey's view is that much of cognition, including a lot of abstract cognition, does not take place in language, and that the cognition that does take place in language is deeply misunderstood if it is thought to operate like an AI programme (p. 13). He connects these ideas to the notion that natural cognition is situated. Since, as we just saw, being situated in at least one of the senses of that Protean term is what gives natural cognition some of its special possibilities, this connection is an interesting one.
It is quite possible that much in the papers we have examined so far is compatible with Clancey's claim that a lot of abstract cognition is non-linguistic, but it is also likely that most of the other authors had linguistic cognition primarily in mind, so we should take a look at Clancey's claims about the character of natural cognition.
Clancey starts from AI, indeed from the classical, serial, expert-system kind of AI in which he himself played a prominent role earlier. The computational model of the mind suggested by this work was the dominant model in cognitive science in the 1970's and 1980's and is still alive. To a first approximation, Clancey wants to say that this kind of artificial cognition is and natural cognition largely is not: based on non-holistic descriptions (non-holistic because each piece is discrete, not affected by context), and it proceeds by doing tasks, not actions. These tasks consist of performing computations over descriptions using rules, rules shared with other such systems, stored in discrete units, and introduced by being looked up and imported into computations. Equally, much natural cognition is and artificial cognition is not: situated, purposive, holistic, dynamic, and its processes and states are either wholly non-verbal or have a non-verbal element in them. As well as or instead of words, it makes use of things like rhythms, intonation patterns, gestures, facial expressions, musical ideas, and imagistic phenomena such as figure-ground contrasts--in short, non-verbal coordinations--and it proceeds by way of activities, not tasks, where activities are understood as movements, processes, etc., shaped and selected by the overall projects and social roles of a person as a whole (Clancey, undated).(3)
I have not found it easy to weld this wealth of distinctions into a single picture. Part of the problem is the two case studies with which Clancey begins. At best, they point us in too many directions. I have in mind the neuropathological patients Rebecca and Dr. P., taken from Oliver Sacks (1970). Rebecca is the woman who had lost all capacity for abstract coordination, even spatial orientation. What exactly is she supposed to illustrate? She seems to lack explicitly verbalized abstract cognitive ability--but she also lacks, as Clancey says, "a kind of non-verbal abstraction" (p. 6). There are two points to be made about this. First, Clancey wants to insist that natural verbal cognition is nothing like the symbolic cognition of computers, too. So what she lacks is nothing like what a computer has. Second, Rebecca lacks vast reaches of what Clancey wants to treat as non-verbal abstract cognition, too. So she does not illustrate the contrast that Clancey wants to make on two counts. What about Dr. P., the man who can no longer recognize faces and many other `gestalts'? Clearly, Dr. P. has lost a vital range of non-verbal cognitive abilities. Equally, there is something vaguely computer-like about what he has left. But the superficial similarity breaks down in two ways. First, Dr. P. has also lost vital abilities that do use language. If he can no longer recognize facial and other `Gestalts', he can no longer analyze facial features either, for example. Second and more important, if Clancey is right about natural language, the abilities Dr. P. has left are not at all computer-like, because natural language in a natural system is not at all like symbolic structures in a (serial) computer. Thus Dr. P. does not illustrate the contrast Clancey wants to make either. To try to weld Clancey's rich set of distinctions into a single picture, let us set his examples aside and go straight to the heart of his discussion.
To start, let us turn again to Wittgenstein. Wittgenstein is famous for developing two complete models of language in his lifetime. In the first model, introduced in (1921), language is seen as consisting of descriptions that picture states of affairs, and thinking consists of manipulating descriptions according to the dictates of evidence and rules of logic. In the second model, introduced in (1953), language becomes a set of tools for performing actions. This is the picture of language that we took from Wittgenstein earlier. Rather than consisting of descriptions whose job is to introduce surrogates into the mind for the things described (i.e., representations), in the new view representations need not accompany use of language at all. So long as language is guiding human actions and interactions, it is being used, and this is determined by how the organism interacts with the things around it, not by whether it has representations or other internal surrogates of the actions and interactions it is performing. Language is continuous with non-linguistic purposive activity.
What Wittgenstein wanted to say about language, Clancey wants to say about natural cognition as a whole. Wittgenstein's 1921 model is not a bad model for AI programmes and the like, and the 1953 model is a reasonable first crack at natural cognition. The crucial idea in the later model is that language is divorced from any necessary link to representation and is linked instead to non-linguistic action. On this picture, natural cognition must also be situated cognition.
To see what Clancey has in mind by cognition being situated, let us start with a kind of cognition and a kind of situatedness that play only a small part in his story: cognition that is not just non-linguistic but altogether non-representational. (By `representational' I mean an assembly of elements that describe or picture something else.) To see how this might work and begin to build toward Clancey's full picture, let us start with very simple cognition: an organism trained to respond to an environment in some `intelligent' way (more exactly, to perturbations of its sensitive surfaces caused by an environment). Such an organism would not need to have internal surrogates of that environment. The repertoire of skills, of shaped dispositions, in which it has been trained will latch onto objects in the environment directly, without need of an internal intermediary. This simple cognition has two important properties that I now want to highlight.
1. Cognition that does not involve internal-surrogate representations could not but be situated. Without the world, without objects to trigger the repertoire of skills, there would be nothing for such cognition to work on and so nothing for it to do. Such cognition must be situated.
2. In such a system, `rules', i.e., mechanisms for taking the organism from inputs to responses, will be something very different from rules in a (serial) computer:
i. Rules in such cognition will not be representations: they will not picture, will not refer to, anything outside themselves. Instead, they will simply be dispositions in the system to respond in certain ways when presented with certain stimulations. Modus ponens, for example,
may be a characterization of how categorizing events [are] ordered in time as a reactivation of previous relations between processes, not of a `rule represented implicitly' in the brain [p. 11]
ii. Such rules, being trained patterns of dispositions, need not be the same in any two systems. So long as the training produces an appropriate response to a given environment, it does not matter whether it does so in the same way in each organism.
iii. Rules in such a system will be something utterly different from discrete strings of code stored in some separate place ready to be looked up and introduced into a cognitive process, the same code in every system that has the rule.
It follows from iii. that memory in such a system will be very different from computer memory.
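The contrast drawn in i.-iii. can be made concrete with a toy sketch. The following Python fragment is my own illustration, not anything in Clancey's text, and it assumes nothing about actual neural mechanisms: it contrasts a computer-style stored rule with a trained dispositional `rule' that produces the same behaviour without any discrete, inspectable code being looked up, and that may differ in its internal details from one trained system to another (point ii).

```python
import random

# A computer-style rule: a discrete, stored piece of code, identical in
# every system that has it, looked up and applied to a representation.
STORED_RULE = {"hot-surface": "withdraw"}

def serial_system(stimulus):
    return STORED_RULE.get(stimulus)

# A dispositional 'rule': a single trained weight mapping stimulation
# directly to response. Nothing here pictures or refers to anything;
# the 'rule' is just a shaped disposition of the system.
def train_disposition(trials=1000, rate=0.1):
    weight = random.uniform(-1.0, 1.0)         # arbitrary starting state
    for _ in range(trials):
        stimulus = random.choice([0.0, 1.0])   # 1.0 = hot surface present
        target = stimulus                      # withdraw iff hot
        response = 1.0 if weight * stimulus > 0.5 else 0.0
        weight += rate * (target - response) * stimulus
    return weight

w1, w2 = train_disposition(), train_disposition()
# Both trained systems respond appropriately to the environment...
assert w1 * 1.0 > 0.5 and w2 * 1.0 > 0.5
# ...but there is no stored rule to inspect, and the two systems'
# internal weights need not be the same.
print(w1, w2)
```

The point of the sketch is only that the same input-output behaviour can be realized either by looking up one canonical piece of stored code or by a trained disposition whose internal details vary from system to system.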
This characterization of natural cognition may work for simple cognitive systems lacking internal representations, but what about cognition that makes use of representations, and what about cognition performed in language? As has been said many times, it is difficult to introduce representation (whether non-linguistic or linguistic) and retain the kind of physical situatedness sketched above. Once you introduce the internal surrogates, internal representations of the world, cognition can now proceed in the absence of the environmental objects themselves. (Popper's famous dictum that in science, our hypotheses die in our stead is a classic example of how this works.) Thus, for cognition with representation, situatedness has to become more sophisticated. Clancey responds to this need: he urges that the notion of situation as physical environment is not the important kind of situatedness. Once an organism has internal surrogates of the external world, the situation needed for cognition to occur shifts from something physical to the internal context of the cognition, the context that motivates the cognition, gives it its point, gives its individual terms their use and therefore their meaning, and so on. If non-representational cognition needs a physical context, representational cognition needs a problem context. And this new context is still enough to ensure that natural cognitive processes are holistic, purposive, actions not tasks, and so on.
On the nature of representations in natural systems, the similarity between Clancey and Wittgenstein's later picture is illuminating. Clancey of course recognizes that a lot of cognition is linguistic, done in descriptions that are internal surrogates for external environments. But like the later Wittgenstein, he views these things as continuous with non-linguistic cognition using non-linguistic representation. His example of chimpanzees solving the banana problem illustrates the intermediate stage between non-representational cognition and cognition taking place in anything like a language. As he says, chimpanzees need not manipulate a "descriptive model" of the situation; all they need is "coordination of a movement sequence" (p. 10). But he also allows that they can imagine the process, i.e. represent it to themselves, before they do it. Thus, he wants non-linguistic visualizations to be continuous with non-representational cognition and, like the later Wittgenstein, he wants linguistic representations to be continuous with non-linguistic representations, not something radically different from them as anything like an AI programme would be. And he may well be right.
To show that he is, he needs to give us more by way of a theory of representation. Clancey sketches a theory of how rules might be represented on his new model: "representing [rules] is [also] a process of constructing perceptual categorizations and categorizations of sequences" (p. 11); representations of rules are also merely categorially structured sequences of responses to environments. He now needs to show that representation of objects, processes, states outside us, representations that refer to things outside themselves, can be relevantly similar, linguistic representation included. He needs to argue that referring representations can also be dispositional, stored as dispositions, and so on. A book-length version of his model is soon to appear (Clancey, forthcoming). I expect that it addresses this need.
9. Abstraction in Situated Cognition
We now have a fairly adequate sketch of Clancey's model of natural cognition. Where does it leave us with respect to abstraction? Here Clancey says some interesting things. In addition to descriptions that are abstract by virtue of representing relationships, general properties, patterns, abstract objects, and other non-observable or partially non-observable phenomena, we clearly have to postulate abstract conceptual coordinations that are non-linguistic: "perceptual categorizations and categorizations of sequences" (p. 11). Some of these coordinations will refer beyond themselves, i.e., be representations as standardly conceived; some need not even do that. (Probably all or almost all representations in natural language will be of the former kind.) But none of them need be compositionally-constructed assemblies of elements. Clancey offers an interesting taxonomy of these categorizations; they fall into at least three types (p. 17).
The next point he makes, and a very interesting one, is that, if perceptual categorizations and categorizations of sequences are on approximately the same level of abstraction as descriptions, there is also a higher level of abstraction than either: activity conceptualizations (p. 18). This higher level contains the conceptualizations of activities (and also, I think, the conceptualizations of our projects, values, etc.) that set the problem contexts of cognition, guide the selection of certain categorizations, certain behaviours, etc., over others. There is no more reason to think that these abstractions are always expressed in natural language than that all categorizations are in natural language.
So how does Clancey's picture stack up? I think that it is pretty plausible overall. Compared to the computational cognitive science of the 1970s and '80s, it also has the signal virtue of caring about psychological realism. Earlier work was happy if its models matched the observed data; whether the inferential structures in the model in any way resembled the mechanisms generating the data in the cognitive system was a matter of little interest. If the question came up, its practitioners tended to deny that any correspondence was intended. Clancey by contrast cares about how things actually work in cognitive systems. That being said, I think that there are gaps in his model; some of them, as I said, are doubtless addressed in his forthcoming book. The most important we have already mentioned: Clancey needs a more detailed theory of representation, including linguistic representation. Here the Wittgenstein/Rosch account of lexical meaning might point a useful direction. On representation more generally, the work of naturalistic and evolutionary neo-Wittgensteinians such as Dennett and Putnam might be useful.
The relationship between representational and non-representational categorizations could also be clearer. Clancey maintains that there is a non-linguistic element in even the most linguistic of representations (p. 23). Put so generally, this point has been known for a long time (though it has been overlooked by some theorists). Kant, for example, insisted that cognition requires both concepts and percepts, where concepts as he conceived them were linguistic or at least propositional: "Thoughts without content are empty, intuitions [perceptions] without concepts are blind" (1787, B75). Indeed, it was the centrepiece of his epistemology. What we now need is an account of how the two are linked.
Then there is the challenge of externalism, the view that the content of representations and descriptions is not a property of representations or words by themselves but consists of their causal relationships to the world. Externalism currently dominates philosophical thinking on these subjects. It is also one form of the idea that cognition is situated, in something resembling the old physical sense, indeed: on externalism, cognition is not just a matter of what goes on in the head but also of how the head is hooked up to the world. Externalism and the version of situatedness that Clancey favours need to be brought into relation.
Finally, there is connectionism. When Clancey contrasts natural and artificial cognition, he always has classical AI programmes run on a serial computer in mind: expert systems, production systems--that kind of thing. The new kid on the block, neural networks, needs to be brought into the analysis. They operate (or so many maintain) much more like natural cognitive systems as Clancey characterizes the latter than serial systems do. Yet they are as artificial as any expert system.
10. Abstract Cognition and Language
I will close these comments by connecting two of the papers to an issue that has captured a lot of attention in another part of the cognitive science spectrum: folk psychology, and the extent to which its picture of cognition, including abstract cognition, reflects how we actually do it.
As a test case, consider Halford, Wilson, and Phillips' model. They argue that cognizing over relations is more abstract and also more powerful than associative cognizing.(4) As they see it, abstract cognition is a matter of identifying and manipulating multidimensional relationships, relationships between other relationships, and so on, in a function/argument framework rather than a framework of associations. Part of the debate over folk psychology is a debate about what such cognizing is like. One side views it as at least something like the view of cognition that Clancey rejects, the other as rather like the view he favours.
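To make the function/argument contrast concrete, here is a minimal sketch of my own (the names and encoding are illustrative assumptions, not Halford, Wilson, and Phillips' formalism): an association simply pairs items, whereas a relation is a predicate with explicit, addressable argument slots, so a higher-order relation can take whole relations as its arguments.

```python
# An association merely pairs a cue with a response; it has no internal
# argument structure that other processes can address.
association = {"smoke": "fire"}

# A relation is a predicate applied to explicit arguments. Encoding it
# as (predicate, arg1, arg2, ...) keeps each role separately addressable.
larger = ("LARGER-THAN", "elephant", "mouse")
smaller = ("LARGER-THAN", "whale", "elephant")

# Because relations are structured objects, a higher-order relation can
# take whole relations as its arguments -- abstractions over abstractions.
transitive = ("IMPLIES", (smaller, larger), ("LARGER-THAN", "whale", "mouse"))

def arity(relation):
    """Dimensionality of a relation = number of argument slots."""
    return len(relation) - 1

print(arity(larger))      # -> 2: a binary relation
print(arity(transitive))  # -> 2: a relation whose arguments are relations
```

The number of argument slots that must be bound at once gives a rough analogue of the dimensionality that, on Halford, Wilson, and Phillips' account, drives processing load.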
As children we all acquire a picture of what the entities and relationships central to cognition and action are like, entities like beliefs and desires, and relationships like: `If you desire X and believe that Y is a good way to get X, and believe that nothing else is relevant to the situation of satisfying desire X, then you will be disposed to do Y.' This picture of the mind is now called commonsense or folk psychology. In the huge literature on its value, Fodor (1975, 1987) is a keen exponent; the Churchlands are key sceptics. The study of folk psychology has extended its tentacles into all sorts of unexpected areas. For example, Baron-Cohen (1995) and his supporters argue that autism consists largely of a lack of such a picture of the mind, the so-called `theory' theory of autism. The picture of cognition presented by folk psychology is, some argue, an extremely sentential one. By this they mean that folk psychology sees thinking, etc., as taking place in something very much like sentences (Fodor has even invented a whole special quasi-language called Mentalese to do the job), using processes that are structured very much like inferences in the predicate calculus. This picture of thinking is very much like, indeed was a central component of, the computational model of the mind suggested by serial AI--the very model that is for Clancey such a massive misrepresentation of natural cognition, including natural linguistic cognition.
On the other side of the debate, P. M. Churchland (1981, 1984), Stich (1983), P.S. Churchland (1986) and many others argue that the commonsense picture of the mind as an inefficient sentence-processor completely misrepresents the nature of cognition, including cognition that takes place in language. Rather, cognition is a process of very efficient phase-space transformations in a large phase-space (that is one view, at any rate). Expression in sentences is merely a final, superficial translation for purposes of communication and perhaps certain kinds of storage. This view clearly has close parallels to Clancey's position.
Applied to Halford, Wilson, and Phillips, the issue takes the following form. Let us accept that abstract cognition is a matter of identifying and manipulating multidimensional relationships, relationships between other relationships, and so on. How are these relationships expressed in the activities of the brain? If there are states in there that have approximately the form of sentences and if there are processes that have approximately the form of moves in the predicate calculus, then to that extent the sententialist model of folk psychology and classical cognitive science will have been vindicated. If, on the other hand, "perceptual categorizations and categorizations of sequences" (p. 11) are implemented by mechanisms of some very different kind, phase-space transformations or something else equally remote from the predicate calculus, alternative pictures like Clancey's (pp. 18 and 20) will to that extent have been vindicated.
To sum up: we have examined Lehtinen and Ohlsson's and Stern and Staub's analyses of the relation of new abstract concepts to old ones, using Wittgenstein's later notion of language as use in general and Perkins' notion of an epistemic game in particular, and argued that one main way to acquire a new abstract concept, law, etc., is to acquire a new epistemic game. New epistemic games cannot be reduced to old games or packages of games. We have used Margolis's ideas about creativity and his doubly negative knowledge to illustrate these claims. Then, turning to Clancey's argument for non-linguistic knowledge and a picture of natural cognition as something utterly unlike cognition in AI systems, we have urged that, attractive as the picture is, it also needs further development in crucial ways. Finally, we looked at Halford, Wilson, and Phillips' account of how relational cognition proceeds as a test-case for the debate between sententialist and non-sententialist pictures of that particular high-level variety of natural cognition. The debate, as we saw, could have far-reaching implications for Clancey's analysis, at least as an account of more abstract forms of cognition.
Baron-Cohen, S. 1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Churchland, P. M. 1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy 78, No. 2, pp. 67-90.
Churchland, P. M. 1984. Matter and Consciousness. Cambridge, MA: MIT Press/A Bradford Book.
Churchland, P. S. 1986. Neurophilosophy. Cambridge, MA: MIT Press/A Bradford Book.
Clancey, W. forthcoming. Situated Cognition: On Human Knowledge and Computer Representations. Cambridge: Cambridge University Press.
Clancey, W. undated. The Conceptual Nature of Knowledge, Situations, and Activity.
Fodor, J. 1975. The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press/A Bradford Book.
Kant, I. 1787. Critique of Pure Reason, 2nd edition. Trans. N. Kemp Smith. London: Macmillan.
Ohlsson, S. 1993. Abstract schemas. Educational Psychologist 28(1), pp. 51-66.
Rosch, E. 1978. Principles of categorization. In E. Rosch and B. B. Lloyd, eds. Cognition and Categorization. Hillsdale, NJ: Erlbaum Associates.
Sacks, O. 1985. The Man Who Mistook His Wife for a Hat. New York: Harper and Row.
Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press/A Bradford Book.
Wittgenstein, L. 1921. Tractatus Logico-Philosophicus. Trans. D. Pears and B. F. McGuinness. London: Routledge and Kegan Paul.
Wittgenstein, L. 1953. Philosophical Investigations. Trans. G. E. M. Anscombe. Oxford: Basil Blackwell.
1. I will refer to the papers in this special issue of the International Journal of Educational Research by authors' name(s) and, where appropriate, page number.
2. Perhaps I should also indicate that, in my view, we can accept the point about the constructive element in abstract concept acquisition without having to accept another of their desiderata for a theory of abstraction, that it "should dispense with the correspondence notion of representation" (p. 11). Why should we dispense with the notion of correspondence? Certainly the great scientists such as Galileo, Newton, etc., thought that their concepts and laws corresponded to the way things are--that masses really do accelerate at the same rate in a vacuum, that the laws governing the universe really are the same as the laws governing small objects on the face of the earth, and so on. In the face of the strong intuition in both science and everyday life that beliefs correspond or fail to correspond to the way things are, that this is what makes them true or false, we would need a strong argument to reject the notion.
3. Activity is also relative to a description. The same movement, say the movement of a hand, can be a great many different activities, say waving, warning, dismissing, agreeing, skipping part of an argument, etc., etc., and the movement becomes any of these actions only as a result of the intentions of the actor and therefore under a description. This intentional/semantic opacity of action would seem to be grist for Clancey's mill.
4. I regret that I have not said more about Halford, Wilson and Phillips' paper but I don't have more to say. The points they make about relationships, the dimensionality of relationships, the limits on the number of dimensions that different kinds of animals and humans at different stages of development can handle, and the relation of dimensionality to processing load all strike me as well-argued and well-taken.