

3.3 Animacy and Computation


Animacy is a primary domain, in that it is not itself grounded in metaphor but in more basic processes. However, it can and does serve as a source domain for the metaphorical understanding of other realms. Obvious and fanciful versions of these metaphors are quite common in everyday life, as in utterances like "this bottle just doesn't want to be opened". Some mechanical systems, particularly vehicles, are so complex that they need to be treated as having moods, and are thus anthropomorphized. I will refer to such usages as animate metaphors. Computer systems in particular are prone to anthropomorphization, due to their complexity and apparent autonomy:

Anthropomorphization[9] -- Semantically, one rich source of jargon constructions is the hackish tendency to anthropomorphize hardware and software. This isn't done in a naive way; hackers don't personalize their stuff in the sense of feeling empathy with it, nor do they mystically believe that the things they work on every day are 'alive'. What is common is to hear hardware or software talked about as though it has homunculi talking to each other inside it, with intentions and desires. Thus, one hears "The protocol handler got confused", or that programs "are trying" to do things, or one may say of a routine that "its goal in life is to X". One even hears explanations like "... and its poor little brain couldn't understand X, and it died." Sometimes modeling things this way actually seems to make them easier to understand, perhaps because it's instinctively natural to think of anything with a really complex behavioral repertoire as 'like a person' rather than 'like a thing'.

The computer bridges the animate and inanimate worlds as few other objects can. Although even humans can become subject to physical rather than intentional explanations, the computer, as a manufactured object designed around animate metaphors, inherently straddles the divide. People who interact with computers must do the same. Sherry Turkle has written extensively on children's reactions to computers as "marginal objects", not readily categorizable as either living or inanimate:

Computers, as marginal objects on the boundary between the physical and the psychological, force thinking about matter, life, and mind. Children use them to build theories about the animate and the inanimate and to develop their ideas about thought itself (Turkle 1984, p31).

But animate metaphors for computation can be problematic as well. AI, which might be considered an attempt to build an extended metaphorical mapping between humans and machines, impinges upon highly controversial issues in philosophy, and gives rise to contradictory intuitions, intense passions, and stubborn disagreements about whether computational processes can truly achieve intelligence (Dreyfus 1979), (Penrose 1989), (Searle 1980). The stereotypical AIs and robots seen in fiction are lifelike in some ways, but mechanical in others--inflexible, implacable, even hostile. Computers are said to lack certain key components of humanity or aliveness: consciousness, free will, intentionality, emotions, the ability to deal with contradiction. People exposed to computers will often end up defining their own humanness in terms of what the computer apparently cannot do (Turkle 1991). Something about the nature of computers, no matter how intelligent they are, seems to keep them from being seen as full members of the animate realm.

Our focus on animate metaphors allows us to sidestep this often sterile philosophical debate. Instead, we will examine the ways in which computational practice makes use of animism to structure itself.

Within the field of computation, anthropomorphic metaphors are sometimes decried as wrongheaded or harmful:

Never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so...The reason for this is that the anthropomorphic metaphor...is an enormous handicap for every computing community that has adopted it...It is paralyzing in the sense that because persons exist and act in time, its adoption effectively prevents a departure from operational semantics and, thus, forces people to think about programs in terms of computational behaviors, based on an underlying computational model. This is bad because operational reasoning is a tremendous waste of mental effort (Dijkstra 1989).

But most computer practitioners are not of the opinion that anthropomorphism (or operational thinking, for that matter) is such a bad thing. Indeed, the deliberate use of anthropomorphism has been a promising tool in computer education:

One reason turtles were introduced [into Logo] was to concretize an underlying heuristic principle in problem-solving--anthropomorphize! Make the idea come alive, be someone...Talking to inanimate objects and thus giving them life is an implicit pattern in our lives; we have tried to turn it to our advantage and make it an explicit process (Solomon 1976).

A common worry about anthropomorphic descriptions of computational systems is the danger of causing overattribution errors. That is, people might draw incorrect inferences about the abilities of computers, assuming that they share more properties of the animate than they actually do, such as the ability to reason or to learn. Sometimes this is dealt with by instructing students to regard the little people (see section 3.3.2.1) as particularly dumb, mechanical instruction followers with no goals, judgment, or intelligence of their own, much like a player of the game "Simon Says". In Papert's phrase, "anthropomorphism can concretize dumbness as well as intelligence".

This section explores the animate roots of computation and the ways in which the programming paradigms introduced in the last chapter make use of animate metaphors. We will also look at the relation of animism to human interface design, artificial intelligence, and the teaching of programming. The questions to be asked are which attributes of animacy are mapped into the computational domain, and onto which of its parts.

3.3.1 Animism at the Origins of Computation

The entire enterprise of computation might be seen as being built around a series of anthropomorphic metaphors, beginning with Turing's description of what are now called Turing machines. At the time Turing wrote, the word "computer" referred to a human performing calculations, one who was destined to be replaced by the machine he described:

Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book... the behavior of the computer at any moment is determined by the symbols which he is observing, and his `state of mind' at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment... Let us imagine the operations performed by the computer to be split up into `simple operations'... Every such operation consists of some change of the physical system consisting of the computer and his tape [a one-dimensional version of the squared paper]... The operation actually performed is determined, as has been suggested by the state of mind of the computer and the observed symbols. In particular, they determine the state of mind of the computer after the operation. We may now construct a machine to do the work of this computer (Turing 1936).

Here the anthropomorphism underlying computation is made quite explicit. Computation is first conceived of as a human activity, albeit one carried out in a rather formal and limited manner. The metaphorical mapping from the human domain has some questionable components--in particular, the characterization of the human computer as having a `state of mind' that is discrete and drawn from a finite set of possible states. Nonetheless this metaphor has proven extraordinarily powerful, and forms the basis of the imperative model of programming described in section 2.2.3.1.

This prototypical computer, while described in animate terms, has few of the properties that are central to animacy. The computer is not particularly autonomous--it is "doing what it is told, no more, no less". Its purpose, if any, comes from outside itself--it has no representation of or access to its own goal. It has no relationship with a world outside of its own tape memory, and so certainly cannot be said to be reactive. It is, however, repeatedly performing actions, and so despite its limitations it is seen as animate. I call this particular reduced form of animism "rote instruction follower animism", to contrast it with the concept of animacy used in everyday life, which is linked to autonomy, purposefulness, and consciousness.

It is interesting to contrast this form of computational animism with the related imagery found in the discourse of computation's close cousin, cybernetics. Cybernetics and the technology of computation developed in parallel during and after World War II. Cybernetics concerned itself precisely with those properties of animacy marginalized by Turing's hypothetical machine and the later actual machines: purposefulness, autonomy, reactivity, and the importance of the relationship between an organism and its environment. The two sciences had their origins in different technological tasks (feedback control in the case of cybernetics, code-breaking in the case of computation), different mathematical bases (continuous time series vs. discrete symbolic manipulation), and ultimately different ideas about the mind (regulative vs. cognitive). While the two approaches were discussed together during the period of the Macy conferences (Heims 1991), they ultimately went their separate ways. Recent developments in AI, such as the situated action approach to behavioral control (Agre and Chapman 1987), are in some sense an attempt to bring back into AI the cybernetic elements that were split off.

3.3.2 Animacy in Programming

The various programming models described in the last chapter can be analyzed in terms of animacy. This means looking at what each model offers the programmer as tools for thinking about action and actors. Like trans-frames, statements of programming languages can be considered as representations of change. If we assume that the operation of programs is understood, at least in part, by means of the application of such frames, we can ask questions about the contents of the terminals of such frames. What are the actions, and what are the ACTORs, ORIGINs, and DESTINATIONs? The identities of the ACTORs are of particular interest: who or what is making these actions happen, and who is in charge? What images of control can inform our understanding?

To take an obvious example, consider an imperative assignment statement like LET A=B. In the above terms, it is a command to move a value OBJECT (the contents of B) from a SOURCE (B) into a DESTINATION (A). The statement is represented metaphorically as a physical transfer (PTRANS) of an object from one place to another, although neither the objects nor the places are physical in any tangible sense. But what about the ACTOR terminal of the trans-frame that represents this action? Who is performing the move operation?

Imperative statements or commands like the one above are like imperative sentences--they do not explicitly name their ACTOR; instead, they have an implied actor who is the recipient of the command. In the computational case, the computer itself takes on the role of ACTOR when it executes the instruction. This mode of address meshes with the image of the computer as an animate but dumb being, one that must be instructed in detail and cannot really take any action on its own.

While object-oriented programming languages also follow a basically imperative model, the image of their operation is somewhat different, because objects are available to fill ACTOR terminals. In OOP, programs are written as methods for specific object classes, and are seen as executing on behalf of particular objects. This tends to make the object, rather than the computer as a whole, the implied actor of an imperative instruction occurring inside a method. The fact that methods can refer to their owning objects through reserved words like self heightens this effect.

Message-passing OOP languages introduce MTRANS or information-transmission events to the metaphorical underpinnings of the language. A message send is an action that transmits a request or order from one object to another (or to itself). OOP languages often use animistic names like ask or send for this operation. In a frame representation of a message-passing action, objects will occupy both the ACTOR and DESTINATION terminals. Once the message is transmitted, the DESTINATION object executes its own method for the message and, if it should take any action, will fill the ACTOR role of the next operation.
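As a rough sketch of how these frame roles line up with ordinary object-oriented code, consider two objects exchanging a message. The example is in Python rather than in any system discussed here, and the class and method names are invented for illustration:

class Account:
    """A toy object that can be asked to act on its own behalf."""
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

    def pay(self, amount, payee):
        # Inside a method, the owning object (self) is the implied ACTOR.
        self.balance -= amount
        # A message send: self is the ACTOR, payee is the DESTINATION.
        payee.receive(amount)

    def receive(self, amount):
        # Once the message arrives, the receiving object becomes the ACTOR
        # of whatever its own method goes on to do.
        self.balance += amount

alice = Account("alice", 100)
bob = Account("bob", 50)
alice.pay(30, bob)                   # read animately: "ask alice to pay bob 30"
print(alice.balance, bob.balance)    # prints: 70 80

Read animately, the call alice.pay(30, bob) casts alice as the actor of the payment and bob as the destination of the message; read mechanically, the same statements are just imperative assignments executed by the computer.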

Message-passing languages thus provide a richer conceptual image of their activity than can be found in the basic imperative model, even though the computations they support may be identical. The activity is no longer the actions of a single actor, but a complex of interactions between objects. Even though, in an ordinary object-oriented language, the objects are not in any real sense active or autonomous (that is, they only take action when explicitly activated from the outside), they provide better handles for programmers to apply animistic thinking. The partially animistic nature of objects creates natural "joints" that allow the programmer or program reader to carve up the activity of the program into small segments, each of which is more readily understood in animate terms. Whereas previously the computation was seen as a single actor manipulating passive objects, under OOP objects are seen as taking action on their own behalf.

paradigm          | organizing metaphors and principles              | treatment of agents
------------------+--------------------------------------------------+---------------------------------------
imperative        | action, instruction-following                    | single implicit agent
functional        | functional mappings                              | no agents
dataflow          | flow of values through network                   | cells can be seen as "pulling" agents
procedural        | society of computing objects; call and return    | procedures can be seen as agents;
                  | metaphors; combines imperative and functional    | little-person metaphor
                  | modes                                            |
object-oriented   | communication metaphors, message-sending,        | encapsulation helps to present
                  | encapsulation                                    | objects as agents
constraint        | declarations of relationships to be maintained   | constraints can be seen as agents
                  |                                                  | with goals

Table 3.2: Summary of metaphors and agents in programming paradigms.

Functional languages, by contrast, are those in which action, time, and change have all been banished for the sin of theoretical intractability. Such languages do not support animate thinking at all, and there are no implied actors to be found. Some programming environments that are quasi-functional, like spreadsheets, are amenable to a rather limited animate interpretation, in the sense that spreadsheet cells that have a functional definition can be thought of as actively "pulling in" the outside values needed to compute their own value. Spreadsheet cells are also responsive to change, and purposeful in that they act on their own to maintain their value in the face of change. But because they cannot act to change the world outside of themselves, they are not going to be understood as animate in any broad sense.
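To make the contrast concrete, here is a small sketch (in Python, with invented names, not taken from any actual spreadsheet system) of the two readings: a purely functional definition names no actor at all, while a spreadsheet-like cell can be read as "pulling in" the values it depends on whenever it is asked for its own value:

# Purely functional: a mapping from inputs to an output, with no implied actor.
def total(price, tax_rate):
    return price * (1 + tax_rate)

# A spreadsheet-like cell: its formula "pulls" current values from other cells.
class Cell:
    def __init__(self, formula=None, value=None):
        self.formula = formula    # a zero-argument function, or None for a constant
        self.value = value

    def get(self):
        # When asked, the cell recomputes itself from whatever it depends on,
        # which is what lends it a weakly animate, "pulling" character.
        if self.formula is not None:
            self.value = self.formula()
        return self.value

price = Cell(value=10.0)
tax = Cell(value=0.05)
cost = Cell(formula=lambda: total(price.get(), tax.get()))

print(cost.get())    # 10.5
price.value = 20.0
print(cost.get())    # 21.0 -- the cell re-derives its value after the change

Note that the cell acts only when queried and only on its own value; it cannot reach out and change anything else, which is why this interpretation remains a limited form of animacy.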

Procedural languages blend the imperative and functional models, and so admit of animate interpretation to varying degrees, depending on the style of the program. The modularization of a program into a collection of procedures allows each procedure to be seen in animate terms; this is exemplified by the little-person metaphor, described below.

Constraint systems and languages vary in their treatment of actors and agency. Like the functional model, they emphasize declarative statements of relationships rather than specifications of action, and so tend to be actorless. But some, like ThingLab, explicitly treat the constraints as objects with procedural methods. This suggests that the constraints themselves could take the role of actors in the system. In practice, however, constraints are seen as passive data used by a unitary constraint-solver. The procedural side of a constraint consists of single statements that are manipulated as data objects by a planner, rather than being active entities in their own right.
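A minimal sketch of this arrangement (in Python, with an invented two-variable equality constraint and a deliberately simple solver) may make the point clearer: the constraint objects carry procedural methods, but it is the unitary solver that decides when to run them, so the constraints read as passive data rather than as actors:

class EqualityConstraint:
    """Declares that the target variable should always equal the source variable."""
    def __init__(self, source, target, bindings):
        self.source, self.target, self.bindings = source, target, bindings

    def satisfied(self):
        return self.bindings[self.source] == self.bindings[self.target]

    def enforce(self):
        # The procedural side of the constraint: a single repair action.
        self.bindings[self.target] = self.bindings[self.source]

def solve(constraints):
    # The solver, not the constraints, is the actor: it scans the constraint
    # set and invokes enforcement until everything holds (or it gives up).
    for _ in range(len(constraints) + 1):
        pending = [c for c in constraints if not c.satisfied()]
        if not pending:
            return True
        for c in pending:
            c.enforce()
    return False

bindings = {"a": 1, "b": 7, "c": 0}
constraints = [EqualityConstraint("a", "b", bindings),
               EqualityConstraint("b", "c", bindings)]
solve(constraints)
print(bindings)    # {'a': 1, 'b': 1, 'c': 1}

A system like ThingLab could instead be read as letting each constraint act for itself, but even then the planner that orders and invokes the constraints remains the locus of control.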

Animacy is a powerful conceptual tool with which to analyze programming paradigms (see Table 3.2 for a summary). In particular, it can help explain the specific appeal and utility of object-oriented programming, which has in recent years become an extremely popular model for commercial programming. OOP is fundamentally just a way of organizing a program, and it has not always been clear why or even if it is a superior way to do so. Animism provides a theory about why OOP is powerful: its particular style of modularization divides up a program so that animate thinking can be readily applied to any of its parts. Functional programming, on the other hand, provides a model that systematically excludes animism, which might explain why, despite its undeniable theoretical advantages, it has little popularity outside of a small research community.

Animism lurks in the background of these classic programming models. Can it be used as an explicit basis for the design of new ones? Agent-based programming, described later, attempts to do just that. In particular, it seeks to provide a language for programs that are understandable in animate terms, but with the ACTOR slot filled by objects that partake of a higher degree of animacy than the rote instruction followers found in the imperative and object-oriented models. These agents should be capable of being seen as possessing the main qualities of real animacy: purposefulness, responsiveness, and autonomy.

3.3.2.1 The Little-Person Metaphor

The little-person metaphor is a teaching device used to explain the operation of Logo to beginners. It was invented by Seymour Papert in the early days of Logo development; the description here derives from those in (Harvey 1985) and (diSessa 1986). Under this metaphor, the computer is populated with little people (LPs) who are specialists at particular procedures and "hire" other LPs to perform subprocedures. LPs are normally asleep, but can be woken up to perform their task. Whenever an LP needs a subprocedure to be run, it wakes up and hires an LP who specializes in that procedure, and then goes to sleep itself. When the hired LP finishes executing its procedure, it reawakens the caller. A "chief" LP serves as the interface to the user, accepting tasks from outside the LP domain and passing them on to the appropriate specialists.
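To suggest how the metaphor lines up with ordinary call and return, here is a sketch in Python (the procedure names are invented; a classroom presentation would of course use Logo itself). Each procedure is read as a little person: calling it "wakes and hires" a specialist, and its return "reawakens" the caller:

def draw_square(size):
    # The SQUARE specialist: to do its job it hires the SIDE and TURN
    # specialists four times, going to sleep each time until they finish.
    for _ in range(4):
        draw_side(size)    # hire the SIDE little person
        turn(90)           # hire the TURN little person

def draw_side(size):
    print(f"forward {size}")

def turn(degrees):
    print(f"right {degrees}")

# The "chief" little person: takes a request from outside the LP domain
# and passes it on to the appropriate specialist.
draw_square(70)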

The LP metaphor is an explicitly animate metaphor for a procedural language. Procedures are still rote instruction followers, accepting commands from the outside and executing fixed scripts, incapable of autonomous activity in any sense. Still, the LP metaphor succeeds quite well in "animating" procedures and making their activity understandable. In some respects it is easier to see a procedure as animate than the computer as a whole. I believe this is because a procedure, being specialized for a particular task, brings with it a feeling of purposefulness. LPs can thus be seen as fulfilling a task rather than simply carrying out a sequence of instructions. The social and communicative aspects of the metaphor are also important, since they give a metaphoric basis to the relationships between procedures.

The little-person metaphor has been quite successful as a device for teaching the detailed workings of the Logo language.[10] Sometimes the model is taught through dramatization, with students acting out the parts of the little people. Not only does the metaphor provide a tangible model for the otherwise abstruse idea of procedure invocation, it also turns procedures into animate objects, allowing students to identify with them and to project themselves into the environment.

3.3.3 Body- and Ego-Syntonic Metaphors

The Logo turtle was developed to encourage what Papert calls syntonic learning (Papert 1980, p.63). Turtles are said to be body syntonic, in that understanding a turtle is related to and compatible with learners' understandings of their own bodies. The turtle may also be ego syntonic in that "it is coherent with children's sense of themselves as people with intentions, goals, desires, likes, and dislikes". Syntonic learning, then, is any form of learning which somehow engages with the student's existing knowledge and concerns, in contrast to the more common style of dissociated learning (such as the rote learning of historical events or multiplication tables).

Papert's theory of syntonic learning resonates with our theory of metaphorically-structured understanding, which holds that all real conceptual learning involves building connections with existing knowledge. Looking at syntonic learning as a metaphor-based process might give us a way to think about how it works. Body syntonic learning involves creating a correspondence between the body of the learner and some anthropomorphic element in the problem world (the turtle). The learner is thus able to bring to bear a large store of existing knowledge and experience to what was an unfamiliar domain (geometry, in the case of the turtle). Metaphors based on the body have the advantage of universality: everyone has a body and a large stock of knowledge about it, even young children. The power of the turtle lies in its ability to connect bodily knowledge with the seemingly more abstract realm of geometry.

The utility of body-syntonicity for problem solving has also been explored by (Sayeki 1989). Sayeki examined how people project themselves into an imaginary scene, such as the world of a physics or geometry problem, and found that problem-solving times could be drastically decreased by including cues that allowed solvers to map the problem onto their own bodies (for example, making a geometric figure resemble a body by the addition of a cartoon head). By introducing anthropomorphic forms (which he labels kobitos, or little people) he can apparently make it much easier for subjects to find the appropriate mappings from their own bodies to the objects of the problem, a process he refers to as "throwing-in". This is an interesting case of the deliberate deployment of metaphor to aid problem-solving. It also highlights the active nature of the process of metaphorical understanding: learners actively project themselves into the problem domain, as is the case with the turtle.

The realization of body-syntonicity through turtles and similar physical or graphic objects that permit identification would seem to be a clear success. Ego-syntonicity, however, is more problematic. The Logo turtle is only weakly ego-syntonic. A turtle always has bodily properties (position and heading), but it does not in itself have goals and intentions of its own, or behaviors that take the environment into account (Resnick and Martin 1991). If it has any kind of ego, it is the simple-minded one of the rote instruction follower. This does allow the learner to project some of their own mental activity onto the turtle, but only a highly limited subset. The turtle can, in theory, be programmed to have ego-syntonic properties like goals, but these are not inherent in the basic Logo turtle or the Logo environment. In particular, in many implementations of Logo the turtle lacks any kind of sensory capability, and thus cannot really have goals, because there is no way to verify when a goal is satisfied. Later work by Papert and his students (Papert 1993) (Martin 1988) addressed this issue by augmenting the turtle with sensors so that it could have responsive behaviors, and gave more emphasis to the role of feedback. However, the Logo language is still essentially oriented around the procedural model, rather than a reactive or goal-oriented one.
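As a rough illustration of the difference (the light sensor and its interface here are invented for this sketch, and are not part of standard Logo), a turtle with even one sensory reading can be given a goal whose satisfaction it can verify, which a purely procedural turtle cannot:

import random

def read_light_sensor():
    # Stand-in for a real sensor reading; invented for this sketch.
    return random.random()

def forward(distance):
    print(f"forward {distance}")

def seek_light(threshold=0.8, max_steps=100):
    """A turtle with a verifiable goal: keep moving until it senses enough light."""
    for _ in range(max_steps):
        if read_light_sensor() >= threshold:
            return True    # the goal can be checked, so it can be satisfied
        forward(10)        # otherwise keep acting toward the goal
    return False

seek_light()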

There are strong emotional factors at work in this process of projection and identification. Child programmers may have different styles of relating to the computer that affect how, or whether, they can identify with and project themselves onto the machine. Such styles are strongly conditioned by gender and other social factors (Turkle 1984) (Papert 1993). It has been suggested, for instance, that the child-rearing role of women predisposes them to "take pleasure in another's autonomy", while men are more likely to be preoccupied with their own autonomy and thus more prone to domination and mastery than to nurturing the autonomy of others (Keller 1985). This difference in style has been observed in the different programming styles of boys and girls learning to program (Motherwell 1988) (Turkle and Papert 1991). In Motherwell's study, girls were considerably more likely than boys to treat the computer as a person.

3.3.4 Anthropomorphism in the Interface

Human interface design has considered anthropomorphism in the form of "agents". In this context, an agent is an intelligent intermediary between a user and a computer system, visualized as an anthropomorphic figure on the screen with whom the user interacts, often by means of natural language (Apple Computer 1991). Interface agents are still mostly a fantasy, although there are some explorations of anthropomorphic interfaces with minimal backing intelligence (Oren, Salomon et al. 1990), as well as efforts to use more caricatured, less naturalistic anthropomorphic metaphors that use emotion to indicate the state of a computational process (Kozierok 1993).

There is a long-standing debate in the interface community about the utility and ethics of agents (Laurel 1990). Anthropomorphic agents promise easier-to-use interfaces for novices, but they also threaten to isolate more experienced users from the ability to directly control their virtual world (Shneiderman 1992) (Lanier 1995). Other ethical issues revolve around the question of whether having a computational system present itself as a mock person requires or elicits the same sort of moral attitudes that apply to a real person, along with the even more provocative notion that dealing with simulated people who can be abused (deleted or deactivated, for instance) will lead to similarly callous attitudes towards real people.

Nass demonstrated that people interacting with computers are prone to treat them as social actors when the interaction is framed appropriately (Nass, Steuer et al. 1993). That is, if the interaction is framed as a conversation, users will apply the usual social rules of conversation to it (for example, praising others is considered more polite than praising oneself). The subjects of the experiments applied these rules even though they believed that such rules were not properly applied to computers. This research suggests that people can be induced to take an animate view (in Goffman's terms, to apply a social framework) by certain specific cues, such as voice output. The cueing process seems to operate almost beneath conscious control, much like the motional cues Michotte used to induce perceptions of animacy.

Many interfaces are not as explicitly anthropomorphic as those above, but incorporate a few cues that induce a mild degree of animism. The Logo language, for example, is designed to use natural-language terms in a way that encourages a form of interface anthropomorphism. Commands, syntax, and error messages are carefully crafted so that they resemble natural language communication, casting the Logo interpreter, procedures, or turtle as communicating agents. For instance, if a user gives Logo an unrecognized command (say SQUARE), it responds with "I DON'T KNOW HOW TO SQUARE", rather than something on the order of "Undefined procedure: SQUARE". The student can then "instruct" Logo by saying:

TO SQUARE
REPEAT 4 [FD 70 RT 90]
END

The interesting detail here is that the syntax for procedure definition makes the definition resemble instruction in natural language: "To [make a] square, do such-and-such...". By putting the user in the position of addressing the computer (or the turtle) in conversation, the system promotes the use of animate thinking. This syntactic design allows Logo teachers to employ the metaphor of "teaching the computer" or "teaching the turtle" to perform procedures (Solomon 1986).

Consider the different shadings of meaning present in the following different ways of expressing a textual error message:

  1. "Missing argument to procedure square."
  2. "Procedure square requires more arguments."
  3. "Procedure square needs more arguments."
  4. "Procedure square wants more arguments."

Of these, message 1 is the most formal and the least animate. It is not even a sentence, just a declaration of a condition. Message 2 casts the procedure as the subject of a sentence, a large step in the direction of animacy. Messages 3 and 4 have the same form as 2, but alter the verb so as to ascribe increasingly human attributes to the procedure. All these variants and more occur in various places in the discourse of computer science and in the discourse-like interactions between programmers and their environments. Most professional programming environments use messages like 1 or 2, while messages like 3 are found in some Logo environments, in keeping with the language's use of animate and natural-language constructs (see below). Statements like message 4 are rarely used as error messages, but are common in the informal discourse of programmers.

Consider also messages of the form "I don't know how to square", issued by some Logo environments to indicate an undefined procedure. The use of the first person makes these more explicitly anthropomorphic than the earlier group. But since there is no procedure, the "I" must refer to the computer as a whole, or to the interpreter, rather than to a specific part. These presentational details indicate subtle modulations of animism in the Logo environment: the interpreter/turtle is fairly heavily animate, to the point where it can refer to itself in the first person, whereas procedures are less so: they have "needs", but do not present themselves as agents, instead letting the interpreter speak for them.

In recent years the term "agent" has become overused in the commercial software world, almost to the point of meaninglessness. Still, some core commonalities appear among most products and systems touted as agents. One common meaning is "a process that runs in the background, autonomously, without being directly invoked by a user action". Even the simplest programs of this type, such as calendar and alarm programs, have been dignified as agents. Less trivial applications include programs that try to learn and automate routine user actions (Kozierok 1993) (Charles River Analytics 1994).

This usage seems to derive from the overwhelming success of the direct manipulation interaction paradigm. Because of the current dominance of this model of computer use, any user application that operates outside of the direct manipulation mode, even something as simple as an alarm clock, becomes an agent if only by contrast.

Another popular type of "agent" is the migrating program, such as the "Knowbots" that roam the network looking for information that meets preset criteria (Etzioni and Weld 1994). Agents of this sort can transport themselves over a network to remote locations where they will perform tasks for their user. Special languages, such as Telescript (General Magic 1995), are being developed to support the creation and interchange of these agents. It is not clear to me why a program running on a remote computer is any more agent-like than one running on a local computer. The animism apparently arises from the ways in which these programs are intended to be used. Remote execution might contribute to a greater feeling of autonomy, and agent programs can migrate from machine to machine, like wandering animals, and reproduce by copying themselves. The development of computer networks has given rise to spatial metaphors (like the overused "cyberspace"), which in turn seem to encourage the use of animate metaphors for the entities that are to populate the new space. Telescript in particular features places, which are similar to processes, as one of its main object types.

3.3.5 Animate Metaphors in Artificial Intelligence

The task of Artificial Intelligence is to derive computational models of human thought, a process that is metaphorical in nature but that maps in the opposite direction from the one we have been considering here (the use of animate metaphors for computation). The two ways of knowing are intimately related, of course: metaphorical mappings of this broad sort tend to be bidirectional, so that models of the mind and of the computer are necessarily co-constructed in each other's image. I will not directly address the issues raised by computational metaphors for mind, a topic widely addressed elsewhere (see (Agre 1996) for a view that emphasizes the specific role of metaphor).

However, mental or animistic metaphors for computation are just as much a part of AI as are computational metaphors for mind. From this standpoint, AI is less a way of doing psychology than an engineering practice that relies on the inspiration of human thought for design principles and language. In other words, it makes explicit the animism that has been implicit in other approaches to computation. For example, here is a fairly typical description of the operation of an AI program, selected at random from a textbook:

When FIXIT [a program] reexamines a relation, it looks for a way to explain that relation using all but one of the precedents... [the exception] is omitted so that FIXIT can explore the hypothesis that it provided an incorrect explanation... (Winston 1992, p390) [emphasis added].

The italicized words above are verbs that put the program in the position of an animate actor. Such talk is probably necessary, in that it is the most compact and meaningful way to communicate an understanding of the program to the reader.

AI has always pushed the frontiers of complexity in programming and thus has generated many new metaphors for computational activity, metaphors which are not necessarily linked to the founding metaphor of computation as thought. An example of this would be the blackboard model of problem solving (Reddy, Erman et al. 1973), in which a blackboard metaphorically represents a scratch memory and communications medium that can be shared by multiple computational processes.
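A minimal sketch of the blackboard idea may help (in Python, with invented knowledge-source names): several independent processes read and write a shared structure, and none of them individually owns the computation:

# The shared blackboard: a scratch memory visible to every knowledge source.
blackboard = {"signal": [3, 1, 4, 1, 5], "peak": None, "report": None}

def find_peak(bb):
    # One knowledge source: posts a partial result when its inputs are ready.
    if bb["signal"] and bb["peak"] is None:
        bb["peak"] = max(bb["signal"])

def write_report(bb):
    # Another knowledge source: waits for the peak before it can contribute.
    if bb["peak"] is not None and bb["report"] is None:
        bb["report"] = f"peak value is {bb['peak']}"

knowledge_sources = [write_report, find_peak]

# A simple control loop gives each source repeated chances to act on
# whatever is currently on the blackboard.
for _ in range(2):
    for ks in knowledge_sources:
        ks(blackboard)

print(blackboard["report"])    # prints: peak value is 5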

The willingness of AI to freely leap between computational and animate language has proved both fruitful and dangerous. The fruitfulness results from the richness of animate language for describing processes, and the inadequacy of more formal language. The danger results from the tendency to take one's own metaphorical language too literally. (McDermott 1987) is a well-known diatribe against the practice of giving programs animistic names like UNDERSTAND rather than more restrained, formalistic names based on the underlying algorithm. Such naming practices, however, are a constituent part of the practice of AI. Constant critical self-evaluation is the only way to ensure that the metaphors are used and not abused.

The subfield of distributed artificial intelligence (Bond and Gasser 1988), in which AI systems are split into concurrent communicating parts, makes more explicit use of anthropomorphic metaphors to represent those parts. Because communication often dominates the activity of such systems, metaphors of communication and social interaction predominate. The metaphors employed include negotiation (Davis and Smith 1983), market transactions (Malone, Fikes et al. 1988), and corporations or scientific communities (Kornfeld and Hewitt 1981). A particularly important thread of work in this field begins with Hewitt's work on Actors, a model of concurrent computation based on message passing (Hewitt 1976) (Agha 1986). The Actor model is a computational theory, in essence a model of concurrent object-oriented programming, but the Actors research has also included a good deal of work on higher-level protocols and on ways of organizing communication between more complex actors. This has led to an interest in "open systems" (Hewitt 1986), an approach to thinking about computational systems that acknowledges their continual interaction with an external environment.

Minsky's Society of Mind represents perhaps the most figurative use of anthropomorphic language in AI, possibly out of necessity, since it is a semi-technical book written in largely non-technical language. By using anthropomorphic metaphors to talk about parts of minds, Minsky risks being accused of resorting to homunculi, and indeed has been (Winograd 1991). Dennett offers a lucid defense of homuncular theories of the mind:

It all looks too easy, the skeptics think. Wherever there is a task, posit a gang of task-sized agents to perform it -- a theoretical move with all the virtues of theft over honest toil, to adapt a famous put-down of Bertrand Russell's. Homunculi -- demons, agents -- are the coin of the realm in Artificial Intelligence, and computer science more generally. Anyone whose skeptical back is arched at the first mention of homunculi simply doesn't understand how neutral the concept can be, and how widely applicable. Positing a gang of homunculi would indeed be just as empty a gesture as the skeptic imagines, if it were not for the fact that in homunculus theories, the serious content is in the claims about how the posited homunculi interact, develop, form coalitions or hierarchies, and so forth (Dennett 1991, p261).

3.3.6 Conclusion: Computation Relies on Animate Metaphors

We have seen that animate metaphors and usages play a number of roles in computing: they appear in the discourse of computation as foundational metaphors, serve in pedagogy as teaching devices, and are taken with varying degrees of literalism in interface design and AI. Fundamentally, computers are devices that act, and thus we are prone to see them in animate terms.

But in most cases the animism attributed to the computer is of a limited kind: the "rote instruction follower" animism. In some sense this is proper and unexceptional; after all, the computer is a rote instruction follower at the level of its inner workings. If students are to understand its operations in detail, they must learn to give up their animism and look at the computer as it "really is", that is, under the mechanical viewpoint, as a device without goals or feelings.

But what if our goal is not to teach how machines work, but particular styles of thought? If our interest is in exploring programming as a means of controlling action and of building animate systems, the fact that the language of computation provides only a narrow concept of animacy is unfortunate. With that in mind, the next section looks at the idea of agents as the basis for programming languages that can support a richer notion of animacy.


[9]From the communally-written Hacker Jargon File, Version 3.0.0, 27 July 1993

[10] Seymour Papert, personal communication.

