Autonomous Communicative Behaviors in Avatars
by
Hannes Högni Vilhjálmsson
Submitted to the Program in
Media Arts and Sciences,
School of Architecture and Planning,
on May 9, 1997, in partial fulfillment of the requirements for
the degree of
Master of Science
in Media Arts and Sciences
at the
Massachusetts Institute of Technology
Most networked virtual communities, such as MUDs (Multi-User Domains), where people meet in a virtual place to socialize and build worlds, have until recently been text-based. However, such environments are now increasingly going graphical, displaying models of colorful locales and the people who inhabit them. When users connect to such a system, they choose a character that becomes their graphical representation in the world, termed an avatar. Once inside, users can explore the environment by moving their avatar around. More importantly, the avatars of all other users currently logged onto the system can be seen and approached.
Although these systems have become graphically rich, communication is still mostly based on text messages or digitized speech streams sent between users. That is, the graphics merely provide scenery and indicate the presence of a user at a particular location, while the act of communication is still carried out through a single word-based channel. Face-to-face conversation in reality, however, makes extensive use of the visual channel for interaction management, in which many subtle and even involuntary cues are read from stance, gaze and gesture. This work argues that modeling and animating such fundamental behavior is crucial to the credibility of the virtual interaction, and it proposes a method, drawing on work in context analysis and discourse theory, to automate the animation of important communicative behavior. BodyChat is a prototype system that allows users to communicate via text while their avatars automatically animate attention, salutations, turn-taking, back-channel feedback and facial expression, as well as simple body functions such as the blinking of the eyes.
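As a rough illustration of the idea only (not the BodyChat implementation itself, which is described in the body of the thesis), the hypothetical Python sketch below shows how a few conversational cues available to the system, such as whom the user is facing, whether the user is willing to converse, and who currently holds the turn, might be mapped automatically onto avatar behaviors. All names and rules here are invented for illustration.

    # Hypothetical sketch: deriving automatic avatar behaviors from
    # conversational state. Not the actual BodyChat code; names and
    # behavior rules are invented for illustration.

    from dataclasses import dataclass


    @dataclass
    class AvatarState:
        """Visible, automatically animated behaviors for one avatar."""
        gaze_target: str = "away"           # whom the avatar is looking at
        gesture: str = "idle"               # e.g. "wave", "head_nod", "beat"
        facial_expression: str = "neutral"  # e.g. "smile", "raised_brows"


    def update_behaviors(state: AvatarState,
                         partner: str,
                         willing_to_chat: bool,
                         partner_is_speaking: bool,
                         user_typing: bool) -> AvatarState:
        """Fill in nonverbal behaviors from the conversational situation.

        The user only types text and indicates willingness to converse;
        attention, feedback and turn-taking cues are animated automatically,
        keeping the visual channel active without explicit user control.
        """
        if not willing_to_chat:
            # Signal unavailability: avert gaze, no engagement display.
            state.gaze_target, state.gesture = "away", "idle"
            return state

        # Attention: orient toward the conversational partner.
        state.gaze_target = partner

        if partner_is_speaking and not user_typing:
            # Back-channel feedback while listening.
            state.gesture = "head_nod"
            state.facial_expression = "raised_brows"
        elif user_typing:
            # Holding the turn: gesture while "speaking", glance away briefly.
            state.gesture = "beat"
            state.gaze_target = "away"
        else:
            # Yielding the turn: look at the partner and wait.
            state.gesture = "idle"
            state.facial_expression = "neutral"
        return state


    if __name__ == "__main__":
        avatar = AvatarState()
        print(update_behaviors(avatar, partner="Alice",
                               willing_to_chat=True,
                               partner_is_speaking=True,
                               user_typing=False))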
Thesis Supervisor: Justine Cassell
Title: AT&T Career Development Assistant Professor of Media Arts and Sciences