(Appears in SIGGRAPH 96 Visual Proceedings, Technical Sketches, p. 141)

Distributed ALIVE


Kenneth B. Russell, Bruce M. Blumberg, Alex Pentland, Pattie Maes
MIT Media Lab
{kbrussel, bruce, sandy, pattie}@media.mit.edu

The goal of the Distributed ALIVE (dALIVE) system is to provide a shared virtual space, spanning several computers, that allows two or more people to interact visually with autonomous agents ("creatures") and with each other. Three requirements shape the system's design:

  1. It must work in low-bandwidth, high-latency situations (wide-area networks, for example).
  2. It must support distributed computation of the world model.
  3. It must provide "approximate accuracy" of each person's view of the world model.

Related Work
Distributed virtual environments are an active research area, and many such systems exist. Mitsubishi Electric's SPLINE focuses on providing a programmatic toolkit supporting 3D sound as well as graphics; the U.S. military's SIMNET and DIS focus on realistic, large-scale battlefield scenarios. dALIVE differs from systems like these in that it focuses on distributing the behavior-system computations of autonomous agents. Other systems send very low-level geometric information over the network to allow accurate prediction of the position and orientation of tanks, vehicles and other objects; dALIVE instead sends high-level control commands to the geometry of agents. In addition, dALIVE is geared towards fine-grained distribution of the virtual environment; for example, two apartments in the same building could be "hosted" on different computers to reduce the workload on any one machine.

Architecture
As described in Blumberg and Galyean (1995), we use the architecture shown in Figure 1 for our autonomous agents. The Behavior System decides what the creature should do at each point in time and sends these directions to the Motor System in the form of Motor Command Blocks (MCBs): small, high-level records containing a command plus its arguments, in effect procedure calls. The Controller and Motor Skills interpret these as control commands for the geometry. Human users of the system are represented by avatars in the shared virtual space; for an avatar, input from the human through a passive vision system replaces the behavior system as the decision maker.
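
To make the flow concrete, here is a minimal Python sketch of an MCB and its dispatch through the controller. All names (MotorCommandBlock, Controller, the "walk-to" skill) are assumptions for illustration; the original system was not written in Python, and its actual interfaces are not specified here.

    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    # Illustrative rendering of an MCB. The class and field names are
    # assumptions for this sketch, not dALIVE's actual data layout.
    @dataclass
    class MotorCommandBlock:
        creature_id: int         # unique identifier assigned by the server
        command: str             # name of a high-level motor skill, e.g. "walk-to"
        args: Tuple[float, ...]  # arguments to the skill, e.g. a target position

    # The controller interprets an MCB as a procedure call: it looks the
    # command up in a table of registered motor skills and invokes it,
    # which in turn drives the geometry.
    class Controller:
        def __init__(self) -> None:
            self.skills: Dict[str, Callable[..., None]] = {}

        def register(self, name: str, skill: Callable[..., None]) -> None:
            self.skills[name] = skill

        def execute(self, mcb: MotorCommandBlock) -> None:
            self.skills[mcb.command](*mcb.args)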

When a creature is loaded into the Distributed ALIVE system, the computer on which it was loaded becomes the host for that creature. The host runs the creature's sensing and behavior system, the two most computationally intensive tasks for an agent. The other computers engaged in the same shared virtual space load only the geometrical representation of the creature; these representations are called droids. The MCBs that the master creature generates during each update cycle are sent out over the network, received by the droids, and executed normally by the droids' controllers. Thus the droids mimic the actions of the master creature. From the standpoint of a user or another creature, there is no difference between a master and a droid; the distinction is known only to the master creature itself.
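
The master/droid replication loop can be pictured as follows, continuing the Python sketch above. The wire encoding (encode_mcb/decode_mcb) and the behavior_system.decide interface are assumptions; dALIVE's actual packet format is not specified here.

    import json
    import socket

    # A stand-in wire encoding for MCBs; JSON is used purely for
    # readability, not because dALIVE used it.
    def encode_mcb(mcb: MotorCommandBlock) -> bytes:
        return json.dumps({"id": mcb.creature_id, "cmd": mcb.command,
                           "args": list(mcb.args)}).encode()

    def decode_mcb(data: bytes) -> MotorCommandBlock:
        d = json.loads(data.decode())
        return MotorCommandBlock(d["id"], d["cmd"], tuple(d["args"]))

    # Master side: the host runs sensing and behavior, drives its own
    # copy of the geometry, and replicates each MCB to the network.
    def master_update(creature, sock: socket.socket, dest) -> None:
        for mcb in creature.behavior_system.decide():  # hypothetical API
            creature.controller.execute(mcb)
            sock.sendto(encode_mcb(mcb), dest)

    # Droid side: no behavior system runs here; the droid's controller
    # simply executes whatever MCBs arrive, mimicking the master.
    def droid_update(droids, sock: socket.socket) -> None:
        data, _ = sock.recvfrom(1500)
        mcb = decode_mcb(data)
        droids[mcb.creature_id].controller.execute(mcb)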

Network Protocols
We define a server that performs two primary functions: it stores the geometry of the environment and of all the creatures in it, and it provides a unique identifier for each creature in the environment. However, every computer running the dALIVE application can load a new creature into the shared environment; in this sense all the computers are peers.
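
A minimal sketch of the server's bookkeeping role, under assumed names (WorldServer, register_creature):

    import itertools

    # The server stores geometry and hands out one unique identifier per
    # creature. Any peer may register a new creature, not just the
    # server itself.
    class WorldServer:
        def __init__(self) -> None:
            self._next_id = itertools.count(1)
            self.geometry = {}  # creature id -> geometry file contents

        def register_creature(self, geometry_file: bytes) -> int:
            cid = next(self._next_id)
            self.geometry[cid] = geometry_file
            return cid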

At the network level, we use two styles of communication protocol to support these two paradigms (client-server and peer-to-peer) while keeping network bandwidth low. A reliable, full-duplex byte stream is used for sending the (relatively large) geometry files between the server and its "clients". Unreliable datagrams are used for sending the small, frequent MCB packets.
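
In socket terms this maps onto one TCP connection and one UDP socket per client. The host name and port numbers below (SERVER_ADDR, MCB_PORT) are assumptions for illustration:

    import socket

    # Hypothetical endpoints; dALIVE's real host names and port numbers
    # are not given here.
    SERVER_ADDR = ("alive-server.example.edu", 9000)
    MCB_PORT = 9001

    # Reliable, full-duplex byte stream (TCP) for the relatively large
    # geometry files exchanged between server and client.
    geometry_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    geometry_sock.connect(SERVER_ADDR)

    # Unreliable datagrams (UDP) for the small, frequent MCB packets: a
    # dropped MCB is cheap to tolerate, since the next update cycle
    # supersedes it, and retransmission would only add latency.
    mcb_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mcb_sock.bind(("", MCB_PORT))

The split matches the traffic: geometry is bulky but transferred once, so reliability matters; MCBs are tiny and continuous, so timeliness matters more than delivery guarantees.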

If a client is on the same local network as the server, it can use IP multicasting to send its MCB packets. Otherwise, it uses a point-to-point protocol to send the packets to the server, which redistributes them among all clients. This reduces the number of direct network connections between machines, and, combined with fine-grained distribution of the environment among servers, should scale well to multiple users in the same virtual space.
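
The client-side choice can be sketched with standard IP-multicast socket options; the group address and helper names (mcb_destination, join_mcast_group) are assumptions, not dALIVE's actual values:

    import socket
    import struct

    MCAST_GROUP = ("224.2.0.1", 9001)  # example group address, not dALIVE's

    # Senders: on the server's LAN, address MCBs to the multicast group
    # so every peer hears them directly; across the WAN, unicast to the
    # server, which relays each packet to the other clients.
    def mcb_destination(on_server_lan: bool, server_addr, sock: socket.socket):
        if on_server_lan:
            # TTL of 1 keeps multicast traffic on the local network.
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            return MCAST_GROUP
        return server_addr

    # Receivers on the LAN join the group to hear MCBs directly.
    def join_mcast_group(sock: socket.socket) -> None:
        mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP[0]),
                           socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)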

Demonstration and Future Work
We demonstrated the dALIVE system in the Interactive Communities exhibit at SIGGRAPH 95, with a cross-country link between Los Angeles and the MIT Media Lab over a single ISDN line (56 kbits/sec). In the future, we plan to implement a hierarchically organized virtual environment (an apartment building) in which creatures are transferred among host machines as they roam from place to place. We are also moving towards full integration of an interpreted language into the behavior and motor systems, allowing the creation of machine-independent autonomous agents.

References

  1. Blumberg, B. M., and Galyean, T. A. 1995. "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments." Proceedings of SIGGRAPH 95 (Los Angeles, CA, August 7-11, 1995).
  2. Maes, P., Darrell, T., Blumberg, B., and Pentland, A. 1996. "The ALIVE System: Wireless, Full-Body Interaction with Autonomous Agents." To appear in ACM Multimedia Systems, Special Issue on Multimedia and Multisensory Virtual Worlds, Spring 1996.

Figure 1. The primary components of the agent architecture are the behavior system, motor skills and geometry. The controller and degrees of freedom act as abstraction barriers. Communication between the behavior and motor systems occurs in the form of motor command blocks.

Figure 2. dALIVE in action. Each of the dogs is hosted on a separate computer engaged in the shared space.