The concept of avatars is thrilling. People are rushing out to realize
the idea and create graphical chat rooms. Producing cool graphics and rich
worlds is all the hype. Users can customize the look of their avatars and even
build them a colourful home on the new bitfield frontier. But how will
the actual embodiment take place? How are you going to project yourself
into the avatar? William Gibson's brain implants and direct coupling
into the matrix will remain the fabric of fiction for decades to come, so
less intrusive methods must be applied.

Laboratories exploring virtual environments commonly employ trackers to map certain key parts of the user's body onto the graphical representation. As the user moves, the avatar imitates the motion. In a non-immersive setting, this approach shares a classical problem with video conferencing: the user's body resides in a space that is radically different from that of the avatar. This flaw becomes particularly apparent when multiple users try to interact, because our deeply rooted social skills rely heavily on localization and gaze direction.

Commercially available chat room systems that use only standard input devices don't even attempt to model the gestural richness of communication. The arrow keys or the mouse are typically used to navigate the avatar to a certain location, perhaps close to another avatar. Then you let go of the navigation device and start typing in order to make contact. Up to that point you have received no visual feedback from your potential interlocutor, since that user might have been busy chatting with someone else and thus not in the process of animating his avatar. This is especially troublesome when you are trying to decide whether to join a conversation, since you have no clue whether the group has acknowledged you. We usually rely on visual cues before we break in with a full-blown introduction.