During April of this semester, Scientific American Frontiers came to the Media Lab to film a show on current research at the Lab. One of the themes of the Vision and Modeling Group's demonstrations was the combination of the real and the virtual. I assisted in implementing new functionality for the Artificial Life Interactive Video Environment (ALIVE) demo, which allowed us to incorporate video of a real person into a computer-generated environment.
Towards that end I modified pfinder, our person-tracking software, to send its client programs the rectangle of video data bounding the person, along with the other information it generates. We then texture-mapped that video onto the polygon representing the person in ALIVE. This representation is called a video avatar.
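The idea can be sketched roughly as follows. This is a minimal illustration, not pfinder's actual interface: the function name, the (x, y, w, h) rectangle layout, and the list-of-rows frame representation are all assumptions made for the example.

```python
def crop_and_tex_coords(frame, rect):
    """Crop a video frame (list of pixel rows) to the tracker's bounding
    rectangle rect = (x, y, w, h), and compute the normalized (u, v)
    texture coordinates of that rectangle within the full frame, so the
    cropped video can be texture-mapped onto the avatar polygon."""
    x, y, w, h = rect
    crop = [row[x:x + w] for row in frame[y:y + h]]
    fh, fw = len(frame), len(frame[0])
    # Corner coordinates in [0, 1], image origin at the top-left.
    tex = [(x / fw, y / fh), ((x + w) / fw, y / fh),
           ((x + w) / fw, (y + h) / fh), (x / fw, (y + h) / fh)]
    return crop, tex

# Example: a 4x4 "frame" of pixel labels, with the person occupying
# a 2x2 rectangle starting at (1, 1).
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
crop, tex = crop_and_tex_coords(frame, (1, 1, 2, 2))
# crop is the person's pixels; tex gives the quad's texture corners.
```

In the real system the crop would be uploaded as a texture each frame and drawn on the person's polygon; the texture coordinates let the renderer sample exactly the tracked region.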
The scene above shows yours truly, represented as a video avatar, walking around in a model of the Media Lab created by Mike Hlavac. A script written in Scheme allowed a "director" to perform a camera zoom and tell Silas T. Dog where to walk in the courtyard.
A shot of Alan Alda, represented as a video avatar, interacting with Silas in this virtual courtyard was used as the opening shot for the Scientific American Frontiers show.
Kenneth B. Russell - email@example.com