The Museum Wearable
Project Type: Wearable Computing
The museum wearable is a real-time storytelling device: a museum guide
that evaluates the visitor's preferences on the fly by observing
his/her path and the length of stops along the museum's exhibit
space, and selects content from a large database of available movie
clips, audio, and animations. The museum wearable targets individual
visitors with special learning needs or curiosity, and offers a new
type of entertaining and informative museum experience, more similar
to immersive cinema than to the traditional museum experience. The
museum wearable identifies three visitor types: busy,
greedy, and selective, which have been selected as
the essential museum visitor types from the museum literature. It uses
a custom-made infrared location sensor to gather tracking information
about the visitor's path in the museum's gallery and uses this
information to introduce evidence into the dynamic Bayesian network
which interprets the sensor information and delivers content to the
visitor. The network performs probabilistic reasoning under
uncertainty in real time to identify the visitor's type. It then
delivers an audiovisual narration to the visitor as a function of the
estimated type, interactively in time and space. The model has been
tested and validated on observed visitor tracking data using the EM
algorithm. Estimation of visitor preferences using additional sensors
is demonstrated in a simulated environment.
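The sketch below illustrates, in simplified form, the kind of recursive Bayesian update such a network performs: a posterior over the three visitor types is refined each time the location sensor reports how long the visitor stopped at a station. The likelihood table and the duration thresholds are illustrative assumptions, not the project's actual parameters.

```python
# Minimal sketch (not the project's actual model): recursive Bayesian update
# of a posterior over visitor types from discretized stop durations.
VISITOR_TYPES = ["busy", "greedy", "selective"]

# P(observed stop length | visitor type); all values are illustrative assumptions.
LIKELIHOOD = {
    "busy":      {"short": 0.7, "medium": 0.2, "long": 0.1},
    "greedy":    {"short": 0.1, "medium": 0.3, "long": 0.6},
    "selective": {"short": 0.4, "medium": 0.2, "long": 0.4},
}

def discretize(stop_seconds):
    """Bin a stop duration into short / medium / long (assumed thresholds)."""
    if stop_seconds < 20:
        return "short"
    return "medium" if stop_seconds < 60 else "long"

def update_posterior(prior, stop_seconds):
    """One filtering step: fold a new stop duration into the type posterior."""
    obs = discretize(stop_seconds)
    unnormalized = {t: prior[t] * LIKELIHOOD[t][obs] for t in VISITOR_TYPES}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

posterior = {t: 1.0 / len(VISITOR_TYPES) for t in VISITOR_TYPES}  # uniform prior
for stop_seconds in [12.0, 15.0, 75.0, 8.0]:   # seconds spent at successive stations
    posterior = update_posterior(posterior, stop_seconds)
print({t: round(posterior[t], 3) for t in VISITOR_TYPES})
```

The actual system uses a dynamic Bayesian network validated with EM on observed tracking data; this example shows only the filtering idea behind the type estimate.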
|
Keywords
- machine learning
- Bayesian networks
- interactive museum
- wearable computing
- user modeling
- augmented reality
- mixed reality
- context based
- interactive narrative
- interactive storytelling
|
- date of completion of first prototype: July 1999
(presented at SIGGRAPH 99)
- MIT undergraduate collaborators: Eric Hilton, Chin Yan Wong,
Anjali D'Oza, Sarah Mendelovitz, Audrey Roy, Manuel Martinez, Tracie Lee
- project advisors: Neil Gershenfeld, Sandy Pentland, Thad Starner,
Yuan Qi, Tom Minka, Glorianna Davenport, Walter Bender, Ron McNeil,
Kent Larson
- written in C++, DirectX, and Flash, by Flavia Sparacino,
copyright MIT Media Lab and Flavia Sparacino
- link to publications:
|
City of News
Project Type: 3D Internet
City of News is an immersive 3D web browser: a dynamically growing
urban landscape of information. It is an immersive, interactive web
browser that takes advantage of people's strength in remembering the
three-dimensional spatial layout of their surroundings. Starting from
a chosen "home page", where home is associated with a physical space,
our browser fetches and displays URLs so as to form skyscrapers and
alleys of text and images through which the user can "fly". The City
is organized in urban quarters (districts) that group related
activities territorially. City of News can be experienced either as a
desktop 3D web browser or as a gesture-driven information space
(immersive cinema).
A stereo-based computer vision interface is used to navigate inside the
3D Internet city using body gestures. A wide-baseline stereo pair of
cameras obtains 3D models of the user's hands and head in a small
desk-area environment. The interface feeds this information to an HMM
gesture classifier to reliably recognize the
user’s browsing commands. This work is based on our previous prototype,
called the Hyperplex, in which people navigate a media-scape using pointing
gestures.
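As an illustration of the classification step, the sketch below trains one Gaussian HMM per gesture and labels a new sequence by maximum likelihood. It is a stand-in using the third-party hmmlearn package, with placeholder gesture data, rather than the project's own classifier; feature vectors are assumed to be the tracked 3D positions of hands and head.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(training_data):
    """training_data maps gesture name -> list of (T, 3) arrays of tracked
    3D hand/head positions recorded while performing that gesture."""
    models = {}
    for gesture, sequences in training_data.items():
        X = np.concatenate(sequences)                  # stack all sequences
        lengths = [len(seq) for seq in sequences]      # remember their boundaries
        model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)                          # Baum-Welch (EM) training
        models[gesture] = model
    return models

def classify(models, sequence):
    """Label a new (T, 3) sequence with the gesture whose HMM scores it highest."""
    return max(models, key=lambda g: models[g].score(sequence))
```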
|
Keywords
- 3D Visualization
- internet browser
- gesture recognition
- computer vision
- hidden Markov models
- human machine interface
- human computer interface
- interactive computer graphics
- information architecture
- virtual reality
|
- date of completion of first prototype: April 1996
(at the time the project was called: NetSpace)
- MIT undergraduate collaborators: Jeff Bender, Mary Obelnicki,
Michal Hlavac
- Shown at Ars Electronica 1997
- Shown at SIGGRAPH 1999
- other collaborators: Chris Wren, Ali Azarbayejani
- project advisors: Alex Pentland, Glorianna Davenport, Ron McNeil
- written in C++ and Open Inventor by Flavia Sparacino,
copyright MIT Media Lab and Flavia Sparacino
- link to publications:
|
Presentation Table
Project Type: Interactive Narrative Spaces
The presentation table is an interactive display table which narrates
a story guided by the position of the objects on it. The featured
example is Unbuilt Ruins, an exhibit space which shows a variety of
architectural designs by the influential twentieth-century American
architect Louis Kahn. This exhibition interactively features
radiosity-based, hyper-realistic computer graphics renderings of 8
unbuilt masterworks by Louis Kahn (side screens). It is housed in a
large room, and contains in the center a square table above which are
mounted a camera and a projector pointing downwards. By placing the
floor-plan selector-object at the center of the table and moving a
cursor-object around the displayed floor plan, the public can explore
hundreds of images which otherwise would have been impossible to house
in the limited space of a single museum.
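A minimal sketch of the interaction logic, with invented floor-plan coordinates and file names: the tracked position of the cursor-object is matched against annotated hotspots on the floor plan, and the nearest one (if close enough) selects the rendering shown on the side screens.

```python
import math

# Hypothetical hotspots: (x, y) in normalized floor-plan coordinates -> rendering.
HOTSPOTS = {
    (0.25, 0.40): "renderings/entrance_hall.png",
    (0.60, 0.55): "renderings/central_court.png",
    (0.80, 0.20): "renderings/gallery_wing.png",
}

def rendering_for_cursor(x, y, max_distance=0.15):
    """Return the rendering for the hotspot nearest to the cursor, if close enough."""
    nearest = min(HOTSPOTS, key=lambda p: math.dist(p, (x, y)))
    return HOTSPOTS[nearest] if math.dist(nearest, (x, y)) <= max_distance else None
```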
|
Keywords
- object tracking
- computer vision
- RFID tags
- interactive architecture
- interactive museum
- architectural animations
- interactive storytelling
|
- date of completion of first prototype: January 1999
- project advisors: Alex Pentland, Ron McNeil, Kent Larson,
Neil Gershenfeld, Glorianna Davenport
- other collaborators: Nuria Oliver, Tom Minka
- parts of this research are based on previous work by Chris Wren and
  Ali Azarbayejani
- This table became the original prototype for the interactive table of the
  MoMA exhibit The Un-Private House, curated by
  Terence Riley and shown at MoMA in the summer and fall of 1999
- link to publications:
|
Immersive Cinema
Project Type: Interactive Narrative Spaces
The Immersive Cinema is a large scale installation which uses two projection
surfaces: one vertical and one horizontal. The horizontal surface is a
large map projected on the floor. People physically walk to different
locations on the floor map and thereby trigger the front projection to
show the corresponding visual and auditory information. One can think
of the floor-map projection as a mouse pad, and of the person walking
on the map as the mouse driving the system. A computer-controlled
infrared camera detects people's presence, location, and gestures on
the floor map.
The immersive cinema setup was used in a variety of installations.
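Conceptually, the floor map acts as a set of trigger zones. The sketch below, with invented zone boundaries and clip names, shows the mapping from a person's tracked floor position to the clip played on the front projection.

```python
# Each zone: (x_min, y_min, x_max, y_max) in normalized floor coordinates -> clip.
ZONES = [
    ((0.0, 0.0, 0.5, 0.5), "clips/district_a.mov"),
    ((0.5, 0.0, 1.0, 0.5), "clips/district_b.mov"),
    ((0.0, 0.5, 1.0, 1.0), "clips/district_c.mov"),
]

def clip_for_position(x, y):
    """Return the clip whose floor zone contains the tracked position, if any."""
    for (x0, y0, x1, y1), clip in ZONES:
        if x0 <= x < x1 and y0 <= y < y1:
            return clip
    return None
```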
|
Keywords
- computer vision
- body tracking
- augmented reality
- gesture recognition
- interactive narrative
|
- date of completion of first prototype: March 1998
- shown in video format at Ars Electronica 1998 (Metacity Sarajevo)
- shown with City of News at SIGGRAPH 99 (City of News)
- project advisors: Ron McNeil, Alex Pentland, Glorianna Davenport
- MIT undergraduate collaborators: Jeff Bender, Teresa Hernandez
- link to publications:
|
Dance Space
Project Type: Interactive Performance
DanceSpace is an interactive performance space where both professional
and non-professional performers can generate music and graphics
through their body movements. As the dancer enters the space, a number
of virtual musical instruments are invisibly attached to their
body. The dancer then uses their body movements to magically generate
an improvisational theme above a background track. Simultaneously, the
performer can paint with their body on a large projection screen,
thereby generating graphics shaped by their movement and the music
they produce. DanceSpace uses unencumbering sensing: the performer
does not need to wear any special suits or markers. Motion tracking is
based solely on image processing from a camera pointed at the human
performer.
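The sketch below suggests, under invented scale and threshold choices, how one such virtual instrument might map a tracked hand to sound: hand height selects a pitch from a pentatonic scale and hand speed sets the note velocity. It illustrates the mapping idea, not the installation's actual instrument design.

```python
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI notes, C major pentatonic

def hand_to_note(height, speed):
    """Map normalized hand height (0..1) and speed (0..1) to (pitch, velocity)."""
    height = min(max(height, 0.0), 1.0)
    speed = min(max(speed, 0.0), 1.0)
    pitch = PENTATONIC[min(int(height * len(PENTATONIC)), len(PENTATONIC) - 1)]
    velocity = int(40 + speed * 87)   # quiet for slow gestures, loud for fast ones
    return pitch, velocity
```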
|
Keywords
- interactive music
- body tracking
- computer vision
- interactive art
- interactive performance
|
- date of completion of first prototype: January 1996
- shown at IDAT (International Dance and Technology) 1999
- Shown at ISEA 2000
- project advisors: Alex Pentland, Glorianna Davenport
- MIT undergraduate collaborators: Chloe Chao, Tyson Hass, Rego Sen
- link to publications:
|
Wearable Cinema, Wearable City
Project Type: Wearable Computing
Wearable computing provides a means to transform the architecture and
the space surrounding us into a memory device and a storytelling
agent. We assembled a wearable computer specifically aimed at mapping
architecture into an experience comparable to that
of watching a movie from inside the movie set, or being immersed in an
information city whose constructions are made of words, pictures, and
living bits. The wearable is outfitted with a private eye which shows
video and graphics superimposed on the user’s real-surround view. It
uses real-time computer vision techniques for location finding and
object recognition. We describe two applications. Wearable City is the
mobile version of a 3D WWW browser we created called "City of News."
It grows an urban-like information landscape by fetching information
from the web, and facilitates the recollection of information by
creating associations of information with geography. Wearable Cinema
extends the previous system to generate an interactive audio-visual
narration driven by the physical path of the wearer in a museum space.
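A minimal sketch of the narration logic shared by both applications, assuming the vision system has already recognized the wearer's current location (the location names and media files are invented): each recognized location releases its next unplayed segment.

```python
SEGMENTS = {
    "entrance":  ["audio/welcome.mp3"],
    "gallery_1": ["video/intro_clip.mov", "video/detail_clip.mov"],
    "gallery_2": ["video/closing_clip.mov"],
}
played = set()

def next_segment(location):
    """Return the first unplayed segment for the recognized location, if any."""
    for segment in SEGMENTS.get(location, []):
        if segment not in played:
            played.add(segment)
            return segment
    return None
```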
|
Keywords
- augmented reality
- mixed reality
- wearable computing
- interactive architecture
- interactive cinema
- interactive art
- interactive narrative
- interactive storytelling
|
- date of completion of first prototype: July 1999
- shown at SIGGRAPH 1999
- shown at IMAGINA 2000
- MIT undergraduate collaborators: Jeff Bender
- other collaborators: Chris Wren
- project advisors: Sandy Pentland, Glorianna Davenport
- link to publications:
|
Responsive Portraits
Project Type: Interactive Narrative Spaces
Responsive Portraits challenge the notion of static photographic
portraiture as the unique, ideal visual representation of its subject.
A responsive portrait consists of a multiplicity of views -- digital
photographs and holographic 3D images, accompanied by sounds and
recorded voices -- whose dynamic presentation results from the
interaction between the viewer and the image. The viewer's proximity
to the image and his/her head and upper-body movements elicit dynamic
responses
from the portrait, driven by the portrait's own set of autonomous
behaviors. This type of interaction reproduces an encounter between
two people: the viewer and the character portrayed. The sensing
technology that we used is a computer vision system which tracks the
viewer's head movements and facial expressions as she interacts with
the digital portrait; therefore, the whole notion of "who is watching
who" is reversed: the object becomes the subject, the subject is
observed.
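As an illustration of this behavior-driven interaction, the sketch below picks a portrait response from the viewer's distance and head motion; the thresholds and behavior names are invented, and the actual portraits use a richer set of autonomous behaviors.

```python
def portrait_response(distance_m, head_speed):
    """Pick which view/sound the portrait presents for the current viewer state."""
    if distance_m > 2.0:
        return "distant_view"          # viewer far away: neutral, silent portrait
    if head_speed > 0.5:
        return "turn_away"             # abrupt movement: the portrait looks away
    if distance_m < 0.8:
        return "intimate_view_voice"   # close and calm: close-up view with voice
    return "attentive_view"
```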
|
Keywords
- shape recognition
- pose recognition
- computer vision
- interactive photography
- interactive art
- human computer interface
- gesture recognition
- face and lip tracking
|
- date of completion of first prototype: April 1997
- shown at ISEA 1997
- other collaborators: Nuria Oliver
- project advisors: Glorianna Davenport, Sandy Pentland, Steve Benton
- MIT undergraduate collaborators: Jeff Bender, Amandeep Loomba, Tracie Lee
- holographic version collaborators: Steven Smith, Betsy Connors,
Ray Molnar, Jessica Lazlo, Linda Huang, Leila Haddad
- link to publications:
|
Improvisational Theater Space
Project Type: Interactive Narrative Spaces
Progress in wireless and untethered body tracking techniques today
offers performers a variety of expressive opportunities. We have
developed an interactive stage for a single performer using a wireless
real-time body tracker based on computer vision techniques. In
Improvisational Theater Space we create a situation in which the human
actor can be seen interacting with his own thoughts in the form of
animated expressive text, images, and movie clips projected on
stage. Such Media Actors are, like human actors, able to understand
and synchronize their performance with the other actors' movements,
words, tone of voice, and gestures. Our work augments the expressive
range of possibilities for performers and stretches the grammar of the
traditional arts rather than suggesting ways and contexts to replace
the embodied performer with a virtual one.
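The sketch below gives a simplified picture of a Media Actor's reaction loop, with an invented cue vocabulary: spotted keywords, recognized gestures, and voice pitch each map to projected media, with spoken cues taking priority.

```python
KEYWORD_MEDIA = {"memory": "clips/memory_montage.mov", "storm": "clips/storm_text.swf"}
GESTURE_MEDIA = {"point_up": "images/sky.png", "arms_open": "clips/crowd.mov"}

def media_actor_react(keyword, gesture, pitch_hz):
    """Choose what the Media Actor projects for the current set of cues."""
    if keyword in KEYWORD_MEDIA:       # word spotting has first priority
        return KEYWORD_MEDIA[keyword]
    if gesture in GESTURE_MEDIA:       # then recognized gestures
        return GESTURE_MEDIA[gesture]
    if pitch_hz > 300.0:               # raised voice: the actor's "thoughts" scatter
        return "effects/scatter_text.swf"
    return None
```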
|
Keywords
- body tracking
- interactive performance
- interactive theater
- speech recognition
- word spotting
- gesture recognition
- pitch tracking
|
- date of completion of first prototype: February 1996
- performed at The Sixth Biennial Symposium for Arts and Technology,
Connecticut College, New London, 1997
- performed at the MIT Media Lab, March 1996
- collaborators: Kristin Hall, Chris Wren, Erik Trimble
- project advisors: Glorianna Davenport, Sandy Pentland
- link to publications:
|
Any_background Virtual Studio
Project Type: Interactive Narrative Spaces
Virtual Studio is a 3D set inside which people can meet and interact
among themselves or with other 3D characters. As in a magic mirror,
advanced computer vision techniques allow participants to see their
full-body video image composited in 3D space, without the need for a
blue-screen background. This setup can therefore be used for
collaborative storytelling, visual communication from remote
locations, or game playing. The participant's image can be subjected
to all the graphics transformations that apply to graphical objects,
including scaling. Depending on the participant's position in the
space, his/her image occludes or is occluded by virtual objects,
consistent with the 3D perspective of the virtual scene. Multiple
people can connect from remote locations, thereby turning the magic
mirror into a magic space.
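A minimal per-pixel sketch of the depth-ordered compositing idea, assuming the vision front end already supplies a segmented participant image, its mask, and a per-pixel depth estimate, while the virtual scene provides its own color and depth buffers; it illustrates the occlusion logic rather than the system's actual renderer.

```python
import numpy as np

def composite(scene_rgb, scene_depth, person_rgb, person_mask, person_depth):
    """Show the participant wherever the mask is set and they are closer than the scene."""
    person_wins = person_mask & (person_depth < scene_depth)  # per-pixel depth test
    out = scene_rgb.copy()
    out[person_wins] = person_rgb[person_wins]
    return out
```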
|
Keywords
- real time compositing
- body tracking
- gesture recognition
- hidden Markov models
- computer graphics
|
- date of completion of first prototype: January 1999
- shown at IMAGINA 1999
- MIT undergraduate collaborators: Ken Russell, Michal Hlavac
- project advisors: Sandy Pentland, Glorianna Davenport
- link to publications:
|