How to Talk to a Papa-TV-Bot: Interfaces for Autonomously Levitating Robots

Stefan Marti
The Media Laboratory
Massachusetts Institute of Technology
20 Ames Street, Cambridge, MA USA 02139
stefanm@media.mit.edu

Abstract

Developing new ways of thinking about speech for interaction with computers has always been part of the agenda of the MIT Media Lab Speech Interface Group. However, I believe a closely related area will soon emerge: speech for interaction with mobile entities. Working with Artificial Life and Robotics people at MIT has led me to the conclusion that speech may be the most appropriate mode of human-machine interaction when the machine is a small, autonomous, and mobile entity.
Exploring the paradigms and principles behind these kinds of dialog systems is one thing; applying them to real objects is another. Unfortunately, small, ultra-mobile autonomous entities on which to test the appropriateness of speech interaction are not yet available. However, I would be highly motivated to build a prototype, in collaboration with Media Lab and MIT people, if I were given the necessary means. I am convinced that the Speech Interface Group has much of the knowledge needed to solve the coming human-machine interface problems.
In the following, I describe the possibilities, the evolution, and the basic technical elements of autonomously hovering micro robots, an ideal test bed for exploring the above-mentioned paradigms. There are three main sections: first, the application Papa-TV-Bot, a free-flying automatic video camera; second, a schedule for the long-term development of autonomously hovering mobots in eight phases; and third, the basic technologies of the vehicles of Phases 1 through 3 and Phase 8.

This paper was written for the Prosem class in Fall 1999.

Copyright © 1997-2000 by Stefan Marti and MIT Media Lab. All rights reserved.