How to Talk to a Papa-TV-Bot: Interfaces for Autonomously Levitating Robots
The Media Laboratory
Massachusetts Institute of Technology
20 Ames Street, Cambridge, MA USA 02139
Developing new ways of thinking about speech for interaction with computers has always been part of the agenda of the MIT Media Lab Speech Interface Group. However, I believe a closely related area will soon emerge: speech for interaction with mobile entities. Through work with Artificial Life and Robotics researchers at MIT, I have come to the conclusion that speech may be the most appropriate mode of human-machine interaction when the machine is a small, autonomous, mobile entity.
Exploring the paradigms and principles behind these kinds of dialog systems is one thing; applying them to real objects is another. Unfortunately, small, ultra-mobile autonomous entities on which to test the appropriateness of speech interaction are not yet available. However, I would be highly motivated to build a prototype, in collaboration with Media Lab and MIT colleagues, if given the necessary means. I am convinced that the Speech Interface Group has much of the knowledge needed to solve the coming human-machine interface problems.
In the following, I describe the possibilities, the evolution, and the basic technical elements of autonomously hovering micro robots, an ideal test bed for exploring the above-mentioned paradigms. There are three main sections: first, the application Papa-TV-Bot, a free-flying automatic video camera; second, a schedule for the long-term development of autonomously hovering mobots in eight phases; and third, the basic technologies of the vehicles of Phases 1 through 3 as well as Phase 8.