This is an overview of Ben Wong's work at the Media Lab.
I am a graduate student in the Ph.D. program at Georgia Tech working at the MIT Media Lab during the Summer of 2000. My more permanent (currently content-free) home page is at Georgia Tech.
I am collaborating with Rich DeVaul, Brian Clarkson, and Sandy Pentland on the Memory Glasses project, which examines the use of wearable computers ("wearables") as memory aids. The Memory Glasses will issue reminders similar to those one might expect from a personal digital assistant, but with two key differences. First, the Memory Glasses will use context-aware techniques to decide when and how to alert the user. (An audible alert would be inappropriate if I am in a meeting, for example.) Second, they will allow for vastly richer schedules: instead of being triggered by time, an alert may be triggered by the context the user is in. (I should be reminded that I am out of milk when I'm near a supermarket, not when I'm on an airplane.)
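To make both differences concrete, here is a minimal sketch in Python. It is only an illustration, not the actual Memory Glasses code: the context labels and the Reminder structure are invented for the example.

    # Sketch only: context labels and data shapes are invented for
    # illustration, not taken from the actual Memory Glasses code.
    from dataclasses import dataclass

    @dataclass
    class Reminder:
        message: str
        trigger: str  # context label that should fire this reminder

    def alert_mode(context):
        # First difference: pick the delivery method from context.
        return "visual" if "in_meeting" in context else "audible"

    def check(reminders, context):
        # Second difference: fire on context, not on clock time.
        for r in reminders:
            if r.trigger in context:
                print("[%s] %s" % (alert_mode(context), r.message))

    reminders = [Reminder("Buy milk", "near_supermarket")]
    check(reminders, {"near_supermarket", "in_meeting"})
    # prints: [visual] Buy milk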
My main interest in this research is in exploring how a computer can automatically detect what reminders should be given. That is, I'm interested in what schedules the Memory Glasses can "know" about and how they come to know them. My goal is to require little or no effort from the user to input a schedule. Can a wearable computer quietly gather enough information that a user need not explicitly program the device? I believe so.
A secondary goal is to research the use of Remembrance Agent-style JITIR (Just-In-Time Information Retrieval) for speech. Wearable computer users already run "Remembrance Agents" while taking notes or writing e-mail, to suggest potentially relevant information from previous notes or from databases such as INSPEC. If everything said is already being transcribed as text, why not use speech to query those databases, or use the conversations themselves as a database?
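As a rough sketch of how transcribed speech could drive such queries, consider treating each utterance as a query string against a corpus of notes. A real Remembrance Agent uses a proper text-retrieval engine; the word-overlap score and sample notes below are stand-ins, meant only to show the data flow.

    # Sketch only: a real Remembrance Agent uses a proper
    # text-retrieval engine; this word-overlap score and the sample
    # notes are stand-ins to show the data flow.
    def tokenize(text):
        return set(text.lower().split())

    def most_relevant(transcript, corpus, k=1):
        # Rank notes by how many words they share with the transcript.
        query = tokenize(transcript)
        return sorted(corpus,
                      key=lambda note: len(query & tokenize(note)),
                      reverse=True)[:k]

    notes = ["meeting with Sandy about the Memory Glasses demo",
             "grocery list: milk, eggs, coffee"]
    print(most_relevant("don't forget the demo for Sandy tomorrow", notes))
    # prints: ['meeting with Sandy about the Memory Glasses demo']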
This project has the potential to become a guardian angel or personal secretary which can listen over a person's shoulder to the promises she makes and then gently remind her about them in the future. As our population ages and begins to experience benign senescent forgetfulness, the world itself isn't slowing down. With the ever-increasing information glut, any tool that can remind us of important things we've forgotten would be most welcome.
Furthermore, attempting to actually build this system may bring up interesting problems in wearable computing, natural language processing, context-awareness, and ubiquitous computing. For example, how can a computer judge what is important enough to be reminded about later? Is the fact that a person even bothered to say something a good enough clue, or will it need something more, such as an utterance's prosody? How far can we push practical soft-AI techniques to judge "importance" before we come up against an AI-complete problem?
I will start small with two basic, testable hypotheses:
A relatively simple BNF grammar will suffice to recognize the most common spoken schedules.
When a person schedules an important appointment verbally, she will often state it at some point as a single utterance.
To test this, I will continuously wear a computer with a microphone that can eavesdrop on the things I say. Using a grammar of appointments, my program will perform speech recognition to detect and then parse the phrases that I ought to be reminded of later, such as promises to make a phone call or statements of an upcoming appointment.
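To give a flavor of hypothesis 1, here is a toy fragment of the kind of appointment grammar I have in mind, approximated with a regular expression. The productions and the example utterance are my own illustration; the real grammar will live inside the speech recognizer and be far larger.

    # Toy fragment of an appointment grammar, approximated with a
    # regular expression.  The real grammar would be much larger and
    # live inside the speech recognizer.
    #
    #   <schedule> ::= <commit> <task> <when>
    #   <commit>   ::= "i will" | "i'll" | "remind me to"
    #   <when>     ::= "at" <time> | "on" <day> | "tomorrow"
    import re

    SCHEDULE = re.compile(
        r"^(i will|i'll|remind me to)\s+(?P<task>.+?)\s+"
        r"(?P<when>at \S+|on \w+|tomorrow)$", re.IGNORECASE)

    def parse_schedule(utterance):
        # Return (task, when) if the utterance matches the toy grammar.
        m = SCHEDULE.match(utterance.strip())
        return (m.group("task"), m.group("when")) if m else None

    print(parse_schedule("I'll call the dentist tomorrow"))
    # prints: ('call the dentist', 'tomorrow')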
When my system recognizes such a "scheduling event", it will canonicalize it to a standard form and then send it on to the Memory Glasses proper. The Memory Glasses will then choose the right time and method of sending a reminder. (For example, they might detect when I'm leaving work and double-check that I actually did make that call I promised.)
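A sketch of that hand-off, with invented field names (the actual interface to the Memory Glasses is still being worked out):

    # Sketch only: field names and the hand-off function are invented.
    def canonicalize(task, when, raw_utterance):
        return {"action": task.strip().lower(),
                "trigger": when.strip().lower(),  # time or context
                "source": raw_utterance}          # keep the original

    def send_to_memory_glasses(event):
        # Placeholder: the real system would queue this for the Memory
        # Glasses, which pick the time and method of the reminder.
        print("queued:", event)

    send_to_memory_glasses(
        canonicalize("call the dentist", "tomorrow",
                     "I'll call the dentist tomorrow"))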
After recording a large enough dataset, I will go through the audio recordings and manually tally the mistakes. When was the grammar insufficient? When would having both sides of the conversation have helped?
I hope to flesh this out in more detail in the future.
I have a wearable system working and recording transcriptions, and I am now wearing it every day during almost all waking hours. As I speak, the audio Remembrance Agent described above is constantly running, searching for "relevant" messages in my e-mail folders. (Soon I hope to have INSPEC indexed, as was done with Margin Notes.)
A list of the hardware and software I am using for my wearable is available, including the reasoning behind each choice, so that others won't make the same mistakes I did. (My system is a little unusual because my need for speech recognition precluded the use of one of the "standard" Borg Lab rigs.)
I returned to Atlanta in mid-August and I'm now continuing this research with Thad Starner.
Memory Glasses developers can find the latest status and to-do list on the internal project web page. (Developers only.)
Ben Wong <bbb@cc.gatech.edu>
(I'm sorry I have to do this, but if you see NOSPAM in my address, please remove it to e-mail me.)
Last modified: Mon Aug 28 11:05:32 EDT 2000