

Results

Video Finger has been running, in one form or another, as a Macintosh application for five months. The early versions were used to test the object representation, both external and internal. The later versions added the ability to generate the movie description and animate the display. A small but representative set of object descriptions has been prepared, including several backgrounds. The final function to be added was the network interface. When Video Finger first started working, I was startled more than once by the sight of a person entering the window to inform me that a friend had logged in.

Improvements

Several areas of the Video Finger implementation need improvement; they are discussed in the subsections below.

Future Work

A more intelligent means of monitoring the external user state is needed. I propose adding a server on each monitored machine to provide user status. The server would be notified of a receiver's desire to monitor a particular user; thereafter, it would notify the receiver whenever the state of that user changed. This would decrease the network traffic generated by Video Finger (already quite minimal, especially compared to a packet video movie!), and reduce the remote processor workload.
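The notification scheme above can be sketched as a small subscription registry: receivers register interest once, and are told only when a monitored user's state actually changes, so no traffic is generated while the state is stable. This is a minimal illustrative sketch; the class, method names, and callback interface are invented here, not part of the Video Finger implementation.

```python
# Sketch of the proposed status server: receivers subscribe once and
# are notified only on a change, instead of repeatedly polling.
# All names and the callback interface are illustrative.

class StatusServer:
    def __init__(self):
        self.state = {}        # user -> last reported state
        self.watchers = {}     # user -> list of notification callbacks

    def subscribe(self, user, callback):
        """Register a receiver's interest in one user."""
        self.watchers.setdefault(user, []).append(callback)

    def report(self, user, new_state):
        """Called when the monitored machine observes a login/logout."""
        if self.state.get(user) != new_state:
            self.state[user] = new_state
            # Notify only on a change -- no traffic while state is stable.
            for notify in self.watchers.get(user, []):
                notify(user, new_state)

# Usage: a receiver subscribes, then hears only about changes.
events = []
server = StatusServer()
server.subscribe("alice", lambda u, s: events.append((u, s)))
server.report("alice", "logged in")
server.report("alice", "logged in")   # duplicate report: no notification
server.report("alice", "logged out")
```

In a real deployment the callback would of course be a network message to the receiving machine rather than a local function call.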

The background is not restricted to a still image; it can itself be a motion image sequence. This yields the digital video equivalent of filming a movie scene in front of a screen upon which the scene background is projected. Similarly, the background may be used to convey additional information; perhaps a window in the background showing the current weather somewhere in the world.
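The "digital rear projection" described above amounts to compositing each object's pixels over whichever background frame is current. The following is a minimal sketch under assumed conventions (frames as 2-D lists of pixel values, a mask value of 0 meaning transparent); Video Finger's actual object representation is described in an earlier chapter.

```python
# Illustrative sketch: composite a masked object over a background frame.
# A moving background simply supplies a different frame each time.

def composite(background, sprite, mask, top, left):
    """Copy the non-transparent pixels of sprite onto background."""
    frame = [row[:] for row in background]  # leave the background intact
    for y, (srow, mrow) in enumerate(zip(sprite, mask)):
        for x, (pixel, opaque) in enumerate(zip(srow, mrow)):
            if opaque:
                frame[top + y][left + x] = pixel
    return frame

bg = [[0] * 4 for _ in range(3)]   # one (still or motion) background frame
sprite = [[7, 7], [7, 7]]
mask = [[1, 0], [1, 1]]            # upper-right pixel is transparent
out = composite(bg, sprite, mask, 1, 1)
```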

The addition of sound to synthetic movies is a necessity. Ideally, the sounds associated with a synthetic movie should themselves be synthetic. Two different types of synthetic sound are being investigated: video-synchronized speech synthesis and object-associated sounds. The synchronization of a synthetic actor with synthetic speech (usually generated by a dedicated speech synthesizer) is not a new idea; it has been demonstrated previously. The association of sounds in the soundtrack with particular objects in the video sequence, however, requires interframe synthesis. Video Finger provides a starting point for experimenting with object-associated sounds.
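One way to structure object-associated sound is to let each object attach a sound cue to particular frames of its motion sequence, so the player emits whichever cues fall on the frame it is currently compositing. The sketch below is purely illustrative; the cue names and the data structure are invented here, not taken from the Video Finger implementation.

```python
# Illustrative sketch: sounds attached to frames of an object's motion
# sequence, collected per displayed frame by the player.

class MovingObject:
    def __init__(self, name, frame_count):
        self.name = name
        self.frame_count = frame_count  # image data itself elided here
        self.cues = {}                  # frame index -> sound cue name

    def attach_sound(self, frame_index, cue):
        self.cues[frame_index] = cue

def soundtrack_for_frame(objects, frame_index):
    """Collect the sound cues every object fires on this frame."""
    return [(obj.name, obj.cues[frame_index])
            for obj in objects
            if frame_index in obj.cues]

walker = MovingObject("walker", frame_count=8)
walker.attach_sound(2, "footstep")
walker.attach_sound(6, "footstep")
door = MovingObject("door", frame_count=8)
door.attach_sound(0, "creak")
```

Because the cues travel with the objects rather than with a fixed soundtrack, the sound stays synchronized however the movie description rearranges the objects.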

Future Hardware

Video Finger is a very simple example of a synthetic movie, due to the restrictive object representation chosen. A more complex object representation would require a correspondingly more powerful hardware system.

  
Figure 7.1: Computer Price/Performance Comparison - 1990

Such systems exist now [Akeley88][Apgar88][Goldfeather89], but the specialized hardware required is very costly. This is changing, however, thanks to advances in VLSI technology. As the chart in Fig. 7.1 shows, the personal computers of the early 1990s will benefit from microprocessors that outperform many large computers of the 1980s. This will help tremendously in the development of more realistic synthetic movies on personal computers.




wad@media.mit.edu