

Previous Examples of Synthetic Movies

Synthetic movies have been explored many times before. Early examples of synthetic movies, with very limited messages, were interactive video games. The graphical output of these video games tended to be of poor quality, with low spatial and color resolution due to the high cost of memory. The next attempts at creating synthetic movies were prompted by the availability of the videodisc, which provided a large, random access frame store. As mentioned earlier, this allows interframe synthetic movies to be generated. Newer videodisc systems include graphic overlays and simple digital effects to allow increased manipulability of the displayed image sequence. The generation of intraframe synthetic movies has received a large amount of attention from the field of computer graphics.

Computer Graphics

Intraframe synthetic movies have been given much attention by the field of Interactive Computer Graphics. Foley and Van Dam, in their classic treatise on the subject [Foley83], define interactive computer graphics as graphics in which the ``user dynamically controls the picture's content, format, size, or color...by means of an interaction device''. Computer graphic animations are very similar to synthetic movies. The difference between them is the amount of time required to generate the image sequence. A synthetic movie is composed as it is viewed, whereas computer graphic animations are not normally generated in real time. This, however, is a restriction imposed only by the capabilities of the synthesizing hardware. Very expensive flight simulators are capable of rendering images of moderate complexity in real time (i.e. as they are viewed). The cost of high-powered graphics engines is decreasing, however, as their ability to generate more complicated real time animations is increasing.

Generating a computer animation presents several problems to the animator. The objects in the scene should look real, as well as act real. Realistic synthesis (rendering) of images is not simple: a 2D view must be made from a 3D database, incorporating the effects of lighting, lenses, and occlusion. Much work has been done in the field of rendering, resulting in the development of techniques that allow realistic generation of synthetic images.
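
The core of this projection step is compact enough to sketch. The following is a minimal illustration in Python, not drawn from any system discussed here; the pinhole camera model, function names, and the simple diffuse shading model are assumptions made for exposition:

    import math

    def project(point, focal_length=1.0):
        # Perspective-project a 3D point (camera at the origin, looking
        # down +z) onto the 2D image plane at z = focal_length.
        x, y, z = point
        if z <= 0:
            raise ValueError("point is behind the camera")
        return (focal_length * x / z, focal_length * y / z)

    def lambert_shade(normal, to_light, albedo=0.8):
        # Diffuse (Lambertian) intensity for a surface normal and the
        # direction from the surface toward the light, both unit vectors.
        n_dot_l = sum(n * l for n, l in zip(normal, to_light))
        return albedo * max(0.0, n_dot_l)

    # A surface point two units ahead of the camera, facing the camera,
    # lit from the camera's direction.
    print(project((0.5, 0.25, 2.0)))              # -> (0.25, 0.125)
    print(lambert_shade((0, 0, -1), (0, 0, -1)))  # -> 0.8

A full renderer repeats this per surface point while also resolving occlusion (which surface is nearest along each ray), which is where most of the computational expense lies.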

It is the acting real that presents a more serious problem in computer animation. Modeling the physics involved in natural motion, such as a tree waving in the wind, or a dog running down a street, is a difficult task. Once the motion has been modeled, a method of controlling and synchronizing the motion of objects in the scene must be found.

There are three basic approaches to controlling the objects in an animation: the guiding mode, the animator level, and the task level. In the guiding mode of control, the positions of the animated objects are explicitly controlled by the animator. In the animator level of control, the motion/behavior of objects is described algorithmically, usually in some sort of programming notation. The task level of control stipulates the motion in the scene in terms of events and relationships between the objects [Zeltzer85].
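
The three levels can be caricatured in a short Python sketch. The keyframe values, the bouncing-ball procedure, and the task description below are invented for illustration, not taken from [Zeltzer85]:

    import math

    # Guiding mode: the animator supplies positions explicitly,
    # e.g. as keyframes indexed by frame number.
    keyframes = {0: (0.0, 0.0), 30: (5.0, 2.0), 60: (10.0, 0.0)}

    # Animator level: the motion is described algorithmically,
    # as a procedure rather than as stored positions.
    def bouncing_ball_height(t, height=2.0, period=1.0):
        return height * abs(math.sin(math.pi * t / period))

    # Task level: only the desired events and relationships are
    # stated; the system is left to plan the motion itself.
    task = {"actor": "ball", "action": "bounce", "until": "caught"}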

Unfortunately, the synthesis of realistic images is currently possible only by the use of rendering techniques, such as ray tracing, which are not very amenable to rapid computation. It is for this reason that alternative means of generating synthetic movies merit research and development. At some point in the future, advances both in computer technology and graphic rendering algorithms will likely make computer graphic techniques the medium of choice.

Bolio

A system for the development of interactive simulations, the Integrated Graphical Simulation Platform (IGSP) developed at the MIT Media Laboratory, provides for simple synthetic movies in response to user input, constraints, and task level animation. The IGSP includes an object editor and behavior modeling tools for the objects in the microworld. A hand position input device, the VPL Dataglove, allows the user to interact in real time with (actually handle!) the objects in the microworld. Another input device is a six degree of freedom joystick, the Spaceball. Objects in the microworld may be fairly complex: a roach capable of walking around under its own control, for example. Another of the objects developed for the microworld is a robot arm that uses inverse kinematics techniques to ``play catch'' with a user wearing the Dataglove [Brett87][Sturman89][Zeltzer88].
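
The inverse kinematics computation at the heart of such an arm can be quite compact. The following Python sketch shows the standard closed-form solution for a planar two-link arm; the function name and link parameters are invented, and the actual IGSP arm is certainly more elaborate:

    import math

    def two_link_ik(x, y, l1, l2):
        # Joint angles (shoulder, elbow) that place the tip of a planar
        # two-link arm at (x, y): the textbook closed-form solution.
        d2 = x * x + y * y
        # Law of cosines gives the elbow angle.
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(cos_elbow) > 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Reach for a point with two unit-length links.
    print(two_link_ik(1.2, 0.9, 1.0, 1.0))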

Interactive Video Games

The introduction of direct computer/user interaction with the appearance of the MIT TX-0 was soon followed by the introduction of the first interactive video games. Spacewar, a game developed at MIT in 1961 for the DEC PDP-1, for the first time presented a user with a virtual world with which he could interact in real time. The graphics display of Spacewar was very limited, using simple line graphics to present the users with two miniature spaceships that could be manipulated in front of an astronomically correct background of stars [Levy84].

The first video games available to the public were commercial games such as ``Pong'' and ``Space Invaders''. These first games were very simplistic, due to the level of computer hardware available at affordable cost. Many other games of increasing complexity, however, soon followed.

The introduction of the video game into the consumer market followed soon after the advent of the first monolithic microprocessor. The microprocessor also fueled the personal computer revolution, providing many people with the means of producing and playing interactive video games. These games tended to have poor spatial and color resolution, due to both memory and system bandwidth constraints. The programmers who created these video games were forced to explore methods of providing real-time graphics performance with very simple equipment. They almost universally used a 2D image representation for their objects.

The ready availability of consumer videodisc players spurred the development of electronic games making use of them. The videodisc games produced tended to combine interframe synthesis with simple user input via the host computer. The resultant motion sequence can diverge at any user query step, although due to videodisc storage considerations the divergent sequences are usually reunited at a later time. These games suffer from the need to have an external videodisc player, often as large as the computer itself, nearby. Additionally, extra video equipment is necessary to mix and display the resultant video images. In order to display video while the videodisc is searching for a new sequence, it is necessary to use two videodisc players or a frame storage device.
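
This branch-and-rejoin structure is naturally pictured as a graph of stored frame runs. The following Python sketch is illustrative only; the segment names and frame numbers are invented:

    # Each segment is a run of videodisc frames; the sequence diverges
    # at a user query step and, to conserve disc space, the divergent
    # branches usually rejoin a shared segment later.
    segments = {
        "intro":  {"frames": (100, 459),   "next": ["choice"]},
        "choice": {"frames": (460, 489),   "next": ["path_a", "path_b"]},
        "path_a": {"frames": (490, 789),   "next": ["rejoin"]},
        "path_b": {"frames": (790, 1089),  "next": ["rejoin"]},
        "rejoin": {"frames": (1090, 1500), "next": []},
    }

    def play(start, choose):
        # Walk the segment graph; `choose` picks a branch at each
        # divergence, standing in for the user's query step.
        seg = start
        while seg:
            first, last = segments[seg]["frames"]
            print(f"play frames {first}-{last} ({seg})")
            nexts = segments[seg]["next"]
            seg = choose(nexts) if len(nexts) > 1 else (nexts[0] if nexts else None)

    play("intro", choose=lambda options: options[0])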

The Movie Manual

The ``Movie Manual'' was a prototype electronic book, incorporating text, sound, images and image sequences. It used computer-controlled videodiscs to provide a large, random access frame and data store. This data was accessed and displayed to the user by interactive software using a book metaphor. Central to the ``Movie Manual'' was the concept that the displayed information was being constructed from the stored information as it was viewed. The information base, and the means of presenting it, could be altered by the reader. The receiver utilized information about the contents of the electronic book in displaying (formatting) it.

The hardware consisted of a host minicomputer, equipped with a medium-resolution framestore, two or more videodisc players, and video mixing equipment to superimpose the computer graphics and video. The framestore was used to provide still computer graphics images and text, but was not used in generating image sequences. Although this limited the electronic book to interframe synthesis, the ``Movie Manual'' is a good example of a synthetic movie [Backer88].

The Oxyacetylene Welding Simulator

Another example of a synthetic movie is a training simulator designed by D. Hon of Ixion Systems to teach oxyacetylene welding [Ixion][Brush89]. This simulator uses a videodisc as a large framestore holding the images required for the simulation. The student is coached in the technique of using a welding torch by a personal computer controlling the videodisc player. An actual welding torch (without real gases) is used as an input device. Its three-dimensional position and the state of the two gas valves are used as input to the computer.

The student is first taught the procedure for lighting the torch. The computer uses images from the disc to simulate the flame that would be produced were the student using a real oxyacetylene welding torch. The desired flame is also shown for comparison. Once the student has adjusted the gas flow valves to the proper settings, the simulation continues.

On the horizontal monitor face, an image of two pieces of steel sheeting is shown. The student uses his welding torch to ``weld'' the two sheets together. As the weld progresses, the controlling personal computer displays its evaluation of the student's work by selective retrieval of images from the videodisc frame store. If the weld being made would be flawed, the computer displays a flawed weld; if it would be acceptable, a good weld is displayed. The interframe synthetic movie generated thus reflects the internal model of the weld maintained by the personal computer.
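
The essential mechanism, selecting stored frames according to an internal model, can be suggested by a small Python sketch. The state variables, thresholds, and frame numbers below are invented; Ixion's actual weld model is certainly richer:

    def select_weld_frames(torch_speed, torch_height, flame_mix):
        # Crude internal model of weld quality: the torch must move at
        # a steady speed, stay close to the work, and carry a neutral
        # flame.  All thresholds here are invented for illustration.
        good = (0.5 < torch_speed < 1.5 and
                torch_height < 0.01 and
                0.9 < flame_mix < 1.1)
        # Map the model onto videodisc frame runs (numbers invented):
        # one run shows a good weld, the other a flawed one.
        return (2000, 2300) if good else (2400, 2700)

    print(select_weld_frames(torch_speed=1.0, torch_height=0.005, flame_mix=1.0))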

The Elastic Charles

A very recent example of an interframe synthetic movie, utilizing advances in computer technology, is the ``Elastic Charles'' project [Brøndmo89]. The ``Elastic Charles'' is a hypermedia journal, a prototype electronic magazine. It contains a collection of multimedia stories about the history, ecology, current news and usage of the Charles River in Massachusetts. These stories are primarily visual, but are augmented with text, graphics and sound. Hypermedia references (links) may be made between text, graphics, sound, and video segments.

The hardware platform for ``Elastic Charles'' consists of a personal computer equipped with a specialized frame buffer, and a videodisc player. The computer used is an Apple Macintosh II. The ColorSpace II frame buffer allows the Mac to generate an overlay on the output of the videodisc, as well as permitting the videodisc output to be digitized. The videodisc is used for storage of the video sequences, limiting the magazine to interframe synthetic movies.

The traditional hypertext notion of a link is extended in ``Elastic Charles'' to include references to motion image and sound sequences. Unlike text links, links from a video sequence are presented in a separate window to avoid disturbing the presentation of the video information. If a link references another video sequence, the link is represented by a ``micon'', or moving icon: a short, miniature sequence of video digitized from the referenced sequence. Micons are also used to navigate the video sequences, recording the path taken by the user on a ``recent segments'' card.
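
One plausible representation of such a link is sketched below in Python; the field names and sequence identifiers are invented for illustration, not taken from [Brøndmo89]:

    # A link out of a playing video sequence.  While the source interval
    # is on screen, the link is shown in a separate window as a micon.
    link = {
        "source": ("river_history", 1200, 1800),  # (sequence, in frame, out frame)
        "target": "charles_ecology",
        "micon": "charles_ecology_loop",          # short digitized loop
    }

    # A "recent segments" card records the path taken by the user.
    recent_segments = []

    def follow(link):
        # Jump to the referenced sequence, remembering where we have been.
        recent_segments.append(link["target"])
        return link["target"]

    print(follow(link), recent_segments)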


