VideoFinger, an example synthetic movie built as part of my
Master's Thesis, was a visual version of finger,
implemented on a Mac IIx in
1989. It produced a 320x240, 12 fps color moving image (a single
frame of which is shown in monochrome above), containing structured
video objects representing the users being monitored. The objects
performed simple repetitive tasks (such as turning a page while
reading a book, or nodding their head while asleep), which indicated
their online status.
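The mapping from a user's finger-style status to a looping animation can be illustrated with a minimal sketch. The thresholds, animation names, and functions below are hypothetical illustrations of the idea, not code or behavior taken from VideoFinger itself.

    # Hypothetical sketch: choose a repetitive animation loop from a
    # user's idle time, in the spirit of VideoFinger's status objects.
    # All names and thresholds here are invented for illustration.
    import time

    ANIMATIONS = [
        (60,      "reading_book"),  # active within the last minute: turning pages
        (30 * 60, "nodding_head"),  # idle up to half an hour: dozing off
        (None,    "asleep"),        # idle longer than that: fully asleep
    ]

    def pick_animation(idle_seconds):
        """Return the animation loop that signals this user's status."""
        for threshold, name in ANIMATIONS:
            if threshold is None or idle_seconds < threshold:
                return name

    def render_user(name, idle_seconds):
        animation = pick_animation(idle_seconds)
        print(f"{time.strftime('%H:%M:%S')}  {name}: playing loop '{animation}'")

    render_user("alice", idle_seconds=12)       # -> reading_book
    render_user("bob",   idle_seconds=45 * 60)  # -> asleep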
It concentrated, however, on the generation of the video and did not address the problems of user personalization. The resulting application was also limited to a single presentation medium (video) and required a special server running on each machine being queried to provide the desired information. The user had few means of personalizing the video display (the only option was a different background, with correspondingly different locations for the users), and the video objects were stored locally. In short, while VideoFinger was a stunning statement as to the reality of synthetic movies on personal computers of the time, it wasn't a good implementation of finger.