NOTE - This proposal has remained unmodified since 4/3/96. It may therefore not reflect the final Metafinger application in some details.
I propose to extend finger so that it can adapt its queries and its presentation to several different contexts of use.
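For concreteness, the base finger exchange these extensions build on is trivial: a one-line query over TCP port 79, as specified in RFC 1288. The sketch below (in Python; the user and host are hypothetical) shows just that exchange, nothing more.

    import socket

    def finger(user, host, port=79):
        """Send a one-line finger query (RFC 1288) and return the raw reply."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(user.encode("ascii") + b"\r\n")  # query is the name plus CRLF
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:  # the server closes the connection when done
                    break
                chunks.append(data)
        return b"".join(chunks).decode("ascii", errors="replace")

    print(finger("alice", "media.mit.edu"))  # hypothetical user and host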
The obvious personalization to add to finger is the ability to request information in a looser way (e.g. who is around the VLW, or who from MAS961 is around). This could be done in several ways: either by having users specify their own groups of users, or by providing predefined groups of users (e.g. based on mailing lists).
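As a sketch of the group idea: a table mapping group names to member lists, whether user-defined or derived from mailing lists, is enough to expand a loose query into individual finger requests. The GROUPS table and the finger_group name below are hypothetical, and the sketch reuses the finger() helper above.

    GROUPS = {
        "mas961": ["alice", "bob", "carol"],  # e.g. drawn from a class mailing list
        "vlw":    ["dave", "erin"],           # e.g. a user-defined group
    }

    def finger_group(group, host):
        """Finger every member of a named group and collect the raw replies."""
        return {user: finger(user, host) for user in GROUPS.get(group, [])}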
A second user personalization is the ability to specify how the output is presented. This is already important with a limited medium such as text, and it becomes more so as audio or video is supported.
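One way this could work, sketched below under the assumption that each finger record has already been parsed into a small dictionary of fields, is a per-user choice among named format templates. The field names and style names are illustrative only.

    STYLES = {
        "terse":   "{user}: {status}",
        "verbose": "User {user} is {status}; the nearest phone is x{phone}.",
    }

    def render(entry, style="terse"):
        """Format one parsed finger record according to a chosen style."""
        return STYLES[style].format(**entry)

    render({"user": "alice", "status": "idle 5 min", "phone": "3-1234"}, "verbose")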
Richer interface media than text are now commonly available to the finger application. Two that immediately come to mind are video and audio.
An audio-only interface to finger is certainly feasible. Such a presentation might consist only of a tailored text-to-speech interface (relying on the touch-tone keypad for user input), or it might include speech recognition for input. Such a system could help locate a user remotely: a phone call to the common finger server could report who from a particular group (say sysadm, or cheops-hardware) is currently working, and give the phone number closest to where they are.
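A minimal sketch of that phone scenario, assuming the group results have already been parsed into records with location and phone fields; speak() stands in for whatever text-to-speech engine is available and is not a real API.

    def spoken_summary(group, entries):
        """Compose the sentence a caller would hear for a group query."""
        if not entries:
            return f"No one from {group} is currently working."
        n = len(entries)
        head = f"{n} {'person' if n == 1 else 'people'} from {group} currently working. "
        parts = [f"{e['user']} is at {e['location']}, extension {e['phone']}."
                 for e in entries]
        return head + " ".join(parts)

    # speak(spoken_summary("cheops-hardware", results))  # hand off to a TTS engine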
A video interface to finger is not a new idea. This presentation format allows for visually searching for relevant information using graphical instead of textual symbols.
The notion of a finger presentation that continues to evolve over time instead of simply showing a single snapshot is not alien to the textual medium (the top utility, for example), but the use of graphical icons reduces the attention required to parse the changing presentation.
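A sketch of such an evolving display, building on the helpers above: poll the group periodically and render each member as a compact icon, in the spirit of top's periodic redraw. The icon table, the polling interval, and the naive idle test are all assumptions.

    import time

    ICONS = {"active": "*", "idle": "."}

    def watch(group, host, interval=30):
        """Redraw a one-icon-per-user view of the group every `interval` seconds."""
        while True:
            for user, reply in finger_group(group, host).items():
                status = "idle" if "idle" in reply.lower() else "active"
                print(ICONS[status], user)
            time.sleep(interval)  # periodic refresh, in the spirit of top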