Though identity and emotion are expressed by the body as a whole, the face is arguably the part of the body that portrays these qualities most densely. On the internet, with no face, it may seem that expressing these qualities becomes difficult or impossible. While portraying one's identity online can be more difficult and more explicit than in real life (where by default one's face already expresses one's identity), certain characteristics can be expressed in currently available systems. Qualities such as nationality, heritage, and culture can be conveyed by username, email address, or server name. Physical descriptions allow people to portray their physical characteristics (or at least their claimed physical characteristics). These static or long-term dynamic qualities of a person, which the face portrays in the real world, can at least in part be portrayed online.
Short-term dynamic qualities, such as emotion, are more problematic. When emailing or using IM, people often express the emotional content of their communication in their face. This happens both when writing and when reading: when you write a message to someone, you might smile when making a joke or wear a stern expression when writing about a serious topic; when reading a message, you might chuckle at a funny quip or show surprise at a piece of news.
These two cases are examples of two different problems. The emotional content expressed in your face is not included in the messages you send, so a comment that might mean something specific when accompanied by a particular facial expression can suddenly become ambiguous or confusing. When reading a message, the changing expression on your face is unknown to the sender, so the dynamic phrasing and delivery of information that would, in the real world, respond to your changing facial expression can no longer take place: a message must be written entirely without facial feedback from the recipient. The latter of these two problems is important, but secondary to the first.
Expression of emotion online has been accomplished with emoticons and with delimited expressions or jargon (such as lol or *smile*). Part of the difficulty with these conventions is that the communication and the emotional expression are serially segmented, as in "I thought she was talking about you :)", where the expression follows the communication. In the real world, the emotional expression would occur simultaneously with the communication, and might also precede the sentence.
Another important limitation of such textual emoting is that it is static. The smiley is binary--it either does or doesn't exist--whereas a smile in the real world has a beginning, a middle, and an end. This time dimension of a facial expression is as important to the smile as the upturned corners of the mouth and eyes. Interestingly, people can read emotion more easily from motion information alone, without any facial image, than from a static image with no motion. This points to the importance of motion in the expression of emotion.
Video conferencing deals with this problem by giving people live images of each other, so their facial expressions are transmitted. When this is impossible or unwanted (perhaps in a chat environment where people don't want to reveal their faces), current systems provide poses--static images meant to convey emotion. Such systems do provide a more timely emoting interface, but are very limiting in the actual expressions they allow. There is a danger in using photorealistic static faces: they are very powerful expressors of emotion and carry a great deal of social expectations and assumptions, which can in many cases be incorrect if their use is not precise. A set of 10 static photographs of a face is not enough to emote with: a smirk and a jovial smile have very different meanings, and such a small set of graphics might not cover the distinction. Further, much of the difference between these two facial expressions comes from the dynamics of the face, not a static look.
Ideally, a dynamic emote system should be made available. A system that uses even simple shapes whose motion evokes the emotion--in the same way the face-light trials in Zebrowitz evoke emotion more clearly than static faces do--would provide a more malleable interface for emoting. Such a system might provide templates for particular emotion movements, and allow people to mutate and fashion their own motions by augmenting or combining existing ones. It might store the deviation from a norm for various points on a deformable object, and then provide a way to move from one deformation to another, creating an emotional expression. Control of such a system may require an alternative input device, as the keyboard and mouse are very limiting for controlling dynamic representations.
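The deformation scheme described above can be sketched in a few lines. This is a minimal, hypothetical illustration--the control-point names and coordinate values are invented for the example--showing expressions stored as deviations from a neutral pose and animated by interpolating from one deformation to another, giving the expression the beginning, middle, and end that a static smiley lacks.

```python
# Hypothetical control points of a deformable face shape, stored as
# (x, y) deviations from a neutral pose. Names and values are
# illustrative, not from any real system.
NEUTRAL = {"mouth_left": (0.0, 0.0), "mouth_right": (0.0, 0.0),
           "brow_left": (0.0, 0.0), "brow_right": (0.0, 0.0)}

# A "smile" expression as a set of deviations from the norm.
SMILE = {"mouth_left": (-0.1, 0.2), "mouth_right": (0.1, 0.2),
         "brow_left": (0.0, 0.05), "brow_right": (0.0, 0.05)}

def blend(start, end, t):
    """Linearly interpolate between two deformations; t runs 0.0 -> 1.0."""
    return {k: (start[k][0] + (end[k][0] - start[k][0]) * t,
                start[k][1] + (end[k][1] - start[k][1]) * t)
            for k in start}

def animate(start, end, steps):
    """Yield frames moving from one deformation to another over time."""
    for i in range(steps + 1):
        yield blend(start, end, i / steps)

# The smile now unfolds over eleven frames rather than appearing at once.
frames = list(animate(NEUTRAL, SMILE, 10))
```

Combining or augmenting existing motions, as suggested above, could then be as simple as summing or scaling the stored deviations before animating them.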