Next: Conclusions Up: Visually Steered 3-D Audio Previous: Adding dynamic tracking

Preliminary results

To experiment with head tracking in the context of transaural 3-D audio, we are currently using a Polhemus tracking system, which returns the position and orientation of a sensor with respect to a transmitter (6 degrees of freedom). The sensor is easily mounted on headphones or a cap to track the listener's head. The resulting head position and orientation are used to update the parameters of the 3-D spatializer and transaural audio system. Results are preliminary at this time.
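As a minimal sketch of this update loop, the code below converts a tracker-style pose (position plus Euler angles) into a source position expressed in the listener's head frame, which is the quantity a spatializer needs before selecting filters. The Z-Y-X rotation convention and function names are assumptions for illustration, not the Polhemus interface itself.

```python
import numpy as np

def euler_to_matrix(azimuth, elevation, roll):
    """Rotation matrix from Euler angles in radians.

    The Z-Y-X (azimuth, elevation, roll) convention used here is an
    assumption; the actual convention depends on the tracking unit.
    """
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[ce, 0, se], [0, 1, 0], [-se, 0, ce]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def source_relative_to_head(source_pos, head_pos, head_rot):
    """Express a world-frame source position in the head frame.

    Translate by the tracked head position, then undo the tracked
    head rotation (transpose of the rotation matrix).
    """
    offset = np.asarray(source_pos, float) - np.asarray(head_pos, float)
    return head_rot.T @ offset
```

For example, with the head at the origin rotated 90 degrees to the left, a source straight ahead in world coordinates lands on the listener's right in the head frame.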

The strategy used to update transaural parameters based on head position and orientation depends greatly on the head model used for the transaural filter. We used the simple head model given in equation 11. The following points were observed:
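Equation 11 is not reproduced here; purely as an illustrative stand-in for a simple head model, the sketch below computes an interaural time delay from the classical Woodworth spherical-head approximation. The head radius, the formula choice, and the function name are assumptions, not the paper's actual model.

```python
import math

HEAD_RADIUS = 0.0875    # metres, a nominal value; an assumption, not equation 11
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def interaural_time_delay(azimuth):
    """Woodworth spherical-head ITD approximation (illustrative only).

    azimuth is in radians, 0 straight ahead, positive toward the
    listener's right; the delay grows with the path difference
    a * (theta + sin(theta)) around the sphere.
    """
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + math.sin(azimuth))
```

A tracked change in head orientation would shift the azimuth fed to such a model, and hence the delay and shadowing applied by the transaural filter.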

Using the static, symmetrical transaural system described earlier, the head tracking information was also used to update the positions of 3-D sounds so that the auditory scene would remain fixed as the listener's head rotated. Instead, this gives the sensation that the source is moving in the opposite direction rather than remaining fixed, and there is a good reason for this. With a static transaural system, the positions of rendered sources already remain fixed as the listener changes head orientation (provided the change is small enough to maintain the transaural illusion). This is contrary to headphone presentation, where the auditory scene moves with the head, even for small motions. Thus transaural presentation does not require compensation for small head rotations, and if such compensation is applied, it is perceived as motion in the opposite direction. We hoped that this form of head tracking would provide dynamic localization cues to improve rear localization, but the results so far are inconclusive. Although head orientation should be decoupled from the positions of rendered sources, it may still be important to compensate for listener position, so that the listener can walk past virtual sources and hear the direction of the source change.
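The asymmetry above, compensating translation but not rotation, can be sketched as follows: the tracked listener position is used to recompute the world-frame direction and distance to each virtual source, while the tracked orientation is deliberately left out of the update. The function name and return convention are assumptions for illustration.

```python
import numpy as np

def update_source_direction(source_pos, listener_pos):
    """Direction and distance from listener to source in world frame.

    Only listener *position* is compensated: with a static transaural
    system, small head rotations already leave rendered sources fixed,
    so orientation is intentionally not applied here.
    """
    v = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
    dist = np.linalg.norm(v)
    return v / dist, dist
```

As the listener walks past a fixed virtual source, repeated calls to this update yield a steadily rotating direction vector, which is exactly the cue the listener should hear.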



Michael Casey
Mon Mar 4 18:47:28 EST 1996