Guitarists: have you ever controlled your electric guitar with your face? This work presents a system that uses the output of face-tracking and gesture-recognition software to drive a parameterized guitar effects synthesizer in real time.
This work takes advantage of recent improvements in real-time face tracking and gesture recognition to let guitarists use their faces to control effect parameters and preset selection. Although the face is among the most expressive and malleable parts of the body, it has until now been largely unused as an effector in music. This project explores the use of the face for both gestural and continuous control of musical effects. In affording continuous control, the system can replace existing continuous input devices; the gestural, conversational control feature is a new method of interacting with an audio effects processor.
Gestural control: Switching between presets was achieved in a gestural, conversational manner. Upon detecting a head-shake ("no"), the system would prompt the user with "Select new amp", and if it then detected a head-nod ("yes") it would enter amp selection mode. In amp selection mode, the system would switch to a new preset and prompt the user with "This amp?", at which point the user could take as much time as needed to play through the new preset and evaluate its fitness for their current musical needs. If the system detected a head-nod, it responded with "OK" and amp selection mode ended; if it detected a head-shake, it cycled to the next amp and re-prompted the user.
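The conversational exchange above can be modeled as a small state machine driven by nod/shake events from the gesture recognizer. The following is a minimal sketch of that logic, not the system's actual implementation; the class, the `"nod"`/`"shake"` event strings, and the preset names are all hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()            # playing through the current preset
    CONFIRM_SWITCH = auto()  # system has asked "Select new amp"
    AMP_SELECT = auto()      # cycling presets, system has asked "This amp?"

class AmpSelector:
    """Gestural preset selection driven by head-nod / head-shake events."""

    def __init__(self, presets, prompt=print):
        self.presets = presets
        self.current = 0
        self.mode = Mode.IDLE
        self.prompt = prompt  # text-to-speech or on-screen prompt in practice

    def on_gesture(self, gesture):
        # `gesture` is "nod" or "shake", as reported by the face tracker
        if self.mode is Mode.IDLE:
            if gesture == "shake":            # user signals "no" to current amp
                self.prompt("Select new amp")
                self.mode = Mode.CONFIRM_SWITCH
        elif self.mode is Mode.CONFIRM_SWITCH:
            if gesture == "nod":              # confirm entering amp selection
                self._next_amp()
                self.mode = Mode.AMP_SELECT
            else:                             # declined; stay on current amp
                self.mode = Mode.IDLE
        elif self.mode is Mode.AMP_SELECT:
            if gesture == "nod":              # accept the auditioned amp
                self.prompt("OK")
                self.mode = Mode.IDLE
            elif gesture == "shake":          # reject; cycle to the next preset
                self._next_amp()

    def _next_amp(self):
        self.current = (self.current + 1) % len(self.presets)
        self.prompt("This amp?")
```

Note that the system deliberately places no time limit on the AMP_SELECT state: the user auditions the preset by playing for as long as they like before nodding or shaking.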