


Proximity Detection in a Microphone: Version 1

What's so special about this microphone?


I have designed and built my first working prototype of a proximity detection system integrated into a microphone. The current prototype automatically adjusts microphone audio gain: the gain increases when the microphone is far from the user or tilted away from the user, and decreases when the microphone is very close. The hope is that this system will minimize the effects of improper microphone technique. The microphone then becomes more transparent to the user and presents a more intuitive interface.
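
To make the idea concrete, here is a minimal sketch of the distance-to-gain mapping in Python. The actual prototype is an analog circuit; the reference distance, slope, and cap below are illustrative assumptions rather than measured values.

    # Conceptual sketch only: the real prototype is analog circuitry, and the
    # 5 cm reference, 0.25 dB/cm slope, and 8 dB cap are assumed for illustration.

    REFERENCE_CM = 5.0    # assumed "close" distance at which no boost is applied
    MAX_BOOST_DB = 8.0    # roughly the adjustment range quoted later on this page

    def gain_db_for_distance(distance_cm):
        """Map sensed distance to a gain boost: near -> low gain, far -> high gain."""
        if distance_cm <= REFERENCE_CM:
            return 0.0
        boost = (distance_cm - REFERENCE_CM) * 0.25   # assumed linear slope, dB per cm
        return min(boost, MAX_BOOST_DB)

    def apply_gain(sample, gain_db):
        """Scale an audio sample by a gain given in decibels."""
        return sample * 10 ** (gain_db / 20.0)

    # Example: at 25 cm the sketch applies about 5 dB of boost.
    print(gain_db_for_distance(25.0))                     # -> 5.0
    print(apply_gain(0.1, gain_db_for_distance(25.0)))    # -> ~0.178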

Future prototypes will adjust equalization as well as gain, and will map distance to various audio effects for expressive purposes.


Does it have to be so big and ugly?


No, this is just a quick prototype made with the materials immediately on hand. The sensors and circuitry could be made to fit inside a normal microphone shell. A microphone with proximity detection could be virtually indistinguishable from a conventional microphone.


Why is variable gain by proximity better than conventional Automatic Gain Control?


Current Automatic Gain Control (AGC) circuits use only the audio signal when adjusting audio gain. If the audio signal is low, the gain is boosted. If the audio signal is high, the gain is cut.

There are numerous problems with this idea:
   •  If a person chooses to speak loudly or softly for effect, the AGC cancels out that effect.
   •  AGC cannot distinguish between somebody pausing and somebody far away from the microphone. Therefore, AGC increases gain during pauses. This is called "pumping." Pumping can be very disturbing to listeners, as they hear background noise steadily rising during pauses.
   •  AGC circuits either have to delay the outgoing audio signal or, more commonly, adjust the gain "on the fly." Delaying the output can be confusing to listeners and users and is not practical in musical applications. Adjusting on the fly means that the gain adjustment always lags the signal slightly. We have all heard the disturbing effects of this: every time a person using a microphone takes a breath, the gain increases and the audience hears the "pumping"; when the user begins to talk again, the amplifier is disturbingly loud for a brief moment before the circuitry compensates. This can become an annoying pattern at the end of every sentence. The only remedy is to increase the time it takes for the gain control to adjust to the audio signal, but then the benefits of gain control are lost: if, for example, the user moves away from the microphone, the increased gain to the amplification is delayed. (The sketch after this list illustrates the lag and pumping behavior.)
   •  Microphones lose bass response as a function of distance. It would be difficult for an AGC circuit to detect this loss and compensate, but future prototypes of a microphone with proximity detection could compensate for it easily.
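
To make the contrast concrete, here is a toy signal-driven AGC in Python. It is not modeled on any particular commercial circuit, and the attack and release constants are arbitrary assumptions; the point is only that a gain derived from the audio itself creeps upward during a pause and overshoots when speech resumes, while a gain derived from measured distance has no such feedback loop.

    import math

    # Toy signal-driven AGC, illustrating the pumping problem described above.
    # The constants are arbitrary assumptions, not any particular circuit's values.

    TARGET_LEVEL = 0.5   # level the AGC tries to maintain
    ATTACK = 0.2         # how fast the level estimate rises when the signal is loud
    RELEASE = 0.001      # how slowly it falls during a pause

    def agc(samples):
        envelope = TARGET_LEVEL
        out = []
        for x in samples:
            level = abs(x)
            coeff = ATTACK if level > envelope else RELEASE
            envelope += coeff * (level - envelope)        # smoothed level estimate
            gain = TARGET_LEVEL / max(envelope, 1e-4)     # boost quiet, cut loud
            out.append(x * gain)
        return out

    # Speech burst, then a pause, then speech again (a crude sine stand-in).
    burst = [0.5 * math.sin(2 * math.pi * n / 20) for n in range(200)]
    signal = burst + [0.0] * 400 + burst

    processed = agc(signal)
    # During the pause the gain creeps upward, so the first moments of the
    # resumed speech come out noticeably louder than the steady-state level.
    print(max(abs(s) for s in processed[150:200]))   # steady-state peak
    print(max(abs(s) for s in processed[600:650]))   # overshoot after the pause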


How much power does it draw?


The current prototype runs for many hours on two 9-volt batteries. Future prototypes will operate even more efficiently.


How much variable range does it have?


The current prototype can adjust gain over a range of about 8 decibels.
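
For a sense of scale, 8 decibels corresponds to roughly a 2.5x change in signal amplitude. Here is the quick conversion (nothing in it is specific to the prototype's circuitry):

    import math

    ratio = 10 ** (8 / 20)               # 8 dB of gain as a linear amplitude ratio
    back_to_db = 20 * math.log10(ratio)  # and converted back, as a sanity check

    print(round(ratio, 2), round(back_to_db, 1))   # -> 2.51 8.0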


Can I see a demonstration?


Here's a video of my officemate Victor testing the prototype. He's actually holding two microphones, my prototype and a conventional microphone, so we can compare the two directly.

The thing to notice is that the oscilloscope has a line which moves up and down as he moves my prototype toward and away from his face. There's an LED in the circuit whose brightness also changes as a function of distance, but it's more fun to watch the oscilloscope.


How does it compare with a conventional microphone?


Here are the resulting waveforms from the demonstration video above. My prototype is the upper waveform and the conventional microphone is the lower one.

The signals are of approximately equivalent strength when the microphones are close to Victor's mouth. As Victor moves the microphones away, the conventional microphone's signal drops at a much higher rate than my prototype's.
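
A rough explanation of the different fall-off rates is the textbook inverse-distance law: sound pressure at the capsule drops about 6 dB for every doubling of distance. The sketch below is my own simplified model (the 5 cm reference distance is assumed; the 8 dB cap is the adjustment range quoted above) showing how much of that drop a distance-driven boost could cancel.

    import math

    def level_drop_db(distance_cm, reference_cm=5.0):
        """Approximate level drop relative to the reference distance (inverse-distance law)."""
        return 20 * math.log10(distance_cm / reference_cm)

    def residual_drop_db(distance_cm, reference_cm=5.0, max_boost_db=8.0):
        """Drop that remains after a distance-driven boost capped at max_boost_db."""
        drop = level_drop_db(distance_cm, reference_cm)
        return drop - min(drop, max_boost_db)

    for d in (5, 10, 20, 40):
        print(d, "cm:", round(level_drop_db(d), 1), "dB drop,",
              round(residual_drop_db(d), 1), "dB after compensation")
    # e.g. at 40 cm: ~18 dB quieter uncompensated, ~10 dB with the 8 dB boost applied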

Besides seeing this visually, you can hear this aurally. Listen to the difference between the two microphones:

Demo audio with my prototype.

Demo audio with conventional microphone.

Here's another example:

Demo audio with my prototype.

Demo audio with conventional microphone.


Any other tests?


I fastened a speaker to my mouth and played test tones, comparing the response of the microphone at various distances. The solid line shows the results with proximity detection enabled, and the dashed line shows the results with proximity detection disabled.

Charts 5 and 7 show the RMS amplitude varying as a function of distance.

Charts 6 and 8 show the decibel difference between distances.
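
For anyone wanting to reproduce charts like these, here is a minimal sketch of the two quantities involved: the RMS amplitude of a recorded tone, and the level difference between two recordings expressed in decibels. The sample data is synthetic and purely illustrative; it is not taken from my measurements.

    import math

    def rms(samples):
        """Root-mean-square amplitude of a block of audio samples."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def db_difference(rms_a, rms_b):
        """Level difference between two RMS measurements, in decibels."""
        return 20 * math.log10(rms_a / rms_b)

    # Synthetic stand-ins for test-tone recordings captured at two distances.
    near = [0.4 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
    far  = [0.1 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]

    print(rms(near), rms(far))                    # RMS amplitude at each distance
    print(db_difference(rms(near), rms(far)))     # -> ~12 dB difference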


And what about Tilt?


With the speaker still attached to my mouth, I tilted the microphone toward me and away while playing test tones.

Chart 3 shows the difference in amplitude for various tones with proximity detection enabled and disabled. It shows that the difference in amplitude between on axis (microphone tilted toward the mouth) and off axis (microphone tilted away from the mouth) is vastly reduced when the sensor is enabled.

Chart 5 shows the same information expressed as a difference in decibels.


Anything else?


I'm not a big electronics guy, so this was a difficult circuit for me to make. Fortunately, I had help. Twice the circuit itself warned me before I blew myself up. By chance, a video camera captured these critical moments.

Video of the first warning.

Video of the second warning.

(I don't know why the thing keeps calling me Dave.)


That's weird. So...what's next?


I have already made a Second Version of this microphone, using ultrasonic sensing monitored and controlled by a DSP chip. You can read more about it here.


Acknowledgements


Thank you to Win Craft, formerly of THAT Corporation in Milford, MA, for the initial idea that sparked the project.
Jamie Cooley, a graduate researcher in the Media Lab's Viral Communications group, gave me the initial idea for the implementation.

Joe Paradiso taught Sensors for Interactive Environments, the course which prompted this project and gave me the knowledge to create it. Ari Benbasat was his T.A. I am still in awe of how much I learned from this course; they did a wonderful job.

Finally, my officemate Víctor Adán has been a wonderful source of feedback as we have discussed various approaches to this project. He has kindly put up with me as I made a mess of our office, conducted my weird tests, and made him the perpetual guinea pig. I couldn't hope for a better officemate.