


Proximity Detection in a Microphone: Version 2

What is this 'version 2' stuff?


Version 1 of my microphone with proximity detection used ultrasonic sensing and analog electronics. The system worked, but the sensing was finicky and the analog electronics limited flexibility in how the system could be expanded.

In Version 2, I had two goals. First, I wanted to compare ultrasonic sensing with capacitive sensing for robustness in proximity detection. Because ultrasonic sensing is directional, using multiple sensors opens up greater possibilities, such as detecting tilt and locating where large objects, such as people, are in relation to the microphone.

Second, I wanted to move the project from an analog electronics environment to a digital signal processing environment. By controlling the logic and function with a DSP chip, it is only a matter of software to change the function of the proximity detection system from adjusting gain to adjusting bass response and equalization. Taking this idea further, it is now possible to map the ranging of the ultrasonic sensors to various musical effects. Through software mapping, the microphone promises to be a new type of music controller.
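To make the "only a matter of software" point concrete, here is a minimal sketch of how a single distance reading could be remapped to different control targets. The function name, the sensing range of 5-200 cm, and the specific curves are all illustrative assumptions, not the actual DSP firmware:

```python
def distance_to_control(distance_cm, mode="gain"):
    """Map a rangefinder distance (cm) to a normalized control value.

    Hypothetical sketch: the 5-200 cm range and the mapping curves
    are assumptions for illustration only.
    """
    # Clamp to an assumed useful sensing range.
    d = max(5.0, min(200.0, distance_cm))
    norm = (d - 5.0) / (200.0 - 5.0)   # 0.0 (close) .. 1.0 (far)
    if mode == "gain":
        return norm ** 2               # more gain when far away
    if mode == "bass":
        return 1.0 - norm              # more bass boost when close
    raise ValueError(f"unknown mode: {mode}")
```

Swapping the gain mapping for a bass or effects mapping is a one-line change here, which is exactly the flexibility the DSP environment provides over the analog version.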


What are the miniature speakers around the base?


These aren't speakers; they are ultrasonic sensors. What you are seeing are six Devantech SRF10 ultrasonic rangefinders outlining the circumference of the microphone. Each one consists of a transmitter, a receiver, and a microcontroller. The rangefinders are slaves on an I2C bus, and an Analog Devices ADSP2181 DSP development board acts as the I2C master and processes the microphone audio. The DSP chip calculates an overall distance measurement by reading the distance reported by each of the rangefinders and applying a moving average filter.
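In pseudocode terms, the averaging step might look like the following Python sketch. The class name, the window length, and the two-stage scheme (average across the six sensors, then smooth over time) are assumptions; the actual parameters in the DSP firmware aren't documented here:

```python
from collections import deque

class DistanceSmoother:
    """Moving-average filter over the six rangefinder readings.

    Hypothetical sketch: window length and averaging scheme are
    illustrative assumptions, not the actual DSP implementation.
    """

    def __init__(self, window=8):
        # deque(maxlen=...) silently drops the oldest reading
        # once the window is full.
        self.history = deque(maxlen=window)

    def update(self, readings_cm):
        # Collapse the six sensor readings into one snapshot...
        snapshot = sum(readings_cm) / len(readings_cm)
        # ...then smooth that snapshot over recent history.
        self.history.append(snapshot)
        return sum(self.history) / len(self.history)
```

The time smoothing matters because individual ultrasonic pings can be noisy; averaging across sensors and across recent readings keeps the gain control from jittering.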


Can I see a demonstration?


Here's a video of my officemate Victor testing the prototype. The microphone is fixed to a microphone stand while Victor moves toward it and away from it.

The left channel of the microphone is not altered by the rangefinding system. You can see its signal being recorded in real time on the top half of the desktop computer screen in the video.

The right channel's gain is altered by the rangefinding system and the DSP chip. You can see its signal being recorded in real time on the lower half of the desktop computer screen in the video.

The laptop computer to the left of the desktop computer shows a bar graph. The left bar shows the current distance between Victor and the microphone as calculated by the rangefinding system. The right bar shows the amount of amplification applied to the right channel as a result of this calculation. The gain is adjusted as the square of the distance.
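The square-law adjustment can be sketched in a few lines. This is a minimal illustration of the stated relationship (gain proportional to distance squared); the reference distance used for calibration is a hypothetical parameter:

```python
def gain_for_distance(distance, reference=1.0):
    """Amplification factor for the right channel.

    Gain grows as the square of distance, roughly compensating
    the inverse-square falloff of acoustic intensity with range.
    'reference' is a hypothetical calibration distance at which
    the gain is unity.
    """
    return (distance / reference) ** 2
```

At twice the reference distance the channel is amplified fourfold, which is why the corrected waveform in the next section stays strong even when the speaker is far from the microphone.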


How does Version 2 compare with a conventional microphone?


Here are the resulting waveforms from the demonstration video above. My prototype's signal is the lower waveform; the conventional microphone's signal is the upper waveform.

The upper waveform (without rangefinding correction) clearly shows dramatic increases and decreases in recorded signal strength as Victor's distance from the microphone changed. The lower waveform (with rangefinding correction) shows a very strong signal when Victor is far from the microphone, and the signal actually drops slightly when he is close. This means the system is over-correcting for distance. Tweaking the gain-vs.-distance relationship is now a trivial matter, since it is just a calculation in the DSP chip's software.

You can hear the difference as well as see it. Listen to the recordings from the two microphones:

Demo audio with my prototype.

Demo audio with conventional microphone.


So...what's next?


I have already designed a new mount for the ultrasonic sensors so they can sense proximity more reliably along all three axes. With this new mount and the DSP environment, it is now possible to create a new type of music controller that a musician can manipulate through both sound and proximity. I would then like to integrate this into the modular sound-blocks I am developing.