Future versions of Apple devices — especially HomePods, but also iPhones, iPads, and even Macs — may be able to tell where a speaker is located, according to a newly granted patent (number 20210020189) for “learning-based distance estimation.”
The tech giant says it can be useful for a device to estimate its distance to the user with a compact microphone array. Why? The device could adjust its playback volume or the response of a smart assistant based on the user's estimated distance from the device.
In other words, if the user is close to the device, music or speech will not be played at a high volume. If the user is far away, media playback or the smart assistant's response will be played louder.
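The patent only motivates this behavior rather than specifying it, but the idea can be sketched as a simple distance-to-gain mapping. Everything here, including the function name `playback_gain` and its constants, is a hypothetical illustration, not Apple's implementation:

```python
# Hypothetical sketch of distance-aware playback volume; the linear
# mapping and all constants below are assumptions for illustration.

def playback_gain(distance_m, min_gain=0.2, max_gain=1.0, max_distance_m=5.0):
    """Scale playback gain with the estimated listener distance,
    clamped so a nearby listener still hears something and a distant
    listener is not driven past full volume."""
    frac = min(max(distance_m / max_distance_m, 0.0), 1.0)
    return min_gain + frac * (max_gain - min_gain)

print(playback_gain(0.5))  # close listener -> quieter output
print(playback_gain(5.0))  # distant listener -> full volume
```

A shipping system would likely smooth the gain over time and respect user volume settings; the point is only that the estimated distance becomes an input to loudness control.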
Here’s the summary of the patent: “A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data.
“The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.”
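One signal characteristic that separates direct from reverberant speech is the direct-to-reverberant ratio (DRR), which falls off as the talker moves away from the microphones. The sketch below is a toy illustration of that idea, assuming the direct and reverberant components have already been separated; the DNN in the patent would replace the hand-written `estimate_distance` mapping, and all names and constants here are assumptions:

```python
import numpy as np

# Toy sketch (not Apple's implementation): estimate per-frame distance
# from the direct-to-reverberant ratio (DRR). The patent's DNN learns a
# non-linear mapping from such signal characteristics; here a simple
# free-field rule of thumb stands in for the learned model.

def frame_energies(signal, frame_len=256):
    """Split a mono signal into fixed-length frames and return per-frame energy."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sum(frames ** 2, axis=1)

def drr_db(direct, reverberant, eps=1e-12):
    """Per-frame direct-to-reverberant energy ratio in dB."""
    return 10.0 * np.log10(
        (frame_energies(direct) + eps) / (frame_energies(reverberant) + eps)
    )

def estimate_distance(drr):
    """Stand-in for the learned mapping: in free-field acoustics the
    direct level falls ~6 dB per doubling of distance, so invert that
    relative to an assumed 1 m reference where DRR would be 0 dB."""
    ref_distance_m = 1.0  # assumed reference distance
    return ref_distance_m * 10.0 ** (-drr / 20.0)

# Synthetic demo: a strongly reverberant mixture should map to a
# larger estimated distance than a weakly reverberant one.
rng = np.random.default_rng(0)
direct = rng.standard_normal(4096)
near_reverb = 0.1 * rng.standard_normal(4096)  # weak reverb: close talker
far_reverb = 1.0 * rng.standard_normal(4096)   # strong reverb: distant talker

d_near = estimate_distance(drr_db(direct, near_reverb)).mean()
d_far = estimate_distance(drr_db(direct, far_reverb)).mean()
print(d_near < d_far)  # more reverberant energy -> larger estimated distance
```

The supervised system described in the patent would instead learn this mapping from multi-channel training data recorded in real rooms, letting it absorb environment-dependent effects that a closed-form rule cannot.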