Acoustic Augmented Reality

Objectives #

Wearables have solidified their position in the global mobile market in the form of smartwatches and smart bands. Today, a new class of wearable devices, called Earables, is emerging. Earables are essentially wearables worn in the ear. With the wide adoption of earbuds, there is a great opportunity to use them for 3D sound spatialization, also known as Acoustic AR. Its use cases range from recreational activities such as video games to health sensing and education. On the other hand, Earables can pose safety risks to users who wear them all the time. Noise cancellation in earbuds isolates users from surrounding ambient sounds, leaving them unaware of imminent dangers such as oncoming traffic or attackers. There is therefore a need for an artificial auditory system that can detect and localize relevant sound events.

Apple AirPods, a form of Earables

Participants #

  1. Awny M. El-Mohandes
  2. Navid Zandi
  3. Nazanin Moshtagh
  4. Bodee Quansah

Research Thrusts #

1-Acoustic Platform

An Earable is an embedded system of sensors packed within an earbud for different measurements. Thanks to its small form factor and wireless transmission capabilities, an Earable lets users go about their daily activities in comfort. Because of their location, Earables are the most suitable form of wearables for capturing biosignals of the head. Moreover, they can apply proprietary machine learning algorithms for more accurate bioelectric analysis. In essence, an Earable is an earbud the user can employ for common applications such as listening to music or answering a phone call, as well as for Acoustic AR applications such as video games and education, while in the background its sensors continuously monitor the user's biological signals for health monitoring and other scenarios.

Different Components inside an Earable

2-HRTF Estimation

Humans can hear a sound and localize it at the same time. This ability is attributed to the filtering effect of the head and body on the incoming sound, described by the Head-Related Transfer Function (HRTF). HRTFs are frequency and direction dependent. Moreover, since the physical characteristics of people differ, HRTFs are also subject specific, which makes them hard to estimate. Learning the HRTF of each human subject makes sound externalization (or spatialization) feasible: if the HRTF of another person or a generic one is used instead, the sound does not appear natural, leading to a poor user experience.

HRTF measurement setup inside anechoic room

Estimating subject-specific HRTFs is not trivial: it requires sophisticated infrastructure and must be repeated for each individual. Providing a simple yet accurate HRTF estimation procedure, whose resulting transfer function can be used in subsequent applications, can help spread the use of 3D sound.
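Once a subject's HRTF is known, spatialization reduces to filtering the mono source with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterpart of the HRTF) for the desired direction. Below is a minimal sketch in Python/NumPy; the HRIRs here are synthetic toys (a pure delay and attenuation), not measured data, and only illustrate the filtering step.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolving it with the
    left- and right-ear HRIRs for the desired source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs (assumption, not measured data): the right ear receives the
# sound 30 samples later and attenuated, mimicking a source on the left.
fs = 44100
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)   # 1 s, 440 Hz tone
hrir_left = np.zeros(64)
hrir_left[0] = 1.0                   # direct path to the near (left) ear
hrir_right = np.zeros(64)
hrir_right[30] = 0.6                 # delayed, attenuated path to the far ear

binaural = spatialize(mono, hrir_left, hrir_right)
print(binaural.shape)                # (2, 44163): stereo, full convolution
```

In practice the HRIR pair is selected (or interpolated) from the subject's measured HRTF set for the target azimuth and elevation, which is exactly why individualized HRTF estimation matters.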

3-Binaural Localization

While using earbuds, a person is isolated from the surrounding environment and therefore needs an artificial auditory system to monitor the surroundings. As with the human auditory system, the artificial one must be capable of localizing, extracting, recognizing, and interpreting sounds. Localization is the ability to determine the location of a sound source in 3D space. Sound localization is important for safety, such as avoiding oncoming traffic, an approaching cyclist on a running path, or a falling object. With the ability to localize, listeners can turn toward a sound source and take advantage of additional visual cues to enhance communication in adverse listening conditions.
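The most basic binaural localization cue is the interaural time difference (ITD): sound reaches the nearer ear first, and the lag between the two ear signals constrains the source azimuth. The sketch below estimates the ITD from the peak of the cross-correlation and maps it to an azimuth; the head radius and the far-field sine-law mapping are simplifying assumptions for illustration, not the project's actual model.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, rough average human head (assumption)

def estimate_itd(left, right, fs):
    """ITD from the peak of the cross-correlation of the two ear signals.
    Positive ITD means the left-ear signal lags, i.e. the right ear
    heard the sound first (source on the listener's right)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / fs

def itd_to_azimuth(itd):
    """Map ITD to azimuth with a coarse far-field model:
    itd ~ d * sin(theta) / c, with inter-ear distance d = 2r."""
    sin_theta = np.clip(itd * SPEED_OF_SOUND / (2 * HEAD_RADIUS), -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))   # positive = to the right

# Toy test: a noise burst arriving at the right ear 20 samples earlier
fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
delay = 20
left = np.concatenate([np.zeros(delay), sig])   # left ear lags
right = np.concatenate([sig, np.zeros(delay)])  # right ear leads
itd = estimate_itd(left, right, fs)
print(itd, itd_to_azimuth(itd))
```

Real binaural localization must also resolve the front-back ambiguity this cue leaves open, which is where interaural level differences and the subject's HRTF spectral cues come in.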

Publications #

  1. Navid Zandi, Awny M. El-Mohandes, and Rong Zheng. “Individualizing Head-Related Transfer Functions for Binaural Acoustic Applications”.
  2. Awny El-Mohandes, Navid Zandi, and Rong Zheng. “DeepBSL: 3D Personalized Deep Binaural Sound Localization”, IEEE Internet of Things Journal, 2023.

Downloads #

Acknowledgment #
