How Acoustics Might Help Prevent Car Accidents

Car seats save lives, but ultimately the best way to protect passengers is to prevent crashes from happening in the first place. Prevention has been tackled from many different directions: insurance incentives for defensive driving, legal consequences for reckless driving, backup cameras, more effective headlights, reduced speed limits, and safer intersections, among others. But it’s still not enough.

Accident. Credit: thinktk (CC BY-NC 2.0).

One of the more recent prevention approaches is outfitting cars with advanced collision warning systems. By integrating radar, laser, camera, and sometimes sonar technology, these systems use math and physics to determine the relative speed between the car and objects in its path. If a system senses that the car is approaching a slow-moving vehicle or stationary object too quickly, it beeps, vibrates, or otherwise prompts the driver to react.
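To get a feel for the math involved, here is a minimal sketch in Python of how such a system might turn a measured gap and closing speed into a warning. The numbers and the threshold are invented for illustration; this is not any manufacturer's actual logic.

```python
# Illustrative sketch only: estimate time-to-collision (TTC) from the gap to
# the object ahead and the closing speed, and warn when TTC gets too small.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # not closing in; no collision course
        return float("inf")
    return gap_m / closing_speed_mps

WARN_THRESHOLD_S = 2.5                  # made-up threshold for this example

gap = 30.0                              # meters to the car ahead (from radar/camera)
closing = 15.0                          # approaching 15 m/s faster than it is moving

if time_to_collision(gap, closing) < WARN_THRESHOLD_S:
    print("Forward collision warning: brake or steer!")
```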

Drivers don’t always react appropriately or quickly enough though, so many car manufacturers are now combining warning systems with automatic braking; the car reacts even if the driver doesn’t. Computers can calculate the outcome of many different scenarios quickly and choose the reaction most likely to produce a good outcome, so manufacturers may be onto something. Similarly, studies suggest that the widespread use of self-driving cars could dramatically reduce the number of crashes. It’s not yet clear by how much. That depends on many factors, including how well the cars are equipped to sense impending crashes and avoid them.

Skid marks from tires on paved road. Credit: Robert (CC BY-SA 3.0).

But even if we adopt self-driving cars and equip them with current collision avoidance systems, it’s not going to be enough, according to Keegan Yi Hang Sim, a student researcher at Hong Kong University of Science and Technology working with professor Kevin Chau. Different systems have different limitations, but most perform worse in low light, very bright light, dense fog, around curves, on steep hills, and when a vehicle approaches the car quickly from the side.

With this in mind, Sim and fellow scientists at Hong Kong University of Science and Technology have been exploring the feasibility of a new kind of collision warning system. This system doesn’t rely on cameras, lasers, or the physical detection of nearby objects at all. It’s based on audible sound. (This is different from the sonar systems sometimes incorporated into current systems. Sonar relies on sound too high-pitched for humans to hear.)

The idea behind this new system is to capture and analyze the sounds surrounding a car in motion (passing traffic, pedestrians, birds singing) and then raise the alarm when squealing tires or blaring horns suddenly interrupt the soundtrack. This would complement rather than replace existing systems, and it would be especially valuable in situations with reduced visibility. At a meeting of the Acoustical Society of America earlier this month, the team demonstrated a first step toward assessing the feasibility of this approach: a computer algorithm for detecting and isolating the sound of honking horns and skidding tires.

Top: Audio signal plotted against time (in seconds). Bottom: Spectrogram of the audio, with frequency on the vertical axis and time on the horizontal axis. The brightness reflects the magnitude of a particular frequency component. The visible differences between honking, tire skidding, and collision suggest that mathematical methods should be able to detect and isolate the sounds. Also note in the spectrogram that the collision contains all frequencies, while tire skidding primarily contains lower frequencies and has two distinct bands around 2000 Hz. Credit: Keegan Yi Hang Sim, Yijia Chen, Yuxuan Wan, and Kevin Chau.

All day long, our ears capture a stream of vibrations from the surrounding air. These vibrations, or waves, are interpreted by our brains as sounds. We differentiate between sounds by their pitch (which corresponds to frequency), volume (which corresponds to amplitude), and duration, and by how these components are integrated. If you were to record the noise around you and graph it with decibels (which correspond to the intensity of the signal) on the vertical axis and frequency on the horizontal axis, the signal might look something like this.

Credit: Public domain.

The voices of your loved ones, the sounds on the radio, and the bark of the dog down the street would all be captured in this kind of sound spectrum.
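If you’d like to see how such a spectrum comes about, the short Python sketch below builds a fake one-second recording out of two tones plus noise and converts it to decibels versus frequency with a Fourier transform. The sample rate, tones, and noise level are all invented for illustration.

```python
# Turn a (synthetic) recording into a decibel-versus-frequency spectrum.
import numpy as np

fs = 8000                                    # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)                # one second of "audio"
signal = (np.sin(2 * np.pi * 440 * t)        # a 440 Hz tone
          + 0.5 * np.sin(2 * np.pi * 1000 * t)   # a quieter 1000 Hz tone
          + 0.1 * np.random.randn(t.size))   # background hiss

spectrum = np.fft.rfft(signal)               # frequency content of the signal
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # convert to decibels

# The two tones show up as peaks; the noise forms a low, broad floor.
for target in (440, 1000):
    idx = np.argmin(np.abs(freqs - target))
    print(f"{freqs[idx]:.0f} Hz: {magnitude_db[idx]:.1f} dB")
```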

Our brains are constantly filtering through all of the noise we hear, searching for important patterns. That’s how a parent can recognize their baby’s cry and why you home in on your own ringtone but not anyone else’s. Similarly, researchers have tools for analyzing audio signals and searching for patterns that correspond to specific sounds.

For this project, the researchers used a mathematical tool called the discrete wavelet transform (DWT). In DWT, the overall signal (the wave) is broken into groups of smaller signals (wavelets) according to two characteristics. First, all of the wavelets in a group have to be dominated by the same frequency (pitch). Second, all of the wavelets have to be centered around a specific point in time and have a limited duration.
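As a rough illustration (not the team’s actual code), here is what a discrete wavelet transform looks like in Python using the PyWavelets library. The wavelet family, number of levels, and the stand-in street recording are assumptions made for the example.

```python
# Decompose a signal into a coarse approximation plus detail bands with a DWT.
import numpy as np
import pywt

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Stand-in for a street recording: a steady low-frequency rumble plus a
# short, loud 2 kHz burst partway through (a crude "squeal").
recording = 0.2 * np.sin(2 * np.pi * 60 * t)
burst = (t > 0.5) & (t < 0.6)
recording[burst] += np.sin(2 * np.pi * 2000 * t[burst])

# wavedec splits the signal into one coarse approximation plus several
# detail bands, each tied to a frequency range and localized in time.
coeffs = pywt.wavedec(recording, wavelet="db4", level=5)
for i, c in enumerate(coeffs):
    label = "approximation" if i == 0 else f"detail level {len(coeffs) - i}"
    print(f"{label}: {c.size} coefficients")
```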

Repeating this process several times makes it possible to eliminate the drone of background noise because it’s not dominated by any one frequency and it lasts a long time. Squealing tires would stand out, though, as would a dog bark, because they have both a dominant frequency and a short time span.
Once you’ve identified the groups of wavelets that meet these criteria, you can cycle through them and look for patterns that match the characteristic frequencies, relative amplitudes, and durations of a particular signal: the variables that differentiate a honk from a bark.
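Continuing the illustrative sketch above, one simple way (assumed here, not taken from the team’s work) to flag a short, loud event is to compare the local energy in each detail band against that band’s typical energy; a real detector would go further and match the flagged bursts against the characteristic frequencies and durations of horns and skids.

```python
# Flag short, loud bursts in the DWT detail bands of a signal.
import numpy as np
import pywt

def flag_transients(signal, wavelet="db4", level=5, window=64, ratio=5.0):
    """Return (band_index, window_start) pairs where local energy spikes."""
    coeffs = pywt.wavedec(signal, wavelet=wavelet, level=level)
    hits = []
    for band, c in enumerate(coeffs[1:], start=1):   # skip the coarse approximation
        baseline = np.mean(c ** 2) + 1e-12           # typical energy in this band
        for start in range(0, c.size - window, window):
            local = np.mean(c[start:start + window] ** 2)
            if local > ratio * baseline:             # short, loud burst in this band
                hits.append((band, start))
    return hits
```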

This might sound like a long, slow process, but not for a computer. The program designed by Sim and the rest of the team is so fast that it can successfully detect honking horns and squealing tires in real time. That means it could give cars and drivers precious time to engage safety systems and take action to reduce the chance of a collision.

You can hear their results in the following series of audio files. As a byproduct of the algorithm, the isolated noises in files 2 through 4 don’t sound like you’d expect, but that doesn’t affect the computer’s ability to detect the signal. The files are used with permission and were created by the team: Keegan Yi Hang Sim, Yijia Chen, Yuxuan Wan, and Kevin Chau.

In the original audio of this car crash, you can hear three distinct sections: horns honking, tires skidding, and cars colliding.

The sound of the horns only, as isolated by the team’s algorithm.

The sound of the tires only, as isolated by the team’s algorithm.

The sound of the collision only, as isolated by the team’s algorithm.

So, will we see this technology in cars anytime soon? Best case scenario, it’s going to be a long road. Debuting technology in the lab is the first step toward bringing it to the streets, but there’s a lot of work that needs to happen in between. That’s especially true when human lives are on the line. In the meantime, PhysicsBuzz will keep an ear out for progress and keep you posted!

 –Kendra Redmond
