
New chip from MIT could mean power-efficient AI in all your electronics

Researchers at MIT have developed a low-power chip specialized for automatic speech recognition that could result in a power savings of up to 99 percent.
Published on February 13, 2017

Although far from perfect, Apple’s Siri transformed how we perceive mobile artificial intelligence. Since then, we’ve seen similar attempts from various companies – from the disastrous S Voice to the more recent Google Assistant. In fact, 2017 is shaping up to be the year of AI: Android Wear 2.0 has Google’s virtual assistant built in, Samsung is rumored to be bringing an improved AI assistant to the Galaxy S8, and IoT home devices are becoming more and more commonplace.

However, these advanced virtual assistants rely on speech recognition, which often has to be always on in order to detect your commands. That means even the most power-efficient devices drain their batteries quickly. Now, researchers at MIT have built a chip designed specifically with that problem in mind. They explain that whereas a typical phone uses around 1 watt of power for speech recognition, the new chip would require only a fraction of that: 0.2 to 10 milliwatts.

Instead of running its full-scale neural networks all the time to detect every sound and noise, the new chip has a simpler “voice activity detection” circuit that can spot human speech. Once it detects human speech, the chip “fires up the larger, more complex speech-recognition circuit,” according to MIT. The result is a power savings of anywhere from 90 percent all the way up to 99 percent, meaning even small, simple electronic devices could run advanced speech-recognition systems and AI assistants.
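As a rough illustration of that two-stage design, the sketch below gates an expensive recognizer behind a cheap, always-on energy check. The threshold, frame size, and both functions are illustrative assumptions for this article, not details of MIT’s actual chip:

```python
# Illustrative sketch of a two-stage, power-gated speech pipeline:
# a cheap "voice activity detection" (VAD) stage runs constantly,
# and the expensive recognition stage only wakes when speech is likely.

FRAME_SIZE = 160          # assumed: 10 ms of audio at 16 kHz
ENERGY_THRESHOLD = 0.01   # assumed tuning value, not from the article

def voice_activity_detected(frame):
    """Cheap always-on stage: mean signal energy above a threshold."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > ENERGY_THRESHOLD

def recognize_speech(frame):
    """Placeholder for the larger, more complex recognition circuit."""
    return "<recognized speech>"

def process_audio(frames):
    """Only wake the expensive recognizer for frames that pass the VAD."""
    results = []
    for frame in frames:
        if voice_activity_detected(frame):          # low-power stage
            results.append(recognize_speech(frame)) # high-power stage
    return results

silence = [0.0] * FRAME_SIZE
speech = [0.5] * FRAME_SIZE
print(process_audio([silence, speech, silence]))  # prints ['<recognized speech>']
```

Silent frames never reach the recognizer, which is where the claimed savings come from: the heavy circuit is off for the vast majority of audio a device actually hears.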

As Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, explains, the sheer efficiency of this new chip could mean wider use of speech input capabilities:

Speech input will become a natural interface for many wearable applications and intelligent devices. The miniaturization of these devices will require a different interface than touch or keyboard. It will be critical to embed the speech functionality locally to save system energy consumption compared to performing this operation in the cloud.

As wearables – smartwatches, earphones, glasses, and the like – become more popular, speech will become an essential mode of human-to-device communication. Given how small wearable devices typically are, MIT’s new power-efficient chip could be the key to building gadgets that last long enough to be genuinely useful.