MIT designs new chip to cater to AI needs
February 4, 2018, 10:37 pm

Machine learning and artificial intelligence (AI) have been advancing at a rapid rate in recent years, but the computer processors on which these programs run have failed to keep pace. While processor manufacturers have experimented with rejigging processor architectures in a bid to boost speeds and meet the growing demand from AI, it has clearly not been enough.

Now researchers at the Massachusetts Institute of Technology (MIT) say they are working on a new chip that brings us closer to processors that work like the brain while using far less energy. The new design, an example of ‘neuromorphic computing’, could potentially lead to processors that run machine-learning tasks with up to 1,000 times lower energy demands and enable more devices to have AI abilities such as voice and image recognition.

The key difference between neuromorphic chips and the ones currently used in computers is that they process data in an analog, rather than a digital, fashion. Instead of sending information as a series of on/off electrical bursts, they vary the intensity of these signals, much as the connections called synapses do in the human brain.

What this effectively means is that more information can be packed into each jolt, drastically reducing the amount of power needed. It could be likened to the difference between Morse code and human speech. The former encodes data using just two outputs, dots and dashes, making meanings easy to understand but lengthy to communicate. Speech, by comparison, can be difficult to interpret (think fuzzy phone lines and noisy cafes), but each individual utterance holds much more data.
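To make the analogy concrete, the short Python sketch below is purely illustrative and not code from the MIT work: the digital_encoding and analog_encoding helpers, the 8-bit resolution, and the example value of 0.73 are all assumptions, chosen only to show how a single graded pulse can carry what would otherwise take a whole train of on/off bursts.

```python
def digital_encoding(value, bits=8):
    """Send a value in [0, 1) as a train of on/off pulses, one per bit."""
    level = int(value * (2 ** bits))
    return [(level >> i) & 1 for i in reversed(range(bits))]

def analog_encoding(value):
    """Send the same value as a single pulse whose amplitude carries it."""
    return [value]

weight = 0.73                      # a hypothetical synaptic strength
print(digital_encoding(weight))    # eight on/off bursts: [1, 0, 1, 1, 1, 0, 1, 0]
print(analog_encoding(weight))     # one graded burst: [0.73]
```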

A big difficulty with building neuromorphic chips, though, is precisely controlling these analog signals: their intensity needs to vary in a controlled and consistent fashion. Previous attempts to find a suitable medium for these varying electrical signals to travel through proved unsuccessful because the current ends up spreading out unevenly. To fix this, the MIT researchers used crystalline forms of silicon and germanium that form lattices at the microscopic level. Together, these create clear pathways for the electrical signals, leading to much less variance in signal strength.

Using this medium, the researchers were able to train a neural network to recognize handwriting (a standard training task for new forms of AI) with 95 percent accuracy. That is below the 97 percent baseline achieved with existing algorithms and chips, but it is quite promising for a new technology.
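For context on the benchmark itself, the sketch below shows the kind of handwriting-recognition task described, run entirely in conventional software rather than on the MIT hardware. The use of scikit-learn's small digits dataset and a single 64-unit hidden layer are assumptions made for illustration, not details of the MIT experiment.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the 8x8 handwritten-digit images and split off a held-out test set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

# A small fully connected network trained on ordinary hardware; the dataset
# and layer size here are illustrative assumptions only.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

A conventional setup like this typically lands in the mid-to-high 90s percent range, which is the ballpark against which the 95 and 97 percent figures above are being compared.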

 
