It’s no secret that today’s computers are struggling to keep up with
the enormous demands of data processing and bandwidth, and the whole
electronics industry is searching for new ways to keep pace.
The traditional approach is to continue to push the limits of today’s
systems and chips. Another way is to go down a non-traditional route,
including an old idea that is gaining steam today: neuromorphic
computing.
Originally conceived by engineering guru
Carver Mead
in the 1980s, neuromorphic computing and its previous incarnation,
artificial neural networks, make use of specialized chips that are
inspired by the computational functions of the brain. Neuromorphic
technology, sometimes called brain-inspired computing, is a paradigm
shift that breaks away from Moore’s Law. Neuromorphic chips don’t
require costly leading-edge processes.
In simple terms, neuromorphic chips are fast pattern-matching engines
that process the data in the memory. In theory, these chips promise to
enable systems that can perform several tasks, such as computer vision,
data analytics and machine learning. The ultimate goal is to realize
true artificial intelligence (AI).
Today, Facebook, Google and others are handling many of these
intelligent-like tasks using traditional computers and chips. In this
topology, sometimes called the von Neumann architecture, the system has
three main components—a processor, memory, and storage. They are
connected via a system bus.
The industry, however, is running into an I/O bottleneck with today’s
systems, at least for many intelligent-like applications. So for these
apps, the industry hopes to develop a new class of neuromorphic systems
and chips, although they won’t replace traditional technology for the
foreseeable future.
“For many problems going forward, (von Neumann hardware) will still
be the right solution,” said Geoffrey Burr, a principal research staff
member at IBM Research. “But there’s an enormous amount of work that
needs to be done to make those (intelligent-like) algorithms work in
software on regular von Neumann hardware. The problem is that you need
this steady stream of data through the bus. So, you’re spending a lot of
energy and time shipping that data in and out.
“It would be ideal to bring the computation to where the data is,”
Burr said. “That’s where we see the opportunities for these neuromorphic
systems. It will accelerate machine learning.”
For that reason, neuromorphic technology is finally heating up after
years of research under the radar. Until recently, General Vision was
one of the few vendors shipping neuromorphic chips. But in a move that
could propel the technology, General Vision recently licensed its
intellectual property to
Intel, which is shipping embedded processors based on the technology.
In addition,
IBM
recently launched TrueNorth, a 1 million-neuron processor. Meanwhile, a
European consortium is pursuing the technology, as are HP, Qualcomm,
Samsung and several chip startups. And several universities and
government agencies, such as the U.S. Department of Defense (DoD), are
working on it.
Still, neuromorphic technology faces an uphill battle to gain broad
adoption. Some vendors are shipping neuromorphic chips, but others are
struggling. It’s difficult to replicate the functions of the human brain
in silicon, and the industry’s understanding of how the brain works
remains limited.
In addition, neuromorphic chips require a different approach to
programming. Plus, the current chips may need a memory
overhaul. All told, the neuromorphic chip market is a small business
today, but it is expected to grow at an annual rate of 26% and reach
$4.8 billion by 2022, according to Markets and Markets, a publisher of
research reports.
AI winter
The field of neuromorphic technology and its previous incarnation,
artificial neural networks, was a hot market in the 1980s. At the time,
several companies were pursuing this and other technologies to enable AI
and so-called expert systems. An expert system is a computer program
that mimics the decision-making ability of a human expert.
“Back in the 1980s, everybody thought expert systems were going to
take over the world,” said Dave Schubmehl, an analyst at International
Data Corp., a market research firm. “But we ended up having what we call
an AI winter. We really didn’t have enough data. We really didn’t have
enough compute power to make these expert systems actually useful.”
Needless to say, compute power has dramatically increased over the years, but the amount of data also has exploded.
Nevertheless, thanks to the increase in compute power, Facebook,
Google and others have developed an AI-like technology called machine
learning or deep learning. This technology makes use of software
algorithms, which in turn can learn and make predictions from various
data.
A subset of this technology is called
unsupervised machine learning.
This makes use of artificial neural networks to crunch the data.
“Essentially what happens is that the neural network algorithms crunch
on the data long enough to identify patterns and identify a set of
attributes associated with those patterns. Over time, it learns which of
those attributes are important,” Schubmehl said.
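To make the idea of unsupervised pattern discovery concrete, here is a minimal sketch in Python. It uses k-means clustering as a simple stand-in for the neural-network methods Schubmehl describes; the data, cluster count and iteration count are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(data, k=2, iters=20):
    """Repeatedly assign points to the nearest centroid and move each
    centroid to the mean of its points, so groupings (patterns) emerge
    from the data without any labels."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to the closest centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid toward the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels

# Invented data: two blobs of unlabeled 2-D points.
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centroids, labels = kmeans(data, k=2)
print(np.round(centroids, 2))   # roughly [0, 0] and [3, 3], in some order
```

The algorithm is handed only raw points, yet it recovers the two underlying groups, which is the essence of the unsupervised approach described above.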
“A lot of these applications are able to run effectively on
(traditional) GPUs,” he said. “The question is if we take that to a
non-von Neumann architecture, can we potentially get a quantum leap?
It’s really too early to tell if (neuromorphic technology) can really be
applied on a broad scale.”
Neuromorphic technology is appealing, though. “The von Neumann
architecture is more like serial execution of the instructions. You
access the memory from outside and somewhat closer to the CPU,” said
Srinivasa Banna, a fellow and director of advanced device architecture
at GlobalFoundries. “With neural networks, you can do things in parallel
with efficient elements. Those are all energy efficient. But it
requires innovation in the way the data is processed in parallel.”
What are neuromorphic chips?
To some degree, the industry has been shipping neuromorphic chips for
select markets, particularly for pattern matching applications. Here are
some hypothetical examples of possible uses:
• Military systems. Using neuromorphic technology, a drone could identify and match potential targets on the battlefield. It also could learn from new data in flight.
• Facial recognition. With a camera and neuromorphic chips, a system could accurately match images in real time.
• Industrial gear. Using the technology, a camera-enabled system could find small defects in chips in the fab without the need for today’s wafer inspection equipment.
“Today, computing is not good for technologies like parallel pattern
recognition,” said Guy Paillet, chief executive of General Vision, a
supplier of neuromorphic chips. “I am not saying a computer, a CPU, a
GPU and all can’t do it. They are not very efficient.”
Indeed, neuromorphic chips are different from traditional chips.
According to Stanford, there are two classes of neuromorphic
computing: artificial neural networks, and biology-based learning models
that mimic the brain.
To some degree, chips based on the neural network approach have
gained traction. In contrast, chips based on the biology model are still
trying to get off the ground, as the industry has hit some roadblocks
in that arena.
In both approaches, neuromorphic chips consist of multiple neurons
and synapses. These aren’t biological neurons and synapses, but they
mimic the functions of these structures.
For example, General Vision’s neuromorphic chip consists of 1,024
neurons, all interconnected and working in parallel. A neuron is a
256-byte memory based on SRAM, plus 3,000 logic gates.
Multiple chips can be daisy-chained together, forming the basis of an
artificial neural network with a multitude of neurons and synapses. A
synapse connects one neuron to another.
Basically, an artificial neural network has three layers: input,
hidden, and output. In operation, a pattern is first written to a neuron
in the input layer. The pattern is then broadcast to the neurons in the
hidden layers, and each neuron reacts to the data. Using a weighted
system of connections, the neuron that most closely matches the pattern
reacts the strongest, and the answer is revealed in the output layer.
A neural network also includes a learning mechanism: the weights of the connections are modified based on the input patterns.
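The layer structure and weight updates described above can be illustrated with a small, generic feedforward network. The sketch below (Python with NumPy) is not modeled on any particular neuromorphic chip; the layer sizes, learning rate, training data and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Three layers: 2 inputs -> 8 hidden neurons -> 1 output (sizes are arbitrary).
W1 = rng.normal(scale=1.0, size=(2, 8))   # input-to-hidden "synapses"
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden-to-output "synapses"

# Toy training patterns (XOR), purely for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 1.0
for _ in range(10000):
    # Forward pass: the input pattern is broadcast through the weighted connections.
    h = sigmoid(X @ W1)       # hidden-layer responses
    out = sigmoid(h @ W2)     # output-layer response

    # Learning mechanism: nudge the connection weights to reduce the error
    # between the network's answer and the desired output (backpropagation).
    err_out = (y - out) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ err_out
    W1 += lr * X.T @ err_h

# After training, the outputs move toward the target pattern [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```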
And unlike today’s chips, neuromorphic devices perform the processing
in memory. This enables faster processing, but the current chips are
based on SRAM. SRAM is fast, but it is power-hungry and takes up too
much area. So neuromorphic chipmakers hope to move from SRAM toward a
next-generation memory type, such as phase-change memory or ReRAM.
In addition, phase-change and ReRAM promise to enable spike-based
learning techniques in chips. In biology, neurons send messages to each
other via precisely-timed pulses or spikes.
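A leaky integrate-and-fire model is one common way to capture this timing element in simulation. The sketch below is a generic textbook model written in Python; the membrane time constant, threshold and input drive are illustrative values, not parameters of any chip discussed here.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential integrates
    its input, leaks back toward rest, and emits a spike whenever it
    crosses the threshold."""
    v = v_rest
    spike_times = []
    for step, drive in enumerate(input_current):
        v += (-(v - v_rest) + drive) * (dt / tau)   # leaky integration
        if v >= v_thresh:
            spike_times.append(step * dt)           # record the spike time
            v = v_reset                             # reset after the spike
    return spike_times

# A constant drive produces regularly timed spikes, so information can be
# carried in when (and how often) the neuron fires.
print(lif_neuron(np.full(200, 1.5)))
```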
The neuromorphic community also wants to bring spiking, or a timing
element, into its chips, but this is one of the major roadblocks in
the industry. “From an engineering and electrical perspective, spike
coding can be more complex to implement,” said Christian Gamrat, a
researcher from
CEA-Leti.
For its part, CEA-Leti is developing a memristive-based device array
for use in spike-based coding. The array is based on a 1T-1R CBRAM
technology. “Although this is a work in progress, we believe that the
combination of memristive technologies and spike-based coding is a
promising way for the efficient implementation of embedded neuromorphic
architectures,” Gamrat said.
In any case, after years of promises, the industry is finally showing
results on several fronts. On one front, General Vision for some time
has been shipping 130nm neuromorphic chips, mainly for industrial and
related applications.
General Vision also licensed its IP to Intel, which is using the
technology in its so-called Curie Module. The module features a 32-bit
Quark SE embedded processor as well as General Vision’s pattern-matching
IP.
The module could propel neuromorphic technology into some broader
markets. “Curie’s dedicated sensor hub can be used in a variety of
ways,” said Brian Krzanich, chief executive of Intel, at a recent event.
The module is targeted for fitness trackers and other wearables.
General Vision’s IP optimizes the analysis of the sensor data in
systems, enabling fast identification of actions and motions.
Others are making progress in more traditional computing
applications. In 2013, for example, the European Union launched the
Human Brain Project, an effort to gain a better understanding of the
brain. As part of this effort, the project is developing two
neuromorphic computing platforms.
The goal of these projects is to accelerate machine learning in
high-end systems. One project, dubbed SpiNNaker, is a
massively-parallel computer architecture based on 1,036,800 ARM9-based
cores.
Meanwhile, the second project, called BrainScaleS, aims to develop a
180nm chip with 512 neurons and 131,072 synapses. The BrainScaleS
project also involves a chip based on mixed-signal technology. “A
mixed-signal approach combines space and energy efficient local analog
computing with the scalability of a binary system,” said Karlheinz
Meier, a co-leader of the project.
Meanwhile, IBM is going down the digital path. Last year, IBM rolled
out TrueNorth, a chip that consists of 1 million neurons, 256 million
synapses and 4,096 parallel cores. The 5.4 billion transistor device is
based on SRAM and a 28nm FD-SOI process.
IBM is also exploring the idea of migrating from SRAM to phase-change
memory. This architecture is potentially 25 times faster, at lower
power, than systems based on traditional GPUs, according to IBM.
“It’s going to be difficult for any nonvolatile memory element to have
the performance of SRAM. There will always be a place for SRAM, but SRAM
is area hungry,” IBM Research’s Burr said. “One of the nice things with
phase-change is that it has a huge range of resistive states, from very
resistive to quite conductive. So when you want it as an analog
element, it’s very attractive.
“By performing computation at the location of data, non-von Neumann
computing ought to provide power and speed benefits. For one such
non-von Neumann approach, on-chip training of large-scale artificial
neural networks using nonvolatile memory-based synapses, viability will
require at least two things,” Burr said. “First, despite the inherent
imperfections of nonvolatile memory devices, such as phase-change memory
or resistive RAM, (they) must achieve competitive performance levels
versus artificial neural networks trained using CPUs or GPUs. Second,
the benefits of performing computation at the data must confer a decided
advantage in either training, power or speed or preferably both.”
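To see why performing computation at the location of the data is attractive, consider how an array of analog nonvolatile memory cells can carry out a vector-matrix multiplication in a single step: input voltages on the rows produce column currents that sum according to Ohm’s and Kirchhoff’s laws. The Python sketch below is a rough behavioral model under assumed conductance values and noise levels, not a description of IBM’s hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each synaptic weight is stored as the conductance of one nonvolatile
# memory cell (e.g., phase-change or ReRAM) in a rows-by-columns crossbar.
weights = rng.uniform(0.0, 1.0, size=(4, 3))   # target weight matrix
g_max = 1e-4                                   # assumed maximum conductance (siemens)
conductances = weights * g_max

def crossbar_multiply(voltages, conductances, noise=0.05):
    """One in-memory multiply-accumulate: column current I_j = sum_i V_i * G_ij.
    Device imperfections are modeled as multiplicative noise on each cell."""
    noisy_g = conductances * (1 + noise * rng.standard_normal(conductances.shape))
    return voltages @ noisy_g                  # computed where the data is stored

x = np.array([0.2, 0.5, 0.1, 0.9])             # inputs encoded as row voltages
print(crossbar_multiply(x, conductances) / g_max)   # noisy analog result
print(x @ weights)                                  # ideal digital result
```

The entire weight matrix is read out as currents in one shot, with no data shipped across a bus, which is the power and speed benefit Burr describes; the price is the device imperfections modeled here as noise.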
In any case, the question is clear: “Where is the technology heading?”
“Machine learning exists today,” Burr said. “We realize that it’s not
enough. We would like to have machines that learn from their own
experiences. Beyond that, we are at the beginning of transcending from
machine learning and driving towards machine intelligence.”
For this, the industry could go in several directions. In the near term, companies will continue to use today’s systems.
Long term? The industry may use traditional chips. “There is also a
possibility that technologies like TrueNorth will lead to things that
are non-von Neumann hardware,” Burr said. “That may accelerate machine
learning. Five years from now, we will know whether it will happen or
not.”
http://semiengineering.com/neuromorphic-chip-biz-heats-up/