2020 Embedded Processor Report: Back to the Future with Analog Computing
February 03, 2020
Analog computing – and even analog signal processing – appears to be making a comeback. Why?
While studying at MIT, Claude Shannon, widely regarded as “the father of information theory,” worked extensively with the Differential Analyzer developed a decade earlier. The Differential Analyzer was essentially the first general-purpose analog computer, and Shannon’s experience with the machine would have a seminal influence on later works such as A Mathematical Theory of Communication.
About the same time, Shannon’s contemporaries were making strides in digital computing systems that would be fully realized over the next two decades and culminate in the digital signal processing revolution of the 1970s and 80s.
But now, roughly 80 years after Shannon’s introduction to the Differential Analyzer, analog computing – and even analog signal processing – appear to be making a comeback.
“There are two significant reasons,” said Gene Frantz, VP of Engineering at Octavo Systems and a member of the design team responsible for the first digital signal processor, the Texas Instruments TMS5100.
“First, IC technology has advanced over the last five decades to make many of the things that were impossible or impractical now very doable,” Frantz continued. “Second, we are finding new problems worth solving that digital solutions are not adequate for – specifically, the need for higher performance and at the same time the need for significantly lower power dissipation.”
As Moore’s law draws to a close, the need for lower power and higher performance will be felt in an increasing number of application areas. And it is already generating renewed interest in analog computation in tasks ranging from mixed-signal signal processing (MSSP) for neural network workloads to dynamic system simulation using differential equations.
Back to basics with physical simulation
To illustrate the most basic advantages of analog computing, consider processing analog signals that are described by a set of differential equations. Because continuous time doesn’t exist in the digital computing paradigm, a digital computer must sample the input on every clock cycle to construct a sampled signal. This can result in many, many computations, which has the cascading effect of higher latency, increased power consumption, and so on.
Compare this to the massive parallelism of an analog computer. Rather than deconstructing inputs into sequential tasks, analog computing circuitry can be configured as basic units (adders/subtractors, multipliers, integrators, fanouts, nonlinear functions, etc.) that solve the differential equations in question while processing the entire input signal continuously (Figure 1).
Analog computing chips can solve differential equations significantly faster and at much lower power than digital alternatives. And while one drawback of analog computers is that the hardware must scale linearly with the size of the equation being solved, their massive parallelism means that the power and performance benefits scale as well.
Limited benchmarks support these claims. Figure 2 compares the power consumption and time taken to solve the Van der Pol equation on Sendyne’s Apollo IC versus a 25 MHz Texas Instruments MSP430 MCU.
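For a sense of the sequential work the digital side of that comparison entails, here is a minimal sketch (not Sendyne’s benchmark code) of forward-Euler integration of the Van der Pol equation: every output sample costs a fixed budget of multiplies and adds, the very work an analog integrator network performs continuously in parallel. Step size, parameters, and the operation count are illustrative.

```python
# Forward-Euler integration of the Van der Pol oscillator:
#   x'' - mu*(1 - x^2)*x' + x = 0
# rewritten as the first-order system x' = y, y' = mu*(1 - x^2)*y - x.
# A digital processor pays a fixed number of sequential multiplies and
# adds for every output sample it produces.

def van_der_pol_euler(mu=1.0, x0=2.0, y0=0.0, dt=1e-3, steps=10_000):
    x, y = x0, y0
    ops = 0  # rough running count of arithmetic operations
    for _ in range(steps):
        dx = y
        dy = mu * (1.0 - x * x) * y - x
        x += dt * dx
        y += dt * dy
        ops += 9  # 5 multiplies + 4 adds/subtracts per step
    return x, y, ops

x, y, ops = van_der_pol_euler()
print(f"x = {x:.3f}, y = {y:.3f}, ~{ops} arithmetic ops for 10k samples")
```

Ninety thousand sequential operations for ten thousand output samples of a single two-state oscillator gives a feel for why a bank of analog integrators, which solves the same system in continuous time, wins on both latency and energy.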
The Sendyne Apollo IC is a 4 × 4 mm general-purpose analog computer fabricated in CMOS. The chip, originally developed by a team of researchers at Columbia University, contains 16 analog integrators and uses 1 V circuitry to generate outputs at a cost of only microjoules of energy. It also contains specialized ADCs that minimize conversion costs.
“If you’re dealing with analog signals and can skip the step of converting from analog to digital and then back to analog, that is obviously the advantage,” said John Milios, CEO of Sendyne. “There are special ADCs that basically don’t do any conversion or consume any power unless there is a change in the input signal, so you don’t have any significant power loss.
“If you think about the complicated problems that require a lot of parallel operations and have to execute millions of times, then you can see a very significant benefit,” he added.
(Editor's Note: For more on the evolution of analog computing read "Not Your Father's Analog Computer" by Columbia University professor Yannis Tsividis.)
Neural networks mix it up with analog signal processing
Beyond the realm of differential equations, analog-based arithmetic logic units (ALUs) are also gaining traction in the world of MSSP.
“Each time we reduce the size of the multiplier in half – say from a 32-bit multiply to a 16-bit multiply – the performance increases roughly by an order of magnitude while the power dissipation is reduced for each multiply by the same ratio,” Frantz explained. “So going from a digital 32-bit multiplier to a 1-bit analog multiplier improves the performance by several orders of magnitude while reducing the power dissipation by the same several orders of magnitude.
“At the same time, the number of transistors necessary to do the multiply is reduced from tens of thousands to tens,” he said.
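Frantz’s transistor-count argument tracks the textbook observation that an array multiplier needs on the order of n² adder cells for an n-bit multiply, so cost falls quadratically as word length shrinks. A rough sketch of that scaling (an approximation, not a measurement of any particular part):

```python
# A classic array multiplier uses roughly n*n full-adder cells for an
# n-bit multiply, so halving the word length quarters the cell count.
# (Textbook approximation; real multiplier architectures vary.)

def relative_multiplier_cost(bits, baseline_bits=32):
    """Cell count of an n-bit array multiply relative to a 32-bit one."""
    return (bits * bits) / (baseline_bits * baseline_bits)

for n in (32, 16, 8, 4, 1):
    print(f"{n:2d}-bit multiply: ~{relative_multiplier_cost(n):.6f}x "
          f"the cells of a 32-bit multiply")
```

Under this simple model a 1-bit multiply needs roughly 1/1024th the cells of a 32-bit one, consistent in spirit with Frantz’s “tens of thousands to tens.”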
One application area that is starting to capitalize on analog signal processing is the nascent field of neural networking. Here, there are use cases like keyword recognition and certain types of image processing that can afford to trade the lower accuracy of analog for the power and performance improvements it provides.
“There is a growing recognition that machine learning workloads present a different type of workload than the applications that previous processors have been designed for,” said Jeremy Holleman, Ph.D., Chief Scientist at Syntiant. “It is computationally demanding, memory-centric, mostly deterministic, and can tolerate modest precision. All of those factors play to the strengths of analog computation.”
Syntiant is a Bosch-backed AI semiconductor startup out of Irvine, CA that focuses on processing deep learning algorithms in resource-constrained systems like wearables, earbuds, remote controls, and sensors.
“The whole idea is to stay in the analog domain from the sensor front end all the way until after the neural network processing,” said Marcellino Gemelli, Director of Business Development at Bosch Sensortec. “Another way to see it is to imagine neural network processing before the signal conditioning that occurs in an ADC like a sigma-delta.”
The goal of the architecture Gemelli describes is simple: keep digital cores asleep at all costs. Aspinity, another AI startup based in Pittsburgh, PA, has developed a reconfigurable chip called RAMP that replicates digital signal processing tasks in analog for precisely this purpose (Figure 3).
“All sensed data is naturally analog, yet we take all of that data and we automatically digitize it, and process it downstream in a digital core,” said Tom Doyle, CEO of Aspinity. “But if you implement that in analog transistors, you can do it efficiently and accurately as well.
“What we’re able to do is be precise enough early in the signal chain to monitor very specific changes in frequency,” he continued. “So right after the sensor, you have Aspinity’s RAMP core that’s looking at all of the raw analog sensor data. When we detect something like voice or a glass break, we wake up the ADC and DSP to run an FFT to get all the gobs of data that one would need to determine what they want to do next.”
According to Doyle, applications like glass-break detection and voice activation have experienced power savings of 10x or more using RAMP technology.
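The analyze-first architecture Doyle describes can be sketched in a few lines, with a simple amplitude threshold standing in for RAMP’s far more selective analog classification (all names and numbers here are illustrative):

```python
# Sketch of an analyze-first wake pipeline: a cheap always-on detector
# inspects raw samples and wakes the power-hungry ADC + DSP/FFT stage
# only when something interesting appears. A bare amplitude threshold
# stands in for the analog classification a chip like RAMP performs.

def always_on_detector(sample, threshold=0.5):
    """Cheap test applied to every raw sample."""
    return abs(sample) > threshold

def wake_pipeline(samples, threshold=0.5):
    wakeups = 0
    for s in samples:
        if always_on_detector(s, threshold):
            wakeups += 1  # here the ADC and DSP would be powered up
    return wakeups

quiet = [0.01, -0.02, 0.03] * 100          # background noise only
event = quiet + [0.9, 0.8]                 # a loud transient at the end
print(wake_pipeline(quiet), wake_pipeline(event))  # → 0 2
```

The digital stages stay asleep through 300 quiet samples and power up only twice, which is the mechanism behind the 10x power savings Doyle cites.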
Are we back to the future? Not quite.
While the potential of analog as an alternative or complementary processing technology is clear, it suffers from an extended absence in the commercial market. For one, there is limited information on how analog circuitry responds to the effects of temperature and aging. Another consideration is simply the ubiquity of digital interfaces today.
“In order to take full advantage of the low power and small die size, the signals will need to be tapped at the analog level, further upstream, which in turn requires the sensor vendors to introduce architecture changes,” Gemelli explained. “Currently it’s a hard sell because tapping in the analog domain requires a significant redesign of the sensors’ front ends.”
What could ultimately drive those redesigns is more widespread use of analog computing technology. Development tools that provide access to analog circuitry from digital environments would help in that regard, and progress is being made there now that analog hardware targets are becoming available (See Sidebar 1).
Of course, I use “becoming available” in the most literal sense. Sendyne’s chip is the product of academic research, and first-generation products from Aspinity, Syntiant, and others are just barely reaching the market.
However, our rapidly diminishing ability to advance speeds and densities in the digital domain is undeniable. And, at the same time, our demand for computing power is increasing exponentially.
What will it take to go back to the future with analog computing?
“It needs someone to invest in this high-risk opportunity and create the first solution,” Frantz said. “My estimate is that a new computer architecture costs in the range of $100 million up to $1 billion.
“The risk is great. The people who can do it are few. But the reward is great.”