Singing to your data: A new generation of acoustic memory technology
December 04, 2015
The most exciting technology to hit the memory market since flash harkens back to Alan Turing’s early computing technology.
It’s a tenuous argument that one of the homes of embedded computing technology is none other than my own hometown of Letchworth Garden City, UK – for it was here the world-celebrated Enigma code breaking machines were manufactured, having been designed at (the far more famous) Bletchley Park. The cryptanalysis of the Enigma is said to have shortened the Second World War by two to four years and its realization into a physical device was Alan Turing’s masterpiece; for his work he is widely considered to be the father of theoretical computer science.
At this fledgling stage of computing, before technology we could realistically label as “electronics”, Turing’s early “computers” often implemented a form of memory that by today’s standards is archaic: acoustic delay line memory. Yet it appears that, all those decades ago, this memory laid the foundations for the next revolution in memory technology. The scheme cleverly employs longitudinal waves generated via a transducer, sent through a carefully selected liquid housed in a tube, then converted back to electrical energy by a second transducer.
The logic was easy to comprehend. Sound travels through liquid slowly and at a measurable rate, once the key variables of temperature and viscosity are determined, with lower values of each decreasing acoustic velocity. As for the liquid, mercury was originally utilized because its acoustic impedance is a near-identical match to that of piezoelectric quartz crystals, though variants of the tubes substituting oils of varying viscosities for the expensive and toxic mercury were trialled; low-viscosity liquids slowed the waves, allowing a line to hold more data at once but delivering it at a slower rate, while high viscosities achieved the opposite. Functionally, the memory device operated identically to FIFO (first in, first out) memory today, though the term “volatile memory” has never been more appropriate (Figure 1).
Figure 1. Mercury delay line memory. (Click to enlarge) Source: Computer Desktop Encyclopedia. 1981-2015. The Computer Language Company Inc. http://encyclopedia2.thefreedictionary.com/Mercury+delay+line+memory
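The trade-off above follows directly from the arithmetic of a delay line: its capacity is simply the number of bits “in flight” between the two transducers, i.e. transit time multiplied by clock rate. A minimal sketch of that relationship and of the FIFO recirculation behaviour (the line length, sound velocity, and clock figures here are illustrative assumptions, not values from any specific machine):

```python
# Toy model of an acoustic delay line used as FIFO memory.
# Capacity = bits simultaneously "in flight" = transit time x clock rate.
# All figures below are illustrative assumptions, not historical values.

from collections import deque

def delay_line_capacity(length_m: float, velocity_m_s: float, clock_hz: float) -> int:
    """Bits held in the line at once: (length / velocity) * clock."""
    transit_time_s = length_m / velocity_m_s
    return int(transit_time_s * clock_hz)

# Example: a 1.5 m mercury line (~1450 m/s sound speed) clocked at 1 MHz.
capacity = delay_line_capacity(1.5, 1450.0, 1.0e6)
print(capacity)  # 1034 bits in flight

# FIFO behaviour: bits emerge at the far end in the order they were written;
# re-writing the output back to the input refreshes the (highly volatile)
# store on every transit.
line = deque([0] * capacity, maxlen=capacity)
line.appendleft(1)   # write a bit at the transducer end
oldest = line.pop()  # the bit emerging at the far end
```

A slower wave (lower viscosity, per the text) stretches the transit time, so more bits fit in the tube at once, but each bit takes longer to come back around for reading.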
Whilst the technology was proven, liquid-based memory was always doomed to failure, with environmental conditions having such a profound effect on its reliability. Some attempts were made to resolve this, including dynamically adjusting the clock rate according to the ambient temperature to negate its effect on the acoustic velocity of the liquid – but a more fundamental change was required.
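That clock-rate compensation amounts to keeping the number of bits in flight constant as the sound velocity drifts with temperature. A toy sketch, assuming a simple linear velocity–temperature relationship (the coefficient and reference values are illustrative, not measured properties of mercury):

```python
# Toy temperature compensation for a liquid delay line.
# The clock is retuned so capacity (bits in flight) stays constant:
#   capacity = (length / velocity) * clock  =>  clock = capacity * velocity / length
# The linear velocity model below is an illustrative assumption.

def velocity_at(temp_c: float, v_ref: float = 1450.0, ref_c: float = 20.0,
                coeff_m_s_per_c: float = -0.5) -> float:
    """Sound velocity (m/s) under an assumed linear temperature dependence."""
    return v_ref + coeff_m_s_per_c * (temp_c - ref_c)

def compensated_clock(temp_c: float, length_m: float, capacity_bits: int) -> float:
    """Clock rate (Hz) that keeps `capacity_bits` circulating in the line."""
    return capacity_bits * velocity_at(temp_c) / length_m

# As the tube warms and sound slows, the clock must slow with it.
f_cold = compensated_clock(20.0, 1.5, 1024)
f_warm = compensated_clock(30.0, 1.5, 1024)
print(f_cold > f_warm)  # True: slower sound -> slower clock
```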
By the late 1950s, delay line memory migrated from liquids to a solid metallic core. One successful derivative employed a coiled 95-foot wire of a special nickel-iron-titanium alloy that was able to store 9,800 bits of data at any one time with a clock frequency of 1 MHz. Today, the technology is widely known as “racetrack” memory – and it’s easy to see why in an early implementation example in Figure 2.
Figure 2. A torsion wire delay line. (Click to enlarge) Source: Image by Coronium, via Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Torsion_wire_delay_line.jpg
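Those published figures are self-consistent: 9,800 bits circulating at 1 MHz implies a 9.8 ms round-trip delay, which over 95 feet of wire corresponds to a torsion-wave velocity of roughly 3 km/s. A short sketch to check the arithmetic (the derived velocity is an inference from the article’s numbers, not a quoted specification):

```python
# Cross-checking the torsion-wire delay line figures quoted above:
# 9,800 bits at a 1 MHz clock -> 9.8 ms round-trip delay.
# The implied wave velocity is an inference, not a quoted specification.

FEET_TO_M = 0.3048

length_m = 95 * FEET_TO_M          # ~28.96 m of coiled wire
capacity_bits = 9_800
clock_hz = 1.0e6

delay_s = capacity_bits / clock_hz     # time a bit spends in the wire
velocity_m_s = length_m / delay_s      # implied torsion-wave speed

print(f"delay    = {delay_s * 1e3:.1f} ms")    # 9.8 ms
print(f"velocity = {velocity_m_s:.0f} m/s")    # ~2955 m/s
```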
Those developing computer memory later diverted their attention to pursuing the promising field of magnetisation within ferrites, which drove the invention of the hugely successful electromechanical magnetic hard disk drives that reigned for decades until the new revolution of solid state memory pushed non-mechanical, high-speed storage into our computers. SSD technology vastly increased read/write speeds, but typically at the expense of the long operational life of a traditional hard disk drive.
Racetrack memory is a method, pioneered by IBM, of moving magnetically stored data electronically. It consists of millions of U-shaped nanowires around a micron in diameter, placed perpendicularly on a silicon substrate, with 100 bits of data magnetically written to each wire and shifted up and down it by applying a positive or negative pulse at one end – at speeds of 300 feet per second!
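Functionally, each nanowire behaves like a bidirectional shift register: a pulse of one polarity nudges every magnetic domain (bit) one position toward a fixed read element, and the opposite polarity shifts them back. A minimal sketch of that behaviour (the 100-bit width comes from the text; the class and method names are hypothetical illustrations, not IBM’s API):

```python
# Toy model of one racetrack nanowire as a bidirectional shift register.
# A positive pulse shifts every stored bit one position in one direction,
# a negative pulse shifts them back. Names here are hypothetical.

from collections import deque

class RacetrackWire:
    def __init__(self, capacity: int = 100):  # 100 bits per wire, per the text
        self.domains = deque([0] * capacity, maxlen=capacity)

    def pulse(self, polarity: int) -> None:
        """Shift all domains one position; the sign of the pulse sets direction."""
        self.domains.rotate(1 if polarity > 0 else -1)

    def read_head(self) -> int:
        """The bit currently sitting under the fixed read element."""
        return self.domains[0]

wire = RacetrackWire()
wire.domains[3] = 1        # a single magnetic domain somewhere along the wire
wire.pulse(-1)             # three negative pulses shift it toward the head
wire.pulse(-1)
wire.pulse(-1)
print(wire.read_head())    # 1: the domain has arrived under the head
```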
As if this wasn’t exciting enough, Research Fellow Dr. Tom Hayward at the University of Sheffield and Professor John Cunningham at the University of Leeds are proposing a new way of driving data through these wires. The existing implementation, which employs magnetic fields or electrical currents to move the data, creates heat and reduces power efficiency – significant drawbacks in today’s world of battery power and consciousness of environmental impact, particularly CO2 emissions. Dr. Hayward’s solution pays homage to those early liquid acoustic delay line tubes by exploiting the same high-speed sound waves used all those years ago.
Sound is employed via the medium of surface acoustic waves (SAWs); whilst these are the most destructive waves to emanate from an earthquake, those in the embedded industry will be more familiar with the term from SAW touchscreens. These project multiple acoustic waves across the face of a display to determine the precise location of a stylus or finger, and were designed for applications where traditional resistive touchscreens are fallible, though today they have increasingly been replaced by newer projected capacitive touchscreens.
“The key advantage of surface acoustic waves in this application is their ability to travel up to several centimetres without decaying, which at the nano-scale is a huge distance,” Dr. Hayward says. “Because of this, we think a single sound wave could be used to ‘sing’ to large numbers of nanowires simultaneously, enabling us to move a lot of data using very little power. We’re now aiming to create prototype devices in which this concept can be fully tested.”
The potential for this technology is mind-boggling. Storage remains the main bottleneck in computer operation, with CPU and RAM throughput orders of magnitude higher, and SAW-driven racetrack memory has the potential to combine the capacity of hard drives with the bandwidth of SDRAM, ultimately blurring the lines between the two. The technology promises to be scalable down to the smallest ICs, so this isn’t just a revolution for conventional computing – our embedded industry should be just as excited.
The technology isn’t just about speed and capacity, as the fundamental flaw of today’s storage technology is operational life; SAW-driven racetrack memory, being nanomagnetic, doesn’t degrade in the way SSDs do. Incidentally, the data is stored in a not dissimilar way to magnetic hard drives, and were it not for the inherent mechanical failure of the surrounding infrastructure, hard drives would last far longer than their already impressive life span. What personally captured my imagination was the discovery that the direction of data flow inherently depends upon the pitch of the sound generated – effectively, how you “sing” to the data dictates its movement.
For the embedded industry, the future promises an end to the predominant Achilles’ heel restricting operational longevity – flash lifetime – along with performance increases so substantial that RAM itself may struggle to justify its raison d’être. Commercialisation remains at least five years away, but as we’ve seen from early sequential refreshable memory, it does show that what goes around comes around.