
GUEST COLUMNIST

RELIABILITY IMPLICATIONS OF ENERGY-EFFICIENT ANALOG PROCESSING

Matthew J. Marinella, Sandia National Laboratories
matthew.marinella@sandia.gov

Although we often measure computing performance in terms of operational speed, the advancement of computing is closely intertwined with energy efficiency. The power available for a particular application does not change very much over time. For example, a battery-operated system can draw only a few watts, and for decades a plug-in desktop system has been limited to hundreds of watts. Hence, performance per watt, or energy efficiency, is the main factor that determines performance for a particular computing form factor. Since the dawn of computing in the 1940s, state-of-the-art computing systems have improved in energy efficiency by more than an order of magnitude each decade. The result is astounding: modern computing systems are more than one trillion times more energy efficient than the first computers. Although these trends started decades before Moore's Law, CMOS scaling is one of the major contributors to these efficiency improvements. During the era of Dennard scaling, each successive generation of logic transistor switched at lower voltage and current, yielding significant energy improvements at each CMOS node.

Although CMOS continues to scale to smaller dimensions, the supply voltage used by each technology generation since 2004 has remained fairly constant, so technology scaling alone now delivers only modest energy efficiency improvements. However, architectural enhancements have enabled continued improvements in performance per watt. Some of these innovations are tailored to specific application domains, which has been especially important for the proliferation of deep learning across many industries over the last decade. Now even these architectural tricks are starting to saturate at an efficiency of around 1 to 5 tera operations per second per watt (TOPS/W, equivalent to 0.2 to 1 pJ per operation), leaving the community to search for new, innovative techniques to make computing more energy efficient. One of these is quantum computing, which is discussed in an article in this issue on page 28.
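As a quick sanity check on those units, 1 TOPS/W is 10^12 operations per joule, which works out to exactly 1 pJ per operation, and 5 TOPS/W to 0.2 pJ. A minimal sketch of the conversion (the helper name picojoules_per_op is purely illustrative):

def picojoules_per_op(tops_per_watt):
    """Energy per operation, in picojoules, implied by an efficiency in TOPS/W."""
    ops_per_joule = tops_per_watt * 1e12   # 1 W = 1 J/s and 1 TOPS = 1e12 ops/s
    return 1e12 / ops_per_joule            # joules per operation, expressed in pJ

for eff in (1, 5, 10, 100):
    print(f"{eff:>3} TOPS/W -> {picojoules_per_op(eff):g} pJ per operation")
# prints 1, 0.2, 0.1, and 0.01 pJ per operation, respectively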
Analog in-memory computing is another promising method of achieving performance per watt beyond what is possible in digital systems. This paradigm promises efficient processing and training of deep neural networks (DNNs) by storing the weights as analog memory values. Incoming data is processed while the weights remain physically in place, and the ubiquitous matrix-vector multiply is carried out as an analog sum of currents. This architectural technique can achieve 10 to possibly over 100 TOPS/W for neural network inference at the industry-standard resolution of 8 bits, which would enable revolutionary deep neural net capabilities even in highly power-constrained systems. A few of these early analog processors, such as Mythic's, are already entering the market, and several major companies have public development activities underway.

But we must remember that analog computing is an old concept that lost ground to digital because it could not deliver sufficiently precise answers. It is now reemerging because of its synergy with modern algorithms. DNNs turn out to sit squarely within the accuracy these analog systems can deliver, whereas applications that require higher precision do not. For example, I would not want my bank account balance calculated with an analog system.

Analog processors face a key challenge that their digital counterparts do not: the final computational result depends on the hardware, down to the device physics. In a digital chip, reliability is typically an all-or-nothing affair: if the chip is operating normally, a given input set will deterministically return the same result. Occasional bit errors may occur, but they are either corrected or the chip crashes. In other words, there is no mode of stable operation for a digital processor that includes significant numbers of uncorrected errors. Conversely, analog processors always operate with some amount of error. This is harmonious with the inherent error tolerance of DNN workloads.
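To make the current-summing dataflow, and its sensitivity to the underlying devices, concrete, the following is a minimal numerical sketch. The array size, conductance range, voltage scaling, and the 2% conductance variation are assumptions chosen purely for illustration; this is not a model of any particular analog processor.

import numpy as np

# Analog in-memory matrix-vector multiply, sketched numerically: weights are
# stored as conductances G, inputs are applied as voltages V, and each column's
# output is read as a summed current I = G^T V (Ohm's law per device,
# Kirchhoff's current law per column). All parameter values are illustrative.
rng = np.random.default_rng(0)

weights = rng.standard_normal((256, 64))    # one DNN layer's weight matrix
x = rng.standard_normal(256)                # input activation vector

g_max = 100e-6                              # assumed maximum device conductance (100 uS)
w_to_g = g_max / np.max(np.abs(weights))    # map weight units to siemens
v_per_unit = 0.2                            # assumed read voltage per unit of activation
G = weights * w_to_g                        # conductance array holding the weights in place
V = v_per_unit * x                          # inputs encoded as read voltages

# Every analog read sees slightly different device conductances, so the same
# input never returns exactly the same currents (about 2% variation assumed here).
G_read = G * (1.0 + 0.02 * rng.standard_normal(G.shape))

I_cols = G_read.T @ V                       # analog result: one summed current per column
analog = I_cols / (w_to_g * v_per_unit)     # scale the currents back to weight units

digital = weights.T @ x                     # deterministic digital reference
rel_err = np.linalg.norm(analog - digital) / np.linalg.norm(digital)
print(f"relative error of the analog result: {rel_err:.2%}")
# on the order of the assumed 2% device variation, and different on every read

Note that the weights never move during the computation; only input voltages and output currents cross the array boundary, which is the root of the paradigm's efficiency advantage and also why the result inherits the physics of every device in the array.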
