edfas.org 31 ELECTRONIC DEVICE FAILURE ANALYSIS | VOLUME 23 NO. 1

different convolutions (size: 1×1, 3×3, and 5×5) and one max pooling operation (size 3×3). This inception module is used here as a simulation block for the study of computational failure analysis.

Fig. 2 (a) Illustration of the switching phenomena in the RRAM device and the use of a threshold resistance (R_TH) to encode the analog RRAM resistance values in the LRS and HRS states into binary 0 and 1; (b) bit-wise replacement of '0' and '1' with the RRAM resistance encoding, where regions of resistance overlap correspond to the encoding of false-0 and false-1 (the bits in red); (c) incorporation of the hardware-encoded weights into the CNN framework and comparison with the algorithm-trained weights; (d) quantification of the classification error for a classical image-recognition study for edge computing.

Fig. 3 (a) Naive inception framework shown with different convolution and max pooling data paths; (b) overview of the Keras inception framework, wherein the inputs are the data image, the CNN weights, and the RRAM variability-coded, LUT-based CNN weights. The output is the difference in prediction error between the two parallel pipelines, namely the "software flow" and the "hardware simulated flow."

The simulation framework, as shown in Fig. 3b, takes three inputs: (A) the original input image, (B) the GPU-trained GoogleNet weights, referred to as "software weights," and (C) the RRAM resistance-encoded, LUT-based configured weights (illustrated earlier in Fig. 2b). The simulation framework outputs the prediction error of image classification induced by the RRAM resistance variability and distribution tail overlap. The difference in the resulting accuracy between the software and hardware (RRAM resistance-encoded LUT) pipelines gives the prediction error rate induced by the hardware realization.
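The two ideas above — thresholding analog RRAM resistances into binary weight bits, and differencing the classification error of the software and hardware-simulated pipelines — can be sketched as follows. This is a minimal illustration, not the study's actual code: the threshold value, function names, and toy resistance samples are all assumptions made for the example.

```python
import numpy as np

R_TH = 50e3  # hypothetical threshold resistance (ohms) separating LRS from HRS


def encode_resistance(r_ohm):
    """Encode analog RRAM resistances into binary bits: resistances below
    R_TH (LRS) map to 1, those above (HRS) map to 0. A device whose
    resistance lands on the wrong side of R_TH, due to overlap of the
    LRS/HRS distribution tails, yields a false-1 or false-0 bit."""
    return np.where(np.asarray(r_ohm) < R_TH, 1, 0)


def prediction_error_rate(software_preds, hardware_preds, labels):
    """Difference in classification error between the software pipeline
    (GPU-trained weights) and the hardware-simulated pipeline
    (RRAM resistance-encoded, LUT-based weights)."""
    sw_err = np.mean(software_preds != labels)
    hw_err = np.mean(hardware_preds != labels)
    return hw_err - sw_err


# Toy illustration: three bits intended as '1' are stored as LRS
# resistances; tail overlap pushes the last one above R_TH (a false-0).
lrs_samples = np.array([20e3, 45e3, 55e3])
print(encode_resistance(lrs_samples))  # -> [1 1 0]: one false-0 bit
```

In the full framework, the same image is classified twice, once with each weight set, and only the error difference is reported, so that any baseline inaccuracy of the trained network cancels out.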
ACCURACY QUANTIFICATION AND POWER CONSUMPTION TRENDS

The Keras framework outputs the difference in image-classification prediction error between the hardware- and software-trained weights. The prediction error is captured for different complexities of the CNN (1×1 conv, 1×1 → 3×3 conv, 1×1 → 3×3 → 3×3 conv, and 1×1 → 3×3 → 3×3 → 3×3 conv), and the simulations are repeated for all three I_comp values (2, 5, and 10 μA), as shown in Fig. 4a. The logic-0 and logic-1 data are encoded with the quantized LRS