Now, rather than focusing on overall accuracy alone, the model's behavior in the presence of adversaries is scrutinized. To introduce adversaries, available cell images from three different classes were modified and passed through the network to check its output behavior (Fig. 8). Figure 8 shows that the change in confidence depends largely on how much the regions of highest activation change: a significant shift in the activation regions lowers the confidence score. It also shows that even after part of a cell is obfuscated, the confidence score remains high for class 4 because its activation regions change very little, whereas for classes 2 and 3 the network is much less confident because the obfuscation strongly affects their activation regions. In summary, an attacker might be able to circumvent the system by slightly tweaking the cell structure.

To handle this issue, a network that is robust to out-of-distribution (OOD) data must be built, one able to detect both unknown and slightly modified gates. The authors plan to address these issues in future work.
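The article does not describe an implementation of this confidence check or of the planned OOD-aware network; the sketch below is only an illustration, under stated assumptions, of how such a screening step could look. It uses a maximum-softmax-probability baseline on a PyTorch classifier of SEM cell images; the ResNet-18 backbone, the four-class setup, the checkpoint filename, and the 0.80 confidence threshold are illustrative assumptions, not details taken from the article.

```python
# Illustrative sketch (not the authors' implementation): score a cell image
# with a trained classifier and flag low-confidence inputs as potentially
# obfuscated or out-of-distribution (OOD).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 4        # assumption: four cell/gate classes, as in Fig. 8
OOD_THRESHOLD = 0.80   # assumption: confidence below this is flagged for review

# Hypothetical backbone: ResNet-18 with its final layer resized to the cell classes.
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
# In practice the trained weights would be loaded here, e.g.:
# model.load_state_dict(torch.load("cell_classifier.pt"))  # hypothetical checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # SEM images are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_with_confidence(image_path):
    """Return (predicted class index, max softmax probability) for one cell image."""
    x = preprocess(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    return pred.item(), conf.item()

def screen_cell_image(image_path):
    """Accept high-confidence predictions; flag the rest as possible OOD or modified cells."""
    pred, conf = classify_with_confidence(image_path)
    status = "accept" if conf >= OOD_THRESHOLD else "flag as OOD / possibly modified"
    print(f"{image_path}: class {pred}, confidence {conf:.2f} -> {status}")

# Example usage on an original and an obfuscated version of the same cell
# (file names are placeholders):
# screen_cell_image("cell_class2_original.png")
# screen_cell_image("cell_class2_obfuscated.png")
```

A plain softmax threshold like this is only a baseline; stronger OOD scores (energy-based, density-based, or ensemble disagreement) could be substituted without changing the surrounding screening workflow, which is presumably the kind of mechanism needed to catch unknown and slightly modified gates.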
CONCLUSION

This article scrutinizes several challenges of an end-to-end Trojan detection system and presents probable solutions. It also addresses one of the main bottlenecks of image-analysis-based research in the failure analysis domain: image acquisition time. An adversarial layout-to-SEM-image translation system is proposed, which is believed to be the first attempt at layout-to-image translation in this manner. In addition, to overcome data insufficiency, adversarial learning for synthetic image generation is adopted, which will pave the way for future research in this arena. The experiments were conducted on data collected in the lab, and the results strongly encourage further exploration in this direction. Beyond the system presented here, the proposed method opens new directions for real-time Trojan detection in hardware assurance and points to multiple research directions in the SEM image processing domain.

ABOUT THE AUTHORS

Md. Mahfuz Al Hasan is a Ph.D. student in electrical and computer engineering at the University of Florida. He received his B.S. in computer science and engineering in 2017 from Bangladesh University of Engineering and Technology (BUET). His research interests include deep learning and its implications for hardware assurance, representation learning for out-of-distribution images, and self-explainable AI.