Compression or Corruption? A Study on the Effects of Transient Faults on BNN Inference Accelerators

Document Type

Conference Proceeding

Date of Original Version

3-1-2020

Abstract

Over the past few years, the philosophy of designing artificial intelligence algorithms has shifted significantly toward automatically extracting composable systems from massive data volumes. This paradigm shift has been expedited by the big data boom, which makes it easy to access and analyze very large data sets. The best-known class of big data analysis techniques is deep learning. These models demand significant computational power and an extremely large number of memory accesses, which necessitates novel approaches that reduce memory traffic and improve power efficiency, alongside domain-specific hardware accelerators that can support current and future data sizes and model structures. Current trends in designing application-specific integrated circuits, however, barely consider the essential requirement that complex neural network computation remain resilient in the presence of soft errors. A soft error may strike either the memory storage or the combinational logic of a hardware accelerator and alter its architectural behavior such that the precision of the results falls below the minimum allowable correctness. In this study, we demonstrate that soft errors can cause drastic image misclassification in a customized class of deep learning models called Binarized Neural Networks. Our experimental results show that, in the worst-case fault-injection scenarios, the accuracy of the image classifier drops drastically, by 76.70% and 19.25% for the lfcW1A1 and cnvW1A1 networks, respectively, across the CIFAR-10 and MNIST datasets.
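The fault model described above can be illustrated with a minimal sketch: in a BNN, weights are constrained to {-1, +1} and stored as single bits, so a transient fault that flips a stored bit inverts a weight's sign. The snippet below models this with NumPy and counts how many output pre-activation signs change after random bit flips. All names (`binarize`, `inject_bit_flips`) and the layer dimensions are illustrative assumptions, not the authors' FINN-based experimental setup.

```python
import numpy as np

def binarize(x):
    # Standard BNN sign binarization: map real values to {-1, +1}.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def inject_bit_flips(weights, n_flips, rng):
    # Model a transient fault as a sign flip of randomly chosen
    # binarized weights (flipping the stored 0/1 bit inverts the sign).
    faulty = weights.copy()
    idx = rng.choice(faulty.size, size=n_flips, replace=False)
    faulty.reshape(-1)[idx] *= -1
    return faulty

rng = np.random.default_rng(0)
w = binarize(rng.standard_normal((64, 128)))   # one binarized FC layer
x = binarize(rng.standard_normal(64))          # binarized input activations

clean_out = x @ w                              # ±1 dot product (XNOR-popcount equivalent)
faulty_out = x @ inject_bit_flips(w, n_flips=100, rng=rng)

# Output neurons whose pre-activation sign changed would produce a
# different binarized activation downstream, propagating the error.
changed = int(np.sum(np.sign(clean_out) != np.sign(faulty_out)))
print(f"{changed} of {faulty_out.size} output signs flipped")
```

A sweep of `n_flips` against end-to-end classification accuracy is, in essence, the kind of worst-case fault-injection campaign the abstract reports.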

Publication Title

Proceedings - International Symposium on Quality Electronic Design, ISQED

Volume

2020-March
