Abstract
Deep neural networks (DNNs) require a huge number of multiply-and-accumulate (MAC) operations. To execute such operations efficiently, analog in-memory computing platforms based on emerging devices, e.g., resistive RAM (RRAM), have been introduced. These acceleration platforms rely on the analog properties of the devices and thus suffer from process variations and noise. Consequently, the weights of neural networks mapped onto these platforms can deviate from their expected values, which may lead to feature errors and a significant degradation of inference accuracy. In this talk, I will present an error correction method and a statistical training approach to improve the inference accuracy of neural networks implemented on in-memory computing platforms.
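As an illustrative sketch only, and not necessarily the method presented in the talk: one common way to realize variation-aware ("statistical") training is to inject device-like noise into the weights during the forward pass, so that training finds weights that remain accurate under random deviations. The minimal PyTorch layer below assumes a multiplicative Gaussian noise model with relative standard deviation sigma as a stand-in for RRAM conductance variation; both the layer name and the noise model are hypothetical.

    # Sketch of variation-aware ("statistical") training via weight noise
    # injection. The multiplicative Gaussian noise model is an assumption,
    # not the talk's actual device model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NoisyLinear(nn.Linear):
        """Linear layer whose weights are randomly perturbed during training,
        emulating device variation in an analog RRAM crossbar."""

        def __init__(self, in_features, out_features, sigma=0.1, bias=True):
            super().__init__(in_features, out_features, bias=bias)
            self.sigma = sigma  # assumed relative std. dev. of variation

        def forward(self, x):
            if self.training and self.sigma > 0:
                # Sample a fresh perturbation on every forward pass so the
                # network learns weights robust to random deviations.
                noise = 1.0 + self.sigma * torch.randn_like(self.weight)
                return F.linear(x, self.weight * noise, self.bias)
            return F.linear(x, self.weight, self.bias)

    # Usage: drop in place of nn.Linear and train as usual.
    model = nn.Sequential(NoisyLinear(784, 256, sigma=0.1), nn.ReLU(),
                          NoisyLinear(256, 10, sigma=0.1))

At inference time the layer behaves like a standard nn.Linear; evaluating robustness would mean sampling perturbed copies of the trained weights and measuring the resulting accuracy spread.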
Short CV
Grace Li Zhang received the Dr.-Ing. degree from the Technical University of Munich (TUM), Munich, Germany, in 2018. Since 2020, she has been pursuing her Habilitation at the Chair of Electronic Design Automation at TUM, where she led a research group on heterogeneous computing. She joined TU Darmstadt in October 2022 as a Tenure Track Assistant Professor, where she leads the Hardware for Artificial Intelligence Group in the Department of Electrical Engineering and Information Technology. Her research focuses on efficient hardware-software architectures for machine learning, hardware acceleration of AI algorithms and systems, AI computing with emerging devices, e.g., RRAM and optical components, circuit and system design methodologies for AI, explainability of AI, and neuromorphic computing. She has served or is serving on the technical committees of several conferences, including DAC, ICCAD, ASP-DAC, and GLSVLSI.