Efficient Deep Learning at Scale
Following advances in high-performance computing systems and the rapid growth of data acquisition, machine learning, especially deep neural networks (DNNs), has achieved remarkable success in many research areas and application domains. This success is enabled, to a great extent, by large-scale network models that learn from huge volumes of data. Although hardware acceleration for neural networks has been extensively studied, hardware development still falls far behind the upscaling of DNN models at the software level. Thus, the efficient deployment of DNN models emerges as a major challenge. For example, the massive number of parameters and the high computation demand make it difficult to deploy state-of-the-art DNNs onto resource-constrained devices. Compared to inference, training a DNN is much more complicated and far more computation- and communication-intensive. A common practice is to distribute training across multiple nodes or heterogeneous accelerators, where balancing data processing against data exchange remains critical. We envision that software/hardware co-design is necessary for efficient deep learning. In this talk, I will start with trends in machine learning research, followed by our latest explorations of DNN model compression, architecture search, distributed learning, and the corresponding optimizations at the hardware level.
Prof. Hai “Helen” Li
Professor, Duke University on October 23, 2020 at 11:45 AM in Zoom Webinar
Hai “Helen” Li is the Clare Boothe Luce Professor and Associate Chair for Operations of the Department of Electrical and Computer Engineering at Duke University. She received her B.S. and M.S. from Tsinghua University and her Ph.D. from Purdue University. At Duke, she co-directs the Duke University Center for Computational Evolutionary Intelligence and the NSF IUCRC for Alternative Sustainable and Intelligent Computing (ASIC). Her research interests include machine learning acceleration and security, neuromorphic circuits and systems for brain-inspired computing, conventional and emerging memory design and architecture, and software/hardware co-design. She has received the NSF CAREER Award, the DARPA Young Faculty Award, the TUM-IAS Hans Fischer Fellowship from Germany, the ELATE Fellowship, eight best paper awards, and nine additional best paper nominations. Dr. Li is a fellow of IEEE and a distinguished member of ACM. For more information, please see her webpage at http://cei.pratt.duke.edu/.
The Department of Electrical and Computer Engineering hosts a regularly scheduled seminar series featuring preeminent and leading researchers in the US and around the world, to help promote North Carolina as a center of innovation and knowledge and to safeguard its place as a leader in research.