Pruning deep neural networks from a sparsity perspective
Speaker: Ganghua Wang (University of Minnesota)
Venue: Room 1114, Sciences Building No. 1
Recently, deep network pruning has attracted significant attention because it enables the rapid deployment of AI on small devices with computation and memory constraints. Many deep pruning algorithms have been proposed with impressive empirical success. However, a theoretical understanding of model compression is still limited. One problem is to understand whether a network is more compressible than another of the same structure. Another problem is to quantify how much one can prune a network with theoretically guaranteed accuracy degradation. This talk addresses these two fundamental problems by using the sparsity-sensitive ℓq norm (0 < q ≤ 1).
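To make the idea of a sparsity-sensitive ℓq norm concrete, the sketch below shows an illustrative compressibility measure built from ℓq quasi-norms. The functions `lq_norm` and `pq_index` and the choice of exponents p = 0.5, q = 1 are assumptions for illustration, not necessarily the speaker's exact definitions; the measure is normalized so that a uniform (incompressible) weight vector scores 0 and a one-hot (maximally sparse) vector scores close to 1.

```python
import numpy as np

def lq_norm(w, q):
    # ℓq quasi-norm: (sum_i |w_i|^q)^(1/q); smaller q is more sparsity-sensitive.
    return np.sum(np.abs(w) ** q) ** (1.0 / q)

def pq_index(w, p=0.5, q=1.0):
    # Illustrative sparsity measure comparing two quasi-norms with 0 < p < q.
    # Scores 0 for a uniform vector and approaches 1 for a one-hot vector
    # as the dimension d grows.
    d = w.size
    return 1.0 - d ** (1.0 / q - 1.0 / p) * lq_norm(w, p) / lq_norm(w, q)

# A uniform vector is the least sparse: the measure is 0.
print(pq_index(np.ones(8)))        # 0.0
# A one-hot vector is highly sparse: the measure is 1 - d^(1/q - 1/p).
print(pq_index(np.eye(8)[0]))      # 0.875 for d = 8, p = 0.5, q = 1
```

The key design choice is that ℓq quasi-norms with small q concentrate on the few largest weights, so comparing norms at two different exponents reveals how much of the vector's mass a pruned sub-network could retain.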
About the Speaker:
Ganghua Wang received the B.S. degree from Peking University, Beijing, China, in 2019. Since 2019, he has been a Ph.D. student with the School of Statistics, University of Minnesota, Twin Cities, MN, USA. His research interests include the foundations of machine learning theory and trustworthy machine learning.
Your participation is warmly welcomed!