Speaker: Ming Yan (严明)
Venue: Tencent Meeting (online)
Time: Friday, November 27, 2020, 9:00-10:00
Host: 刘俊
Abstract:
Large-scale machine learning models are trained with parallel stochastic gradient descent algorithms on distributed or decentralized systems. As the number of nodes and the model dimension scale up, the communication required for gradient aggregation and model synchronization becomes the major obstacle to efficient learning. In this talk, I will introduce several ways to compress the transferred data and reduce the overall communication so that this obstacle can be greatly mitigated. In particular, I will present two algorithms that significantly reduce communication: DORE, a distributed learning algorithm that compresses transfers both to and from the server, and LEAD, which compresses the transfers between decentralized nodes.
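To make the general idea concrete, the sketch below shows one common form of gradient compression (top-k sparsification with local error feedback) at a single worker before it transmits to a server. This is only an illustrative assumption, not the DORE or LEAD algorithms presented in the talk; all function names and the choice of top-k are hypothetical.

```python
# Illustrative sketch (NOT DORE/LEAD): top-k gradient compression with error feedback.
import numpy as np

def compress_top_k(grad, k):
    """Keep only the k largest-magnitude entries; return (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def decompress(idx, vals, dim):
    """Rebuild a dense gradient of the original dimension from the sparse message."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

def worker_step(grad, residual, k):
    """One worker iteration: the compression residual is kept locally and
    added to the next gradient, so the dropped coordinates are not lost."""
    corrected = grad + residual
    idx, vals = compress_top_k(corrected, k)
    new_residual = corrected - decompress(idx, vals, corrected.size)
    return (idx, vals), new_residual

# Toy usage: a 10-dimensional gradient, transmit only 3 coordinates.
rng = np.random.default_rng(0)
g = rng.standard_normal(10)
msg, residual = worker_step(g, np.zeros(10), k=3)
print("sent indices:", msg[0], "sent values:", msg[1])
```

Here the worker sends only k index/value pairs instead of the full gradient, which is where the communication savings come from; the server would aggregate the sparse messages and broadcast an updated model.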
Meeting URL: https://meeting.tencent.com/s/J1fUo7UEPBkT
Meeting ID: 729 722 501
Meeting password: 666666
Speaker bio:
Ming Yan is an assistant professor in the Department of Computational Mathematics, Science and Engineering (CMSE) and the Department of Mathematics at Michigan State University. His research interests lie in computational optimization and its applications in image processing, machine learning, and other data-science problems. He received his B.S. and M.S. in mathematics from the University of Science and Technology of China in 2005 and 2008, respectively, and his Ph.D. in mathematics from the University of California, Los Angeles in 2012. After completing his Ph.D., he was a Postdoctoral Fellow in the Department of Computational and Applied Mathematics at Rice University from July 2012 to June 2013, and then moved to the University of California, Los Angeles as a Postdoctoral Scholar and Assistant Adjunct Professor from July 2013 to June 2015. He received a Facebook Faculty Award in 2020.