Speaker: Tianbao Yang
Time: 2015-06-12 10:00
Place: Room 117, West Building of Science and Technology, West Campus
In this talk, I will present my recent research on randomized algorithms in machine learning. Compared to traditional algorithms, randomization offers several benefits: (i) it often yields faster algorithms that scale to large datasets; (ii) it can yield simpler algorithms that are easier to analyze; (iii) it can act as an implicit regularizer, producing more robust output; and (iv) randomized algorithms can often be organized to exploit modern computational architectures better than traditional ones.
First, I will briefly discuss the use of randomization in optimization and matrix approximation for big data. Then I will focus on two algorithms for recovering high-dimensional sparse and non-sparse vectors.
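To illustrate the flavor of randomized matrix approximation mentioned above (not the speaker's specific algorithm), the sketch below shows a standard randomized range-finder for low-rank approximation: multiply the matrix by a random Gaussian test matrix to capture its dominant column space, then compute a small SVD in that subspace. The function name and parameters are illustrative choices, not from the talk.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation of A via Gaussian sketching (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Random Gaussian test matrix; a few extra columns improve accuracy.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega                       # sample the dominant column space of A
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for that space
    B = Q.T @ A                         # small (k+oversample) x n projected matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_b                         # lift left singular vectors back up
    return U[:, :k], s[:k], Vt[:k, :]

# Example: a matrix of exact rank 5 is recovered almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_low_rank(A, 5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

The key point, echoing benefit (i) above, is that the expensive SVD is performed only on the small sketched matrix rather than on the full data matrix.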
Dr. Tianbao Yang is currently an assistant professor at the University of Iowa (UI). He received his Ph.D. in Computer Science from Michigan State University in 2012. Before joining UI, he was a researcher at NEC Laboratories America in Cupertino (2013-2014) and a machine learning researcher at GE Global Research (2012-2013), mainly focusing on developing distributed optimization systems for various classification and regression problems. Dr. Yang has broad interests in machine learning and has focused on several research topics, including large-scale optimization in machine learning, online optimization, and distributed optimization. His recent research interests revolve around randomized algorithms for solving big data problems. He has published over 25 papers in prestigious machine learning conferences and journals, and won the Mark Fulk Best Student Paper Award at the 25th Conference on Learning Theory (COLT) in 2012.