Nikolai Matni, University of Pennsylvania
Representation learning for dynamics and control

Dec 7, 2023, 10am; EEB 132 (note day/time change)

Host: Lars Lindemann

Abstract

Representation learning, in which common features are extracted using data from heterogeneous sources or tasks, has underpinned much of the exciting recent progress in machine learning. Intuitively, using all of one's data to learn a common representation function benefits both computational effort and statistical generalization by leaving a smaller number of parameters to fine-tune on a given target task; indeed, recent results support this intuition in the context of classification and regression over i.i.d. data. However, in order to reap the benefits of representation learning in dynamics and control applications, algorithmic and analytical tools need to accommodate sequential data that is emphatically not i.i.d. Towards that goal, we will first overview our recent progress in understanding how and when empirical risk minimization (ERM)-based representation learning over data generated by a dynamical system is statistically beneficial, with a focus on applications to imitation learning. Then, time permitting, we will turn our attention to optimization challenges (and solutions!) related to learning representations over non-isotropic, non-i.i.d. data, and show how simple modifications to alternating descent methods can significantly improve their convergence properties.
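
For readers unfamiliar with the setup, below is a minimal, illustrative sketch (Python/NumPy) of the generic alternating-minimization baseline for learning a shared linear representation across several regression tasks. This is not the speaker's algorithm; the model y_t = X_t B w_t, the dimensions, and the synthetic data are assumptions made purely for illustration.

# Illustrative sketch only (not the speaker's method): plain alternating
# minimization for a shared linear representation B across T regression
# tasks with model y_t = X_t @ B @ w_t. All dimensions and data are
# synthetic assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 20, 3, 10, 50          # ambient dim, representation dim, tasks, samples per task

# Synthetic tasks sharing a ground-truth representation B_true.
B_true = np.linalg.qr(rng.standard_normal((d, k)))[0]
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [X @ B_true @ rng.standard_normal(k) + 0.01 * rng.standard_normal(n) for X in Xs]

B = np.linalg.qr(rng.standard_normal((d, k)))[0]    # random initial representation
for _ in range(50):
    # w-step: with B fixed, each task is a small least-squares problem.
    ws = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in zip(Xs, ys)]
    # B-step: with the w_t fixed, vec(B) solves one stacked least-squares
    # problem, since X @ B @ w = np.kron(w, X) @ B.reshape(-1, order="F").
    A = np.vstack([np.kron(w, X) for X, w in zip(Xs, ws)])
    b = np.concatenate(ys)
    B = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, k, order="F")
    B = np.linalg.qr(B)[0]          # re-orthonormalize to keep iterates well-scaled

# Distance between the learned and true subspaces (small after convergence).
proj = lambda M: M @ np.linalg.pinv(M)
print(np.linalg.norm(proj(B) - proj(B_true)))

In the setting of the talk, the data would instead be trajectories from a dynamical system (hence non-i.i.d. and non-isotropic), which is where such plain alternating schemes can struggle and where the modifications mentioned in the abstract come in.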

Biosketch

Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Science (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award. He is also a co-author on papers that won the 2022 IEEE CDC Best Student Paper Award and the 2017 ACC Best Student Paper Award.

Acknowledgement: The seminar series is supported by the Ming Hsieh Institute and Quanser.