Spring ’24 CSC@USC/CommNetS-MHI Seminar Series
Abstract

In the first part of my talk I will describe a novel machine-learning inspired approach to optimal control. Classical approaches to optimal control first posit a model of the environment and then design a controller which is optimal for that specific environment. We instead consider control from the perspective of competitive analysis, with the goal of obtaining an adaptive controller with a strong performance guarantee in any environment. Using operator-theoretic techniques from robust control, we derive computationally efficient state-space descriptions of controllers with optimal competitive ratio and optimal regret.

In the second part of the talk, I will describe recent work that revisits Kalman filtering in the age of generative AI. Large Language Models (LLMs) such as ChatGPT are powered by Transformers, a class of autoregressive deep learning architectures which iteratively predict the next token in a stream of tokens. Can such architectures also be used for filtering in dynamical systems?

Biosketch

Gautam Goel is a postdoc at the Simons Institute for the Theory of Computing at UC Berkeley, where he is part of the Foundations of Data Science Institute. His research interests include online learning, statistical learning theory, and connections to signal processing and control. Before joining Berkeley he was a PhD student in the Computing and Mathematical Sciences (CMS) department at Caltech, where he was advised by Babak Hassibi. His thesis received the Bhansali Family Doctoral Prize in Computer Science, given each year by the CMS department to a single outstanding computer science dissertation.
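As background for the filtering question raised above (this is standard material, not content from the talk itself), a Kalman filter is itself an iterative predictor: at each step it propagates its state estimate through the dynamics and then corrects it with the newest observation, much as an autoregressive model conditions on the stream so far. A minimal one-dimensional sketch, with illustrative dynamics `a`, process-noise variance `q`, and measurement-noise variance `r` chosen here purely for the example:

```python
def kalman_step(x_est, p_est, y, a=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    Model (assumed for this sketch): x_{t+1} = a*x_t + w_t, Var(w_t) = q;
    observation y_t = x_t + v_t, Var(v_t) = r.
    """
    # Predict: propagate the estimate and its variance through the dynamics.
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update: blend prediction and measurement using the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (y - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a constant signal x = 1.0 from noisy observations.
x, p = 0.0, 1.0
for y in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, y)
```

After a few observations the estimate `x` settles near the true value 1.0 while the posterior variance `p` shrinks, mirroring the next-step-prediction loop the abstract compares to Transformer decoding.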