Spring ’20 Joint CSC@USC/CommNetS-MHI Seminar Series
Abstract
Despite high-profile advances in various decision-making and classification tasks, Deep Neural Networks (DNNs) face several fundamental challenges that limit their adoption in physical or safety-critical domains. In particular, DNNs can be vulnerable to adversarial attacks and input perturbations. This issue becomes even more pressing when DNNs are used in closed-loop systems, where a small perturbation (caused by, for example, noisy measurements, uncertain initial conditions, or disturbances) can substantially impact the system being controlled. Therefore, it is of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.
Biosketch
Mahyar Fazlyab received the Bachelor's and Master's degrees in mechanical engineering from Sharif University of Technology, Tehran, Iran, in 2010 and 2013, respectively. He earned a Master's degree in statistics and a Ph.D. degree in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn), Philadelphia, PA, USA, in 2018. He is currently a Postdoctoral Researcher at UPenn. His research interests lie at the intersection of optimization, control, and machine learning. His current work focuses on developing optimization-based methods for the safety verification of learning-enabled control systems. Dr. Fazlyab received the Joseph and Rosaline Wolf Best Doctoral Dissertation Award from the ESE Department at UPenn in 2019.