IFML Seminar
Meta Optimization
Elad Hazan, Professor of Computer Science, Princeton University; Director and Co-founder, Google AI Princeton
The University of Texas at Austin
Gates Dell Complex (GDC 6.302)
United States
Abstract: How can we find and apply the best optimization algorithm for a given problem? This question is as old as mathematical optimization itself, and it is notoriously hard: even special cases, such as finding the optimal learning rate for gradient descent, are nonconvex in general. In this talk we discuss a dynamical-systems approach to the question. We start with an emerging paradigm in differentiable reinforcement learning called “online nonstochastic control,” which applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. We then show how this methodology can yield global guarantees for learning the best algorithm in certain cases of stochastic and online optimization.
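The nonconvexity noted in the abstract is easy to see numerically. The sketch below is a hypothetical illustration (not material from the talk): on a simple two-basin scalar objective, the final loss of gradient descent, viewed as a function of the learning rate, is nonconvex, since nearby learning rates can land in different basins.

import numpy as np

# Illustrative objective (hypothetical, not from the talk): a scalar
# function with two basins, f(x) = (x^2 - 1)^2 + 0.3x.
f = lambda x: (x**2 - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x**2 - 1) + 0.3

def loss_after_gd(eta, x0=1.5, steps=20):
    """Run `steps` iterations of gradient descent with learning rate `eta`
    and return the final loss -- the quantity a meta-optimizer would tune."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)
    return f(x)

# Sweep the learning rate: the resulting curve eta -> final loss is
# nonconvex (small and large eta settle into different basins), so
# tuning eta is itself a nonconvex problem, as the abstract notes.
for eta in np.linspace(0.01, 0.2, 12):
    print(f"eta={eta:.3f}  final loss={loss_after_gd(eta):8.4f}")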
Speaker Bio: Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Among his contributions are the co-invention of the AdaGrad algorithm for deep learning and the first sublinear-time algorithms for convex optimization. He is the recipient of the Bell Labs Prize, the IBM Goldberg Best Paper Award (in 2008 and 2012), a European Research Council grant, a Marie Curie fellowship, and the Google Research Award (twice). He has served on the steering committee of the Association for Computational Learning and was program chair for COLT 2015. In 2017 he co-founded In8 Inc., focused on efficient optimization and control, which was acquired by Google in 2018. He is the co-founder and director of Google AI Princeton. No background is required for this talk, but relevant material can be found in his new text on online control and his paper on meta-optimization.