Online Optimization

If you have a question about this talk, please contact Kin K Leung.

This is the first of two seminars on “Online Optimization and Reinforcement Learning,” aimed at introducing online optimization and learning techniques to our postgraduate students, although everyone is welcome to attend.

The abundance of data and proliferation of computational resources are leading to a significant paradigm shift towards data-driven engineering design. While past practice has often relied on physical models or approximations thereof, it is common nowadays to learn models directly from data. Most commonly, this is achieved by parameterizing a sufficiently generic model (such as a neural network), defining a loss function that quantifies model fit, and subsequently optimizing the loss over the parameter space. As such, optimization is key to our ability to learn.
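
As a concrete illustration of this workflow, here is a minimal sketch in Python (the synthetic data, linear model, and step size are illustrative assumptions, not material from the seminar): a model is parameterized by a weight and a bias, a squared-error loss quantifies fit, and gradient descent minimizes that loss over the parameter space.

    # Minimal sketch: parameterize a model, define a loss, optimize the loss.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic training data from an unknown relationship y = 2x + 1 + noise.
    X = rng.uniform(-1.0, 1.0, size=(200, 1))
    y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(200)

    # Parameterized model: predictions are X @ w + b.
    w, b = np.zeros(1), 0.0

    def loss(w, b):
        """Mean squared error: quantifies how well the model fits the data."""
        return np.mean((X @ w + b - y) ** 2)

    step = 0.1
    for _ in range(500):
        residual = X @ w + b - y
        w -= step * 2.0 * (X.T @ residual) / len(y)  # gradient of the loss in w
        b -= step * 2.0 * np.mean(residual)          # gradient of the loss in b

    print(f"learned w={w[0]:.2f}, b={b:.2f}, final loss={loss(w, b):.4f}")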

Models are classically fit by collecting a large amount of training data, shuffling it, and fitting a single model to the entire batch. Such an approach is likely to be effective if the data is independently and identically distributed over time, but it will fail in dynamic environments. In reality, the world around us is dynamic: the state of nature evolves over time in response to effects that can be unknown, observable, or a result of our own actions. To learn in such environments, we need learning algorithms that are themselves dynamic and able to adapt models to the changing state of nature over time.
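
A small sketch of this failure mode (the drifting stream below is an illustrative assumption, not an example from the seminar): when the true relationship drifts over time, a single model fit on an early batch degrades badly on later data.

    # Batch fitting fails under drift: the true slope theta_t changes over time.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 1000
    theta = 1.0 + np.linspace(0.0, 3.0, T)  # drifting "state of nature"
    x = rng.standard_normal(T)
    y = theta * x + 0.1 * rng.standard_normal(T)

    # Classical approach: fit one least-squares slope on the first 200 samples.
    w_batch = np.dot(x[:200], y[:200]) / np.dot(x[:200], x[:200])

    early_err = np.mean((y[:200] - w_batch * x[:200]) ** 2)
    late_err = np.mean((y[-200:] - w_batch * x[-200:]) ** 2)
    print(f"batch model error on early data: {early_err:.3f}")
    print(f"batch model error on late data:  {late_err:.3f}  (much worse under drift)")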

In the first 2-hour seminar on “Online Optimization and Reinforcement Learning,” we will introduce online optimization as a tool to imbue classical optimization algorithms, such as (stochastic) gradient descent, with the ability to learn from dynamic, streaming realizations of data rather than static data sets. We will introduce the tools necessary to develop and analyze algorithms for online optimization, including models for unstructured time-varying learning problems, performance metrics, and trade-offs. We will also introduce distributed paradigms, such as federated and decentralized learning, as tools to cope with dynamic large-scale learning problems, and discuss some recent research directions.
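
As a preview of the flavor of such algorithms, here is a hedged sketch of online gradient descent on a drifting stream (the stream, step size, and tracking metric are illustrative assumptions, not the seminar's own example): the model takes one gradient step per incoming sample and thereby tracks the time-varying optimum.

    # Online gradient descent: one gradient step per streaming sample.
    import numpy as np

    rng = np.random.default_rng(2)
    T = 1000
    theta = 1.0 + np.linspace(0.0, 3.0, T)  # drifting true slope

    w = 0.0      # online model parameter
    step = 0.1   # constant step size: trades tracking speed against noise
    track_err = []

    for t in range(T):
        # A new sample is revealed from the stream, then discarded after use.
        x_t = rng.standard_normal()
        y_t = theta[t] * x_t + 0.1 * rng.standard_normal()

        track_err.append((w - theta[t]) ** 2)

        # One gradient step on the instantaneous loss l_t(w) = (w * x_t - y_t)^2.
        w -= step * 2.0 * (w * x_t - y_t) * x_t

    print(f"mean squared tracking error: {np.mean(track_err):.3f}")
    print(f"final w = {w:.2f}, final true slope = {theta[-1]:.2f}")

The mean tracking error printed at the end plays the role of a simple performance metric here; the seminar will formalize such metrics and the trade-offs they involve.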

While the material will be presented in a self-contained manner, some background in random variables and stochastic processes (conditional expectations, martingales), optimization (gradient descent, convexity, Lipschitz smoothness), and linear algebra (eigen- and Jordan decompositions) will be advantageous.

A Teams link to attend this seminar is provided through a separate Teams invite; alternatively, the seminar can be joined directly at: https://teams.microsoft.com/l/meetup-join/19%3ameeting_YWI1OTVlN2QtMjg2OS00NDRlLWFhZDYtNjAxYzA2MWM3ZTA0%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229ddee623-e996-457c-8025-c76d4162f7f7%22%7d

This talk is part of the Featured talks series.
