Unifying task specification in reinforcement learning

If you have a question about this talk, please contact Joan P O'Brien.

Abstract

Markov decision processes have long been the standard formalism for sequential decision-making in reinforcement learning. This is not the full story, however: in practice there are specialized problem settings that receive separate treatment, most notably episodic and continuing problems. In this talk, I will discuss a generalization of the discount that enables a more unified formalism for these settings. I will discuss some advantages of this generalization, both for specifying a broader class of policy evaluation questions and for unifying the theoretical treatment of these different settings.
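As a concrete sketch of the idea (an assumption on my part: the generalization is likely the transition-based discount of the speaker's ICML 2017 paper of the same title, in which the discount becomes a function of the transition rather than a constant), the return can be written as:

% Generalized return: the discount depends on the transition (assumed form)
G_t = R_{t+1} + \gamma(S_t, A_t, S_{t+1}) \, G_{t+1}
% Continuing problems: \gamma(s, a, s') = \gamma_c, a constant in [0, 1)
% Episodic problems: \gamma(s, a, s') = 0 whenever s' is terminal, so the
% return truncates at episode boundaries within the same formalism

Under this single definition, episodic and continuing tasks differ only in the choice of \gamma, rather than requiring separate formalisms.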

Bio

Martha White is an assistant professor of Computing Science at the University of Alberta. Previously, she was an assistant professor in the School of Informatics and Computing at Indiana University Bloomington; she received her PhD in Computing Science from the University of Alberta in 2015. Her primary research goal is to develop algorithms for autonomous agents learning from streams of data, with a focus on practical algorithms for reinforcement learning and representation learning.

This talk is part of the Featured talks series.
