Department of Computer Science News
CS Colloquium 3/1/2013: "Bayesian Hierarchical Reinforcement Learning" — Posted Feb. 11, 2013
Bayesian Hierarchical Reinforcement Learning
Dr. Soumya Ray
Case Western Reserve University
Humans are extremely good at handling complex decision-making tasks. Among other things, we use two key ideas: decomposing tasks into smaller pieces that are easier to solve, and exploiting approximate prior knowledge of how our actions affect the world when solving a new problem. How do we extend similar capabilities to autonomous agents? In this talk I will give an overview of a line of research attempting to answer this question in the context of reinforcement learning agents. I will describe (i) how agents can learn decomposed task hierarchies using causal analysis of the underlying models, (ii) how agents can use probabilistic prior knowledge from previous tasks to solve new (undecomposed) tasks using Bayesian priors, and finally (iii) how agents can combine probabilistic prior knowledge with task hierarchies to exploit both sources of information simultaneously when solving a complex problem.
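To give a flavor of the second idea, here is a minimal illustrative sketch (not Dr. Ray's algorithm; the toy MDP and prior counts are invented for this example) of how a Bayesian model-based agent can encode approximate knowledge from earlier tasks as a prior over transition dynamics, then refine it with experience on a new task:

```python
import numpy as np

# Illustrative sketch only: a Bayesian agent keeps Dirichlet pseudo-counts
# alpha[s, a, s'] over the transitions of a toy 2-state, 2-action MDP.
# Informative prior counts stand in for knowledge carried over from
# previously solved tasks.
n_states, n_actions = 2, 2
rng = np.random.default_rng(0)

# Prior pseudo-counts: the agent "believes" action 0 tends to keep the
# current state and action 1 tends to flip it.
alpha = np.ones((n_states, n_actions, n_states))
alpha[:, 0, :] += np.eye(n_states) * 4          # action 0: "stay" bias
alpha[:, 1, :] += (1 - np.eye(n_states)) * 4    # action 1: "flip" bias

def true_step(s, a):
    """Hidden dynamics of the toy MDP (deterministic here)."""
    return s if a == 0 else 1 - s

# Interact with the environment and update the posterior pseudo-counts.
for _ in range(100):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = true_step(s, a)
    alpha[s, a, s_next] += 1

# Posterior mean of each transition distribution; the agent would plan
# against this model (e.g., via value iteration).
post_mean = alpha / alpha.sum(axis=2, keepdims=True)
print(post_mean[0, 1])  # P(s' | s=0, a=1) concentrates on s'=1
```

The prior lets the agent act sensibly before it has gathered much data on the new task, while the Dirichlet update guarantees the model converges to the true dynamics as experience accumulates.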
Soumya Ray received his undergraduate degree from the Indian Institute of Technology, Kharagpur, and his MS and PhD degrees from the University of Wisconsin-Madison. He then completed postdoctoral work with Tom Dietterich, Alan Fern, and Prasad Tadepalli at Oregon State University before joining Case Western Reserve University, where he has been an Assistant Professor since 2008. His research interests are in algorithms and applications of statistical machine learning, reinforcement learning, and automated planning.