Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning

Details

Serval ID
serval:BIB_4D9800A239E4
Type
Inproceedings: an article in a conference proceedings.
Collection
Publications
Title
Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning
Title of the conference
Proceedings of NeurIPS 2022
Author(s)
Rolland Paul Thierry Yves, Viano Luca, Schürhoff Norman, Nikolov Boris, Cevher Volkan
Publication state
Published
Issued date
2022
Peer-reviewed
Yes
Pages
28
Language
English
Abstract
While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observing an expert's behavior. It is well known that, in general, various reward functions can lead to the same optimal policy, and hence IRL is ill-defined. However, [1] showed that, if we observe two or more experts with different discount factors or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by showing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is also shown to be necessary. We then extend our result to several scenarios: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we only have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions characterizing when data on multiple experts in a given environment allows us to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
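Illustrative code sketch
To make the rank-style identifiability check concrete, here is a minimal numerical sketch in Python. It is not the paper's exact test: it assumes entropy-regularized (soft-optimal) experts sharing a state-action reward, so that each expert's policy satisfies the standard soft-Bellman consistency r = log pi_i + v_i(s) - gamma_i * sum_{s'} P_i(s'|s,a) v_i(s') for some value vector v_i. The reward is then pinned down up to an additive constant exactly when every solution of the resulting homogeneous linear system induces a constant reward perturbation, which the sketch checks via a null-space computation. All function names and the tolerance are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.linalg import null_space

def shaping_operator(P, gamma, nS, nA):
    """Linear map v -> shaping term v(s) - gamma * sum_{s'} P(s'|s,a) v(s').

    P has shape (nS*nA, nS), rows ordered by (s, a). Returns B of shape
    (nS*nA, nS) with (B @ v)[s*nA + a] = v[s] - gamma * (P @ v)[s*nA + a].
    """
    E = np.repeat(np.eye(nS), nA, axis=0)  # row (s, a) picks out v(s)
    return E - gamma * P

def identifiable_up_to_constant(P1, gamma1, P2, gamma2, nS, nA, tol=1e-8):
    """Numerically check whether two soft-optimal experts determine the
    shared reward up to an additive constant.

    Under soft optimality, r = log pi_i + B_i v_i for some v_i, where
    B_i = shaping_operator(P_i, gamma_i). Any two rewards consistent with
    both experts differ by B1 @ d1 where (d1, d2) lies in the null space
    of [B1, -B2]; identifiability up to a constant holds iff every such
    B1 @ d1 is a constant vector.
    """
    B1 = shaping_operator(P1, gamma1, nS, nA)
    B2 = shaping_operator(P2, gamma2, nS, nA)
    N = null_space(np.hstack([B1, -B2]))  # basis of {(d1, d2): B1 d1 = B2 d2}
    for k in range(N.shape[1]):
        dr = B1 @ N[:nS, k]               # unresolved reward perturbation
        if np.ptp(dr) > tol:              # non-constant -> genuine ambiguity
            return False
    return True

# Example: one random tabular MDP observed under two discount factors.
rng = np.random.default_rng(0)
nS, nA = 4, 3
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)         # make each row a valid distribution
print(identifiable_up_to_constant(P, 0.9, P, 0.5, nS, nA))  # generically True
print(identifiable_up_to_constant(P, 0.9, P, 0.9, nS, nA))  # False

Consistent with the abstract, two identical experts leave the classical potential-shaping ambiguity (second call), while two experts with distinct discount factors generically identify the reward up to a constant (first call).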
Create date
27/06/2023 17:43
Last modification date
28/06/2023 6:55