Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning

Aviv Netanyahu*, Tianmin Shu*, Joshua B. Tenenbaum, and Pulkit Agrawal

Massachusetts Institute of Technology

(*Equal contribution)

Abstract

In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations aligned with the intended goal specification, enabling successful generalization in unseen environments. Specifically, GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph indicating invariant properties of the corresponding relationship. GEM combines inverse reinforcement learning and active reward learning to efficiently improve the reward function by utilizing the graph structure and the domain randomization enabled by the equivalence mappings. We conducted experiments with simulated oracles and with human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
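
To make the goal representation concrete, here is a minimal Python sketch, not the paper's implementation: a reward conditioned on a goal graph whose edges carry spatial relations and per-edge equivalence mappings. All names (`close_to`, `translate_x`, `randomize`, the 2D state format) are illustrative assumptions rather than GEM's actual API.

```python
import numpy as np

# Hypothetical equivalence mappings: state transformations under which a
# spatial relationship should remain satisfied (names are illustrative).
EQUIVALENCE_MAPPINGS = {
    "translate_x": lambda pos: pos + np.array([1.0, 0.0]),
    "mirror_y": lambda pos: pos * np.array([1.0, -1.0]),
}

def close_to(pos_a, pos_b, threshold=0.5):
    # Example spatial relation: object A lies within `threshold` of object B.
    return float(np.linalg.norm(pos_a - pos_b) < threshold)

# Goal graph: each edge is (object_i, object_j, relation, invariances),
# i.e., an important spatial relationship plus the mappings it is invariant to.
goal_graph = [("mug", "plate", close_to, ["translate_x"])]

def reward(state, graph):
    # Reward = fraction of goal-graph relations satisfied in `state`,
    # where `state` maps object names to 2D positions.
    values = [rel(state[a], state[b]) for a, b, rel, _ in graph]
    return sum(values) / len(values)

def randomize(state, graph, mappings=EQUIVALENCE_MAPPINGS):
    # Domain randomization: applying an edge's invariant mappings to both
    # endpoint objects yields a new state with the same reward.
    new_state = dict(state)
    for a, b, _, invariances in graph:
        for name in invariances:
            new_state[a] = mappings[name](new_state[a])
            new_state[b] = mappings[name](new_state[b])
    return new_state

state = {"mug": np.array([0.0, 0.0]), "plate": np.array([0.3, 0.1])}
assert reward(state, goal_graph) == reward(randomize(state, goal_graph), goal_graph)
```

In GEM itself, the graph and equivalence mappings are not hand-written as above but are proposed and refined through inverse reinforcement learning and active reward queries; the sketch only illustrates the representation and how the mappings license reward-preserving domain randomization.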

Paper

Aviv Netanyahu, Tianmin Shu, Joshua B. Tenenbaum, and Pulkit Agrawal. Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning. In Proceedings of the 39th International Conference on Machine Learning (ICML), 2022. [Paper] [Code]

Contact

Any questions? Please contact Aviv Netanyahu (avivn [at] mit.edu) or Tianmin Shu (tshu [at] mit.edu).