Three papers from RLLAB have been accepted to ICRA 2022

[2022.02.01]

The following papers have been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2022):

  • Visually Grounding Language Instruction for History-Dependent Manipulation by Hyemin Ahn, Obin Kwon, Kyungdo Kim, Jaeyeon Jeong, Howoong Jun, Hongjung Lee, Dongheui Lee, and Songhwai Oh
    • Abstract: This paper emphasizes the importance of a robot’s ability to refer to its task history, especially when it executes a series of pick-and-place manipulations by following language instructions given one at a time. The advantage of referring to the manipulation history is twofold: (1) language instructions that omit details or use expressions referring to the past can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred. To this end, we introduce a history-dependent manipulation task whose objective is to visually ground a series of language instructions for proper pick-and-place manipulations by referring to the past. We also present a dataset and a baseline model for this task, and show that a network trained on the proposed dataset can be applied to the real world with the help of CycleGAN.
    • Video
  • Dynamics-Aware Metric Embedding: Metric Learning in a Latent Space for Visual Planning by Mineui Hong, Kyungjae Lee, Minjae Kang, Wonsuhk Jung, and Songhwai Oh
    • Abstract: In this paper, we consider vision-based control tasks in which the desired goals are given as target images. Such problems are often addressed by an autonomous agent minimizing a manually designed cost function. However, it is challenging to design a suitable cost function for each goal by hand, especially when the current and goal states of the system are described only by visual observations. To tackle this issue, we propose a method called dynamics-aware metric embedding (DAME), which generates cost functions in a self-supervised manner to help the agent plan controls that accomplish the desired goals. The proposed method learns a metric function that reflects how easy it is to find a path connecting two states, taking the dynamics of the system into account. To learn the metric between states, we utilize a measure called probabilistic reachability, computed as the probability of reaching one state from another via a random walk. We evaluate the proposed method on several vision-based control tasks, including various simulation benchmarks and real-world table-top manipulation tasks, and observe that DAME outperforms the baseline algorithms by over 30% in terms of success rate.
    • Video 
  • TRC: Trust Region Conditional Value at Risk for Safe Reinforcement Learning by Dohyeong Kim and Songhwai Oh
    • Abstract: As safety is of paramount importance in robotics, reinforcement learning that incorporates safety, called safe RL, has been studied extensively. In safe RL, we aim to find a policy that maximizes the desired return while satisfying the given safety constraints. Among the various types of constraints, constraints on the conditional value at risk (CVaR) effectively lower the probability of failures caused by high costs, since CVaR is the conditional expectation of costs above a certain percentile. In this paper, we propose a trust region-based safe RL method with CVaR constraints, called TRC. We first derive an upper bound on CVaR and then approximate this bound in a differentiable form within a trust region. Using this approximation, a subproblem for computing policy gradients is formulated, and policies are trained by iteratively solving the subproblem. TRC is evaluated on safe navigation tasks in simulation with various robots and in a sim-to-real environment with a Jackal robot from Clearpath. Compared to other safe RL methods, performance is improved by 1.93 times while the constraints are satisfied in all experiments.
    • Video
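The probabilistic reachability measure mentioned in the DAME abstract can be illustrated with a short Monte Carlo sketch: estimate the probability that a uniform random walk reaches one state from another within a fixed horizon, then convert it into a distance-like quantity. This is an illustrative sketch, not the paper's implementation; the graph, the function names, and the `-log` conversion are assumptions made for the example.

```python
import math
import random

def reach_prob(adj, start, goal, horizon=10, n_walks=2000, seed=0):
    """Monte Carlo estimate of the probability that a uniform random walk
    on the graph `adj` (node -> list of neighbors) reaches `goal` from
    `start` within `horizon` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        s = start
        for _ in range(horizon + 1):
            if s == goal:          # walk reached the goal within the horizon
                hits += 1
                break
            s = rng.choice(adj[s])  # uniform random step to a neighbor
    return hits / n_walks

def reach_metric(adj, a, b, **kw):
    """Turn reachability into a distance: hard-to-reach states are far apart."""
    p = reach_prob(adj, a, b, **kw)
    return math.inf if p == 0.0 else -math.log(p)
```

On a chain-shaped graph, states that require more random-walk steps receive a lower reachability estimate and therefore a larger distance, which matches the intuition of a dynamics-aware metric.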
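The CVaR quantity at the heart of the TRC abstract is easy to illustrate on samples: CVaR at level alpha is the mean of the costs at or above the alpha-quantile. The estimator below is a simplified empirical sketch; the quantile index convention is an assumption for the example, not the paper's formulation.

```python
def cvar(costs, alpha):
    """Empirical conditional value at risk: the mean of the costs at or
    above the alpha-quantile of the sample."""
    costs = sorted(costs)
    # index of the alpha-quantile (value-at-risk) in the sorted sample
    k = int(alpha * (len(costs) - 1))
    tail = [c for c in costs if c >= costs[k]]
    return sum(tail) / len(tail)
```

For costs 1 through 10, `cvar(costs, 0.9)` averages only the worst few outcomes, so constraining it penalizes rare high-cost failures much more strongly than constraining the plain expectation does.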