Two papers from RLLAB are accepted to ICRA 2023

[2023.01.17]

The following papers have been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2023):

  • SCAN: Socially-Aware Navigation Using Monte Carlo Tree Search by Jeongwoo Oh, Jaeseok Heo, Junseo Lee, Gunmin Lee, Minjae Kang, Jeongho Park, and Songhwai Oh

    • Abstract: Designing a socially-aware navigation method for crowded environments has become a critical issue in robotics. In order to navigate a crowded environment without causing discomfort to nearby pedestrians, it is necessary to design a global planner that considers both human-robot interaction (HRI) and the prediction of future states. In this paper, we propose SCAN, a socially-aware global planner that generates appropriate local goals by considering HRI and the prediction of future states. Our method simulates future states, accounting for the effects of the robot’s actions on the future intentions of pedestrians, using Monte Carlo tree search (MCTS), which estimates the quality of local goals. For fast simulation, we execute pedestrian motion prediction using Y-net and future state simulation using MCTS in parallel. Neural networks are used only in Y-net, not in MCTS, which enables fast simulation and prediction over a long horizon of future states. We evaluate the proposed method based on the proposed socially-aware navigation metric using realistic pedestrian simulation and real-world experiments. The results show that the proposed method significantly outperforms existing methods, indicating the importance of considering human-robot interaction for socially-aware navigation.
    • Video
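To illustrate the core idea behind SCAN's use of MCTS, the sketch below runs a UCB1-style Monte Carlo search over a handful of candidate local goals. This is a hypothetical toy setup, not the paper's implementation: the candidate goals, the `rollout_reward` scoring function, and all constants are invented for illustration, standing in for the paper's pedestrian-aware simulation of future states.

```python
import math
import random

# Hypothetical candidate local goals; in SCAN these would be generated
# from the scene, and rollouts would simulate pedestrian reactions.
CANDIDATE_GOALS = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def rollout_reward(goal, rng):
    """Placeholder social-comfort score: closer to (1, 1) is better, plus noise."""
    gx, gy = goal
    dist = math.hypot(1.0 - gx, 1.0 - gy)
    return -dist + rng.gauss(0.0, 0.1)

def mcts_select_goal(n_iters=2000, c=1.4, seed=0):
    """UCB1-based search: balance exploring goals with exploiting high scores."""
    rng = random.Random(seed)
    visits = [0] * len(CANDIDATE_GOALS)
    totals = [0.0] * len(CANDIDATE_GOALS)
    for t in range(1, n_iters + 1):
        # Select the goal maximizing the UCB1 score (untried goals first).
        ucb = [
            float("inf") if visits[i] == 0
            else totals[i] / visits[i] + c * math.sqrt(math.log(t) / visits[i])
            for i in range(len(CANDIDATE_GOALS))
        ]
        i = ucb.index(max(ucb))
        # Simulate (rollout) and back up the result.
        totals[i] += rollout_reward(CANDIDATE_GOALS[i], rng)
        visits[i] += 1
    # Recommend the most-visited goal, the standard MCTS choice rule.
    best = visits.index(max(visits))
    return CANDIDATE_GOALS[best]
```

The same select-simulate-backup loop applies when rollouts are expensive pedestrian simulations; SCAN's parallelism between Y-net prediction and MCTS simulation is what keeps that loop fast.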
  • SDF-Based Graph Convolutional Q-Network for Multi-Object Rearrangement by Hogun Kee, Minjae Kang, Dohyeong Kim, Jaegu Choy, and Songhwai Oh

    • Abstract: In this paper, we propose a signed distance field (SDF)-based deep Q-learning framework for multi-object rearrangement. Our method learns to rearrange objects with non-prehensile manipulation, e.g., pushing, in unstructured environments. To reliably estimate Q-values in various scenes, we train the Q-network using an SDF-based scene graph as the state-goal representation. To this end, we introduce SDFGCN, a scalable Q-network structure that uses graph convolutional networks to estimate Q-values from a set of SDF images while satisfying permutation invariance. In contrast to grasping-based rearrangement methods, which rely on the performance of grasp predictive models for perception and movement, our approach enables rearrangement of unseen objects, including hard-to-grasp objects. Moreover, our method does not require any expert demonstrations. We observe that SDFGCN handles unseen objects in challenging configurations, both in simulation and in the real world.
    • Video
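The permutation-invariance property mentioned in the abstract can be sketched in a few lines. The toy model below is not the SDFGCN architecture; the feature dimensions, single graph-convolution layer, and linear Q-value head are assumptions for illustration. Each row of `nodes` stands in for an embedding of one object's SDF image; shared weights mix information across a fully connected object graph, and sum-pooling makes the output independent of object ordering.

```python
import numpy as np

rng = np.random.default_rng(0)
W_self = rng.standard_normal((4, 4))   # per-node transform (weights shared across objects)
W_neigh = rng.standard_normal((4, 4))  # neighbor-aggregation transform
w_out = rng.standard_normal(4)         # linear Q-value head (illustrative)

def q_value(nodes):
    """One graph-conv layer on a fully connected object graph, then sum-pooling."""
    # For each node, aggregate the features of all *other* nodes.
    neigh_sum = nodes.sum(axis=0, keepdims=True) - nodes
    # Shared-weight update: same parameters applied to every object.
    h = np.tanh(nodes @ W_self + neigh_sum @ W_neigh)
    # Sum over nodes before the output head, so object order does not matter.
    return float(h.sum(axis=0) @ w_out)

nodes = rng.standard_normal((3, 4))  # three objects, 4-dim features each
permuted = nodes[[2, 0, 1]]          # same scene, objects listed in a different order
# q_value(nodes) and q_value(permuted) agree up to floating-point error
```

Because every per-node operation uses shared weights and the only cross-node operations are sums, reordering the objects reorders intermediate rows but leaves the pooled Q-value unchanged; this is what lets such a network scale to scenes with varying numbers of objects.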