[SW Star Lab] Robot Learning: Efficient, Safe, and Socially-Acceptable Machine Learning

The goal of this project is to develop efficient, safe, and socially friendly machine learning so that autonomous robots can coexist with people in various environments. In this project, we develop socially friendly robot learning technology that enables efficient reinforcement learning with less data and ensures safety. The developed technology will be applied to autonomous robots for verification and further refinement. The main applications of this project are a delivery robot based on an autonomous driving algorithm and a safe and socially friendly housekeeping robot that uses image and language information. In addition, we plan to develop core technologies for autonomous robots and share the developed software with the AI and robotics communities.

[ SW Star Lab | Robot Learning Laboratory ]


Funded by the Ministry of Science and ICT (MSIT).

Metaverse Deep Reinforcement Learning

Recently, the metaverse has been receiving increasing attention, as it is expected to be the biggest revolution since the Internet. The metaverse is expected to create a virtual world indistinguishable from the real world and to become a center of politics, economy, society, and culture. In the metaverse, a digital twin of the real world can be created in the virtual world, and various real-world problems, in domains such as smart factories, smart cities, logistics, and robotics, are expected to be solved using digital twins. The development of digital twin technology makes it possible to apply deep reinforcement learning to various real-world problems, and deep reinforcement learning is expected to play an important role in the development of the metaverse.


Funded by the National Research Foundation (NRF).

AI Technology for Guidance of Mobile Robots with Uncertain Maps



For reliable navigation in public places, a highly accurate map is required for a mobile robot. However, it is extremely time-consuming and expensive to maintain accurate maps of all places at all times. In this project, we develop a new class of machine learning techniques to overcome this challenge in order for a mobile robot to reliably navigate public places without the need for highly accurate maps. The ultimate goal of the project is to develop human-like navigation skills for mobile robots.


Funded by the Ministry of Science and ICT (MSIT).

Brain-Inspired AI with Human-Like Intelligence



The goal of this project is to understand the progressive developmental process of the basic principles of intelligence and cognitive abilities of the human brain using developmental cognitive theory, computational neuroscience, and brain-based artificial intelligence. In addition, we aim to develop next-generation machine learning technology that can simulate a brain with child-level cognitive abilities through incremental growth.


Funded by the Ministry of Science and ICT (MSIT).

Cloud Robotics: Intelligence Augmentation, Sharing, and Framework Technology



With the advances in communication and cloud computing, we can overcome the limits of an individual robot by connecting multiple robots using a cloud system. A cloud robotic system can be used to share and augment knowledge collected from a large number of robots deployed in various places and advance the capabilities of robots by learning from newly collected data and optimizing the learning mechanisms.


Funded by the Ministry of Science and ICT (MSIT).


Past Projects

Robot Learning from Demonstrations with Mixed Qualities


As robots appear in our daily lives, they are tasked with more complicated missions in complex environments, making it extremely difficult to prescribe what to do under what conditions. Imitation learning is a promising technique for teaching a robot how to perform complex tasks. However, existing imitation learning methods were developed under the assumption that demonstrations are collected from experts. In this project, we propose a new framework for imitation learning that allows demonstrations from multiple demonstrators with mixed qualities. We assume that there are no labels on the level of expertise of the collected demonstrations, allowing a wide range of demonstrations for training. The proposed framework can also increase the safety and social acceptability of robots by incorporating counterexamples.
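To make the setting concrete, here is a minimal toy sketch of learning from unlabeled, mixed-quality demonstrations. It is purely illustrative and not the project's actual algorithm: it alternates between weighted behavioral cloning (here, a linear least-squares policy) and re-estimating each demonstrator's reliability from how well the current policy fits their data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "expert" policy is a = w_true . s; two demonstrators
# are near-expert and one is noisy. All names and the weighting scheme
# are illustrative assumptions, not the project's method.
w_true = np.array([1.0, -2.0])
demos = []
for noise in [0.05, 0.1, 1.5]:  # per-demonstrator noise level (unknown to the learner)
    S = rng.normal(size=(50, 2))                       # states
    A = S @ w_true + rng.normal(scale=noise, size=50)  # demonstrated actions
    demos.append((S, A))

weights = np.ones(len(demos))  # start by trusting every demonstrator equally
for _ in range(10):
    # Weighted behavioral cloning: least squares with per-demonstrator weights.
    S_all = np.vstack([S for S, _ in demos])
    A_all = np.concatenate([A for _, A in demos])
    w_rep = np.sqrt(np.repeat(weights, [len(A) for _, A in demos]))
    w_hat, *_ = np.linalg.lstsq(S_all * w_rep[:, None], A_all * w_rep, rcond=None)
    # Re-estimate each demonstrator's reliability from its fit error.
    errs = np.array([np.mean((A - S @ w_hat) ** 2) for S, A in demos])
    weights = np.exp(-errs / errs.mean())
    weights /= weights.sum()

print("recovered policy:", w_hat)
print("demonstrator weights:", np.round(weights, 3))
```

The low-quality demonstrator ends up with a near-zero weight, so the recovered policy tracks the expert despite the unlabeled mixture.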


Funded by the National Research Foundation (NRF).

Biomimetic Recognition Technology



This project develops low-complexity object and situation recognition methods optimized for small-sized resource-limited biomimetic robots.

Funded by the Defense Acquisition Program Administration (DAPA). 


Realistic 4D Reconstruction of Dynamic Objects



This project aims to reconstruct non-rigid objects that undergo dynamic changes over time, with a wide range of applications in virtual reality (VR) and augmented reality (AR). With multiple RGB and IR cameras, the motion of subjects is captured from multiple viewpoints. The collected images are then combined to generate realistic, geometrically accurate 4D reconstructions of subjects. The project develops techniques to overcome challenges such as the high degree of freedom in human motion, complex non-rigid motion of different surfaces, and occlusion, to name a few.


Funded by the Ministry of Science and ICT (MSIT).

On-the-Fly Machine Learning for Evolving Intelligent CPSs


With the advances in IoT, cloud computing, and machine learning, especially deep learning, it is envisioned that IoT devices collect data and the collected data are processed in the cloud using machine learning techniques at different levels. As IoT devices are operated in diverse environments and used by different users, the success of intelligent IoT-cloud platforms depends on their ability to learn on the fly, adapting to their operating environments and users. To support such a capability, a new versatile deep learning architecture is required that accommodates the varying capabilities of diverse IoT devices. In this project, we investigate a new deep learning architecture and personalized machine learning techniques that support different requirements, such as real-time operation and limits on memory and energy usage.

Funded by the Ministry of Science and ICT (MSIT).

Human-Level Lifelong Machine Learning



In order to act intelligently in an open environment, it is necessary to have the ability to learn and predict physical phenomena under different contexts and surroundings. Such dynamic phenomena can be mathematically modeled as stochastic processes. In this project, we develop novel algorithms and methods for real-time nonparametric learning and prediction of time-varying stochastic processes.
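As a small illustration of the general idea (not the project's actual algorithm), the sketch below uses Nadaraya-Watson kernel regression, a basic nonparametric predictor, over a sliding window of recent observations, so that older dynamics of a time-varying process are gradually forgotten.

```python
import numpy as np

rng = np.random.default_rng(1)

def nw_predict(t_query, t_obs, y_obs, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: predict the process value at
    t_query as a Gaussian-kernel-weighted average of past observations."""
    w = np.exp(-0.5 * ((t_query - t_obs) / bandwidth) ** 2)
    return np.sum(w * y_obs) / np.sum(w)

# A toy time-varying process: a drifting sinusoid observed with noise.
t = np.linspace(0, 10, 200)
y = np.sin(t) + 0.1 * t + rng.normal(scale=0.1, size=t.size)

# Online use: keep only a recent window so old dynamics are forgotten.
window = 50
preds = [nw_predict(t[i], t[max(0, i - window):i], y[max(0, i - window):i])
         for i in range(1, t.size)]
rmse = np.sqrt(np.mean((np.array(preds) - y[1:]) ** 2))
print(f"one-step RMSE: {rmse:.3f}")
```

Each prediction uses only data observed before the query time, which is the constraint a real-time predictor must respect; the window length and bandwidth are the knobs trading adaptivity against noise suppression.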

Funded by the Ministry of Science, ICT, and Future Planning (MSIP).

Wireless Camera Network Technology for Public Safety


Cameras have become an important tool for public safety in terms of crime prevention and evidence gathering. While more cameras are installed in public spaces and we are witnessing a proliferation of images and videos on the Internet, there are limited tools to process, transfer, and analyze visual data. This project develops techniques for real-time information processing, camera network communication, source coding, and analysis of large-scale societal images and videos generated in a wireless camera network.


Funded by the National Research Foundation (NRF).

Practical Action Recognition and Prediction Technology


Action recognition is a key component of video analysis. Once we can reliably recognize the actions of people appearing in a video, a number of new applications become possible, including video search and tagging, intelligent surveillance, situation understanding, gesture recognition, and human-robot interaction. However, action recognition remains a challenging problem due to occlusion, clutter, changes in viewpoint, changes in lighting conditions, and differences in scale, to name a few. We think the fundamental problem is rooted in the attempt to recognize an action from 2D data. If we can recover 3D information from 2D data reliably, we can robustly recognize actions from videos. In this project, we develop robust non-rigid structure from motion (NRSfM) methods based on the recently proposed Procrustean normal distribution (PND). The developed NRSfM methods are applied to build robust action recognition and prediction algorithms.

Funded by the National Research Foundation (NRF).

Human-Centric Networked Robotics Technology



There is a growing interest in service robots for applications such as housekeeping, military, personal care and support, transportation, rehabilitation, and entertainment, to name a few. Service robots are different from industrial robots in many aspects. The most important aspect of service robots is that they have to operate in an environment with humans. Hence, they have to be able to adapt to changing and uncertain operation domains. This project develops new methods and prototype applications for human-centric networked robots for seamless operations of service robots in our daily lives.

Funded by the National Research Foundation (NRF). 

Resilient Cyber-Physical Systems


A cyber-physical system is highly networked, deeply embedded in our physical world, and offers a portal to an unprecedented quantity of information about our surroundings, bringing about a revolution in the amount of control individuals have over their environment. It is envisioned that the cyber-physical systems of tomorrow will dramatically improve the adaptability, autonomy, efficiency, functionality, reliability, safety, and usability of engineered systems. This project develops algorithms and methods for resilient cyber-physical systems.

Funded by the Ministry of Science, ICT, and Future Planning (MSIP). 

Wireless Camera Sensor Networks Technology


Wireless camera sensor networks are an emerging technology based on the recent advances in embedded systems, low-power wireless communication, image sensing devices, and computer vision algorithms.

This project develops the core technology for the commercialization of wireless camera sensor networks and prototype systems. The project focuses on developing convergent technology for camera sensor networks by integrating innovative approaches in communication and information processing. The development of prototype applications will present new possibilities of wireless camera sensor networks technology. 


Funded by the National Research Foundation (NRF).


Mobile Sensor Networks: Algorithms and Applications


In order to address societal issues of growing importance, such as environmental protection, efficient management of resources, and security, it is necessary to collect and process sensing data in real time, potentially under dynamic and unstructured environments. While wireless sensor networks can provide some solutions, they are not adaptive enough to handle dynamic changes in the environment. Mobility in a sensor network can increase its sensing coverage in both space and time, as well as its robustness against dynamic changes in the environment, and has received extensive attention recently. The goals of this project are to develop the core technology for mobile sensor networks and prototype applications and to present new possibilities using mobile sensor network technology.

Funded by the National Research Foundation (NRF). 

Situation Understanding for Smart Devices



We are witnessing the emergence of smart devices, such as smartphones, smart pads, and smart TVs. For example, using a smartphone, we can organize our schedule, browse the web, and check emails. Beyond these simple tasks, we can find our current location and get directions to our destination using the GPS unit inside a smartphone. As GPS-enabled smartphones have made location-based services possible, it is envisioned that many new, useful, and previously unavailable services will become possible with the next generation of smart devices. We can provide better services and enable new applications if these devices are aware of the current situation (or context) of users and their surroundings. This project develops algorithms and new smart devices for understanding the situations of users and their surroundings.

Funded by the Korea Creative Content Agency (KOCCA).


Micro Autonomous Systems and Technology (MAST)



MAST is a research project to develop autonomous, multifunctional, collaborative ensembles of agile, mobile micro-systems to enhance tactical situational awareness in urban and complex terrain for small-unit operations. MAST is a partnership across nine member institutions, with many more participating institutions. To achieve this objective, the program plans to advance fundamental science and technology in several key areas, including microsystem mechanics, processing for autonomous operation, microelectronics, and platform integration.

CITRIS: Mobile Sensor Networks for Independent Living and Safety at Home




Guardian Angel is a mobile sensor network system that monitors the activities of occupants in an indoor environment, detects abnormal behaviors or emergency situations, and alerts a third party in the case of an emergency.



Heterogeneous Sensor Networks (HSN)


Traditional sensor network research has focused largely on low-bandwidth sensors, such as temperature, acoustic, and infrared sensors. As a result, applications of sensor networks have been limited. In the HSN project, we consider a network of high-bandwidth, low-bandwidth, and mobile sensors. The high-bandwidth sensors include image sensors for in-depth situation awareness and recognition. The main research objectives are to: (1) study the tradeoff between performance and cost, (2) develop design principles for heterogeneous sensor networks, (3) develop (near-)optimal multi-modal, multi-sensor fusion algorithms for heterogeneous sensor networks, and (4) develop a new communication theory and algorithms for heterogeneous sensor networks.
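As a tiny illustration of objective (3), the sketch below shows inverse-variance weighted fusion, a standard building block for combining independent estimates of the same quantity from sensors of different quality; it is an assumed example, not the HSN project's actual fusion algorithm.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent, unbiased sensor
    estimates of the same quantity. Returns the fused estimate and its
    variance, which is lower than any single sensor's variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Fuse a precise (hypothetical) camera range estimate with a coarse
# acoustic one: the camera dominates, but the acoustic sensor still helps.
fused, var = fuse([10.2, 9.5], [0.1, 1.0])
print(f"fused estimate: {fused:.3f}, variance: {var:.3f}")
# → fused estimate: 10.136, variance: 0.091
```

This is exactly the performance-versus-cost tradeoff named above: a cheap low-bandwidth sensor contributes little weight on its own, yet fusing it still shrinks the overall uncertainty.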

Network Embedded Systems Technology (NEST)


In the NEST project, our main research focus is to develop a real-time control system using wireless sensor networks. The main challenge is inconsistency in sensor measurements due to packet loss, communication delay, and false detections. These challenges are addressed by multiple robust layers of data fusion and a hierarchical architecture. The implemented system was successfully demonstrated in a large-scale outdoor sensor network.