
"I am the ultimate quadruped walker"... Walking with AI without vision

KAIST Develops Blind Walking Robot System

Quadruped robots have recently drawn attention thanks to investment from Samsung Electronics. A domestic research team has now developed quadruped-robot technology that uses artificial intelligence (AI) to walk quickly even in the dark, without vision, and in highly irregular, obstacle-filled environments. In disaster situations where smoke obscures vision, the robot can climb and descend stairs without the aid of separate visual or tactile sensors, and it can traverse uneven ground, such as terrain riddled with tree roots, without falling.


The Korea Advanced Institute of Science and Technology (KAIST) announced on the 29th that a research team led by Professor Hyun Myung of the School of Electrical Engineering has developed robot control technology capable of robust "blind locomotion" in a variety of unstructured environments.

A person who wakes in the middle of the night can still walk to the bathroom in the dark. Because the robot achieves this same kind of "blind locomotion," the technology was named "DreamWaQ," and the robot that runs it is called "DreamWaQer."


Existing walking-robot controllers are based on kinematic or dynamic models, an approach known as model-based control. To walk stably in unstructured environments such as fields or mountain trails, such a controller must quickly obtain the model's characteristic information, and strengthening the robot's environmental perception to supply it imposes a heavy software and hardware burden.


The research team instead developed a controller based on deep reinforcement learning, an AI learning method: trained on data from a wide range of simulated environments, it learns to rapidly compute appropriate control commands for each motor of the walking robot. Controllers trained in simulation have previously required a separate tuning process to work well on real robots; the team's controller needs no additional tuning, so it is expected to transfer easily to a variety of walking robots.


The controller developed by the research team, DreamWaQ, consists mainly of a context estimation network that estimates information about the ground and the robot, and a policy network that outputs control commands. The context estimation network implicitly estimates ground information and explicitly estimates the robot’s state through inertial and joint information. This information is input into the policy network to generate optimal control commands. Both networks are trained together in simulation.
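The two-network structure described above can be sketched in miniature. This is an illustrative data-flow sketch only: the layer sizes, the 45-dimensional proprioceptive observation, the 16-dimensional context latent, and the 12 joint commands are assumptions for the example, not figures from the paper, and random weights stand in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers (stand-ins for trained networks)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:   # tanh on hidden layers only
            x = np.tanh(x)
    return x

OBS_DIM, LATENT_DIM, ACT_DIM = 45, 16, 12   # hypothetical sizes

# Context estimation network: proprioception -> implicit terrain/state latent
context_net = mlp([OBS_DIM, 128, LATENT_DIM])
# Policy network: proprioception + latent -> commands for the 12 joint motors
policy_net = mlp([OBS_DIM + LATENT_DIM, 128, ACT_DIM])

obs = rng.standard_normal(OBS_DIM)           # IMU + joint readings
latent = forward(context_net, obs)           # estimated context
action = forward(policy_net, np.concatenate([obs, latent]))
assert action.shape == (ACT_DIM,)
```

The key point of the structure is that the policy never sees the terrain directly; it only sees the latent that the context estimator infers from the robot's own body signals.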


While the context estimation network is trained through supervised learning, the policy network is trained with the actor-critic method, a deep reinforcement learning approach. The actor network can only estimate the surrounding terrain implicitly. In simulation, however, the terrain is fully known, so the critic network is given this privileged terrain information and uses it to evaluate the actor network's policy.
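The asymmetry between actor and critic can be made concrete with a small sketch. Everything here is an assumption for illustration: the dimensions, the terrain height scan, and the random-projection "target" standing in for the supervised label of the context estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 45 proprioceptive features and a 187-point
# terrain height scan that exists only inside the simulator.
PROPRIO_DIM, TERRAIN_DIM, LATENT_DIM = 45, 187, 16

proprio = rng.standard_normal(PROPRIO_DIM)
terrain = rng.standard_normal(TERRAIN_DIM)     # privileged, sim-only

# Asymmetric observations: the actor is "blind", the critic is not.
actor_obs  = proprio
critic_obs = np.concatenate([proprio, terrain])

# Supervised target for the context estimator: some encoding of the
# true simulated context (a random projection stands in for it here).
proj = rng.standard_normal((TERRAIN_DIM, LATENT_DIM)) * 0.05
latent_target = terrain @ proj
latent_pred = rng.standard_normal(LATENT_DIM)  # estimator output placeholder
supervised_loss = float(np.mean((latent_pred - latent_target) ** 2))

assert critic_obs.shape == (PROPRIO_DIM + TERRAIN_DIM,)
```

Because only the critic consumes the privileged terrain input, it can be discarded after training, leaving an actor that runs on proprioception alone.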


The entire training process takes only about one hour, and only the trained actor network is installed on the actual robot. Without seeing the surrounding terrain, the robot uses only internal inertial measurement unit (IMU) sensors and joint angle measurements to imagine which of the various environments learned in simulation is most similar. When suddenly encountering a step such as stairs, the robot cannot know until its foot touches the step, but the moment the foot contacts the step, it quickly imagines the terrain information. It then sends appropriate control commands to each motor based on this inferred terrain information, enabling rapid adaptive walking.
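The on-robot loop described above, sensing only the IMU and joint encoders, can be sketched as follows. The sensor functions, network stand-ins, and all dimensions are hypothetical placeholders, not the team's actual interfaces.

```python
import numpy as np

# Hypothetical dimensions and stand-in functions for illustration only.
IMU_DIM, JOINT_DIM, LATENT_DIM, ACT_DIM = 6, 12, 16, 12

def read_imu():             # stand-in for the real IMU driver
    return np.zeros(IMU_DIM)

def read_joint_angles():    # stand-in for the joint encoders
    return np.zeros(JOINT_DIM)

def context_net(obs):       # stand-in for the trained context estimator
    return np.zeros(LATENT_DIM)

def policy_net(obs):        # stand-in for the trained actor network
    return np.zeros(ACT_DIM)

def control_step():
    """One step of blind locomotion: proprioception only, no cameras."""
    obs = np.concatenate([read_imu(), read_joint_angles()])
    latent = context_net(obs)                    # "imagine" the terrain
    targets = policy_net(np.concatenate([obs, latent]))
    return targets                               # joint motor commands

assert control_step().shape == (ACT_DIM,)
```

Note that only the actor and the context estimator run on the robot; the critic, having served its purpose in simulation, never leaves the training loop.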


The DreamWaQer robot walked robustly not only in the laboratory but also around a university campus with many curbs and speed bumps, and outdoors over tree roots and gravel, overcoming obstacles such as steps roughly two-thirds of its body height (measured from the ground to the body). The team also confirmed stable walking at speeds from a slow 0.3 m/s to a relatively brisk 1.0 m/s, regardless of the environment.


The research results will be presented at ICRA (IEEE International Conference on Robotics and Automation), the world’s most prestigious conference in the field of robotics, to be held in London, UK, at the end of May.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

