Domestic Scientists Announce Successive Achievements on the 1st
AI-based robot gripper performing peg-in-hole work. Photo by Korea Institute of Industrial Technology.
[Asia Economy Reporter Kim Bong-su] The day when robots powered by artificial intelligence (AI) replace humans may not be far off. On the 1st, domestic researchers unveiled a series of 'world-first' technologies.
First, the research team led by Dr. Bae Ji-hoon at the Korea Institute of Industrial Technology announced that it has developed the world's first 'AI-based object assembly technology,' which enables a robot to grasp, move, and assemble arbitrary objects without human intervention. The robot independently establishes and executes an optimal work plan that accounts for difficulty, time, and stability. The team integrated AI with the smart gripper technology it developed in 2020.

In particular, the team applied a proprietary assembly algorithm called 'peg-in-hole using fingers,' which lets the robot gripper locate a hole and assemble parts accurately in mid-air without prior information. Even when there is a positional error, the object tilts and slides toward the hole, self-correcting during assembly, much as a person assembles objects by fingertip feel with eyes closed. Without a separate force-torque sensor attached to the fingertips, the robot can stably grasp and manipulate various objects using only finger joint movements, making it well suited to assembly tasks that require simultaneous control of position and force.
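The self-correcting behavior described above can be illustrated with a toy sketch. The institute's actual 'peg-in-hole using fingers' algorithm is proprietary and not published here; the code below only assumes, for illustration, that contact with a chamfered hole edge nudges the tilted peg a fixed fraction of the remaining offset toward the hole center on each step.

```python
# Toy sketch of compliant peg-in-hole insertion (illustrative only; the
# institute's actual algorithm is proprietary and not shown here).
# Assumption: edge contact slides the peg a fixed fraction of the
# remaining offset toward the hole center on every step.

def insert_peg(peg_xy, hole_xy, tolerance=0.5, slide_gain=0.4, max_steps=50):
    """Slide a mis-positioned peg toward the hole until it drops in."""
    x, y = peg_xy
    hx, hy = hole_xy
    for step in range(max_steps):
        dx, dy = hx - x, hy - y
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:
            return True, step          # aligned: the peg drops into the hole
        # contact with the hole edge pushes the peg toward the center
        x += slide_gain * dx
        y += slide_gain * dy
    return False, max_steps

ok, steps = insert_peg((10.0, -4.0), (0.0, 0.0))
```

The point of the sketch is that no force-torque sensor appears anywhere: the correction emerges purely from the contact geometry, which mirrors the article's description of assembly by "fingertip sensation."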
The research team subsequently developed 'AI-based work planning and object recognition technology' and combined it with smart gripper technology to complete an object assembly technology capable of working without human intervention even in complex assembly processes. They equipped the robot hand with a 'hand-eye camera' to implement object recognition technology that can identify the position, posture, and angle of randomly placed objects. They also developed and integrated 'scheduling AI' that optimizes the combination of unit tasks considering the difficulty, required time, and stability of individual tasks.
Equipped with the scheduling AI, which learns efficient work patterns through reinforcement learning, two robot arms with grippers generate optimal work schedules to quickly and accurately assemble randomly placed pegs and holes under varying conditions. They respond as precisely and flexibly as human hands, making them applicable across industries as versatile collaborative robots not limited to specific objects.
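The scheduling problem above can be sketched as follows. The actual system learns its policy via reinforcement learning; as a stand-in, this hypothetical sketch ranks unit tasks by a hand-weighted score over the three criteria the article names (difficulty, required time, stability) and greedily balances them across two arms. The weights and field names are illustrative assumptions, not the institute's design.

```python
# Minimal stand-in for the 'scheduling AI' described above: rank unit
# assembly tasks by a score combining difficulty, required time, and
# stability, then assign them to two arms. The greedy policy and weights
# are assumptions; the real system learns via reinforcement learning.

def schedule(tasks, n_arms=2, w_difficulty=1.0, w_time=0.5, w_stability=2.0):
    """Return per-arm task name lists, best-scoring work first."""
    def score(t):  # lower is better: easy, fast, stable tasks come first
        return (w_difficulty * t["difficulty"]
                + w_time * t["time"]
                - w_stability * t["stability"])
    ordered = sorted(tasks, key=score)
    arms = [[] for _ in range(n_arms)]
    loads = [0.0] * n_arms
    for t in ordered:                  # hand the next task to the arm
        i = loads.index(min(loads))    # with the least accumulated time
        arms[i].append(t["name"])
        loads[i] += t["time"]
    return arms

plan = schedule([
    {"name": "peg_A", "difficulty": 2, "time": 4.0, "stability": 0.9},
    {"name": "peg_B", "difficulty": 5, "time": 6.0, "stability": 0.4},
    {"name": "peg_C", "difficulty": 1, "time": 3.0, "stability": 0.8},
])
```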
Dr. Bae said, "This is a result of the fusion of AI and robots, realizing the imagination of combining AlphaGo with robot hands to play Go without human intervention," adding, "We plan to conduct follow-up applied research so that these robots can be deployed in dangerous sites such as order picking in large supermarkets or logistics warehouses and wire work in live current environments."
Laboratory for Automated Catalyst Performance Evaluation Using Robots. Photo by Korea Institute of Energy Research.
Robot technology for unmanned laboratories was also unveiled. On the same day, the Korea Institute of Energy Research announced that the research team led by Dr. Park Ji-chan of the Clean Fuel Research Lab has established the nation's first automated catalyst performance evaluation laboratory using robots, opening the era of the unmanned laboratory. Until now, experiments required researchers to manually operate and measure numerous pieces of equipment and reagents. With robot technology, a fully automated laboratory operating 365 days a year is now within reach.
The research team developed a fully automated catalyst performance evaluation system in which robots run catalyst pre-evaluation experiments, which skilled researchers could previously conduct only about three times a day, up to six times per hour, unmanned and reliably, a throughput that can replace the work of about 30 to 50 specialized personnel each month. The team integrated robots from the domestic collaborative robot manufacturer Rainbow Robotics Co., Ltd. with a vibration stirrer, a micropipette, and a UV/Vis spectrometer, along with a self-developed automation program designed to accurately analyze the progress of catalytic reactions in real time.
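An orchestration loop of the kind the article describes might look like the sketch below. Every class and method name here is an invented placeholder, not Rainbow Robotics' or the institute's actual API; the sketch only shows the dispense-stir-measure cycle being repeated per sample with readings collected automatically.

```python
# Hypothetical sketch of the automated evaluation loop described above:
# a robot shuttles each sample through a micropipette station, a vibration
# stirrer, and a UV/Vis spectrometer. All device names and methods are
# invented placeholders, not the institute's real control software.

class FakeSpectrometer:
    """Stand-in instrument returning a synthetic absorbance reading."""
    def absorbance(self, sample):
        return 0.1 * sample["catalyst_mg"]

def evaluate_catalysts(samples, spectrometer, steps_log=None):
    """Run the dispense -> stir -> measure cycle for each sample."""
    results = {}
    for s in samples:
        for step in ("dispense_reagent", "vibrate_stir", "measure_uv_vis"):
            if steps_log is not None:
                steps_log.append((s["id"], step))   # audit trail per step
        results[s["id"]] = spectrometer.absorbance(s)
    return results

log = []
readings = evaluate_catalysts(
    [{"id": "cat-1", "catalyst_mg": 5.0}, {"id": "cat-2", "catalyst_mg": 8.0}],
    FakeSpectrometer(), steps_log=log)
```

Because every step is logged and every reading is captured programmatically, such a loop can run around the clock, which is what makes the "365 days a year" operation mentioned above plausible.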
Dr. Park Ji-chan, who led the research, said, "Replacing repetitive catalyst evaluation experiments, which only skilled researchers could perform smoothly, with unmanned automated robots to conduct them quickly and reliably was a significant achievement," adding, "We aim to implement a smart laboratory for small-batch multi-product nano catalysts, further develop autonomous laboratories integrated with AI algorithms, and ultimately complete a national catalyst sharing platform center based on this."
Development of Scene Recognition Technology That Learns Object Concepts Independently. Photo by KAIST.
KAIST also announced on the same day that it has developed AI technology that identifies objects in videos autonomously, without human guidance. The research team led by Professor Ahn Sung-jin of the Department of Computer Science, in collaboration with Rutgers University in the United States, developed the technology, which they describe as the first AI model able to identify objects in complex videos without explicit, per-scene labeling.
For machines to intelligently perceive and reason about their surroundings, the ability to understand objects composing visual scenes and their relationships is essential. However, most research in this field has used supervised learning methods requiring humans to label objects corresponding to each pixel in the video. Such manual work is prone to errors and demands significant time and cost.
In contrast, the technology developed by the research team adopts a self-supervised learning approach, similar to humans, where the AI autonomously learns object concepts solely from environmental observations without human guidance. AI capable of learning object concepts independently without human supervision has been anticipated as a core next-generation cognitive technology.
Previous studies using unsupervised learning had the limitation of identifying objects only in simple scenes where object shapes and backgrounds are clearly distinguishable. Unlike these, the technology developed by Professor Ahn's team is the first model applicable to realistic scenes containing many objects of complex shapes.
The research was inspired by image generation studies like the AI software DALL-E, which can generate realistic images from text input. Instead of inputting text, the team trained the model to detect objects in scenes and generate images from the representations of those objects. They also noted that using a transformer decoder similar to DALL-E was a key factor enabling the model to handle realistic and complex videos.
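Object-centric models of this general family typically bind a fixed set of 'slot' vectors to image regions through attention that is normalized across slots, so the slots compete to claim each pixel's features. The sketch below shows one such attention step in NumPy; it is an illustration of that general mechanism under stated assumptions, not the team's published code.

```python
import numpy as np

# Illustrative sketch (not the paper's code): K 'slot' vectors compete for
# N image features via attention normalized over slots, so each feature is
# claimed by roughly one object. One attention step, on random features:

def slot_attention_step(slots, feats, eps=1e-8):
    """slots: (K, D) object queries; feats: (N, D) image features."""
    d = slots.shape[1]
    logits = feats @ slots.T / np.sqrt(d)            # (N, K) similarities
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over slots:
                                                     # slots compete per pixel
    weights = attn / (attn.sum(axis=0, keepdims=True) + eps)
    return weights.T @ feats                         # (K, D) updated slots

rng = np.random.default_rng(0)
slots = slot_attention_step(rng.normal(size=(4, 8)), rng.normal(size=(64, 8)))
```

The competition induced by normalizing over slots (rather than over pixels) is what lets each slot specialize to one object; in the team's setup, a DALL-E-style transformer decoder then reconstructs the scene from these slot representations, which supplies the training signal without any human labels.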
The team evaluated the model's performance not only on complex and unrefined videos but also on real-world videos such as aquariums with many fish and traffic-congested roads from YouTube. The results confirmed that the proposed model segments and generalizes objects much more accurately than existing models.
This research was presented at the 36th Conference on Neural Information Processing Systems (NeurIPS), a machine learning conference held in New Orleans, USA, starting on the 28th of last month.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

