Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are turning science fiction into reality by developing a shape-shifting robot that resembles a shapeless slime, capable of performing various tasks without traditional mechanical components.

The MIT team's work centers on a machine-learning technique for a robot that has no conventional structure, such as arms, legs or skeletal supports, as New Atlas points out.

Instead, this robot can change its shape by squashing, bending and stretching to interact with its environment.

"Shape-shifting ‘slime’ robots learn to reach, kick, dig, and catch," ran the New Atlas headline. "Formless ‘slime’ robots that shape-change to complete complex tasks – it sounds like science fantasy. However, MIT researchers have developed a machine-learning technique that brings shape-changing soft robots a step closer to reality."

This innovation marks a significant departure from previous attempts at shape-shifting robots, which depended on external magnetic controls and were unable to move independently.

"When people think of soft robots, they tend to think of robots that are elastic but return to their original shape. Our robot is like slime and can actually change its morphology," points out Boyuan Chen, from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-author of the study available for consultation on arXiv.

"It's impressive that our method worked so well because we're dealing with something very new," adds Chen.

To manage this highly adaptable form, the team turned to artificial intelligence, in particular reinforcement learning, to handle the complexity of controlling such a versatile structure.

Reinforcement learning is normally applied to rigid, well-defined robotic systems, so it had to be adapted for this project. The researchers treated the robot's possible movements as an "action space" represented as an image made up of pixels, with each pixel corresponding to an action applied to a small region of the robot's body.
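To make the idea concrete, here is a minimal, purely illustrative sketch of an environment with an image-shaped action space, written against the Gymnasium API. The environment class, grid size, physics and reward below are hypothetical stand-ins, not the actual DittoGym implementation.

```python
# Illustrative sketch only: a gym-style environment whose action is an "image"
# of actuation values, one per cell of a grid laid over the robot. Names,
# grid size, dynamics and reward are assumptions for illustration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class PixelActionSoftBodyEnv(gym.Env):
    """Toy stand-in for a soft-robot task with an image-shaped action space."""

    def __init__(self, grid=(32, 32)):
        self.grid = grid
        # Observation: an image of the robot's current material occupancy.
        self.observation_space = spaces.Box(0.0, 1.0, shape=grid, dtype=np.float32)
        # Action: one actuation value per pixel, e.g. expand (+1) or contract (-1).
        self.action_space = spaces.Box(-1.0, 1.0, shape=grid, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.zeros(self.grid, dtype=np.float32)
        return self.state, {}

    def step(self, action):
        # A real simulator would run soft-body physics here; this toy version
        # just accumulates the commanded actuation so the example stays runnable.
        self.state = np.clip(self.state + 0.1 * action, 0.0, 1.0)
        reward = -float(np.abs(self.state - 0.5).mean())  # placeholder shaping reward
        return self.state, reward, False, False, {}
```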

The abstract of the team's paper, "DittoGym: Learning to Control Soft Shape-Shifting Robots", summarizes the approach:

"Robot co-design, where the morphology of a robot is optimized jointly with a learned policy to solve a specific task, is an emerging area of research. It holds particular promise for soft robots, which are amenable to novel manufacturing techniques that can realize learned morphologies and actuators. Inspired by nature and recent novel robot designs, we propose to go a step further and explore the novel reconfigurable robots, defined as robots that can change their morphology within their lifetime. We formalize control of reconfigurable soft robots as a high-dimensional reinforcement learning (RL) problem. We unify morphology change, locomotion, and environment interaction in the same action space, and introduce an appropriate, coarse-to-fine curriculum that enables us to discover policies that accomplish fine-grained control of the resulting robots. We also introduce DittoGym, a comprehensive RL benchmark for reconfigurable soft robots that require fine-grained morphology changes to accomplish the tasks. Finally, we evaluate our proposed coarse-to-fine algorithm on DittoGym and demonstrate robots that learn to change their morphology several times within a sequence, uniquely enabled by our RL algorithm. More results are available at https://dittogym.github.io."

This method lets the robot coordinate what can loosely be considered its "limbs" despite having no fixed shape, so it can carry out coordinated actions such as stretching or compressing parts of itself.

To refine and improve the robot's movements, the researchers used a "coarse-to-fine" policy learning strategy. The algorithm initially works at a lower resolution, managing broader movements and identifying effective patterns.

It then moves to a higher resolution to fine-tune the actions, increasing the robot's precision and its ability to perform complex tasks.
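The following sketch illustrates that two-stage idea under stated assumptions; it is not the authors' code. The upsampling step and the placeholder "training" stage only show how the action resolution changes between stages, and a real implementation would plug in an actual RL algorithm such as PPO.

```python
# Minimal sketch of the coarse-to-fine idea: train with a low-resolution
# action grid first, then move to a finer grid. All names and details here
# are illustrative assumptions, not the paper's implementation.
import numpy as np


def upsample_action(coarse_action, fine_shape):
    """Nearest-neighbour upsampling of a coarse action grid to a finer one."""
    reps = (fine_shape[0] // coarse_action.shape[0],
            fine_shape[1] // coarse_action.shape[1])
    return np.kron(coarse_action, np.ones(reps, dtype=coarse_action.dtype))


def train_stage(env, resolution, n_episodes=100, max_steps=200, init_policy=None):
    """Placeholder 'training' stage at a given action resolution.

    A real implementation would run an RL algorithm (e.g. PPO) here and use
    init_policy to warm-start the finer stage; this stub just rolls out a
    random policy to show how the resolution changes between stages.
    """
    def policy(obs):
        return np.random.uniform(-1.0, 1.0, size=resolution).astype(np.float32)

    for _ in range(n_episodes):
        obs, _ = env.reset()
        for _ in range(max_steps):
            coarse = policy(obs)
            obs, reward, terminated, truncated, _ = env.step(
                upsample_action(coarse, env.action_space.shape))
            if terminated or truncated:
                break
    return policy


# Stage 1: broad movements on an 8x8 action grid, then stage 2 refines on 32x32.
# env = PixelActionSoftBodyEnv(grid=(32, 32))   # toy env from the earlier sketch
# coarse_policy = train_stage(env, resolution=(8, 8))
# fine_policy = train_stage(env, resolution=(32, 32), init_policy=coarse_policy)
```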

The team tested their innovative control system in a simulated environment they developed, known as DittoGym.

This platform presented the robot with various challenges, such as shape matching or object manipulation, which are key to assessing the robot's adaptability and control efficiency.
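As a rough illustration of what a shape-matching task might score, here is a hedged sketch of a reward based on the overlap between the robot's current silhouette and a target silhouette. The article does not describe DittoGym's actual reward functions, so the function below is an assumption for illustration only.

```python
# Hypothetical shape-matching score, not the DittoGym reward: how closely the
# robot's occupancy grid matches a target silhouette, via intersection-over-union.
import numpy as np


def shape_match_reward(robot_occupancy, target_occupancy, threshold=0.5):
    """Intersection-over-union between the robot's current shape and a target shape."""
    robot = robot_occupancy > threshold
    target = target_occupancy > threshold
    intersection = np.logical_and(robot, target).sum()
    union = np.logical_or(robot, target).sum()
    return float(intersection / union) if union > 0 else 0.0


# Example: a rough "T" shape scored against a slightly offset target "T".
target = np.zeros((8, 8))
target[0, :] = 1.0
target[:, 3] = 1.0
current = np.zeros((8, 8))
current[0, :] = 1.0
current[:, 4] = 1.0
print(shape_match_reward(current, target))  # partial overlap, value between 0 and 1
```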

Their findings showed that the coarse-to-fine approach significantly outperformed baseline methods, providing a promising basis for further development.

Although practical real-world applications may still be some years away, the implications of this research are vast. Potential future uses of this technology could range from navigation in the human body for medical purposes to integration into wearable technologies.