


With a single command, a robot can brew coffee, pour red wine, and hammer nails: Tsinghua's embodied-intelligence framework CoPa is here.
Embodied intelligence has seen rapid progress recently. From Google's RT-H to Figure 01, built jointly by OpenAI and Figure, robots are becoming more interactive and versatile.
If robots become assistants in our daily lives, what tasks would you expect them to handle? Brewing a steaming cup of pour-over coffee, tidying up a desk, even helping you arrange a romantic date. Tsinghua's new embodied-intelligence framework "CoPa" can complete such tasks with just one command.
CoPa (Robotic Manipulation through Spatial Constraints of Parts) is the latest framework from the Tsinghua University robotics group led by Professor Yang Gao. For the first time, it enables robots to generalize across diverse scenes on long-horizon tasks involving complex 3D behaviors.
Paper address: https://arxiv.org/abs/2403.08248
Project home page: https://copa-2024.github.io/
Thanks to its novel use of vision-language models (VLMs), CoPa works in open-world settings without any task-specific training, generalizes across a variety of scenes, and handles complex instructions. Most striking is its deep understanding of the physical properties of objects in a scene, along with its precise planning and manipulation capabilities.
For example, CoPa can help researchers make a cup of hand-brewed coffee:
In this task, CoPa not only understands the function of each object in a cluttered tabletop scene, but also completes the required physical operations with precise control. For example, in the "pour water from the kettle into the funnel" step, the robot moves the kettle above the funnel and rotates it to exactly the right angle so that water flows from the spout into the funnel.
CoPa can also carefully arrange a romantic date. After understanding the researchers' requirements for the date, CoPa set up a beautiful Western-style dinner table for them.
While deeply understanding user needs, CoPa also demonstrates precise object manipulation. For example, in the "insert a flower into the vase" task, the robot first grasps the flower's stem, rotates it until it faces the vase, and finally inserts it.

Method
Algorithm pipeline
Most manipulation tasks can be decomposed into two stages: grasping the object, and the subsequent motions needed to complete the task. For example, to open a drawer, we first grasp the handle and then pull the drawer out along a straight line. Accordingly, the researchers designed a two-stage pipeline: a Task-Oriented Grasping module first generates the pose for grasping the object, and a Task-Aware Motion Planning module then generates the post-grasp poses required to finish the task. The robot's transitions between adjacent poses are handled by traditional path-planning algorithms.
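To make this decomposition concrete, here is a minimal Python sketch of the two-stage pipeline. All function and object names (execute, robot.move_to, and so on) are illustrative assumptions, not the paper's actual code:

```python
# A minimal, hypothetical sketch of CoPa's two-stage decomposition.
# Every name here is an illustrative stand-in, not the authors' API.

def task_oriented_grasping(instruction, observation):
    """Stage 1: return a 6-DoF grasp pose on the task-relevant part."""
    raise NotImplementedError  # part grounding + GraspNet filtering (below)

def task_aware_motion_planning(instruction, observation):
    """Stage 2: return the sequence of post-grasp poses for the task."""
    raise NotImplementedError  # VLM spatial constraints + pose solver (below)

def execute(instruction, observation, robot):
    grasp_pose = task_oriented_grasping(instruction, observation)
    robot.move_to(grasp_pose)  # pose-to-pose transfer: classical path planning
    robot.close_gripper()
    for pose in task_aware_motion_planning(instruction, observation):
        robot.move_to(pose)
```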
Task-relevant part grounding module
The researchers observed that most manipulation tasks require a detailed part-level understanding of objects in the scene. For example, when cutting with a knife we hold the handle rather than the blade; when putting on glasses we hold the frame rather than the lenses. Based on this observation, the team designed a coarse-to-fine part grounding module to locate the task-relevant parts in a scene: CoPa first finds the task-relevant objects via coarse-grained object detection, then locates the task-relevant parts of those objects via fine-grained part detection.
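The idea can be sketched as two queries, assuming placeholder interfaces for the detector, segmenter, and VLM (none of these are the authors' actual components):

```python
# Hypothetical coarse-to-fine part grounding, following the description
# above. `detector`, `segmenter`, and `vlm` are assumed interfaces.

def ground_task_relevant_part(instruction, image, detector, segmenter, vlm):
    # Coarse stage: detect candidate objects, then ask the VLM which
    # object the task needs (e.g., "the knife").
    boxes = detector.detect(image)
    obj_box = vlm.choose(image, boxes,
                         f"Which object is needed to: {instruction}?")

    # Fine stage: segment parts within the chosen object's crop, then
    # ask the VLM which part to act on (e.g., the handle, not the blade).
    crop = image.crop(obj_box)
    part_masks = segmenter.segment(crop)
    part_mask = vlm.choose(crop, part_masks,
                           f"Which part should be used to: {instruction}?")
    return obj_box, part_mask
```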
In the task-oriented grasping module, CoPa first locates the grasp position (for example, a tool's handle) with the part grounding module described above. This position information is then used to filter the grasp candidates produced by GraspNet (a model that generates all plausible grasp poses in a scene), yielding the final grasp pose.
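One plausible way to do this filtering, sketched under assumed data layouts (GraspNet's actual outputs are richer than shown here):

```python
import numpy as np

def filter_grasps_by_part(grasp_poses, grasp_points, part_points, thresh=0.01):
    """Keep grasp candidates whose contact point lies on the grounded part.

    grasp_poses:  (N, 4, 4) candidate gripper poses from GraspNet
    grasp_points: (N, 3) 3D contact points of those candidates
    part_points:  (M, 3) point cloud of the task-relevant part
    thresh:       distance threshold in meters (1 cm is an assumption)
    """
    kept = []
    for pose, point in zip(grasp_poses, grasp_points):
        # A candidate survives if its contact point is near the part.
        if np.linalg.norm(part_points - point, axis=1).min() < thresh:
            kept.append(pose)
    return kept
```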
Task-aware motion planning module
To let a vision-language model help a robot perform manipulation tasks, the study needed an interface that the model can reason about in language and that also translates readily into robot actions. The team observed that during task execution, task-relevant objects are usually subject to many spatial geometric constraints. For example, when charging a phone, the plug must face the charging port; when capping a bottle, the cap must sit squarely on the bottle's mouth. Based on this, the team proposed using spatial constraints as the bridge between vision-language models and robots: CoPa first uses the VLM to generate the spatial constraints that task-relevant objects must satisfy to complete the task, then uses a solver module to compute the robot's pose from those constraints.
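To make "constraints as a bridge" concrete, here is a toy solver for a single pointing constraint, such as aiming a kettle spout at a funnel. The constraint format and the generic optimizer are assumptions for illustration; CoPa's actual solving module differs:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def solve_pointing_constraint(axis_local, target, init_pos):
    """Toy solver: find a 6-DoF pose whose local tool axis points at
    `target`. Illustrative only; not CoPa's released solver."""
    def cost(x):
        pos, rotvec = x[:3], x[3:]
        axis_world = Rotation.from_rotvec(rotvec).apply(axis_local)
        to_target = target - pos
        to_target = to_target / (np.linalg.norm(to_target) + 1e-9)
        # Zero when the rotated axis exactly aligns with the direction
        # from the pose to the target point.
        return 1.0 - float(axis_world @ to_target)

    x0 = np.concatenate([init_pos, np.zeros(3)])
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x[:3], Rotation.from_rotvec(res.x[3:])

# Example: aim the gripper's +z axis at a funnel 30 cm ahead and below.
pos, rot = solve_pointing_constraint(
    np.array([0.0, 0.0, 1.0]),   # tool axis in the gripper frame
    np.array([0.3, 0.0, -0.1]),  # target point (the funnel mouth)
    np.array([0.0, 0.0, 0.2]))   # initial gripper position
```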
Experimental results
CoPa Capability Assessment
CoPa demonstrated strong generalization on real-world manipulation tasks. Its deep understanding of the physical properties of objects in a scene comes from the common-sense knowledge embedded in vision-language models.
For example, in the "hammer a nail" task, CoPa first grasped the hammer's handle, then rotated the hammer until its head faced the nail, and finally hammered downward. The task requires precisely identifying the hammer handle, the hammer's striking surface, and the nail surface, and fully understanding their spatial relationships, demonstrating CoPa's deep understanding of the physical properties of objects in the scene.
In the "put the eraser into the drawer" task, CoPa first located the eraser, noticed that part of it was wrapped in paper, and cleverly grasped that wrapped part so the eraser would not get dirty.
In the "insert the spoon into the cup" task, CoPa first grasped the spoon's handle, then translated and rotated it until it pointed vertically downward over the cup, and finally inserted it, showing that CoPa understands the spatial geometric constraints an object must satisfy to complete a task.
The research team conducted extensive quantitative experiments on 10 real-world tasks. As shown in Table 1, CoPa significantly outperforms the baseline methods as well as several ablation variants on these complex tasks.
Ablation Experiment
Through a series of ablation experiments, the researchers demonstrated the importance of three components of the CoPa framework: the foundation models, coarse-to-fine part grounding, and spatial constraint generation. The results are shown in Table 1 above.
Foundation models
The CoPa w/o foundation ablation removes CoPa's use of foundation models, instead using a plain detection model to locate objects and a rule-based approach to generate spatial constraints. Its success rate is very low, confirming the key role of the rich common-sense knowledge in foundation models. For example, in the "sweep the nuts" task, this variant cannot tell which tool in the scene is suitable for sweeping.
Coarse-to-fine part grounding
The CoPa w/o coarse-to-fine ablation removes the coarse-to-fine grounding design and instead applies fine-grained segmentation directly to the whole scene. This variant's performance drops significantly on tasks where locating the right part of an object is difficult. For example, in the "hammer a nail" task, without the coarse-to-fine design it struggles to identify the hammer's striking surface.
Spatial constraint generation
The CoPa w/o constraint ablation removes CoPa's spatial constraint generation module and instead has the vision-language model directly output numerical values for the robot's target pose. Experiments show that producing a target pose directly from scene images is very difficult. For example, in the "pour water" task, the kettle must be tilted to a specific angle, and this variant completely fails to generate the required pose.
For more information, please refer to the original paper.