
A new chain of three-dimensional perception of embodied intelligence, TeleAI & Shanghai AI Lab proposed a multi-perspective fusion embodied model 'SAM-E'

Jun 05, 2024 pm 04:09 PM

The AIxiv column is a section where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

When we pick up a mechanical watch, we see the dial and hands from the front, the crown and bracelet from the side, and the intricate gears and movement when we open the case back. Each viewpoint provides different information, and together they yield an overall three-dimensional understanding of the object being manipulated.

If you want a robot to learn complex real-world tasks, you first need to make it understand the properties of the objects it manipulates and the corresponding three-dimensional workspace, including object positions and shapes, occlusion relationships between objects, and the relationship between objects and the environment.

Second, the robot needs to understand natural-language instructions and carry out long-horizon planning and efficient execution of future actions. Equipping robots with these capabilities, from environment perception to action prediction, is challenging.

Recently, the team of Professor Li Xuelong at the China Telecom Artificial Intelligence Research Institute (TeleAI), together with the Shanghai Artificial Intelligence Laboratory, Tsinghua University, and other institutions, simulated the human cognitive process of "perception-memory-thinking-imagination" and proposed a general embodied manipulation algorithm driven by multi-view fusion, providing a feasible solution for robots to learn complex manipulation. The paper was accepted at the International Conference on Machine Learning (ICML 2024), laying a foundation for building general three-dimensional embodied policies.
In recent years, the ability of visual foundation models to understand images has advanced rapidly, yet understanding three-dimensional space remains challenging. Can large visual models help embodied agents understand three-dimensional manipulation scenes and complete various complex tasks in three-dimensional space? Inspired by the cognitive process of "perception-memory-thinking-imagination", the paper proposes SAM-E, a new embodied foundation model built on the visual segmentation model Segment Anything (SAM).

First, SAM-E has a powerful promptable "perception" ability: it applies SAM's prompt-driven segmentation structure to concrete language-instructed tasks, attending to the manipulated objects in the scene by parsing the text instruction.

Next, a multi-view Transformer is designed to fuse and align depth features, image features, and instruction features, achieving object "memory" and manipulation "thinking" to understand the three-dimensional workspace of the robotic arm.

Finally, a new action-sequence prediction network is proposed to model actions over multiple time steps and "imagine" upcoming actions, realizing an end-to-end mapping from three-dimensional scene perception to embodied action.
  • Paper title: SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation
  • Paper link: https://sam-embodied.github.io/static/SAM-E.pdf
  • Project address: https://sam-embodied.github.io/

From two-dimensional perception to three-dimensional perception

In the tide of the digital age, with the rapid development of artificial intelligence technology, we are gradually entering a new era: the era of embodied intelligence. Giving an agent a body and the ability to interact directly with the real world has become one of the key directions of current research.

To achieve this goal, the agent must have strong three-dimensional perception capabilities so that it can accurately understand its surrounding environment.

Traditional two-dimensional perception methods are inadequate in complex three-dimensional space. How to give embodied agents the ability to accurately model three-dimensional space through learning has become a key issue in urgent need of a solution.

Existing work restores and reconstructs three-dimensional space from multiple views such as the front, top, and side views. However, the required computation is relatively heavy, and generalization across different scenes is limited.

To solve this problem, this work explores a new approach: applying the strong generalization ability of large visual models to the three-dimensional perception of embodied agents.

SAM-E proposes to use SAM, a general large visual model with strong generalization, for visual perception. Through efficient fine-tuning on embodied scenes, SAM's generalizable, promptable feature extraction, instance segmentation, and complex-scene understanding can be effectively transferred to embodied settings.

To further improve the base model, an action-sequence network is introduced that captures not only single-action predictions but also the internal connections between consecutive actions. It fully exploits temporal information across actions, thereby improving the base model's understanding of and adaptation to embodied scenes.

Figure 1. SAM-E overall framework

SAM-E method

The core of the SAM-E method consists of two aspects:

  • Using SAM's prompt-driven structure, a powerful base model is built with excellent generalization under task language instructions. LoRA fine-tuning then adapts the model to specific tasks, further improving its performance.
  • Sequential action modeling captures the temporal information in the action sequence, better tracking the dynamic changes of a task and adjusting the robot's policy and execution in time, so that the robot maintains high execution efficiency.

Promptable perception and fine-tuning

The core of SAM-E is a network structure driven by task-instruction prompts, comprising a powerful visual encoder and a lightweight decoder. In the embodied scene, the task "prompt" is given in natural language; treating it as a task-description instruction, the visual encoder exercises its promptable perception to extract task-relevant features. The policy network serves as the decoder and outputs actions based on the fused visual embedding and language instructions.

During training, SAM-E uses LoRA for efficient fine-tuning, which greatly reduces the number of trainable parameters and enables the base vision model to adapt quickly to specific tasks.
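The idea behind LoRA can be sketched as follows: freeze the pre-trained weights and train only a small low-rank update alongside them. This is a minimal illustrative sketch in PyTorch, not the authors' code; the layer sizes, rank, and scaling are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA).

    Only the two small factor matrices are trained, so the number of
    trainable parameters stays tiny compared with the frozen backbone.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Example: wrap one projection of a (stand-in) encoder layer.
layer = LoRALinear(nn.Linear(256, 256), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # only the low-rank factors are trainable
```

In a setup like SAM-E's, such adapters would be applied inside the frozen visual encoder, so fine-tuning on embodied data touches only a small fraction of the model's weights.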

Multi-perspective three-dimensional fusion

SAM-E introduces a multi-view Transformer network to fuse visual inputs from multiple viewpoints and gain a deep understanding of three-dimensional space. It works in two stages: view-wise attention and cross-view attention.

First, intra-view attention is applied to each view's features; then the views are combined with the language description for cross-view attention, achieving multi-view information fusion and image-language alignment.
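The two-stage attention described above can be sketched as follows. This is an illustrative PyTorch sketch of the general pattern, not the paper's architecture; dimensions, head counts, and token layouts are assumptions.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Two-stage attention sketch: per-view self-attention first,
    then cross-view attention over all views plus language tokens.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.view_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, views: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, tokens, dim); lang: (batch, lang_tokens, dim)
        b, v, t, d = views.shape
        # Stage 1: view-wise attention, each view attends only to itself.
        per_view = views.reshape(b * v, t, d)
        per_view, _ = self.view_attn(per_view, per_view, per_view)
        # Stage 2: cross-view attention over all views + language tokens,
        # mixing information across viewpoints and aligning it with language.
        tokens = torch.cat([per_view.reshape(b, v * t, d), lang], dim=1)
        fused, _ = self.cross_attn(tokens, tokens, tokens)
        return fused

fusion = MultiViewFusion()
out = fusion(torch.randn(2, 3, 16, 256), torch.randn(2, 4, 256))
print(out.shape)  # (batch, n_views * tokens + lang_tokens, dim)
```

The key design point is the ordering: each view first consolidates its own features before any cross-view mixing, which keeps per-view structure intact while still letting the second stage align all views with the instruction.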

Action sequence modeling

During robotic-arm execution, the position and rotation of the end effector usually change continuously and smoothly, so adjacent actions are closely connected and continuous. Based on this observation, a novel temporal-smoothness hypothesis is proposed, aiming to fully exploit the intrinsic correlation between adjacent actions for effective imitation learning of action sequences.

Specifically, the SAM-E framework captures patterns and relationships in action sequences through sequence modeling, providing implicit prior knowledge for action prediction and constraining the continuity of actions, thereby significantly improving the accuracy and consistency of predicted actions.

In practice, SAM-E allows multiple subsequent actions to be executed from a single action prediction, greatly improving execution efficiency.
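The multi-step prediction idea can be sketched as a head that outputs a short horizon of actions per inference instead of one. This is a minimal sketch under assumed sizes (an 8-dimensional action for position, rotation, and gripper; a horizon of 4 steps), not the paper's network.

```python
import torch
import torch.nn as nn

class ActionSequenceHead(nn.Module):
    """Predicts a short sequence of future actions from one fused
    observation embedding, rather than a single next action.
    """

    def __init__(self, dim: int = 256, action_dim: int = 8, horizon: int = 4):
        super().__init__()
        self.horizon = horizon
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, horizon * action_dim),
        )

    def forward(self, obs_embed: torch.Tensor) -> torch.Tensor:
        # One forward pass yields `horizon` consecutive actions.
        out = self.net(obs_embed)
        return out.view(-1, self.horizon, self.action_dim)

head = ActionSequenceHead()
actions = head(torch.randn(2, 256))
print(actions.shape)  # (batch, horizon, action_dim)
```

At execution time, the robot can run the whole predicted chunk before re-querying the model, which is what reduces the number of inferences compared with predicting one action at a time.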

Figure 4. Action sequence prediction network

Experiment

The experiments use RLBench, a challenging suite of robotic-arm tasks, to conduct a comprehensive evaluation of 3D manipulation under multi-view observation. The SAM-E model significantly outperforms traditional methods in many respects.
In multi-task scenarios, the SAM-E model significantly improves the task success rate.

When transferring to new tasks with only a few samples, SAM-E effectively improves performance on the new tasks by virtue of its strong generalization and efficient execution.

Figure 6. 3D operation task example

In addition, action-sequence prediction significantly improves the execution efficiency of SAM-E. During policy execution, executing action sequences requires far fewer model inferences than predicting single actions; in testing, some tasks can even be completed with a single model inference.
SAM-E is equally effective for real robotic-arm control: using two third-person cameras to capture multi-view visual input, it achieves real-time inference on five real-world tasks.
Summary
This work pioneers a general embodied-manipulation algorithm based on multi-view fusion, using a large visual segmentation model together with multi-view fusion to achieve three-dimensional physical-space perception for embodied agents.
Through efficient parameter fine-tuning, the pre-trained visual model is transferred to specific scenes and can solve complex 3D robotic-arm manipulation tasks specified by natural-language instructions. The model also generalizes quickly to new tasks from a small number of expert demonstrations, showing superior training efficiency and action-execution efficiency.
More importantly, SAM-E uses the cognitive chain of "perception-memory-thinking-imagination" to realize end-to-end mapping from data to action. Its significance lies not only in its application to embodied intelligence, but also in its inspiration for improving the cognitive abilities of agents.

By simulating human perception and decision-making, intelligent agents can better understand and adapt to complex environments, playing a greater role in a wider range of fields.

Team leader introduction:

Li Xuelong is the CTO and Chief Scientist of China Telecom and Director of the China Telecom Artificial Intelligence Research Institute (TeleAI). His work focuses mainly on artificial intelligence, local security, image processing, and embodied intelligence.


