


Published in Nature: How does the brain control movement? DeepMind builds virtual animals with "AI brains"

Editor | Radish Skin
Animals control their bodies with remarkable precision, allowing them to perform a wide variety of behaviors. How the brain achieves this control, however, remains unclear. To deepen our understanding, we need models that can link principles of control to the structure of neural activity in animals.
To achieve this, researchers from Harvard University and Google DeepMind built a "virtual rodent": an artificial neural network that drives a biomechanically realistic model of a rat inside a physics simulator.
The team used deep reinforcement learning to train the virtual agent to imitate the behavior of freely moving rats, allowing them to compare neural activity recorded from real rats with the network activity of the virtual agent producing the same behaviors. This network activity can be used to probe how the brain learns and controls movement, deepening our understanding of those processes. The team's deep learning models could also aid the development of smarter robots and other autonomous systems.
The model accurately imitated the movements of real rats, an achievement that is expected to improve scientists' understanding of how the brain controls complex, coordinated movement.
Even the most advanced robots struggle to reproduce such movements, and the research team believes their findings could greatly improve the agility of future robots.
The research, titled "A virtual rodent predicts the structure of neural activity across behaviors", was published in Nature on June 11, 2024.
Humans and animals control their bodies with an ease and efficiency that engineered systems struggle to match. This poses a challenge for computational modeling in motor neuroscience, because neural activity in motor systems is rarely interpreted relative to models that can causally generate complex, natural movements.
In the absence of such generative models, neuroscientists instead try to infer motor system function by relating neural activity in relevant brain regions to measurable movement features, such as the kinematics and dynamics of different body parts.
This approach is problematic, however, because movement features are inherently related to one another by the laws of physics, so such models can only describe behavior rather than generate it. To address this, the research team proposed a new approach: using virtual animal models paired with controllers to infer computational principles.
The research team developed a "virtual rodent" in which an artificial neural network (ANN) drives a biomechanically realistic rat model operating in a physics simulator.
Building such a system requires balancing tractability, expressiveness, and biological realism. The researchers chose the simplest model that could both reproduce the rats' behavior and predict their neural activity.
The ANN is trained with deep reinforcement learning to implement an inverse dynamics model: its inputs are the current body state and a reference trajectory of the real animal's future movement, and its output is the set of actions required to reach the desired state. The researchers can then compare the neural activity of real rats with the network activity of the virtual rodent imitating the same behavior.
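To make the input-output contract of such a controller concrete, here is a minimal PyTorch sketch of an inverse dynamics policy: it maps the current body state plus a short snippet of future reference trajectory to motor actions. The layer sizes, dimensions, and dummy forward pass are illustrative assumptions, not the paper's architecture, and the actual agent was trained with deep reinforcement learning inside a physics simulator rather than called in isolation like this.

```python
import torch
import torch.nn as nn

class InverseDynamicsPolicy(nn.Module):
    """Maps (current body state, future reference trajectory) -> motor actions.

    All dimensions are hypothetical, chosen only to illustrate the idea of an
    inverse dynamics controller as described in the article.
    """

    def __init__(self, state_dim=74, ref_dim=5 * 74, action_dim=38, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + ref_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, action_dim),  # e.g. joint torques / muscle activations
        )

    def forward(self, state, reference):
        # state:     (batch, state_dim)  current proprioceptive body state
        # reference: (batch, ref_dim)    flattened snippet of future reference motion
        return self.net(torch.cat([state, reference], dim=-1))


# Dummy forward pass with random data, just to show the input/output contract.
policy = InverseDynamicsPolicy()
state = torch.randn(8, 74)          # batch of current body states
reference = torch.randn(8, 5 * 74)  # next 5 reference frames, flattened
actions = policy(state, reference)
print(actions.shape)  # torch.Size([8, 38])
```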
This approach has two main advantages. First, the model is causal: it physically reproduces the behaviors of interest rather than merely describing them. Second, it focuses on identifying the functions that brain regions implement, rather than merely describing the flow of information.
"We learned a lot from the challenge of building 'embodied agents': AI systems that must not only think intelligently, but also translate that thinking into practical action in complex environments," said Matthew Botvinick of Google DeepMind. "Taking the same approach in a neuroscience context appears to provide insights into both behavior and brain function." The results showed that neural activity in the sensorimotor striatum and motor cortex was more accurately predicted by the virtual rodent's network activity than by features of the real rats' movements, consistent with these two regions implementing inverse dynamics control.
Illustration: the virtual rodent. (Source: DeepMind website)
Furthermore, the latent variability of the network predicts the structure of neural variability across behaviors and confers robustness in a manner consistent with the minimal intervention principle of optimal feedback control.
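To illustrate what "predicting neural activity from network activity" means in practice, below is a minimal, self-contained sketch of a cross-validated linear encoding model using scikit-learn ridge regression on synthetic data. It is a generic stand-in for this kind of analysis, not the paper's actual pipeline; the array shapes, noise model, and regularization strength are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-timestep network activations and recorded firing rates.
n_timesteps, n_units, n_neurons = 2000, 128, 50
network_activity = rng.normal(size=(n_timesteps, n_units))        # virtual-rodent layer activations
neural_activity = network_activity @ rng.normal(size=(n_units, n_neurons)) * 0.1
neural_activity += rng.normal(size=(n_timesteps, n_neurons))       # "recorded" rates plus noise

# Cross-validated ridge regression: how well does network activity predict each neuron?
r2_per_neuron = np.zeros(n_neurons)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(network_activity):
    model = Ridge(alpha=1.0).fit(network_activity[train_idx], neural_activity[train_idx])
    pred = model.predict(network_activity[test_idx])
    resid = ((neural_activity[test_idx] - pred) ** 2).sum(axis=0)
    total = ((neural_activity[test_idx] - neural_activity[test_idx].mean(axis=0)) ** 2).sum(axis=0)
    r2_per_neuron += (1 - resid / total) / kfold.get_n_splits()

print("mean cross-validated R^2 per neuron:", r2_per_neuron.mean())
```

Repeating this kind of fit with different predictors (network activations versus measured movement features) is one standard way to compare how well each accounts for recorded neural activity.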
These findings show that biomechanically realistic virtual animals driven by physics simulation can help explain the structure of neural activity across behaviors and link it to theoretical principles of motor control.
This approach also demonstrates the potential of artificial controllers coupled to biomechanical models for revealing the computational principles of neural circuits. Virtual animals can serve as a platform for "virtual neuroscience", simulating how variables that are difficult to manipulate experimentally affect neural activity and behavior.
This area of research is important for the development of advanced prosthetics and brain-computer interfaces. The insights gained from reconstructing these neural circuits could lead to new ways to treat movement disorders, and the study notes that the virtual rat provides a transparent model for studying neural circuits and how disease affects them.
Next, the researchers plan to give the virtual rodent the autonomy to solve tasks like those real rats encounter, to further probe the algorithms the brain uses to acquire skills.
In the future, scientists may build brain-inspired network architectures to improve performance and interpretability, and explore the role of specific circuit structures and neural mechanisms in behavioral computation.
Paper link: https://www.nature.com/articles/s41586-024-07633-4
Related reports: https://decrypt.co/235086/virtual-rat-ai-brain-harvard-google-deepmind-robotics