
For robots to learn coffee latte art, we have to start with fluid mechanics! CMU&MIT launches fluid simulation platform

Apr 07, 2023, 04:46 PM

Robots can also do the work of baristas!

For example, let it stir the milk foam and coffee evenly. The effect is like this:

Then raise the difficulty: have it make a latte, then draw a pattern in the foam with a stick:

These demos come from a study accepted as a Spotlight at ICLR 2023, which introduces FluidLab, a new benchmark for fluid manipulation, and FluidEngine, a multi-material differentiable physics engine.

The research team members come from CMU, Dartmouth College, Columbia University, MIT, MIT-IBM Watson AI Lab, and the University of Massachusetts Amherst.

With FluidLab's support, robots should find it much easier to handle fluid tasks in more complex scenarios in the future.

So what are FluidLab's "hidden skills"? Let's take a look~

"Fluid Mechanics" Advanced Player

FluidLab is built on FluidEngine. As the name suggests, its main simulation targets are fluids of different materials and types, and it captures the details of their motion accurately.

Take coffee-making as an example: the motion of the coffee and milk foam looks very realistic.

Simulating ice cream is, of course, a piece of cake.

Or simulate the movement trajectory of water flow under different conditions.

If this still hasn't shown FluidLab's strength, let's jump straight to the harder cases.

For example, start with a side-by-side comparison: have the platform simulate different materials falling into and colliding with a container. From left to right: rigid, elastic, and plastic materials.

Or the falling trajectories of inviscid versus viscous liquids.

Harder still: simulating what happens when gas and liquid meet.

Easily done!

At this point, some readers may wonder: do simulations across so many states actually obey physics, that is, fluid mechanics?

Rest assured: the research team released verification videos showing that FluidEngine accurately reproduces specific physical phenomena.

Common phenomena such as Kármán vortex streets and dam breaks are simulated accurately.

Buoyancy, incompressibility and volume stability of liquids can also be easily reflected in the simulation.

Moving to the advanced level, the Magnus effect is used for verification: pure translation, translation with slow counterclockwise spin, translation with fast counterclockwise spin, and translation with fast clockwise spin. All accurate.

Crank the difficulty up a notch and try conservation of momentum and the Rayleigh-Taylor instability.

......

How did the research team achieve such a simulation that is so close to the real world?

Different states have different algorithms

First, in terms of implementation language, FluidEngine is written in Python and Taichi, a recently proposed domain-specific language for GPU-accelerated simulation.

FluidEngine provides a user-friendly set of APIs for building simulation environments. At a higher level, it follows the standard OpenAI Gym API, making it compatible with standard reinforcement learning and optimization algorithms.
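To make the Gym-API point concrete, here is a toy environment skeleton with the conventional `reset`/`step` interface. The task logic is a made-up stand-in (a scalar "mixedness" instead of a real fluid state), not FluidLab's actual code:

```python
class StirEnv:
    """Toy environment with the classic Gym reset/step interface.

    The "physics" is a stand-in scalar (how well-mixed the fluid is),
    not a real fluid simulation; it only illustrates the API contract.
    """

    def __init__(self, target=1.0, horizon=50):
        self.target = target      # desired mixedness
        self.horizon = horizon    # episode length cap

    def reset(self):
        self.mixedness = 0.0
        self.t = 0
        return self._obs()

    def step(self, action):
        # action: stirring speed, clipped to [0, 1]; stirring mixes the fluid
        speed = max(0.0, min(1.0, action))
        self.mixedness = min(self.target, self.mixedness + 0.1 * speed)
        self.t += 1
        reward = -abs(self.target - self.mixedness)  # dense tracking reward
        done = self.t >= self.horizon or self.mixedness >= self.target
        return self._obs(), reward, done, {}

    def _obs(self):
        return [self.mixedness]
```

Because the interface is the Gym convention (`obs = env.reset()`, then `obs, reward, done, info = env.step(action)`), any agent written against that convention can drive such an environment without modification.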

How it achieves such realistic simulation can perhaps be gleaned from how an environment is created in FluidEngine.

The environment it creates consists of five parts:

  • A robot agent equipped with a user-defined end effector
  • Objects imported from external meshes and represented as signed distance fields (SDFs)
  • Objects created from shape primitives or external meshes and represented as particles
  • Gas fields for simulating gaseous phenomena on Eulerian grids (including velocity fields and other advected fields such as smoke density and temperature)
  • A set of user-defined geometric boundaries to support sparse computation
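To make the SDF item above concrete: a signed distance field maps any point in space to its distance from an object's surface, negative inside and positive outside. The simplest analytic case is a sphere (the function below is an illustrative helper, not part of FluidEngine's API):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere's surface:
    negative inside, zero on the surface, positive outside."""
    return math.dist(p, center) - radius
```

Mesh-imported objects generalize this idea: distances to an arbitrary triangle mesh are precomputed and stored on a grid, turning collision queries into cheap lookups.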

During simulation, different computational methods are used for materials in different states.

For solid and liquid materials, the simulator uses the moving least squares material point method (MLS-MPM), a hybrid Lagrangian-Eulerian method that uses both particles and grids to simulate continuum materials.
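The hybrid particle-grid idea behind MLS-MPM can be illustrated by its two transfer steps: particle-to-grid (P2G), which scatters mass and momentum onto grid nodes, and grid-to-particle (G2P), which gathers updated velocities back. The 1D sketch below shows only this transfer scaffolding with linear weights; the real MLS-MPM update also tracks deformation gradients, stresses, and affine velocity fields:

```python
import math

# 1D periodic grid with n nodes, spacing h = 1
n = 8
particles = [
    # (position, mass, velocity) -- arbitrary demo values
    (1.3, 1.0, 0.5),
    (2.7, 2.0, -0.2),
    (6.1, 1.5, 1.0),
]

grid_mass = [0.0] * n
grid_mom = [0.0] * n

# P2G: scatter each particle's mass and momentum to its two
# neighboring nodes using linear (hat-function) weights
for x, m, vel in particles:
    base = math.floor(x) % n
    frac = x - math.floor(x)
    for node, w in ((base, 1.0 - frac), ((base + 1) % n, frac)):
        grid_mass[node] += w * m
        grid_mom[node] += w * m * vel

# The grid update (forces, boundary conditions) would happen here; skipped.
grid_vel = [mom / mass if mass > 0 else 0.0
            for mom, mass in zip(grid_mom, grid_mass)]

# G2P: gather the (here unchanged) grid velocities back to the particles
new_vels = []
for x, m, vel in particles:
    base = math.floor(x) % n
    frac = x - math.floor(x)
    new_vels.append((1.0 - frac) * grid_vel[base]
                    + frac * grid_vel[(base + 1) % n])
```

Because the two weights for each particle sum to one, total mass and momentum are preserved (up to rounding) by the P2G scatter, one reason hybrid methods stay stable across material types.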

For gases such as smoke or air, an advection-projection scheme simulates them as incompressible fluids on a Cartesian grid.
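The "projection" half of that advection-projection scheme enforces incompressibility: solve a Poisson equation for a pressure field whose gradient cancels the velocity field's divergence, then subtract that gradient. The toy version below (pure Python, periodic grid, damped Jacobi solver; a sketch of the idea, not FluidLab's implementation) takes a random velocity field and makes it divergence-free:

```python
import random

N = 8             # periodic N x N grid, spacing h = 1
ITERS = 600       # damped-Jacobi iterations
OMEGA = 2.0 / 3.0 # damping factor

random.seed(0)
u = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
v = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def divergence(u, v):
    # backward differences (periodic): div = du/dx + dv/dy
    return [[u[i][j] - u[i - 1][j] + v[i][j] - v[i][j - 1]
             for j in range(N)] for i in range(N)]

rhs = divergence(u, v)

# Solve the Poisson equation  lap(p) = div(velocity)  with damped Jacobi
p = [[0.0] * N for _ in range(N)]
for _ in range(ITERS):
    p = [[(1 - OMEGA) * p[i][j] + OMEGA * 0.25 * (
              p[(i + 1) % N][j] + p[i - 1][j]
              + p[i][(j + 1) % N] + p[i][j - 1] - rhs[i][j])
          for j in range(N)] for i in range(N)]

# Subtract the pressure gradient (forward differences) to remove divergence
u2 = [[u[i][j] - (p[(i + 1) % N][j] - p[i][j])
       for j in range(N)] for i in range(N)]
v2 = [[v[i][j] - (p[i][(j + 1) % N] - p[i][j])
       for j in range(N)] for i in range(N)]

max_div_before = max(abs(d) for row in rhs for d in row)
max_div_after = max(abs(d) for row in divergence(u2, v2) for d in row)
```

After the projection, `max_div_after` is orders of magnitude smaller than `max_div_before`. Production solvers replace Jacobi with conjugate gradient or multigrid, but the structure of the step is the same.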

In this way, realistic effects can be simulated for specific situations.

The paper, project address and code link are attached at the end of the article. Interested friends can click to view.

Project homepage: https://fluidlab2023.github.io/
Paper link: https://arxiv.org/abs/2303.02346
Code link: https://github.com/zhouxian/FluidLab
