


The ECCV 2024 Workshop on Multimodal Understanding and Video Generation of Difficult Autonomous Driving Scenarios: Call for Papers and Challenge Now Open!
Workshop home page: https://www.php.cn/link/f73850aa36d8564629a0d62c51009acf
Overview
This workshop explores the gap between current state-of-the-art autonomous driving technology and a comprehensive, reliable intelligent autonomous driving agent. In recent years, large multimodal models (such as GPT-4V) have demonstrated unprecedented progress in multimodal perception and understanding, yet using MLLMs to handle complex autonomous driving scenarios, especially rare but safety-critical corner cases, remains an open challenge. The workshop aims to promote innovative research on perception and understanding with multimodal large models, the application of advanced AIGC techniques to autonomous driving systems, and end-to-end autonomous driving.
Workshop Call for Papers
The workshop solicits papers on multimodal perception and understanding of autonomous driving scenes, image and video generation for autonomous driving scenes, end-to-end autonomous driving, and next-generation industrial-grade autonomous driving solutions. Topics include, but are not limited to:
- Corner case mining and generation for autonomous driving.
- 3D object detection and scene understanding.
- Semantic occupancy prediction.
- Weakly supervised learning for 3D Lidar and 2D images.
- One/few/zero-shot learning for autonomous perception.
- End-to-end autonomous driving systems with Large Multimodal Models.
- Large Language Models techniques adaptable for self-driving systems.
- Safety/explainability/robustness for end-to-end autonomous driving.
- Domain adaptation and generalization for end-to-end autonomous driving.
Submission rules:
Submissions go through double-blind review on the OpenReview platform and are accepted in two forms:
- Full paper: up to 14 pages in ECCV format, with no limit on the length of references and supplementary materials. Accepted papers become part of the official ECCV proceedings and may not be resubmitted to other conferences.
- Extended abstract: up to 4 pages in CVPR format, with no limit on the length of references and supplementary materials. Accepted papers are not included in the official ECCV proceedings and may be resubmitted to other conferences.
Submission entrance:
- Full paper: ECCV 2024 Workshop W-CODA | OpenReview
- Extended abstract: ECCV 2024 Workshop W-CODA Abstract Paper Track | OpenReview
Multimodal Understanding and Video Generation Challenge for Difficult Autonomous Driving Scenarios
This challenge aims to improve multimodal models' ability to perceive and understand extreme situations in autonomous driving, as well as to generate videos depicting these situations. We offer generous prizes and sincerely invite you to participate!
Track 1: Perception and understanding of difficult autonomous driving scenarios
This track focuses on the perception and understanding capabilities of multimodal large language models (MLLMs) in difficult autonomous driving scenarios, covering overall scene understanding, region-level understanding, and driving suggestions, with the goal of promoting more reliable and explainable autonomous driving agents.
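For concreteness, the sketch below (not the official evaluation format; all field names and values are hypothetical placeholders) illustrates how a Track 1 response covering the three dimensions above might be structured as a single JSON record.

```python
# A minimal sketch (not the official evaluation format) of a Track 1
# response record covering the three dimensions named above. All field
# names and values here are hypothetical placeholders.
import json

record = {
    "frame_id": "scene_0001_frame_042",  # hypothetical frame identifier
    # Overall scene understanding: a free-form description of the scene.
    "general_perception": (
        "A cyclist is crossing ahead in light rain; a parked van "
        "partially blocks the right lane."
    ),
    # Regional understanding: a description tied to a specific image region.
    "region_perception": {
        "bbox": [412, 220, 508, 390],  # [x1, y1, x2, y2] in pixels
        "description": "A cyclist moving left to right, near the ego lane.",
    },
    # Driving suggestion: the recommended ego-vehicle behavior.
    "driving_suggestion": (
        "Slow down and keep a safe lateral distance before passing."
    ),
}

print(json.dumps(record, indent=2))
```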
Track 2: Video Generation of Difficult Autonomous Driving Scenarios
This track focuses on diffusion models' ability to generate multi-view autonomous driving scene videos. Given the 3D geometric structure of an autonomous driving scene, the model must generate the corresponding scene video while ensuring temporal consistency, multi-view consistency, and the specified resolution and video duration.
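As a rough illustration of these output constraints, here is a minimal sketch, assuming OpenCV is available, that checks whether a set of generated per-view clips matches a target resolution, frame count, and number of camera views. The target values and file paths are placeholders, not the official challenge specification.

```python
# A minimal sketch (assuming OpenCV; not the official checker) that
# verifies a set of generated per-view clips against target constraints:
# fixed resolution, fixed frame count, and one clip per camera view.
# All target values and file paths below are hypothetical placeholders.
import cv2  # pip install opencv-python

EXPECTED_VIEWS = 6            # e.g. a six-camera surround rig (assumption)
EXPECTED_SIZE = (1600, 900)   # (width, height) in pixels (assumption)
EXPECTED_FRAMES = 96          # clip length in frames (assumption)


def clip_meets_spec(path: str) -> bool:
    """Return True if the video at `path` matches the target
    resolution and frame count."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        return False
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    return (width, height) == EXPECTED_SIZE and frames == EXPECTED_FRAMES


views = [f"outputs/view_{i}.mp4" for i in range(EXPECTED_VIEWS)]
print(all(clip_meets_spec(p) for p in views))
```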
Competition period: June 15, 2024 to August 15, 2024
Prizes (per track): US$1,000 for first place, US$800 for second place, and US$600 for third place.
Timeline (AoE, UTC-12)
| Event | Date |
| --- | --- |
| Full Paper Submission Deadline | 1st Aug, 2024 |
| Full Paper Notification to Authors | 10th Aug, 2024 |
| Full Paper Camera Ready Deadline | 15th Aug, 2024 |
| Abstract Paper Submission Deadline | 1st Sep, 2024 |
| Abstract Paper Notification to Authors | 7th Sep, 2024 |
| Abstract Paper Camera Ready Deadline | 10th Sep, 2024 |
| Challenge Open to Public | 15th Jun, 2024 |
| Challenge Submission Deadline | 15th Aug, 2024 |
| Challenge Notification to Winner | 1st Sep, 2024 |
| Workshop Date | 30th Sep, 2024 |
If you have any questions about the Workshop and the Challenge, please contact: w-coda2024@googlegroups.com.