Reinforcement learning is on the cover of Nature again, and the new paradigm of autonomous driving safety verification significantly reduces test mileage

Mar 31, 2023, 10:38 PM
Tags: AI, reinforcement learning

This article introduces dense deep reinforcement learning (D2RL), an approach that uses AI to verify AI.

Rapid advances in autonomous vehicle (AV) technology have us on the cusp of a transportation revolution on a scale not seen since the advent of the automobile a century ago. Autonomous driving technology has the potential to significantly improve traffic safety, mobility, and sustainability, and therefore has attracted the attention of industry, government agencies, professional organizations, and academic institutions.

The development of autonomous vehicles has come a long way over the past 20 years, especially since the advent of deep learning. By 2015, companies had begun announcing that they would mass-produce AVs by 2020, yet so far no Level 4 AV is available on the market.

There are many reasons for this phenomenon, but the most important is that the safety performance of self-driving cars is still significantly lower than that of human drivers. For the average driver in the United States, the probability of a collision in the natural driving environment (NDE) is approximately 1.9 × 10^−6 per mile. By comparison, the disengagement rate for state-of-the-art autonomous vehicles is about 2.0 × 10^−5/mile, according to California’s 2021 Disengagement Reports.

Note: The disengagement rate is an important indicator for evaluating the reliability of autonomous driving. It measures how often the system requires the driver to take over, and is typically reported per 1,000 miles of operation. The lower the disengagement rate, the more reliable the system; a disengagement rate of zero would mean the system has, to some extent, reached a fully driverless level.

Although the disengagement rate can be criticized for being biased, it has been widely used to evaluate the safety performance of autonomous vehicles.
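As a rough, purely illustrative comparison (the two rates count different events, so this is not an apples-to-apples safety comparison), the figures quoted above can be converted into mean miles between events:

```python
# Back-of-envelope comparison of the two rates quoted above (illustrative only).
human_crash_rate = 1.9e-6        # crashes per mile in the NDE
av_disengagement_rate = 2.0e-5   # disengagements per mile (California 2021 reports)

print(f"Human driver: one crash every {1 / human_crash_rate:,.0f} miles")
print(f"AV: one disengagement every {1 / av_disengagement_rate:,.0f} miles")
print(f"Ratio: roughly {av_disengagement_rate / human_crash_rate:.1f}x")
```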

A key bottleneck in improving AV safety performance is the low efficiency of safety verification. The prevailing practice is to test the safety performance of autonomous vehicles through a combination of software simulation, closed test tracks, and on-road testing. As a result, AV developers must bear substantial economic and time costs for evaluation, which hinders AV deployment.
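A standard back-of-envelope bound (not from the paper) illustrates why naturalistic on-road testing alone is so expensive. Modeling each mile as an independent Bernoulli trial with crash probability p, the test mileage n needed to estimate p within relative error ε at confidence level z must satisfy:

```latex
\frac{z\sqrt{p(1-p)/n}}{p} \le \epsilon
\quad\Longrightarrow\quad
n \;\gtrsim\; \frac{z^{2}(1-p)}{\epsilon^{2}\,p} \;\approx\; \frac{z^{2}}{\epsilon^{2}\,p}
```

With p ≈ 1.9 × 10^−6 per mile, ε = 0.2, and z = 1.96, this gives n ≈ 5 × 10^7 miles of driving, which is the kind of mileage requirement the adversarial testing environment described below is designed to cut down.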

Verifying AV safety performance in an NDE is very complex. Driving environments are complex in both space and time, so the variables required to describe such environments are high-dimensional, and the computational complexity grows exponentially with that dimensionality. In this setting, deep learning models struggle to learn effective policies even when given large amounts of data.

In the paper, researchers from the University of Michigan, Ann Arbor, Tsinghua University, and other institutions propose a dense deep reinforcement learning (D2RL) method to address this challenge.

The study appears on the cover of Nature.


  • Paper address: https://www.nature.com/articles/s41586-023-05732-2
  • Project address: https://github.com/michigan-traffic-lab/Dense-Deep-Reinforcement-Learning

The paper's first author is currently a Tenure-Track Assistant Professor in the Department of Automation, Tsinghua University, and an Assistant Research Scientist at the University of Michigan Transportation Research Institute (UMTRI). He received his bachelor's and doctoral degrees from the Department of Automation, Tsinghua University, in 2014 and 2019, under the supervision of Professor Zhang Yi. From 2017 to 2019, he was a visiting doctoral student in Civil and Environmental Engineering at the University of Michigan, studying under Professor Henry X. Liu (the corresponding author of the paper).

Research Introduction

The basic idea of the D2RL method is to identify and remove non-safety-critical data and to train the neural network only on safety-critical data. Because only a small fraction of the data is safety-critical, the information in the remaining data is greatly densified.

Compared with the plain DRL method, D2RL reduces the variance of the policy-gradient estimate by multiple orders of magnitude without losing unbiasedness. This dramatic variance reduction enables neural networks to learn and complete tasks that are intractable for DRL methods.
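In policy-gradient notation, a simplified sketch of the idea (a paraphrase, not the paper's exact formulation) is to keep only the time steps whose states the criticality measure flags as safety-critical:

```latex
\nabla_{\theta} J(\theta)
  \;=\; \mathbb{E}_{\tau \sim \pi_{\theta}}\!\left[
      \sum_{t} \mathbb{1}\{ s_t \in \mathcal{S}_{\mathrm{critical}} \}\,
      \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t)\, G_t
  \right]
```

Here 1{·} is the indicator of the critical-state set and G_t is the return. Under suitable conditions on the criticality measure (analyzed in the paper), the discarded terms contribute essentially nothing to the expectation, so the masked estimator remains unbiased while the many near-zero-signal terms that dominate the variance are removed.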

For AV testing, the study uses D2RL to train neural-network-controlled background vehicles (BVs) to learn when to execute which adversarial maneuvers, with the aim of improving testing efficiency. In this AI-based adversarial testing environment, D2RL reduces the test mileage required for AVs by multiple orders of magnitude while keeping the evaluation unbiased.

The D2RL method can be applied to complex driving environments, including highways, intersections, and roundabouts, which was not possible with previous scenario-based methods. Moreover, the proposed method can create intelligent testing environments that use AI to validate AI, a paradigm shift that opens the door to accelerated testing and training of other safety-critical systems.

To demonstrate the effectiveness of the AI-based testing method, the study trained BVs on a large-scale real-world driving dataset and conducted both simulation experiments and field experiments on a physical test track. The experimental results are shown in Figure 1.

(Figure 1: results of the simulation and physical test-track experiments.)

Dense Deep Reinforcement Learning

To leverage AI techniques, the study formulates the AV-testing problem as a Markov decision process (MDP) in which the BVs' maneuvers are decided based on the current state information. The goal is to train a policy (a DRL agent), modeled by a neural network, that controls the actions of the BVs interacting with the AV so as to maximize evaluation efficiency while ensuring unbiasedness. However, as noted above, because of the dimensionality and computational-complexity constraints, directly applying DRL makes it difficult, or even impossible, to learn an effective policy.

Since most states are non-critical and provide no information about safety-critical events, D2RL focuses on removing the data from these non-critical states. For the AV-testing problem, many safety metrics can be used to identify critical states, with varying efficiency and effectiveness. The criticality measure used in this study is an outer approximation of the AV's crash probability within a short horizon of the current state (for example, 1 s). The study then edits the Markov process by discarding the data of non-critical states and uses the remaining data for policy-gradient estimation and bootstrapping during DRL training.
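A minimal training-step sketch of this editing procedure is given below. All names (policy, criticality_fn, and so on) are hypothetical illustrations rather than the authors' released code, and the bootstrapping component mentioned above is omitted; the essential point is that non-critical steps are simply dropped from the policy-gradient estimate.

```python
import torch

def d2rl_update(policy, optimizer, episode, criticality_fn,
                gamma=0.99, threshold=0.0):
    """One dense policy-gradient step over a single episode.

    episode: list of (state, action, reward) tuples from the BV-control MDP.
    criticality_fn: approximation of the AV crash probability within a short
        horizon (e.g. 1 s) of a state; steps at or below `threshold` are
        treated as non-critical and discarded. Names are illustrative only.
    """
    # Discounted return-to-go for every step of the episode.
    returns, g = [], 0.0
    for _, _, reward in reversed(episode):
        g = reward + gamma * g
        returns.insert(0, g)

    # "Densification": keep only safety-critical steps in the gradient estimate.
    losses = []
    for (state, action, _), g in zip(episode, returns):
        if criticality_fn(state) <= threshold:
            continue  # non-critical state: remove its data entirely
        dist = policy(torch.as_tensor(state, dtype=torch.float32))
        losses.append(-dist.log_prob(torch.as_tensor(action)) * g)

    if not losses:          # the whole episode was non-critical
        return 0.0
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setting, the policy would be the neural network controlling the adversarial maneuvers of the background vehicles.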

As shown in Figure 2 below, D2RL's advantage over DRL is that it can effectively maximize the reward during training.

(Figure 2: training-reward comparison between D2RL and DRL.)

AV Simulation Test

To evaluate the accuracy, efficiency, scalability, and generality of the D2RL method, the study conducted simulation tests. For each test episode, a fixed distance of traffic travel was simulated, and the results were then recorded and analyzed, as shown in Figure 3 below.

(Figure 3: simulation test results.)
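For intuition about how such fixed-distance test episodes are turned into a crash-rate estimate, here is a generic sketch (illustrative numbers, not the paper's evaluation code; the paper's adversarial environment additionally reweights episodes to keep the estimate unbiased, which is omitted here):

```python
import math

def crash_rate_estimate(outcomes, z=1.96):
    """Crash-rate estimate and relative half-width from binary test outcomes.

    outcomes: one 0/1 flag per fixed-distance test episode (1 = crash).
    """
    n = len(outcomes)
    p_hat = sum(outcomes) / n
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    rel_half_width = half_width / p_hat if p_hat > 0 else float("inf")
    return p_hat, rel_half_width

# Hypothetical example: 3 crashes observed in 10,000 test episodes.
p, rel = crash_rate_estimate([1] * 3 + [0] * 9997)
print(f"crash rate per episode: {p:.2e}, relative half-width: {rel:.2f}")
```

Testing typically continues until the relative half-width falls below a chosen precision target.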

To further study the scalability and generalizability of D2RL, the study ran experiments with the AV-I model for different numbers of lanes (2 and 3) and driving distances (400 m, 2 km, 4 km, and 25 km). The 25 km trip length was examined because the average one-way commute in the United States is approximately 25 km. The results are shown in Table 1:

(Table 1: results for different lane counts and driving distances.)
