


AI completes a painting on a mobile phone within 12 seconds! Google proposes a new method to accelerate diffusion model inference
Generating an image with Stable Diffusion now takes only 12 seconds, using nothing but a mobile phone's own computing power.
And that is with a full 20 denoising iterations.
Keep in mind that current diffusion models generally exceed one billion parameters, so generating an image quickly has meant either relying on cloud computing or having sufficiently powerful local hardware.
As large-model applications become increasingly popular, running them locally on personal computers and phones is likely to be the next trend.
Against this backdrop, Google researchers have delivered a new result, titled Speed Is All You Need: On-Device Acceleration of Large Diffusion Models via GPU-Aware Optimizations.
Three-step optimization acceleration
The method is tuned for Stable Diffusion but can also be adapted to other diffusion models. The task is generating images from text.
Specific optimization can be divided into three parts:
- Specially designed kernels
- A more efficient attention module
- Winograd convolution acceleration
First, the specially designed kernels, which cover group normalization and the GELU activation function.
Group normalization is used throughout the UNet architecture. It works by dividing the channels of a feature map into smaller groups and normalizing each group independently, which makes it less dependent on batch size and able to adapt to a wider range of batch sizes and network architectures.
The researchers implemented this as a unique kernel in the form of a GPU shader, which executes all of these operations in a single GPU command without any intermediate tensors.
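As a rough illustration, here is a minimal NumPy sketch of group normalization itself, i.e. the math the shader fuses, not the shader (the group count of 32 is a common default, not a value given in the article):

```python
import numpy as np

def group_norm(x, num_groups=32, eps=1e-5):
    """Normalize an NCHW feature map over groups of channels.

    Each group is normalized with its own mean and variance, so the
    statistics are independent of the batch size.
    """
    n, c, h, w = x.shape
    assert c % num_groups == 0
    # Split channels into groups and compute per-group statistics.
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, c, h, w)

x = np.random.default_rng(0).normal(size=(1, 64, 8, 8))
print(group_norm(x).shape)  # (1, 64, 8, 8)
```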
The GELU activation function involves numerous numerical computations, such as multiplications and the Gaussian error function.
A dedicated shader fuses these numerical computations with the accompanying division and multiplication operations, so that they can all be executed in a single draw call.
A draw call is the operation in which the CPU invokes the graphics API to instruct the GPU to render.
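For reference, GELU is simple to state but involves the Gaussian error function, which is why fusing its elementwise math into one shader pays off. Here is a minimal NumPy sketch of the exact definition and the widely used tanh approximation (shown for illustration; the article does not say which form the shader implements):

```python
import math
import numpy as np

def gelu_exact(x):
    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF,
    # expressed via the Gaussian error function erf.
    erf = np.vectorize(math.erf)
    return 0.5 * x * (1.0 + erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Widely used tanh approximation of GELU.
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + np.tanh(c * (x + 0.044715 * x**3)))

x = np.linspace(-4.0, 4.0, 9)
print(np.abs(gelu_exact(x) - gelu_tanh(x)).max())  # small approximation error
```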
Next, to improve the efficiency of the attention module, the paper introduces two optimization methods.
The first is partial fusion of the softmax function.
To avoid computing the full softmax over the large matrix A (the scaled product QKᵀ), the study designs a GPU shader that reduces A to two vectors, L and S, the row-wise maximum and the row-wise sum of exponentials, yielding a tensor of size N×2. The remaining softmax computation is then fused with the matrix multiplication by V.
This approach significantly reduces the memory footprint of intermediate tensors and the overall latency.
It should be emphasized that the parallelism of the reduction from A to L and S is limited, because the result tensor contains far fewer elements than the input tensor A.
To increase parallelism and further reduce latency, the study organizes the elements of A into blocks and splits the reduction into multiple partial reductions.
The computation runs on each block independently, and the partial results are then reduced to the final result.
With carefully designed threading and memory-cache management, all of these partial reductions can be executed in a single GPU command, achieving lower latency.
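A minimal NumPy sketch of this idea, based on the description above (the real implementation is a GPU shader; the function name and block size here are illustrative):

```python
import numpy as np

def partially_fused_softmax_matmul(A, V, block=64):
    """Compute softmax(A, axis=1) @ V without materializing softmax(A).

    Step 1 reduces A row-wise to L (running max) and S (sum of
    exponentials), an N x 2 intermediate, via block-wise partial
    reductions. Step 2 folds the normalization into the matmul with V.
    """
    n, m = A.shape
    L = np.full(n, -np.inf)
    S = np.zeros(n)
    for j in range(0, m, block):
        blk = A[:, j:j + block]
        new_L = np.maximum(L, blk.max(axis=1))
        # Rescale the running sum whenever the running max grows.
        S = S * np.exp(L - new_L) + np.exp(blk - new_L[:, None]).sum(axis=1)
        L = new_L
    # Softmax rows are exp(A - L) / S; fuse the division with the matmul.
    return (np.exp(A - L[:, None]) @ V) / S[:, None]

rng = np.random.default_rng(0)
A, V = rng.normal(size=(8, 256)), rng.normal(size=(256, 16))
P = np.exp(A - A.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
print(np.allclose(partially_fused_softmax_matmul(A, V), ref))  # True
```

The N×2 intermediate (L and S) is all that survives the reduction; the exponentials are recomputed when the normalization is fused with the multiplication by V.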
The other optimization method is FlashAttention.
This is the IO-aware, exact attention algorithm that became popular last year. It relies on two acceleration techniques: incremental block-wise computation (tiling), and recomputing attention in the backward pass, with all attention operations fused into a single CUDA kernel.
Compared with standard attention, this method reduces accesses to HBM (high-bandwidth memory) and improves overall efficiency.
However, the FlashAttention kernel is very register-intensive, so the team applies this optimization selectively.
Specifically, they use FlashAttention on Adreno and Apple GPUs when the attention matrix has dimension d = 40, and fall back to the partially fused softmax in all other cases.
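For intuition, here is a minimal NumPy sketch of the tiling idea behind FlashAttention's forward pass: the same running-max/running-sum trick as above, applied tile by tile to Q·Kᵀ. The actual algorithm is a fused GPU kernel with explicit on-chip memory management, which plain NumPy cannot express:

```python
import numpy as np

def flash_attention_forward(Q, K, V, block=32):
    """Tiled attention: softmax(Q K^T / sqrt(d)) @ V, one K/V tile at a time.

    Only a running row-max L, running sum S, and partial output O are
    kept, so the full N x N attention matrix is never materialized.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((n, V.shape[1]))
    L = np.full(n, -np.inf)
    S = np.zeros(n)
    for j in range(0, K.shape[0], block):
        Kj, Vj = K[j:j + block], V[j:j + block]
        A = (Q @ Kj.T) * scale              # small n x block score tile
        new_L = np.maximum(L, A.max(axis=1))
        P = np.exp(A - new_L[:, None])
        corr = np.exp(L - new_L)            # rescale earlier tiles
        O = O * corr[:, None] + P @ Vj
        S = S * corr + P.sum(axis=1)
        L = new_L
    return O / S[:, None]

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(64, 40)) for _ in range(3))  # d = 40, as above
scores = Q @ K.T / np.sqrt(40)
P = np.exp(scores - scores.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
print(np.allclose(flash_attention_forward(Q, K, V), ref))  # True
```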
The third part is Winograd convolution acceleration.
In short, its principle is to trade extra additions for fewer multiplications, reducing the overall amount of computation.
The drawbacks are just as obvious: higher memory consumption and larger numerical error, especially when the tile size is large.
The backbone of Stable Diffusion relies heavily on 3×3 convolution layers; in the image decoder especially, they account for over 90% of the layers.
After analysis, the researchers found that a 4×4 tile size strikes the best balance between computational efficiency and memory usage.
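To make the trade concrete, here is the textbook 1-D Winograd F(2,3) example: it computes two outputs of a 3-tap convolution with 4 multiplications instead of 6, at the cost of extra additions. This illustrates the principle only; the paper's setting is 2-D 3×3 convolution with 4×4 output tiles:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter in 4 multiplies.

    d: 4 input samples, g: 3 filter taps.
    Direct computation needs 2 * 3 = 6 multiplications; Winograd
    replaces two of them with extra additions and subtractions.
    """
    # Filter transform (precomputable once per filter).
    G = np.array([g[0], (g[0] + g[1] + g[2]) / 2,
                  (g[0] - g[1] + g[2]) / 2, g[2]])
    # Input transform: additions and subtractions only.
    D = np.array([d[0] - d[2], d[1] + d[2], d[2] - d[1], d[1] - d[3]])
    m = G * D  # the only 4 multiplications
    # Output transform: additions and subtractions only.
    return np.array([m[0] + m[1] + m[2], m[1] - m[2] - m[3]])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), direct))  # True
```

Larger tiles save more multiplications, but the transforms grow and intermediate values accumulate more numerical error, which is exactly the trade-off behind the 4×4 choice above.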
Experimental results
To evaluate the improvement, the researchers first benchmarked on two mobile phones.
The results show that with the acceleration applied, image generation speed improved significantly on both phones.
Latency dropped by 52.2% on the Samsung S23 Ultra and by 32.9% on the iPhone 14 Pro Max.
End to end, generating a 512×512-pixel image from text on the Samsung S23 Ultra, with 20 iterations, takes under 12 seconds.
Paper address: https://www.php.cn/link/ba825ea8a40c385c33407ebe566fa1bc