Table of Contents
Method Introduction
Pre-training results
Fine-tuning results
Inference results

A800 significantly surpasses RTX3090 and 4090 in Llama2 inference, with excellent latency and throughput

Jan 04, 2024 pm 01:05 PM

Large language models (LLMs) have made tremendous progress in both academia and industry. But training and deploying LLMs is very expensive, requiring substantial compute and memory, so researchers have developed many open-source frameworks and methods to accelerate LLM pre-training, fine-tuning, and inference. However, runtime performance can vary significantly across hardware and software stacks, which makes it difficult to choose the best configuration.

Recently, a new paper titled "Dissecting the Runtime Performance of the Training, Fine-tuning, and Inference of Large Language Models" analyzes the runtime performance of LLM training, fine-tuning, and inference in detail from both macro and micro perspectives.

Please click the following link to view the paper: https://arxiv.org/pdf/2311.03687.pdf

Specifically, the study first runs an end-to-end performance benchmark of pre-training, fine-tuning, and serving for LLMs of different sizes (7B, 13B, and 70B parameters) on three 8-GPU platforms. The tests cover the platforms with and without individual optimization techniques, including ZeRO, quantization, recomputation, and FlashAttention. The study then provides a detailed runtime analysis of the sub-modules and the computation and communication operators in LLMs.

Method Introduction

The benchmark adopts a top-down approach, covering the end-to-end step-time performance, module-level time performance, and operator-level time performance of Llama2 on the three 8-GPU hardware platforms, as shown in Figure 3.

The three hardware platforms are RTX4090, RTX3090, and A800; their specifications are listed in Table 1 below.

On the software side, the study compares DeepSpeed and Megatron-LM end-to-end in terms of pre-training and fine-tuning step time. To evaluate optimization techniques, the study used DeepSpeed to enable the following optimizations one by one: ZeRO-2, ZeRO-3, offloading, activation recomputation, quantization, and FlashAttention, measuring the resulting improvements and the reductions in time and memory consumption.
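
As a rough illustration, the sketch below shows how such optimizations are typically switched on through a DeepSpeed configuration. The exact settings used in the paper are not given here, so the values below are assumptions.

```python
import deepspeed

# A minimal sketch of a DeepSpeed config enabling some of the optimizations
# mentioned above (ZeRO-3, optimizer/parameter offloading, activation
# checkpointing, bf16). The specific values are illustrative assumptions,
# not the paper's exact settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                              # ZeRO-3; use 2 for ZeRO-2
        "offload_optimizer": {"device": "cpu"},  # offload optimizer state to CPU RAM
        "offload_param": {"device": "cpu"},      # offload parameters (ZeRO-3 only)
    },
    "activation_checkpointing": {"partition_activations": True},
}

# A model (e.g., Llama2) would be defined elsewhere, then wrapped as:
# engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```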

For LLM serving, the study compares three highly optimized systems, vLLM, LightLLM, and TGI, measuring their performance (latency and throughput) on the three test platforms.

To ensure accurate and reproducible results, the study computed the average length of the instructions, inputs, and outputs in the commonly used alpaca dataset (about 350 tokens per sample) and randomly generated strings to reach a sequence length of 350.
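
A minimal sketch of how such an average could be computed with the Hugging Face datasets and transformers libraries is shown below; the dataset and tokenizer identifiers are assumptions for illustration, not details taken from the paper.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Sketch: estimate the average token length of alpaca samples.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset("tatsu-lab/alpaca", split="train")

lengths = [
    len(tokenizer(ex["instruction"] + ex["input"] + ex["output"])["input_ids"])
    for ex in dataset
]
print(sum(lengths) / len(lengths))  # roughly 350 tokens per sample, per the paper
```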

For inference serving, in order to fully utilize compute resources and evaluate the robustness and efficiency of each framework, all requests are dispatched in burst mode. The experimental dataset consists of 1,000 synthetic sentences, each containing 512 input tokens. The "maximum generated token length" parameter is kept the same across all experiments on a given GPU platform to ensure consistent and comparable results.
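
The sketch below illustrates what burst-mode scheduling looks like in practice: all requests are submitted at once and the serving system is left to batch them. The client and its generate call are hypothetical placeholders, not APIs from TGI, vLLM, or LightLLM.

```python
import asyncio
import time

async def send_request(client, prompt: str) -> float:
    """Send one request and return its end-to-end latency in seconds.
    `client.generate` is a hypothetical async call standing in for the
    serving framework's API."""
    start = time.perf_counter()
    await client.generate(prompt)
    return time.perf_counter() - start

async def burst_benchmark(client, prompts):
    # Burst mode: issue every request at the same time instead of pacing them.
    start = time.perf_counter()
    latencies = await asyncio.gather(*(send_request(client, p) for p in prompts))
    elapsed = time.perf_counter() - start
    throughput = len(prompts) / elapsed  # requests per second
    return latencies, throughput

# `prompts` would be the 1,000 synthetic sentences of 512 input tokens each.
```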

End-to-end performance

The study measures end-to-end performance on the three test platforms by recording the step time, throughput, and memory consumption of pre-training, fine-tuning, and inference for Llama2 models of different sizes (7B, 13B, and 70B). It also evaluates three widely used inference serving systems, TGI, vLLM, and LightLLM, focusing on metrics such as latency, throughput, and memory consumption.

Module Level Performance

An LLM usually consists of a series of modules (or layers), each of which may have distinct computation and communication characteristics. For example, the key modules that make up the Llama2 model are Embedding, LlamaDecoderLayer, Linear, SiLUActivation, and LlamaRMSNorm.

Pre-training results

In the pre-training experiments, the researchers first analyzed pre-training performance (iteration time or throughput, and memory consumption) for models of different sizes (7B, 13B, and 70B) on the three test platforms, and then ran micro-benchmarks at the module and operator levels.

End-to-end performance

The researchers first compared the performance of Megatron-LM and DeepSpeed when pre-training Llama2-7B on the A800-80GB server, with neither using any memory optimization technique (such as ZeRO).

They used a sequence length of 350 and swept the batch size for Megatron-LM and DeepSpeed from 1 up to the maximum each could fit. The results are shown in Table II below, reporting training throughput (tokens/second) and consumed GPU memory (in GB).

The results show that when the batch size is 1, Megatron-LM is slightly faster than DeepSpeed. However, DeepSpeed is fastest when the batch size reaches its maximum. At the same batch size, DeepSpeed consumes more GPU memory than the tensor-parallelism-based Megatron-LM. Even with small batch sizes, both systems consume significant amounts of GPU memory, causing out-of-memory errors on the RTX4090 and RTX3090 GPU servers.

When training Llama2-7B (sequence length 350, batch size 2), the researchers used DeepSpeed with quantization to study scaling efficiency on the different hardware platforms. The results are shown in Figure 4 below: the A800 scales almost linearly, while the scaling efficiency of the RTX4090 and RTX3090 is slightly lower, at 90.8% and 85.9% respectively. On the RTX3090 platform, NVLink improves scaling efficiency by about 10% compared to running without NVLink.
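
Scaling efficiency here is read as the measured multi-GPU throughput divided by the single-GPU throughput times the number of GPUs; a small sketch under that assumption:

```python
def scaling_efficiency(throughput_multi_gpu: float,
                       throughput_single_gpu: float,
                       num_gpus: int) -> float:
    """Ratio of achieved throughput to ideal linear scaling.
    1.0 means perfectly linear scaling; under this reading, ~0.908 and
    ~0.859 correspond to the RTX4090 and RTX3090 figures above."""
    return throughput_multi_gpu / (num_gpus * throughput_single_gpu)

# Example: 8 GPUs delivering 7.26x the single-GPU throughput -> ~90.8% efficiency
print(scaling_efficiency(7.26, 1.0, 8))
```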

The researchers then used DeepSpeed to evaluate training performance under different memory- and compute-efficient methods. For fairness, all evaluations used a sequence length of 350, a batch size of 1, and model weights loaded in bf16 by default.

For ZeRO-2 and ZeRO-3 with offloading, the optimizer state, and the optimizer state plus model parameters, respectively, are offloaded to CPU RAM. For quantization, they used a 4-bit configuration with double quantization. The performance of the RTX3090 with NVLink disabled (i.e., all data transferred over the PCIe bus) is also reported. The results are shown in Table III below.
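
One common way to express a 4-bit configuration with double quantization is the bitsandbytes setup exposed through transformers, sketched below as an assumed illustration; it is not necessarily the exact mechanism or hyperparameters used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch: a 4-bit quantization config with double (nested) quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,       # "double quantization"
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
)
```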

To obtain maximum throughput, the researchers further exploited the compute of the different GPU servers by maximizing the batch size for each method. The results in Table IV show that increasing the batch size readily improves training throughput. GPU servers with high bandwidth and large memory are therefore better suited to full-parameter mixed-precision training than consumer-grade GPU servers.

Module level analysis

Table V below shows the overall time and compute-kernel time of the forward, backward, and optimizer phases of a single pre-training step of Llama2-7B. For the backward phase, because the total time includes non-overlapping communication time, the compute-kernel share is much smaller than in the forward phase and the optimizer. If the non-overlapping time is excluded from the backward phase, the value becomes 94.8.

Impact of recomputation and FlashAttention

Techniques for accelerating pre-training fall roughly into two categories: saving memory to enable larger batch sizes, and accelerating compute kernels. As shown in Figure 5 below, the GPU is idle for 5-10% of the time during the forward, backward, and optimizer phases.

The researchers attributed this idle time to the small batch size, so they tested all techniques with the largest usable batch size. Ultimately, they used recomputation to increase the batch size and FlashAttention to accelerate the attention kernels.

As shown in Table VII below, as the batch size increases, the time of the forward and backward phases increases significantly, leaving almost no GPU idle time.

According to Table VIII below, FlashAttention accelerates the forward and backward attention modules by 34.9% and 24.7%, respectively.
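
As an assumed illustration (not necessarily how the paper wired it up), recent versions of transformers let you request FlashAttention kernels when loading the model:

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch: loading Llama2 with FlashAttention-2 kernels enabled.
# Requires the flash-attn package and a supported GPU; this is an
# illustrative assumption, not the paper's exact setup.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
```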

Fine-tuning results

For fine-tuning, the researchers focused on parameter-efficient fine-tuning (PEFT) methods and reported the fine-tuning performance of LoRA and QLoRA under various model sizes and hardware settings. They used a sequence length of 350, a batch size of 1, and loaded model weights in bf16 by default.
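
A minimal sketch of how LoRA fine-tuning is typically set up with the peft library is shown below; the rank and target modules are assumptions, not the paper's settings. Combining this with a 4-bit quantized base model (as in the earlier sketch) gives the usual QLoRA recipe.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Sketch: wrapping Llama2-7B with LoRA adapters. Hyperparameters are
# illustrative assumptions.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```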

According to the results in Table IX below, the performance trends when fine-tuning Llama2-13B with LoRA and QLoRA are consistent with those of Llama2-7B. Compared with Llama2-7B, the fine-tuning throughput of Llama2-13B drops by about 30%.

However, when all optimization techniques are combined, even the RTX4090 and RTX3090 can fine-tune Llama2-70B, achieving a total throughput of 200 tokens/second.

Inference results

End-to-end performance

Figure 6 below shows a comprehensive analysis of throughput across hardware platforms and inference frameworks (inference data for Llama2-70B is omitted). The TGI framework delivers excellent throughput, especially on GPUs with 24GB of memory such as the RTX3090 and RTX4090. In addition, LightLLM significantly outperforms TGI and vLLM on the A800 GPU platform, nearly doubling their throughput.

These experimental results show that the TGI inference framework has excellent performance on the 24GB memory GPU platform, while the LightLLM inference framework exhibits the highest throughput on the A800 80GB GPU platform. This finding suggests that LightLLM is optimized specifically for the A800/A100 series of high-performance GPUs.
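
For context, the sketch below shows how a simple offline throughput measurement can be run with vLLM; analogous client-side benchmarks exist for TGI and LightLLM. The model name, prompts, and sampling settings are assumptions for illustration, not the paper's benchmark harness.

```python
import time
from vllm import LLM, SamplingParams

# Sketch: a rough offline generated-tokens-per-second measurement with vLLM.
llm = LLM(model="meta-llama/Llama-2-7b-hf")
sampling = SamplingParams(temperature=0.0, max_tokens=128)

prompts = ["Write a short story about a robot."] * 100
start = time.perf_counter()
outputs = llm.generate(prompts, sampling)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.1f} generated tokens/s")
```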

Latency performance under the different hardware platforms and inference frameworks is shown in Figures 7, 8, 9, and 10.

As shown above, the A800 platform is significantly better than the two consumer-grade platforms, RTX4090 and RTX3090, in both throughput and latency. Between the two consumer-grade platforms, the RTX3090 has a slight advantage over the RTX4090. The three inference frameworks, TGI, vLLM, and LightLLM, show no substantial difference in throughput on the consumer-grade platforms, while TGI consistently outperforms the other two in latency. On the A800 GPU platform, LightLLM performs best in throughput, and its latency is also very close to that of TGI.

Please refer to the original paper for more experimental results.
