
Supporting 1024 frames with nearly 100% accuracy, NVIDIA's "LongVILA" takes on long videos

Aug 21, 2024, 04:35 PM

Long-context visual language models (VLMs) now have a new full-stack solution: LongVILA, which integrates system design, model training, and dataset development.


At this stage, it is important to combine a model's multi-modal understanding with long-context capability. A foundation model that supports more modalities can accept more flexible input signals, so that people can interact with it in more diverse ways, while a longer context lets the model process more information, such as long documents and long videos. Together, these abilities provide the functionality that many real-world applications need.

The problem, however, is that existing work has enabled long-context visual language models (VLMs) only through simplified approaches rather than comprehensive solutions.

Full-stack design is crucial for long-context visual language models. Training large models is usually a complex and systematic task that requires data engineering and system software co-design. Unlike text-only LLMs, VLMs (e.g., LLaVA) often require unique model architectures and flexible distributed training strategies.

In addition, long context modeling requires not only long context data, but also an infrastructure that can support memory-intensive long context training. Therefore, a well-planned full-stack design (covering system, data, and pipeline) is essential for long-context VLM.

In this article, researchers from NVIDIA, MIT, UC Berkeley, and the University of Texas at Austin introduce LongVILA, a full-stack solution for training and deploying long-context visual language models, covering system design, model training strategy, and dataset construction.
  • Paper address: https://arxiv.org/pdf/2408.10188
  • Code address: https://github.com/NVlabs/VILA/blob/main/LongVILA.md
  • Title of the paper: LONGVILA: SCALING LONG-CONTEXT VISUAL LANGUAGE MODELS FOR LONG VIDEOS

For the training infrastructure, the study builds an efficient, user-friendly framework named Multi-Modal Sequence Parallelism (MM-SP), which supports training memory-intensive long-context VLMs.

For the training pipeline, the researchers implement a five-stage process, as shown in Figure 1: (1) multi-modal alignment, (2) large-scale pre-training, (3) short supervised fine-tuning, (4) context extension of the LLM, and (5) long supervised fine-tuning.

For inference, MM-SP solves the challenge of KV cache memory usage, which can become a bottleneck when processing very long sequences.
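To see why the KV cache dominates, a back-of-envelope estimate helps. The sketch below assumes a Llama-3-8B-like backbone (32 layers, 8 KV heads, head dimension 128, bf16); these numbers are illustrative assumptions, not LongVILA's confirmed configuration:

```python
def kv_cache_bytes(seq_len: int, n_layers: int = 32, n_kv_heads: int = 8,
                   head_dim: int = 128, dtype_bytes: int = 2) -> int:
    # Bytes needed to cache keys and values for one sequence; the leading
    # factor of 2 counts both the K and the V tensors.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

for tokens in (274_000, 2_000_000):
    print(f"{tokens:>9,} tokens -> {kv_cache_bytes(tokens) / 1e9:,.0f} GB of KV cache")
#   274,000 tokens ->  36 GB of KV cache
# 2,000,000 tokens -> 262 GB of KV cache (far beyond a single 80 GB GPU)
```

Sharding the sequence across an N-way sequence-parallel group cuts this footprint roughly N-fold per device, which is how MM-SP keeps very long sequences in memory.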

Experimental results show that as LongVILA increases the number of video frames, performance improves consistently on VideoMME and on long-video captioning tasks (Figure 2). The LongVILA model trained on 1024 frames achieves 99.5% accuracy in a needle-in-a-haystack experiment with 1400 frames, equivalent to a context length of 274K tokens (roughly 196 tokens per frame). In addition, the MM-SP system can efficiently extend the context length to 2 million tokens without gradient checkpointing, achieving a 2.1x to 5.7x speedup over ring-style sequence parallelism and a 1.1x to 1.4x speedup over Megatron-LM context parallelism combined with tensor parallelism.
The figure below shows an example of LongVILA on long-video captioning: at the beginning of the caption, the 8-frame baseline model describes only a static image and two cars. In comparison, the 256-frame LongVILA describes a car on snow, including front, rear, and side views of the vehicle. In terms of detail, the 256-frame LongVILA also captures close-ups of the ignition button, gear lever, and instrument cluster, all of which the 8-frame baseline misses.
Multi-modal sequence parallelism

Training long-context visual language models (VLMs) places significant demands on memory. For example, in the Stage 5 long-video training shown in Figure 1 below, a single sequence contains 1024 video frames amounting to about 200K tokens (roughly 196 tokens per frame), which exceeds the memory capacity of a single GPU.

The researchers developed a customized system based on sequence parallelism, a technique commonly used in existing foundation-model systems to optimize text-only LLM training. However, they found that existing systems are neither efficient nor scalable enough to handle long-context VLM workloads.
After identifying the limitations of existing systems, the researchers concluded that an ideal multi-modal sequence parallelism approach should prioritize efficiency and scalability by addressing modal and network heterogeneity, and that its scalability should not be limited by the number of attention heads.

MM-SP workflow. To address the challenge of modal heterogeneity, researchers propose a two-stage sharding strategy to optimize the computational workload in the image encoding and language modeling stages.

As shown in Figure 4 below, the first stage evenly distributes images (such as video frames) among the devices within the sequence-parallel process group, achieving load balancing during the image encoding stage. In the second stage, the researchers aggregate the global visual and textual inputs for token-level sharding.
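The following single-process simulation sketches this two-stage scheme. The shapes (196 visual tokens per frame, hidden size 1024, 4-way sequence parallelism) and the stand-in encoder are illustrative assumptions; the real system performs the gather and re-shard with torch.distributed collectives rather than in-process lists:

```python
import torch

def encode_frames(frames: torch.Tensor) -> torch.Tensor:
    # Stand-in for the vision tower: each frame becomes 196 visual tokens.
    return torch.randn(frames.shape[0] * 196, 1024)

sp_degree = 4                            # sequence-parallel group size
frames = torch.randn(32, 3, 336, 336)    # 32 video frames
text = torch.randn(77, 1024)             # 77 text-token embeddings

# Stage 1: round-robin the frames so every rank encodes 32 / 4 = 8 images,
# balancing the vision-encoder workload.
per_rank_visual = [encode_frames(frames[r::sp_degree]) for r in range(sp_degree)]

# Stage 2: aggregate the global visual + text sequence, then re-shard it
# at token granularity for the language-modeling stage.
global_seq = torch.cat(per_rank_visual + [text], dim=0)
token_shards = torch.chunk(global_seq, sp_degree, dim=0)
for rank, shard in enumerate(token_shards):
    print(f"rank {rank}: {shard.shape[0]} tokens")
```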
2D attention parallelism. To handle network heterogeneity and achieve scalability, the researchers combine the advantages of Ring-style sequence parallelism and Ulysses sequence parallelism.

Specifically, they regard parallelism along the sequence dimension or the attention-head dimension alone as "1D SP". Their method scales by parallelizing computation across both attention heads and the sequence dimension, turning 1D SP into a 2D grid composed of independent Ring (P2P) and Ulysses (A2A) process groups.

As shown on the left side of Figure 3 below, to achieve 8-way sequence parallelism across 2 nodes, the researchers use 2D-SP to build a 4x2 communication grid.
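A minimal sketch of building such a grid with torch.distributed is shown below. The layout (Ulysses all-to-all groups kept intra-node, Ring P2P groups spanning nodes) follows the description above, but the exact rank mapping in LongVILA's code may differ; the snippet assumes the default process group is already initialized (e.g., launched with torchrun across 8 ranks):

```python
import torch.distributed as dist

# Assumes dist.init_process_group(...) has already run on all ranks,
# e.g. `torchrun --nproc_per_node=4 --nnodes=2 ...`.

def build_2d_sp_grid(world_size: int = 8, ulysses_degree: int = 4):
    """Partition ranks into a (Ulysses x Ring) grid; with 8 ranks and
    ulysses_degree=4 this is the 4x2 grid from Figure 3."""
    ring_degree = world_size // ulysses_degree  # 2 in the example
    ulysses_groups, ring_groups = [], []
    # Ulysses (all-to-all) groups: consecutive ranks within one node,
    # where NVLink makes all-to-all traffic cheap.
    for i in range(ring_degree):
        ranks = list(range(i * ulysses_degree, (i + 1) * ulysses_degree))
        ulysses_groups.append(dist.new_group(ranks))
    # Ring (P2P) groups: strided ranks, one per node, so only the lighter
    # point-to-point traffic crosses the slower inter-node links.
    for j in range(ulysses_degree):
        ranks = list(range(j, world_size, ulysses_degree))
        ring_groups.append(dist.new_group(ranks))
    return ulysses_groups, ring_groups
```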
In addition, in Figure 5 below, the researchers illustrate the attention computation schedules of the different methods, to further explain how ZIGZAG-RINGATTN balances computation and how the 2D-Attention mechanism operates.
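The load-balancing idea can be illustrated in a few lines. With causal attention, later tokens attend to more keys, so contiguous sharding overloads the ranks holding the tail of the sequence; the zigzag layout pairs an early chunk with a mirrored late chunk on each rank. The indexing below follows the commonly used zigzag scheme from ring-attention implementations and may differ in detail from the paper:

```python
def zigzag_shard(seq_len: int, sp_degree: int, rank: int):
    """Split the sequence into 2 * sp_degree chunks; rank r keeps chunks
    r and (2 * sp_degree - 1 - r), pairing a cheap early chunk with an
    expensive late chunk so causal-attention FLOPs stay balanced."""
    chunk = seq_len // (2 * sp_degree)
    first, second = rank, 2 * sp_degree - 1 - rank
    return (range(first * chunk, (first + 1) * chunk),
            range(second * chunk, (second + 1) * chunk))

for r in range(4):
    spans = [(s.start, s.stop) for s in zigzag_shard(1024, 4, r)]
    print(f"rank {r}: token spans {spans}")
# rank 0: [(0, 128), (896, 1024)] ... rank 3: [(384, 512), (512, 640)]
```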
Compared with HuggingFace's native pipeline-parallel strategy, the inference mode in this article is more efficient because all devices participate in the computation simultaneously, accelerating the process in proportion to the number of machines, as shown in Figure 6 below. This inference mode is also scalable: memory is evenly distributed across devices, so more machines can be used to support longer sequences.
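A toy latency model makes the contrast concrete. The FLOP budget and per-GPU throughput below are illustrative assumptions, not measured numbers:

```python
def prefill_seconds(total_flops: float, n_gpus: int,
                    flops_per_gpu: float, mode: str) -> float:
    if mode == "pipeline":
        # Layers are split across GPUs, but one long request traverses the
        # stages sequentially, so at any moment only one device is busy.
        return total_flops / flops_per_gpu
    if mode == "sequence_parallel":
        # Tokens are split across GPUs; all devices compute concurrently.
        return total_flops / (flops_per_gpu * n_gpus)
    raise ValueError(f"unknown mode: {mode}")

FLOPS = 2e15  # assumed prefill cost of one very long multimodal sequence
for mode in ("pipeline", "sequence_parallel"):
    print(f"{mode}: {prefill_seconds(FLOPS, 8, 300e12, mode):.1f} s")
# pipeline: 6.7 s    sequence_parallel: 0.8 s
```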
LongVILA training process

As mentioned above, the LongVILA training process consists of five stages. The main tasks of each stage are as follows:

In Stage 1, only the multi-modal projector is trained, and all other parameters are frozen.

In Stage 2, the researchers freeze the visual encoder and train the LLM together with the multi-modal projector.

In Stage 3, the researchers fine-tune the full model on short-data instruction-following tasks, using image and short-video datasets.

In Stage 4, the researchers extend the LLM's context length through continued pre-training on text-only datasets.

In Stage 5, the researchers apply long-video supervised fine-tuning to enhance instruction-following ability. Notably, all parameters are trainable in this stage.
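In code, this schedule amounts to toggling requires_grad per component at each stage. The sketch below uses a hypothetical three-part LLaVA-style model (vision_tower, projector, llm); the per-stage assignments follow the description above, with Stage 3 and Stage 4 reflecting one plausible reading rather than the official recipe:

```python
import torch.nn as nn

class TinyVLM(nn.Module):
    """Stand-in for a LLaVA-style VLM with its three usual components."""
    def __init__(self):
        super().__init__()
        self.vision_tower = nn.Linear(64, 64)
        self.projector = nn.Linear(64, 64)
        self.llm = nn.Linear(64, 64)

# Components that train at each stage.
TRAINABLE = {
    1: {"projector"},                          # multimodal alignment
    2: {"projector", "llm"},                   # large-scale pre-training
    3: {"vision_tower", "projector", "llm"},   # short SFT
    4: {"llm"},                                # text-only context extension
    5: {"vision_tower", "projector", "llm"},   # long-video SFT (all params)
}

def configure_stage(model: nn.Module, stage: int) -> None:
    for name, module in model.named_children():
        trainable = name in TRAINABLE[stage]
        for param in module.parameters():
            param.requires_grad = trainable

model = TinyVLM()
configure_stage(model, stage=1)
print([n for n, p in model.named_parameters() if p.requires_grad])
# ['projector.weight', 'projector.bias']
```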

Experimental results

The researchers evaluated the full-stack solution from two aspects: system and modeling. They first present training and inference results to illustrate the efficiency and scalability of a system that can support long-context training and inference. They then evaluate the performance of the long-context model on captioning and instruction-following tasks.

Training and Inference System

This study provides a quantitative evaluation of the throughput of the training system, the latency of the inference system, and the maximum sequence length supported.

Table 2 shows the throughput results. Compared with ZIGZAG-RINGATTN, this system achieves a 2.1x to 5.7x speedup, with performance comparable to DeepSpeed-Ulysses. Compared with the more optimized ring-style sequence parallelism implementation in Megatron-LM CP, it achieves a 3.1x to 4.3x speedup.
This study evaluates the maximum sequence length supported by a fixed number of GPUs by gradually increasing the sequence length from 1k to 10k until an out-of-memory error occurs. The results are summarized in Figure 9.

When scaling to 256 GPUs, the proposed method supports roughly 8x the context length. Moreover, the proposed system achieves context-length scaling similar to ZIGZAG-RINGATTN, supporting a context length of more than 2 million on 256 GPUs.
Table 3 compares the maximum supported sequence lengths: the method proposed in this study supports sequences 2.9x longer than those supported by the HuggingFace Pipeline.
Figure 11 shows the results of the long-video needle-in-a-haystack experiment. In contrast to the baseline, the LongVILA model (right) performs better across different frame counts and depths.
Table 5 lists the performance of various models on the Video MME benchmark, comparing their effectiveness on short, medium, and long video lengths as well as their overall performance. LongVILA-8B uses 256 frames and achieves an overall score of 50.5.
The researchers also conducted an ablation study on the effects of Stage 3 and Stage 4, shown in Table 6.
Table 7 shows the performance metrics of LongVILA models trained and evaluated with different numbers of frames (8, 128, and 256). As the number of frames increases, model performance improves significantly: the average score rises from 2.00 to 3.26, highlighting the model's ability to generate accurate and rich captions with more frames.
