Don't wait for OpenAI, wait for Open-Sora to be fully open source
Not long ago, OpenAI's Sora quickly rose to prominence with its stunning video generation results, standing out among text-to-video models and becoming the focus of global attention. Two weeks after releasing a Sora training and inference reproduction pipeline with a 46% cost reduction, the Colossal-AI team has now fully open-sourced "Open-Sora 1.0", the world's first open reproduction of a Sora-like video generation architecture. The release covers the entire training pipeline, including data processing, all training details, and model weights, inviting AI enthusiasts worldwide to jointly advance a new era of video creation.
For a sneak peek, let’s first watch a video of a bustling city generated by the “Open-Sora 1.0” model released by the Colossal-AI team.
A snapshot of the bustling city generated by Open-Sora 1.0
This is just the tip of the iceberg of the Sora reproduction work. The video model architecture, trained model weights, all reproduction training details, the data preprocessing pipeline, demo results, and a detailed getting-started tutorial have all been open-sourced for free on GitHub by the Colossal-AI team. The author reached out to the team, who confirmed that they will continue to update the Open-Sora solution and share the latest progress; interested readers can keep following the Open-Sora open-source community.
Open-Sora open source address: https://github.com/hpcaitech/Open-Sora
Comprehensive breakdown of the Sora reproduction solution
Next, we will delve into several key aspects of the Sora reproduction solution, including model architecture design, the training recipe, data preprocessing, generation results, and efficient training optimizations.
Model architecture design
The model adopts the currently popular Diffusion Transformer (DiT) [1] architecture. The team uses PixArt-α [2], a high-quality open-source text-to-image model that is itself built on DiT, as the base, introduces a temporal attention layer on top of it, and extends it to video data. Specifically, the whole architecture consists of a pre-trained VAE, a text encoder, and an STDiT (Spatial Temporal Diffusion Transformer) model that uses a spatial-temporal attention mechanism. The structure of each STDiT layer is shown in the figure below: a one-dimensional temporal attention module is stacked serially on top of a two-dimensional spatial attention module to model temporal relationships, and a cross-attention module placed after the temporal attention module aligns the features with the text semantics. Compared with a full attention mechanism, this structure greatly reduces training and inference overhead. Compared with the Latte [4] model, which also uses spatial-temporal attention, STDiT can better reuse the weights of a pre-trained image DiT and continue training on video data.
STDiT structure diagram
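To make the serial design above concrete, here is a minimal PyTorch sketch of one STDiT-style block written purely for illustration: the module names, the token layout (B, T, S, D), and the reshaping between spatial and temporal attention are our assumptions rather than the project's actual code, and normalization layers are omitted for brevity.

```python
# Minimal sketch of one STDiT-style block (illustrative, not the official code).
# x: (B, T, S, D) video tokens; text: (B, L, D) prompt embeddings.
import torch
import torch.nn as nn


class STDiTBlockSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        B, T, S, D = x.shape
        # 1) 2D spatial attention: attend across the S spatial tokens of each frame.
        xs = x.reshape(B * T, S, D)
        xs = xs + self.spatial_attn(xs, xs, xs)[0]
        x = xs.reshape(B, T, S, D)
        # 2) 1D temporal attention: attend across the T frames at each spatial position.
        xt = x.permute(0, 2, 1, 3).reshape(B * S, T, D)
        xt = xt + self.temporal_attn(xt, xt, xt)[0]
        x = xt.reshape(B, S, T, D).permute(0, 2, 1, 3)
        # 3) Cross-attention to the text embedding for semantic alignment, then MLP.
        xf = x.reshape(B, T * S, D)
        xf = xf + self.cross_attn(xf, text, text)[0]
        xf = xf + self.mlp(xf)
        return xf.reshape(B, T, S, D)


if __name__ == "__main__":
    block = STDiTBlockSketch(dim=64)
    video_tokens = torch.randn(2, 4, 16, 64)   # (batch, frames, spatial tokens, dim)
    prompt_tokens = torch.randn(2, 8, 64)      # (batch, text tokens, dim)
    print(block(video_tokens, prompt_tokens).shape)  # torch.Size([2, 4, 16, 64])
```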
The training and inference process of the whole model is as follows. In the training phase, the encoder of the pre-trained Variational Autoencoder (VAE) first compresses the video data, and the STDiT diffusion model is then trained together with the text embedding in the compressed latent space. In the inference phase, Gaussian noise is randomly sampled from the VAE latent space and fed into STDiT together with the prompt embedding to obtain denoised features, which are finally passed to the VAE decoder to produce the video.
Training process of the model
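The following sketch summarizes this pipeline in code form. The `vae`, `stdit`, and `scheduler` objects and their methods are hypothetical placeholders introduced only for illustration; the actual interfaces in the Open-Sora repository will differ.

```python
# Illustrative sketch of the latent-diffusion pipeline described above
# (hypothetical helper objects; not the project's actual API).
import torch


@torch.no_grad()
def generate_video(vae, stdit, scheduler, prompt_emb, latent_shape, num_steps=50):
    """Sample Gaussian noise in the VAE latent space, denoise it with STDiT
    conditioned on the prompt embedding, then decode with the VAE decoder."""
    latents = torch.randn(latent_shape)                   # start from pure noise
    for t in scheduler.timesteps(num_steps):              # iterative denoising
        noise_pred = stdit(latents, t, prompt_emb)        # predict noise at step t
        latents = scheduler.step(noise_pred, t, latents)  # remove a bit of noise
    return vae.decode(latents)                            # latents -> video frames


def training_step(vae, stdit, scheduler, video, prompt_emb):
    """One training step: encode video to latents, add noise, predict it."""
    latents = vae.encode(video)                           # compress video with the VAE encoder
    t = scheduler.sample_timesteps(latents.shape[0])      # random diffusion timestep
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    noise_pred = stdit(noisy, t, prompt_emb)              # STDiT trained in latent space
    return torch.nn.functional.mse_loss(noise_pred, noise)
```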
Training reproduction plan
The team told us that Open-Sora's reproduction plan follows the Stable Video Diffusion (SVD) [3] work and consists of three stages:
- Large-scale image pre-training.
- Large-scale video pre-training.
- Fine-tuning of high-quality video data.
Each stage continues training from the weights of the previous stage. Compared with single-stage training from scratch, this multi-stage approach reaches high-quality video generation more efficiently by gradually expanding the data.
Three phases of training plan
The first stage: large-scale image pre-training
The first stage uses large-scale image pre-training, leveraging a mature text-to-image model to effectively reduce the cost of video pre-training.
The team revealed to us that, drawing on the abundant large-scale image data available on the Internet and advanced text-to-image techniques, they can train a high-quality text-to-image model, which then serves as the initialization weights for the next stage of video pre-training. At the same time, since there is currently no high-quality spatiotemporal VAE, they used the image VAE pre-trained for the Stable Diffusion [5] model. This strategy not only ensures the superior performance of the initial model, but also significantly reduces the overall cost of video pre-training.
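As a rough illustration of reusing an image VAE for video, the snippet below loads a pre-trained Stable Diffusion VAE with the `diffusers` library and encodes a clip frame by frame; the model ID and the per-frame loop are our assumptions, not necessarily how Open-Sora wires it up.

```python
# Sketch: reuse a pre-trained Stable Diffusion image VAE for video frames.
# Assumes the `diffusers` library; model ID and per-frame loop are illustrative.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()


@torch.no_grad()
def encode_frames(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, H, W) in [-1, 1]. Each frame is encoded independently,
    since the VAE is an image model with no temporal component."""
    latents = [vae.encode(f.unsqueeze(0)).latent_dist.sample() for f in frames]
    return torch.cat(latents, dim=0)  # (T, 4, H/8, W/8)
```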
The second stage: large-scale video pre-training
The second stage performs large-scale video pre-training to increase the model's generalization ability and effectively capture the temporal correlations in videos.
We understand that this stage requires a large amount of video data for training to ensure diversity of video topics and thereby improve the generalization ability of the model. The second-stage model adds a temporal attention module to the first-stage text-to-image model to learn temporal relationships in videos; the remaining modules stay consistent with the first stage and load the first-stage weights as initialization. The output of the temporal attention module is initialized to zero to achieve more efficient and faster convergence. The Colossal-AI team used the open-source weights of PixArt-α [2] as the initialization of the second-stage STDiT model and the T5 [6] model as the text encoder. They also pre-trained at a small resolution of 256x256, which further sped up convergence and reduced training cost.
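The zero-initialization trick mentioned above can be sketched as follows: with the output projection of the new temporal attention module set to zero, its residual branch contributes nothing at the start, so the model initially behaves like the pre-trained image model. The embedding dimension below is chosen only for illustration.

```python
# Sketch of zero-initializing a newly added temporal attention module.
import torch.nn as nn

# Dimension and head count are illustrative, not the actual model configuration.
temporal_attn = nn.MultiheadAttention(embed_dim=1152, num_heads=16, batch_first=True)
nn.init.zeros_(temporal_attn.out_proj.weight)
nn.init.zeros_(temporal_attn.out_proj.bias)
# Because the block adds the attention output residually (x = x + attn(x)),
# a zeroed out_proj makes the new temporal path a no-op at initialization,
# so training starts from the behavior of the pre-trained image DiT.
```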
The third stage: fine-tuning high-quality video data
The third stage fine-tunes on high-quality video data to significantly improve the quality of video generation.
The team mentioned that the amount of video data used in the third stage is an order of magnitude smaller than in the second stage, but the videos are longer, higher-resolution, and of higher quality. Fine-tuning in this way allowed them to efficiently scale video generation from short to long, from low to high resolution, and from low to high fidelity.
The team stated that the Open-Sora reproduction was trained on 64 H800 GPUs. The second stage took a total of 2,808 GPU hours, roughly $7,000, and the third stage took 1,920 GPU hours, roughly $4,500. By preliminary estimates, the whole training scheme keeps the cost of the Open-Sora reproduction at around US$10,000.
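A quick back-of-the-envelope check of these figures (our own arithmetic, assuming the quoted GPU-hour totals and dollar amounts):

```python
# Back-of-the-envelope check of the quoted training figures (our own arithmetic).
stage2_gpu_hours, stage2_cost_usd = 2808, 7000
stage3_gpu_hours, stage3_cost_usd = 1920, 4500

print(stage2_cost_usd / stage2_gpu_hours)  # ~2.49 USD per H800 GPU-hour
print(stage3_cost_usd / stage3_gpu_hours)  # ~2.34 USD per H800 GPU-hour
print(stage2_cost_usd + stage3_cost_usd)   # 11500 USD for stages 2 and 3,
                                           # same order of magnitude as the ~$10,000 quote
```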
Data preprocessing
To further lower the barrier and complexity of reproducing Sora, the Colossal-AI team also provides convenient video data preprocessing scripts so that you can easily launch Sora reproduction pre-training. They cover downloading public video datasets, segmenting long videos into short clips based on shot continuity, and using the open-source multimodal model LLaVA [7] to generate detailed prompt text. The team mentioned that the batch video captioning code they provide can annotate a video in about 3 seconds on two GPUs, with quality close to GPT-4V, and the resulting video/text pairs can be used directly for training. With the open-source code they provide on GitHub, we can easily and quickly generate the video/text pairs required for training on our own datasets, significantly reducing the technical threshold and preparation needed to start a Sora reproduction project.
Video/text pair automatically generated based on data preprocessing script
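As a rough sketch of what such preprocessing scripts do, the snippet below cuts a long video into clips at detected shot boundaries using the PySceneDetect library and leaves captioning as a placeholder; the threshold value and overall flow are our assumptions rather than the team's exact script.

```python
# Sketch of shot-based video segmentation for building video/text pairs.
# Uses PySceneDetect (pip install scenedetect) and requires ffmpeg for splitting.
from scenedetect import detect, ContentDetector
from scenedetect.video_splitter import split_video_ffmpeg


def preprocess(video_path):
    # Detect shot boundaries by content change between consecutive frames.
    scene_list = detect(video_path, ContentDetector(threshold=27.0))
    # Cut the source video into one clip per detected shot.
    split_video_ffmpeg(video_path, scene_list)
    # Each clip would then be captioned by a vision-language model such as LLaVA
    # to produce the detailed prompt that forms the video/text training pair.
    return [(start.get_timecode(), end.get_timecode()) for start, end in scene_list]
```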
Model generation results
Let's take a look at Open-Sora's actual video generation results. For example, let Open-Sora generate an aerial shot of seawater lapping against the rocks of a cliff-lined coast.
Let Open-Sora capture a majestic aerial view of mountains, with a waterfall surging down from the cliffs and finally flowing into a lake.
Besides taking to the sky, you can also dive into the sea. Simply enter a prompt and let Open-Sora generate a shot of the underwater world, in which a turtle cruises leisurely over a coral reef.
Open-Sora can also show us the Milky Way with twinkling stars through time-lapse photography.
If you have more interesting ideas for video generation, you can visit the Open-Sora open-source community to obtain the model weights and try them for free. Link: https://github.com/hpcaitech/Open-Sora
It is worth noting that the team mentions on GitHub that the current version uses only 400K training samples, and both the generation quality and the ability to follow text still need improvement. For example, in the turtle video above, the generated turtle has an extra leg, and Open-Sora 1.0 is also not good at generating portraits or complex scenes. The team lists a series of plans on GitHub, aiming to continuously fix existing defects and improve generation quality.
Efficient training support
In addition to significantly lowering the technical threshold for reproducing Sora and improving video generation quality in dimensions such as duration, resolution, and content, the team also provides the Colossal-AI acceleration system for efficient training support of the Sora reproduction. With efficient training strategies such as operator optimization and hybrid parallelism, a 1.55x speedup was achieved when training on 64-frame, 512x512 videos. At the same time, thanks to Colossal-AI's heterogeneous memory management system, a 1-minute 1080p high-definition video training task can run without obstruction on a single server (8x H800).
In addition, the team's report shows that the STDiT model architecture is also highly efficient during training. Compared with a DiT using full attention, STDiT achieves up to a 5x speedup as the number of frames increases, which is particularly critical in real-world tasks such as processing long video sequences.
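A rough cost model helps explain why the gap grows with frame count: full attention over all T x S video tokens scales with (T·S)^2 attention entries, while factored spatial-plus-temporal attention scales with T·S^2 + S·T^2. The toy calculation below is our own, ignores constants, and counts only attention score entries, so the raw ratios it prints are naturally much larger than the end-to-end speedup; the point is only that the gap widens with more frames.

```python
# Toy cost model: attention score entries for full vs. factored spatio-temporal attention.
def attention_score_entries(T, S):
    full = (T * S) ** 2               # full attention over all video tokens
    factored = T * S**2 + S * T**2    # STDiT-style spatial + temporal attention
    return full, factored


for frames in (16, 64):
    S = 1024                          # e.g. 32x32 latent patches per frame (illustrative)
    full, factored = attention_score_entries(frames, S)
    print(frames, "frames -> full/factored attention cost ratio:", round(full / factored, 1))
```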
You are welcome to keep following the Open-Sora open-source project: https://github.com/hpcaitech/Open-Sora
The team stated that they will continue to maintain and optimize the Open-Sora project, and expect to use more video training data to generate higher-quality, longer video content and to support multi-resolution generation, effectively promoting the adoption of AI technology in film, gaming, advertising, and other fields.
Reference link:
[1] https://arxiv.org/abs/2212.09748 Scalable Diffusion Models with Transformers.
[2] https://arxiv.org/abs/2310.00426 PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis.
[3] https://arxiv.org/abs/2311.15127 Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets.
[4] https://arxiv.org/abs/2401.03048 Latte: Latent Diffusion Transformer for Video Generation.
[5] https://huggingface.co/stabilityai/sd-vae-ft-mse-original.
[6] https://github.com/google-research/text-to-text-transfer-transformer.
[7] https://github.com/haotian-liu/LLaVA.
[8] https://hpc-ai.com/blog/open-sora-v1.0.