Table of Contents
Method introduction
Experiments and results

'Subject Three' that attracts global attention: Messi, Iron Man, and two-dimensional ladies can handle it easily

Dec 03, 2023 am 11:25 AM

In recent times, you have probably heard of "Subject Three" (科目三): waving hands, half-squatting footwork, and matching rhythmic music. The dance has gone viral across the Internet, prompting countless imitations.

What happens when similar dances are generated by AI? As shown below, both real people and anime characters perform the moves in perfect unison. What you might not guess is that each dance video was generated from a single image.


Even when the movements become more difficult, the generated video stays smooth (far right):


Making Messi and Iron Man move is no problem either:


There are also various anime ladies.


How are these effects achieved? Let's read on.

Character animation is the task of converting a source character image into a realistic video that follows a desired pose sequence. It has many potential applications, such as online retail, entertainment video, art creation, and virtual characters.

Since the advent of GAN technology, researchers have continued to explore methods for turning images into animation and performing pose transfer. However, the generated images and videos still suffer from problems such as local distortion, blurred details, semantic inconsistency, and temporal instability, which hinder the practical application of these methods.

Researchers from Alibaba proposed a method called Animate Anyone that converts character images into animated videos following a desired pose sequence. The method adopts the Stable Diffusion (SD) network design and pre-trained weights, modifying the denoising UNet to accommodate multi-frame input.


  • Paper address: https://arxiv.org/pdf/2311.17117.pdf
  • Project address: https://humanaigc.github.io/animate-anyone/

To keep the appearance consistent, the study introduces ReferenceNet, a network with a UNet structure symmetric to the denoising UNet, designed to capture the spatial details of the reference image. In each corresponding UNet block, a spatial-attention mechanism integrates ReferenceNet features into the denoising UNet. This architecture lets the model comprehensively learn its relationship to the reference image in a consistent feature space.

To ensure pose controllability, the study designs a lightweight pose guider that efficiently integrates pose control signals into the denoising process. For temporal stability, the paper introduces a temporal layer that models relationships across frames, preserving high-resolution visual detail while producing continuous, smooth motion.

Animate Anyone was trained on an internal dataset of 5K character video clips; Figure 1 shows animation results for a variety of characters. Compared with previous methods, this approach has several clear advantages:

  • First, it effectively maintains the spatial and temporal consistency of the character's appearance throughout the video.
  • Second, the high-definition videos it generates are free of temporal jitter and flickering.
  • Third, it can animate any character image into a video and is not restricted to a specific domain.

The method is evaluated on two benchmarks for human video synthesis (the UBC Fashion Video Dataset and the TikTok dataset). The results show that Animate Anyone achieves state-of-the-art results. The study also compares Animate Anyone with general image-to-video methods trained on large-scale data, showing that it has superior capability in character animation.


Animate Anyone compared with other methods:


Method introduction

The method is illustrated in Figure 2. The raw input to the network is multi-frame noise. For denoising, the researchers adopt a configuration based on the SD design, using the same framework and block units and inheriting SD's pre-trained weights. Specifically, the method consists of three key parts:

  • ReferenceNet, which encodes the appearance features of the character in the reference image;
  • Pose Guider, which encodes pose control signals to achieve controllable character movement;
  • Temporal layer, which encodes temporal relationships to ensure the continuity of character motion.


ReferenceNet

ReferenceNet is a reference-image feature extraction network whose framework is roughly the same as the denoising UNet's, differing only in the temporal layer. ReferenceNet therefore inherits the original SD weights just as the denoising UNet does, and weight updates are performed independently for each. The researchers then explain how features from ReferenceNet are integrated into the denoising UNet.

The design of ReferenceNet has two advantages. First, it can leverage the pre-trained image-feature modeling capability of the original SD to produce well-initialized features. Second, since ReferenceNet and the denoising UNet have essentially the same network structure and shared initialization weights, the denoising UNet can selectively learn from ReferenceNet features that live in the same feature space.
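The spatial-attention integration described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the module name, the concatenate-along-width-then-keep-first-half scheme, the head count, and the residual add are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    """Illustrative sketch: merge ReferenceNet features into the
    denoising UNet via spatial self-attention (details are assumptions)."""
    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, denoise_feat, ref_feat):
        # denoise_feat, ref_feat: (b, c, h, w) from corresponding UNet blocks
        b, c, h, w = denoise_feat.shape
        # Concatenate along the spatial (width) axis, then flatten to tokens
        x = torch.cat([denoise_feat, ref_feat], dim=3)   # (b, c, h, 2w)
        tokens = x.flatten(2).transpose(1, 2)            # (b, h*2w, c)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).reshape(b, c, h, 2 * w)
        # Keep only the half aligned with the denoising branch, residual add
        return denoise_feat + out[..., :w]

fusion = SpatialAttentionFusion(channels=64)
y = fusion(torch.randn(2, 64, 8, 8), torch.randn(2, 64, 8, 8))
print(tuple(y.shape))  # (2, 64, 8, 8)
```

Because both branches come from the same SD-initialized blocks, their features occupy the same space, which is what makes this kind of joint attention meaningful.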

Pose guider

This lightweight pose guider uses four convolutional layers (4 × 4 kernels, 2 × 2 strides) with 16, 32, 64, and 128 channels, similar to the conditional encoder in [56], to align the pose image with the latent resolution. The processed pose image is added to the latent noise before being input to the denoising UNet. The pose guider is initialized with Gaussian weights, and the final mapping layer uses a zero convolution.
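A minimal PyTorch sketch of such a pose guider, following the layer sizes stated above. The SiLU activations, padding, and a latent channel count of 4 are assumptions; note how the zero-initialized final convolution makes the guider start out as a no-op with respect to the latent noise it is added to.

```python
import torch
import torch.nn as nn

class PoseGuider(nn.Module):
    """Sketch of the lightweight pose guider: four 4x4 stride-2 convs
    (16, 32, 64, 128 channels) plus a zero-initialized projection."""
    def __init__(self, in_channels: int = 3, latent_channels: int = 4):
        super().__init__()
        layers, prev = [], in_channels
        for c in (16, 32, 64, 128):
            layers += [nn.Conv2d(prev, c, kernel_size=4, stride=2, padding=1),
                       nn.SiLU()]
            prev = c
        self.body = nn.Sequential(*layers)
        # Zero convolution in the final mapping layer: output is zero at
        # initialization, so training starts from the unmodified latent.
        self.proj = nn.Conv2d(prev, latent_channels, kernel_size=1)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, pose_image):
        return self.proj(self.body(pose_image))

guider = PoseGuider()
out = guider(torch.randn(1, 3, 512, 512))  # rendered pose-skeleton image
print(tuple(out.shape))  # (1, 4, 32, 32)
```

How the downsampled map is aligned with the latent resolution is an implementation detail not specified in the article; the sketch only shows the stated layer stack.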

Temporal layer

The design of the temporal layer is inspired by AnimateDiff. For a feature map x ∈ R^(b×t×h×w×c), the researchers first reshape it to x ∈ R^((b×h×w)×t×c) and then perform temporal attention, i.e., self-attention along dimension t. The temporal layer's output is merged back into the original features through a residual connection, a design consistent with the two-stage training strategy below. Temporal layers are used only within the Res-Trans blocks of the denoising UNet.
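The reshape-and-attend pattern can be sketched as follows. This is a minimal illustration; the LayerNorm and head count are assumptions, and in the real model the layer sits inside the UNet's Res-Trans blocks rather than standing alone.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Sketch of the temporal layer: fold spatial positions into the
    batch, self-attend along the frame axis t, merge back residually."""
    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (b, t, h, w, c)
        b, t, h, w, c = x.shape
        # Reshape to ((b*h*w), t, c) so attention runs along dimension t
        seq = x.permute(0, 2, 3, 1, 4).reshape(b * h * w, t, c)
        q = self.norm(seq)
        out, _ = self.attn(q, q, q)
        out = out.reshape(b, h, w, t, c).permute(0, 3, 1, 2, 4)
        return x + out  # residual connection into the original features

layer = TemporalAttention(channels=32)
video_feat = torch.randn(2, 24, 8, 8, 32)  # b=2, t=24 frames
y = layer(video_feat)
print(tuple(y.shape))  # (2, 24, 8, 8, 32)
```

Because each spatial position attends only across time, the layer smooths motion without disturbing the per-frame spatial detail produced by the rest of the UNet.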

Training strategy

The training process is divided into two stages.

In the first training stage, single video frames are used. The temporal layer is temporarily excluded from the denoising UNet, and single-frame noise is taken as input. ReferenceNet and the pose guider are also trained in this stage, with reference images sampled randomly from the whole video clip. The denoising UNet and ReferenceNet are initialized from pre-trained SD weights; the pose guider is initialized with Gaussian weights, except for its final projection layer, which uses a zero convolution. The weights of the VAE encoder and decoder and of the CLIP image encoder are kept fixed. The optimization goal of this stage is to generate a high-quality animated image given a reference image and a target pose.

In the second stage, the researchers introduce the temporal layer into the previously trained model and initialize it with pre-trained weights from AnimateDiff. The model input is a 24-frame video clip. In this stage only the temporal layer is trained, while the weights of the rest of the network are fixed.
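The freeze/unfreeze schedule of the two stages can be sketched like this. The module names are placeholders standing in for the real networks, which are not shown here.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, flag: bool) -> None:
    for p in module.parameters():
        p.requires_grad = flag

# Placeholder modules standing in for the real networks.
vae, clip_encoder = nn.Linear(4, 4), nn.Linear(4, 4)
denoising_unet, reference_net = nn.Linear(4, 4), nn.Linear(4, 4)
pose_guider, temporal_layers = nn.Linear(4, 4), nn.Linear(4, 4)

def configure_stage(stage: int) -> None:
    # The VAE and CLIP image encoder stay frozen in both stages.
    set_trainable(vae, False)
    set_trainable(clip_encoder, False)
    if stage == 1:
        # Single-frame training: the UNet (without temporal layers),
        # ReferenceNet, and the pose guider are optimized.
        for m in (denoising_unet, reference_net, pose_guider):
            set_trainable(m, True)
        set_trainable(temporal_layers, False)
    else:
        # 24-frame clips: only the temporal layers are trained.
        for m in (denoising_unet, reference_net, pose_guider):
            set_trainable(m, False)
        set_trainable(temporal_layers, True)

configure_stage(2)
print(all(not p.requires_grad for p in denoising_unet.parameters()))  # True
```

Splitting training this way lets the model first learn appearance and pose fidelity on still frames, then learn motion continuity without disturbing those weights.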

Experiments and results

Qualitative results: As shown in Figure 3, the method can animate arbitrary characters, including full-body figures, half-length portraits, cartoon characters, and humanoid characters. It produces high-definition, realistic human detail, maintains consistency with the reference image, and exhibits temporal continuity from frame to frame even in the presence of large motions.


Fashion video synthesis. The goal of fashion video synthesis is to turn fashion photos into realistic animated videos driven by a pose sequence. Experiments are conducted on the UBC Fashion Video Dataset, which consists of 500 training videos and 100 test videos, each containing roughly 350 frames. Quantitative comparisons are shown in Table 1: the method outperforms the others, with a clear lead on the video metrics in particular.


A qualitative comparison is shown in Figure 4. For a fair comparison, the researchers used DreamPose's open-source code to obtain results without subject-specific fine-tuning. Fashion video places strict demands on clothing detail, yet the videos generated by DreamPose and BDMM fail to keep clothing details consistent and show noticeable errors in color and fine structure. In contrast, this method preserves the consistency of clothing details far more effectively.


Human dance generation aims to animate images of realistic dance scenes. The researchers used the TikTok dataset, which includes 340 training videos and 100 test videos. Following DisCo's dataset split, the quantitative comparison uses the same test set of 10 TikTok-style videos. As Table 2 shows, the method achieves the best results. To strengthen generalization, DisCo incorporates human-attribute pre-training on a large number of image pairs; in contrast, this method was trained only on the TikTok dataset yet still outperforms DisCo.


A qualitative comparison with DisCo is shown in Figure 5. Given the complexity of the scenes, DisCo additionally requires SAM to generate human foreground masks. In contrast, this method shows that even without explicit human-mask learning, the model can infer the foreground-background relationship from the subject's motion, with no prior human segmentation. Furthermore, in complex dance sequences the model excels at maintaining visual continuity throughout the action and is more robust to varied character appearances.


General image-to-video methods. Many recent studies have proposed video diffusion models with strong generative capability built on large-scale training data. The researchers chose two of the best-known and most effective image-to-video methods for comparison: AnimateDiff and Gen-2. Since neither supports pose control, the comparison covers only how well each maintains the appearance fidelity of the reference image. As shown in Figure 6, current image-to-video approaches struggle both to generate substantial character motion and to maintain long-term appearance consistency across the video, which prevents them from supporting consistent character animation.


Please consult the original paper for more details.


