
ByteDance Doubao and Wuhan University Propose CAL: Enhancing Multimodal Alignment via Visually Correlated Tokens

Jun 19, 2024 am 09:53 AM

The AIxiv column is where this site publishes academic and technical content. In the past few years, the AIxiv column has received more than 2,000 reports covering top laboratories from major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

Current mainstream vision-language models (VLMs) are mostly built by further fine-tuning a large language model (LLM). The image therefore has to be mapped into the LLM's embedding space in some way, after which the answer is predicted autoregressively conditioned on the image tokens.

In this process, modality alignment is carried out implicitly through the text tokens, and doing this alignment well is critical.

In response to this problem, researchers from Wuhan University, the ByteDance Doubao Large Model Team, and the University of Chinese Academy of Sciences proposed CAL, a contrastive-learning-based text token selection method: text tokens that are highly correlated with the image are identified and given larger weight in the loss function, achieving more precise multimodal alignment.


  • Paper link: https://arxiv.org/pdf/2405.17871
  • Code link: https://github.com/foundation-multimodal-models/CAL

CAL has the following highlights:

  • It can be plugged directly into the training process without an additional pre-training stage.
  • It achieves significant improvements on OCR and caption benchmarks, and visualizations show that CAL yields better image-modality alignment.
  • CAL makes the training process more resistant to noisy data.

Research motivation

Vision-language models depend on image-modality alignment, and how this alignment is done is critical. The mainstream approach performs alignment implicitly through text autoregression, yet different text tokens contribute very unevenly to image alignment, so it is necessary to distinguish among them.

CAL observes that in existing vision-language model (VLM) training data, text tokens can be divided into three categories:

  • Text highly correlated with the image: for example, entities (people, animals, objects), quantities, colors, and text appearing in the image. These tokens correspond directly to image information and are crucial for multimodal alignment.
  • Text with low correlation to the image: for example, connective words or content that can be inferred from the preceding text. These tokens mainly train the plain-text capabilities of the VLM.
  • Text that contradicts the image content: these tokens are inconsistent with the image information and may even be misleading, negatively affecting the multimodal alignment process.

Figure 1: Tokens highly correlated with the image are marked in green, tokens contradicting the image content in red, and neutral tokens are left uncolored.

During training, the latter two types of token actually make up the larger share, but because they do not depend strongly on the image, they contribute little to image-modality alignment. To achieve better alignment, the weight of the first type, i.e., the tokens highly correlated with the image, needs to be increased, and finding these tokens is the key to the problem.
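A schematic way to write this re-weighted objective (a sketch only; how the per-token weights are obtained is described in the Method section, and the exact form used in the paper may differ):

```latex
\mathcal{L} \;=\; -\sum_{i} w_i \,\log p_\theta\!\left(t_i \mid t_{<i},\, I\right),
\qquad w_i \text{ larger for tokens highly correlated with the image } I .
```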

Method

Finding the tokens that are highly correlated with the image can be done through conditional contrast:

  • For each image-text pair in the training data, without the image input, the logit of each text token represents the LLM's estimate of how likely that token is given the context and its existing knowledge.
  • If the image is added as input in front, this provides additional contextual information, and the logit of each text token is adjusted under the new condition. The change in logits between the two cases reflects the impact the image, as a new condition, has on each text token.
  • Concretely, during training CAL feeds the image-plus-text sequence and the text-only sequence into the large language model (LLM) separately to obtain the logit of each text token in both cases. The logit difference between the two measures the image's influence on each token: the larger the difference, the stronger the image's influence and the more relevant the token is to the image. The figure below shows the per-token logit differences and the CAL pipeline.

Figure 2: Left, visualization of the per-token logit difference between the two cases; right, visualization of the CAL method pipeline.
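Below is a minimal sketch of this logit-difference weighting, assuming a HuggingFace-style VLM whose forward pass accepts `input_ids` and an optional `pixel_values`. The helper names, the weight normalization, and the omitted sequence-alignment details are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cal_token_weights(model, input_ids, labels, pixel_values):
    """Per-token image relevance = logit(with image) - logit(text only) for the
    ground-truth token. Alignment between the two passes (image-token offsets,
    next-token shift) is omitted here for brevity."""
    logits_img = model(input_ids=input_ids, pixel_values=pixel_values).logits
    logits_txt = model(input_ids=input_ids).logits

    idx = labels.clamp(min=0).unsqueeze(-1)            # [B, T, 1]
    gt_img = logits_img.gather(-1, idx).squeeze(-1)    # label-token logit, with image
    gt_txt = logits_txt.gather(-1, idx).squeeze(-1)    # label-token logit, text only

    diff = gt_img - gt_txt                             # larger -> more image-correlated
    weights = diff.clamp(min=0)                        # one simple choice of mapping
    return weights / (weights.amax(dim=-1, keepdim=True) + 1e-6)

def weighted_lm_loss(logits, labels, weights, ignore_index=-100):
    """Standard next-token cross-entropy, re-weighted per token by `weights`."""
    loss = F.cross_entropy(logits.transpose(1, 2), labels,
                           ignore_index=ignore_index, reduction="none")
    mask = (labels != ignore_index).float()
    return (loss * weights * mask).sum() / mask.sum().clamp(min=1)
```

In such a setup the weights would be computed on the fly for each batch and multiplied into the usual language-modeling loss, so no extra pre-training stage is needed.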

CAL was experimentally verified on two mainstream models, LLaVA and MGM, and achieved performance improvements across models of different sizes.
The verification consists of the following four parts:

(1) Models trained with CAL perform better across various benchmark metrics.


(2) A batch of noisy data (mismatched image-text pairs) is created by randomly swapping the texts of two image-text pairs in a given proportion and then used for training. CAL makes the training process noticeably more robust to such noise.

Figure 3: Performance of CAL and the baseline under noisy training at different noise levels.
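A minimal sketch of how such mismatched pairs can be constructed; the swap proportion and sampling scheme here are illustrative, not the paper's exact protocol.

```python
import random

def inject_caption_noise(pairs, noise_ratio=0.2, seed=0):
    """pairs: list of (image, caption) tuples. Returns a copy in which roughly
    `noise_ratio` of the samples have had their captions exchanged with another
    sample, producing image-text mismatches."""
    rng = random.Random(seed)
    noisy = list(pairs)
    # Indices whose captions will be shuffled among themselves.
    idx = [i for i in range(len(noisy)) if rng.random() < noise_ratio]
    shuffled = idx[:]
    rng.shuffle(shuffled)
    for i, j in zip(idx, shuffled):
        image_i, _ = noisy[i]
        _, caption_j = pairs[j]
        noisy[i] = (image_i, caption_j)  # image i now carries caption j
    return noisy
```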

(3) Compute the attention scores from the answer part of a QA case to the image tokens and plot them on the original image; the CAL-trained model shows a clearer attention distribution (a minimal plotting sketch follows Figure 4).


Figure 4: Visualization of the attention maps of the baseline and CAL; in each pair, the right side is CAL.
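A minimal sketch of this attention visualization, assuming the attention weights are exported as a `[layers, heads, seq, seq]` array and the image tokens form a square patch grid; all names here are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_answer_to_image_attention(attentions, image, image_token_slice,
                                   answer_token_slice, grid_size):
    """attentions: [layers, heads, seq, seq] array from one forward pass.
    image: HxWx3 array; the token slices index the flattened sequence."""
    # Average over layers and heads, then over the answer-token queries.
    attn = attentions.mean(axis=(0, 1))                       # [seq, seq]
    ans_to_img = attn[answer_token_slice, image_token_slice]  # [answer, image]
    heat = ans_to_img.mean(axis=0).reshape(grid_size, grid_size)

    # Upsample the patch-level heatmap to image resolution and overlay it.
    scale = (image.shape[0] // grid_size, image.shape[1] // grid_size)
    plt.imshow(image)
    plt.imshow(np.kron(heat, np.ones(scale)), cmap="jet", alpha=0.5)
    plt.axis("off")
    plt.show()
```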
(4) Map each image token to the most similar token in the LLM vocabulary and draw it onto the original image; the content mapped by the CAL-trained model is closer to the actual image content.

Figure 5: Image tokens mapped to their most similar vocabulary tokens, shown in correspondence with the original image.
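A minimal sketch of this vocabulary-mapping visualization, matching each projected image token to its nearest LLM vocabulary embedding by cosine similarity; the tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def nearest_vocab_tokens(image_token_embeds, vocab_embeds, tokenizer, top_k=1):
    """image_token_embeds: [num_image_tokens, dim] projected visual features.
    vocab_embeds: [vocab_size, dim] LLM input embedding table."""
    img = F.normalize(image_token_embeds, dim=-1)
    vocab = F.normalize(vocab_embeds, dim=-1)
    sims = img @ vocab.t()                       # [num_image_tokens, vocab_size]
    top_ids = sims.topk(top_k, dim=-1).indices
    # Decode each matched vocabulary id back to a readable string.
    return [[tokenizer.decode([int(t)]) for t in row] for row in top_ids]
```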

The Doubao Large Model team was established in 2023 and is committed to developing the industry's most advanced AI large model technology, aiming to become a world-class research team and contribute to technological and social development.

The Doubao Large Model team has a long-term vision and commitment in the field of AI. Its research directions cover NLP, CV, speech, and more, with laboratories and research positions in China, Singapore, the United States, and elsewhere. Relying on the platform's abundant data and computing resources, the team continues to invest in these areas and has launched a self-developed general-purpose large model providing multimodal capabilities. Downstream it supports 50+ businesses such as Doubao, Coze, and Jimeng, and it is open to enterprise customers through Volcano Engine. Doubao is currently the AIGC application with the largest number of users in the Chinese market. You are welcome to join the ByteDance Doubao Large Model team.
