
ICML 2024 | The new frontier of large language model pre-training: 'Best-fit Packing' reshapes document processing standards

Jun 02, 2024 09:42 PM

AIxiv is a column where this site publishes academic and technical content. In the past few years, it has received more than 2,000 reports covering top laboratories from major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

In the training of large language models, how the data is processed is crucially important.

Traditional methods typically concatenate a large number of documents and then split the result into training sequences equal to the model's context length. Although this improves training efficiency, it often truncates documents unnecessarily, damaging data integrity and discarding key contextual information. This in turn undermines the logical coherence and factual consistency of what the model learns, and makes the model more prone to hallucination.

Researchers at AWS AI Labs studied this common concatenate-and-chunk processing method in depth and found that it seriously impairs the model's ability to understand contextual coherence and maintain factual consistency. This not only hurts the model's performance on downstream tasks, but also increases the risk of hallucination.

In response, they proposed an innovative document processing strategy, Best-fit Packing, which eliminates unnecessary text truncation by optimizing how documents are combined, significantly improving model performance and reducing hallucination. This research has been accepted at ICML 2024.


Paper title: Fewer Truncations Improve Language Modeling
Paper link: https://arxiv.org/pdf/2404.10830

Research background

In the traditional training method for large language models, to improve efficiency, researchers typically concatenate multiple input documents and then split the concatenated result into fixed-length sequences.

Although this method is simple and efficient, it causes a major problem: document truncation, which damages data integrity and loses information contained in the document.

Additionally, document truncation reduces the amount of context in each sequence, which can make the next-word prediction unrelated to what actually preceded it, leaving the model more susceptible to hallucination.
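
To make the baseline concrete, here is a minimal Python sketch of the concatenate-then-chunk pipeline. The function name and the separator-token argument are our own illustrative choices, not from the paper:

```python
def concat_then_chunk(documents, L, sep_token=0):
    """Traditional baseline: join all tokenized documents into one long
    stream, then slice it into fixed-length training sequences.

    Any document that straddles a slice boundary is truncated, which is
    exactly the failure mode Figure 2 illustrates."""
    stream = []
    for doc in documents:          # each doc is a list of token ids
        stream.extend(doc)
        stream.append(sep_token)   # document separator, e.g. an EOS token
    return [stream[i:i + L] for i in range(0, len(stream), L)]

# A 5-token document starting near a boundary is split across
# two training sequences:
sequences = concat_then_chunk([[1, 2, 3], [4, 5, 6, 7, 8]], L=4)
# -> [[1, 2, 3, 0], [4, 5, 6, 7], [8, 0]]
```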

The following examples show the problems caused by document truncation:

  • Figure 2(a): In Python programming, although the original code is correct, splitting the definition and use of a variable into different training sequences introduces syntax errors: some variables are undefined in later training sequences, causing the model to learn erroneous patterns and potentially hallucinate in downstream tasks. In program synthesis, for example, a model may use a variable without ever defining it.
  • Figure 2(b): Truncation also damages the integrity of information. For example, "Monday morning" in a summary cannot be matched to any context in the training sequence, resulting in inaccurate content. Such incomplete information significantly reduces the model's sensitivity to context, causing it to generate content inconsistent with the facts, i.e. unfaithful generation.
  • Figure 2(c): Truncation also hinders knowledge acquisition during training, because knowledge in text is often expressed in complete sentences or paragraphs. For example, the model cannot learn the location of the ICML conference if the conference name and the location end up in different training sequences.

Figure 2. Examples of document truncation causing hallucination or loss of knowledge. (a) A variable definition (blue) is truncated, so a later call uses an undefined name (red). (b) Key contextual information is truncated (blue), making the summary less faithful to the original text (red). (c) Due to truncation, the model cannot learn where ICML 2024 is held.

Best-fit Packing

To address this problem, researchers proposed Best-fit Packing.

This method uses length-aware combinatorial optimization techniques to efficiently pack documents into training sequences, completely eliminating unnecessary truncation. This not only maintains the training efficiency of traditional methods, but also substantially improves the quality of model training by reducing data fragmentation.

The authors first split each document into one or more chunks of at most the model context length L. This constraint is imposed by the model itself, so this step is unavoidable.

Given a large number of document chunks, each of length at most L, the goal is then to combine them into as few training sequences as possible. This can be viewed as an instance of the bin packing problem, which is NP-hard in general, so the authors adopt the heuristic strategy of Best-Fit-Decreasing (BFD).
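
Here is a minimal sketch of the two steps in Python (our own illustrative code, not the paper's reference implementation): first the mandatory chunking to length at most L, then a straightforward version of BFD whose best-fit search is linear in the number of open bins. The segment-tree variant discussed below brings that per-item cost down to O(log L).

```python
def chunk_document(tokens, L):
    """Split one tokenized document into pieces of at most L tokens.
    Only documents longer than L are ever cut (the unavoidable truncation)."""
    return [tokens[i:i + L] for i in range(0, len(tokens), L)]

def best_fit_decreasing(chunks, L):
    """Pack chunks (each of length <= L) into bins of capacity L.
    'Decreasing': place longer chunks first. 'Best fit': put each chunk
    into the bin whose remaining capacity is smallest but still sufficient."""
    bins = []  # each bin: [remaining_capacity, list_of_chunks]
    for chunk in sorted(chunks, key=len, reverse=True):
        fitting = [b for b in bins if b[0] >= len(chunk)]
        if fitting:
            best = min(fitting, key=lambda b: b[0])  # tightest fit
            best[0] -= len(chunk)
            best[1].append(chunk)
        else:
            bins.append([L - len(chunk), [chunk]])   # open a new bin
    return [b[1] for b in bins]
```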

Next, they analyze the feasibility of BFD in terms of time complexity and compactness.


Time complexity:

The time complexity of BFD's sorting and packing is O(N log N), where N is the number of document chunks. In pre-training data processing, since chunk lengths are bounded integers in [1, L], counting sort can reduce the sorting cost to O(N).

In the packing phase, a segment tree makes each best-fit lookup take only logarithmic time, i.e., O(log L). Since L is far smaller than N, the overall cost is effectively linear in the number of chunks; in practice, packing an entire large-scale pre-training dataset (with billions of documents) takes only about 3 hours.
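
Below is a sketch of how such a near-linear pipeline can be realized, assuming integer chunk lengths in [1, L]: a length histogram replaces comparison sorting, and a segment tree over remaining bin capacities answers each "tightest bin that still fits" query in O(log L). Class and function names are ours; the paper's actual implementation may differ.

```python
from collections import defaultdict

class CapacityTree:
    """Segment tree over bin capacities 0..L; leaf c counts how many open
    bins currently have exactly c tokens of space left."""
    def __init__(self, L):
        self.size = 1
        while self.size < L + 1:
            self.size *= 2
        self.tree = [0] * (2 * self.size)  # internal nodes hold subtree sums

    def update(self, capacity, delta):
        i = self.size + capacity
        while i:                   # propagate the count change up to the root
            self.tree[i] += delta
            i //= 2

    def query(self, need):
        """Smallest capacity >= need held by some open bin, else None."""
        def go(node, lo, hi):
            if hi < need or self.tree[node] == 0:
                return None
            if lo == hi:
                return lo
            mid = (lo + hi) // 2
            left = go(2 * node, lo, mid)
            return left if left is not None else go(2 * node + 1, mid + 1, hi)
        return go(1, 0, self.size - 1)

def pack_best_fit_decreasing(chunks, L):
    """Near-linear BFD: counting sort in O(N + L), then O(log L) per lookup."""
    by_length = defaultdict(list)          # histogram replaces comparison sort
    for chunk in chunks:
        by_length[len(chunk)].append(chunk)
    tree = CapacityTree(L)
    open_bins = defaultdict(list)          # remaining capacity -> bin ids
    contents = []                          # bin id -> chunks packed into it
    for length in range(L, 0, -1):         # decreasing length order
        for chunk in by_length[length]:
            cap = tree.query(length)       # tightest bin that still fits
            if cap is None:                # nothing fits: open a new bin
                contents.append([])
                bin_id, cap = len(contents) - 1, L
            else:
                bin_id = open_bins[cap].pop()
                tree.update(cap, -1)
            contents[bin_id].append(chunk)
            if cap - length > 0:           # re-register the shrunken bin
                open_bins[cap - length].append(bin_id)
                tree.update(cap - length, +1)
    return contents
```

Because a bin's remaining capacity is always an integer in [0, L), all bookkeeping stays within the tree's O(log L) operations, which is what makes the whole pass effectively linear in N.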


Compactness:

Compactness is another important metric for evaluating a packing algorithm: the number of training sequences should be reduced as much as possible to keep model training efficient, without destroying the integrity of the original documents.

In practice, by precisely controlling how sequences are filled and arranged, best-fit packing generates almost the same number of training sequences as the traditional method, while eliminating the data loss caused by truncation.
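
One way to quantify this is to measure what fraction of the available token slots are actually filled; here is a tiny helper (our own formulation, not a metric defined in the paper):

```python
def compactness(chunks, num_sequences, L):
    """Fraction of token slots actually used across all training sequences.
    Concatenation achieves ~1.0 by construction; the paper reports that
    best-fit packing stays very close to that while never cutting a
    document shorter than L."""
    return sum(len(c) for c in chunks) / (num_sequences * L)
```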


In experiments on natural language (RefinedWeb) and programming language (The Stack) datasets, the researchers found that best-fit packing significantly reduces text truncation.

Notably, most documents contain fewer than 2,048 tokens, and the truncation caused by traditional concatenate-and-chunk processing occurs mainly in this range. Best-fit packing never truncates any document shorter than L, thus preserving the integrity of the vast majority of documents.
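
The asymmetry is easy to state precisely. Under best-fit packing, a document of length l is cut exactly max(0, ⌈l/L⌉ − 1) times, which is zero whenever l ≤ L; under concatenation, the cuts depend on where the document lands in the token stream. A small illustrative helper (our own formulation, consistent with what Figure 4 measures):

```python
import math

def truncations_best_fit(doc_len, L):
    """Cuts under best-fit packing: only documents longer than L are split."""
    return max(0, math.ceil(doc_len / L) - 1)

def truncations_concat(offset, doc_len, L):
    """Cuts under concatenate-then-chunk: one per multiple of L that falls
    strictly inside the document's span [offset, offset + doc_len)."""
    return (offset + doc_len - 1) // L - offset // L

# A 100-token document with L = 2048 is never cut by best-fit packing,
# but is cut once whenever a 2048-boundary lands inside it:
assert truncations_best_fit(100, 2048) == 0
assert truncations_concat(2000, 100, 2048) == 1
```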


Figure 4. Number of documents and number of truncations at each document length, with the maximum sequence length set to 2k or 8k. With best-fit packing, the number of truncations is significantly reduced. Top: natural language. Bottom: programming languages.

Experiments and results

The researchers report a detailed performance comparison between language models trained with best-fit packing and those trained with the traditional concatenation method, across natural language and programming tasks: reading comprehension, natural language inference, context following, text summarization, world knowledge (commonsense and closed-book QA), and program synthesis, for a total of 22 subtasks.

The experiments involved model sizes from 7 billion to 13 billion parameters, sequence lengths from 2,000 to 8,000 tokens, and datasets covering both natural language and programming languages. The models were trained on large-scale datasets such as Falcon RefinedWeb and The Stack, using the LLaMA architecture.


Experimental results show that best-fit packing improves model performance across a range of tasks, with notable gains in reading comprehension (+4.7%), natural language inference (+9.3%), context following (+16.8%), and program synthesis (+15.0%). (Because metrics for different tasks are on different scales, the authors report relative improvements by default.)

After statistical testing, the researchers found that all results were either statistically significantly better than the baseline (marked s) or on par with it (marked n); across all evaluated tasks, best-fit packing showed no significant performance degradation on any task.

This consistency and monotonicity of improvement highlights that best-fit packing not only raises the model's overall performance but also keeps performance stable across different tasks and conditions. Please refer to the paper for detailed results and discussion.



The authors also focused on the impact of best-fit packing on hallucination.

In summary generation, evaluation with the QAFactEval metric showed that models trained with best-fit packing produced significantly fewer hallucinations.

More strikingly, in the program synthesis task, code generated by models trained with best-fit packing showed up to 58.3% fewer "undefined name" errors, indicating a more complete understanding of program structure and logic and thus effectively reduced hallucination.

The authors also reveal differences in how the model handles different types of knowledge.

As mentioned earlier, truncation during training may compromise the integrity of information and thus hinder knowledge acquisition. However, the questions in most standard evaluation sets focus on common knowledge, which appears frequently in human language. So even if some knowledge is lost to truncation, the model still has a good chance of learning it from other document fragments.

In contrast, uncommon tail knowledge is more susceptible to truncation: such information appears infrequently in the training data to begin with, so the model can hardly recover the lost knowledge from other sources.

By analyzing results on the ARC-C and ARC-E test sets, the researchers found that, compared with ARC-E, which contains more common knowledge, best-fit packing brings a more significant performance improvement on ARC-C, which contains more tail knowledge.

This finding was further verified by counting the co-occurrences of each question-answer pair in the Wikipedia entity map preprocessed by Kandpal et al. (2023). The statistics show that the challenge set (ARC-C) contains more rarely co-occurring pairs, which supports the hypothesis that best-fit packing effectively aids the learning of tail knowledge, and also offers an explanation for the difficulty traditional large language models have in learning long-tail knowledge.


Summary

This paper addresses the document truncation problem common in large language model training.
Truncation impairs the model's ability to learn logical coherence and factual consistency, and increases hallucination during generation. The authors propose Best-fit Packing, which maximizes the integrity of each document by optimizing how the data is assembled into training sequences. The method scales to datasets with billions of documents and matches traditional methods in data compactness.
Experimental results show that the method is highly effective at eliminating unnecessary truncation and significantly improves model performance on a variety of text and code tasks, while effectively reducing hallucination in closed-domain generation. Although the experiments in this paper focus on pre-training, best-fit packing can also be widely applied in other stages such as fine-tuning. This work contributes to the development of more efficient and reliable language models and advances language model training technology.
For more details, please see the original paper. If you are interested in a job or internship, you can contact the paper's author by email at zijwan@amazon.com.
