Table of Contents
Data filtering under a fixed compute budget
The scaling law of data filtering
Fitting scaling curves for various data utility pools
Result: Estimating scaling laws for data combinations under QQT
Extending the scaling curves

Is it better to have more data or higher quality? This research can help you make your choice

Jun 01, 2024 pm 10:09 PM

Scaling a foundation model means using more data, compute, and parameters for pre-training; in short, "scaling up."

Although directly scaling up model size may seem crude, it has delivered many outstanding models to the machine learning community. Much prior work has validated the practice of scaling up neural network models: quantitative change leads to qualitative change, a view also known as neural scaling laws. However, as model size increases, so does the consumption of computing resources; larger models demand more processors and memory. This is infeasible for many practical applications, especially on resource-constrained devices. Researchers have therefore begun to focus on how to use compute more efficiently to improve models.

Recently, many have come to believe that "data" is the key to the best closed-source models, whether LLMs, VLMs, or diffusion models. As the importance of data quality has been recognized, much research has emerged aiming to improve it: either filtering high-quality data from large corpora or generating new high-quality data. However, past scaling laws generally treated "data" as a homogeneous entity and did not take the recently emphasized "data quality" as a dimension of consideration.

Despite the vast amount of data on the web, high-quality data (as judged by multiple evaluation metrics) is often limited. Now, groundbreaking research has arrived: a scaling law for the data-filtering dimension. It comes from Carnegie Mellon University and the Bosch Center for AI, with a particular focus on the quantity-quality trade-off (QQT) between "large scale" and "high quality."



  • Paper title: Scaling Laws for Data Filtering—Data Curation cannot be Compute Agnostic
  • Paper address: https://arxiv.org/pdf/2404.07177.pdf
  • Code address: https://github.com/locuslab/scaling_laws_data_filtering


As shown in Figure 1, when training for multiple epochs, the utility of high-quality data diminishes (because the model has already learned what it has to offer).


[Figure 1]

At this point, using lower-quality data (with smaller initial utility) is often more helpful than reusing high-quality data.

Under the quantity-quality trade-off (QQT), how do we determine what kind of data combination is better for training?

To answer this question, any data curation workflow must account for the total compute used for model training. This differs from how the community has viewed data filtering. For example, the LAION filtering strategy extracts the highest-quality 10% of Common Crawl data.

But as Figure 2 shows, once training exceeds 35 epochs, training on a completely uncurated dataset outperforms training on the high-quality data curated with the LAION strategy.

[Figure 2]

Current neural scaling laws cannot model this dynamic trade-off between quality and quantity. Moreover, there are even fewer studies on scaling vision-language models; most current research is limited to language modeling.

The groundbreaking research introduced today overcomes three important limitations of previous neural scaling laws:

(1) It considers the "quality" axis when scaling data;

(2) It estimates the scaling law for combinations of data pools (without actually training on those combinations), which helps guide optimal data curation decisions;

(3) It adapts LLM scaling laws to contrastive training (such as CLIP), in which each batch has a number of comparisons quadratic in its size.

The team is the first to propose a scaling law for heterogeneous and limited web data.

Large models are trained on combinations of data pools of varying quality. By modeling the aggregate data utility derived from the scaling parameters of the individual data pools (A-F in Figure 1(a)), the model's performance on any combination of these pools can be estimated directly.

It is important to note that this method does not require training on the data-pool combinations to estimate their scaling laws; their scaling curves can be estimated directly from the scaling parameters of each component pool.

The scaling law here differs from past scaling laws in important ways: it can model repetition in the training regime, and it accounts for the O(n²) comparisons of contrastive training. For example, if the size of the training pool is doubled, the number of comparisons that contribute to the model loss quadruples.
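To make the quadratic growth in comparisons concrete, below is a minimal sketch of a generic CLIP-style symmetric contrastive loss in PyTorch. This is a textbook formulation, not the authors' training code; the batch size and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of n paired embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature   # (n, n): every image vs. every text
    targets = torch.arange(logits.size(0))       # matched pairs lie on the diagonal
    loss_i2t = F.cross_entropy(logits, targets)    # image -> text direction
    loss_t2i = F.cross_entropy(logits.T, targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# A batch of n samples yields an n x n logits matrix, so doubling the
# batch (or, across a run, the pool) quadruples the pairwise comparisons.
print(clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)))
```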

They mathematically describe how data from different pools interact, allowing the model's performance to be estimated for different data combinations. This yields a data curation strategy appropriate for the compute actually available.

One of the key messages of this study: data curation cannot be done in isolation from compute.

When the compute budget is small (few repetitions), quality takes precedence under the QQT trade-off, as shown by the best performance of aggressive filtering (E) at low compute in Figure 1.

On the other hand, when the compute scale far exceeds the training data available, the utility of limited high-quality data decays, and this must be compensated for. A less aggressive filtering strategy, that is, one retaining a larger volume of data, then performs better.

The team demonstrated this experimentally: the new scaling law for heterogeneous web data can predict the Pareto-optimal filtering strategy for compute budgets from 32M to 640M samples, using DataComp's medium-scale pool (128M samples).

Data filtering under a fixed compute budget

The team experimentally studied the effect of data filtering under different compute budgets.

They trained VLMs starting from a large initial data pool. For the base unfiltered pool, they chose the "medium" scale of DataComp, a recent data curation benchmark, which contains 128M samples. They evaluated the models' zero-shot performance on 18 downstream tasks.

They first studied the LAION filtering strategy (used to create the LAION dataset); the results are shown in Figure 2. They observed the following:

1. When the compute budget is low, it is better to use high-quality data.

2. Data filtering can get in the way when the computational budget is high.

Why?

LAION filtering retains approximately 10% of the data, so at a compute budget of approximately 450M samples, each sample from the filtered LAION pool is seen approximately 32 times. The key insight here is that if the same sample is seen multiple times during training, its utility decreases each time.

The team then studied two other data filtering methods:

(1) CLIP score filtering, using the CLIP L/14 model;

(2) T-MARS, which ranks data by CLIP score after masking text features (OCR) in images.

For each filtering method, they used four filtering levels and a range of total compute budgets.

Figure 3 compares the results of Top 10-20%, Top 30%, and Top 40% CLIP filtering at compute scales of 32M, 128M, and 640M.

[Figure 3]

At the 32M compute scale, the highly aggressive filtering strategy (retaining only the top 10-20% by CLIP score) gives the best results, while the least aggressive strategy, retaining the top 40%, performs worst. But when compute is scaled to 640M, this trend completely reverses. Similar trends are observed with the T-MARS score metric.

The scaling law of data filtering

The team first defined utility mathematically.

Rather than estimating the loss after training on n samples, their approach considers the instantaneous utility of a sample at any point during training. Mathematically:

\[ u(n) \;=\; -\frac{dy}{dn} \;=\; \frac{b \cdot y(n)}{n} \]

This says that a sample's instantaneous utility is directly proportional to the current loss and inversely proportional to the number of samples seen so far. This matches intuition: as the model sees more samples, the utility of each additional sample decreases. The focus is on the data utility parameter b.
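To see this numerically, here is a small sketch assuming a simple power-law loss curve y(n) = a·n^(-b). The functional form and constants are our illustrative assumptions, chosen to be consistent with equation (1) above; real values would come from fitting a pool's curve.

```python
# Illustrative constants; not fitted values from the paper.
a, b = 5.0, 0.3

def loss(n: float) -> float:
    """Assumed power-law loss after seeing n samples: y(n) = a * n**(-b)."""
    return a * n ** (-b)

def instantaneous_utility(n: float) -> float:
    """Eq. (1): proportional to the current loss, inverse in samples seen."""
    return b * loss(n) / n

# Utility shrinks steadily as the model sees more data.
for n in (1e4, 1e5, 1e6):
    print(f"n = {n:>9,.0f}   loss = {loss(n):.4f}   "
          f"utility = {instantaneous_utility(n):.3e}")
```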

The next step is to model the utility of data that is reused.

Mathematically, the utility parameter b of a sample that has been seen k + 1 times is defined as:

\[ b_{k+1} \;=\; b \cdot \delta^{\,k}, \qquad \delta = 2^{-1/\tau} \]

Where τ is the half-life of the utility parameter: the higher τ, the more slowly a sample's utility decays with repetition; δ is a compact way of writing that per-repetition decay. The model's loss after seeing n samples, each seen k times, is then:

\[ y_n \;=\; y_0 \prod_{j=1}^{k} \left( \frac{n_j}{n_{j-1}} \right)^{-\,b_j}, \qquad b_j = b \cdot \delta^{\,j-1} \]

Where n_j is the number of samples the model has seen by the end of the j-th training epoch. This equation is the basis of the newly proposed scaling law.
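The sketch below turns equations (2) and (3), as reconstructed above, into code: the utility parameter decays by a factor of 2^(-1/τ) per repetition, and the loss is integrated pass by pass over the pool. The function names and all parameter values are illustrative, not the paper's:

```python
def utility_after_k_views(b: float, tau: float, k: int) -> float:
    """Eq. (2): utility parameter of a sample already seen k times.
    tau is the half-life; delta = 2**(-1/tau) is the per-repetition decay."""
    return b * (2.0 ** (-1.0 / tau)) ** k

def predicted_loss(y0: float, pool_size: int, budget: int,
                   b: float, tau: float) -> float:
    """Eq. (3), as reconstructed: integrate dy/dn = -b_k * y / n pass by
    pass, where n_j is the sample count at the end of the j-th pass."""
    y, n, k = y0, 1.0, 0
    while n < budget:
        n_next = min(n + pool_size, budget)   # end of this pass over the pool
        y *= (n_next / n) ** (-utility_after_k_views(b, tau, k))
        n, k = n_next, k + 1
    return y

# Toy run: a 12.8M-sample pool trained for a 128M-sample budget (10 passes).
print(predicted_loss(y0=1.0, pool_size=12_800_000, budget=128_000_000,
                     b=0.35, tau=5.0))
```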

Finally, there is one more layer of complexity: heterogeneous web data.

This leads to their theorem: given p data pools sampled uniformly at random, with respective utility and repetition parameters (b_1, τ_1), ..., (b_p, τ_p), the new repetition half-life of each bucket becomes τ̂ = p·τ. Furthermore, the effective utility b_eff of the combined data pool at the k-th repetition is the weighted average of the individual utility values:

\[ b_{\text{eff}}^{(k)} \;=\; \frac{1}{p} \sum_{i=1}^{p} b_i \,\hat{\delta}_i^{\,k} \]

where \( \hat{\delta}_i = 2^{-1/\hat{\tau}_i} \) is the new per-bucket decay parameter.

Finally, b_eff from the theorem can be plugged into equation (3) to estimate the loss when training on a combination of data pools.
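Here is a hedged sketch of the theorem as described: under uniform sampling from p pools, each pool's half-life stretches to τ̂ = p·τ, and the mixture's effective utility at repetition k averages the individually decayed b_i. The function name and parameter values are ours, for illustration:

```python
def b_eff(pools: list[tuple[float, float]], k: int) -> float:
    """Effective utility of a uniform mixture of p pools at repetition k.
    Each pool i has parameters (b_i, tau_i); its half-life stretches to
    tau_hat = p * tau_i, and b_eff averages the decayed per-pool utilities."""
    p = len(pools)
    return sum(
        b_i * (2.0 ** (-1.0 / (p * tau_i))) ** k
        for b_i, tau_i in pools
    ) / p

# Utility trajectory over repetitions for a two-pool mixture
# (illustrative parameters only).
pools = [(0.40, 4.0), (0.30, 6.0)]
print([round(b_eff(pools, k), 4) for k in range(6)])
```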

Fitting scaling curves for various data utility pools

The team experimentally explored the newly proposed scaling law.

Figure 4 shows the fitted scaling curves of various data utility pools. The data utility metric used is the T-MARS score.

[Figure 4]

Column 2 of Figure 4 shows that the utility of each data pool decreases as the number of epochs increases. Here are the team's key observations:

1. Web data is heterogeneous and cannot be modeled by a single set of scaling parameters.

2. Different data pools have different data diversity.

3. High-quality data that is repeated many times cannot keep up with low-quality data used directly.

Result: Estimating scaling laws for data combinations under QQT

Fitting each data pool's curve yields its scaling parameters a, b, d, and τ. The goal is then to determine the most effective data curation strategy for a given training compute budget.

Using the earlier theorem and each pool's scaling parameters, the scaling laws of different pool combinations can now be estimated. For example, the Top-20% pool can be treated as a combination of the Top-10% pool and the Top 10%-20% pool. The resulting scaling curves can then be used to predict the Pareto-optimal data filtering strategy for a given compute budget, as sketched below.
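Putting the pieces together, the QQT decision itself can be sketched as follows: given each filtering level's fitted (b, τ) and pool size (the values below are invented placeholders, not fitted numbers from the paper), predict the loss of every level at a given budget with the predicted_loss helper from the earlier sketch, then pick the minimum.

```python
# Hypothetical per-level parameters; in practice these come from fitting
# each pool's scaling curve as in Figure 4.
strategies = {
    "top_10%": dict(b=0.40, tau=3.0, pool_size=12_800_000),
    "top_30%": dict(b=0.30, tau=6.0, pool_size=38_400_000),
    "top_40%": dict(b=0.27, tau=8.0, pool_size=51_200_000),
}

for budget in (32_000_000, 128_000_000, 640_000_000):
    losses = {
        name: predicted_loss(1.0, s["pool_size"], budget, s["b"], s["tau"])
        for name, s in strategies.items()
    }
    best = min(losses, key=losses.get)
    print(f"budget {budget:>11,}: predicted best = {best}")
```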

Figure 5 shows the scaling curves for different data combinations, evaluated on ImageNet.

[Figure 5]

It should be emphasized that these curves are estimated directly from the scaling parameters of each component pool via the above theorem; the team did not train on these data-pool combinations to estimate them. The scatter points show actual test performance and serve to verify the estimates.

It can be seen that: (1) when the compute budget is low and repetitions are few, an aggressive filtering strategy is best;

(2) data curation cannot be done independently of compute.

Extending the scaling curves

The 2023 paper by Cherti et al., "Reproducible scaling laws for contrastive language-image learning", studied scaling laws for CLIP models, training dozens of models spanning compute scales from 3B to 34B training samples across different ViT families. Training models at this scale is very expensive. Cherti et al. (2023) aimed to fit a scaling law for this family of models, but the scaling curves of models trained on smaller datasets showed large errors.

The CMU team believes this is mainly because the reduced utility of reused data was not accounted for, so they re-estimated those models' errors using the newly proposed scaling law.

Figure 6 shows the corrected scaling curves, which predict the errors with high accuracy.

[Figure 6]

This shows that the newly proposed scaling law holds for large models trained with up to 34B samples of compute, and that accounting for the decaying utility of repeated data pays off when predicting training outcomes.

For more technical details and experimental results, please refer to the original paper.
