
ICML 2024 | Feature pollution: Neural networks learn irrelevant features and fail to generalize

Jun 24, 2024


  • Paper title: Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize
  • Paper link: https://arxiv.org/pdf/2406.03345
  • Code link: https://github.com/trzhang0116/feature-contamination

With the great success of large models such as GPT in recent years, the machine learning paradigm of deep neural networks + SGD + scaling has once again proven its dominance in AI. Why are paradigms based on deep neural networks so successful? The common view is that neural networks can automatically learn abstract, generalizable features from massive high-dimensional input data. Unfortunately, limited by current analysis methods and mathematical tools, our understanding of how (deep) neural networks actually carry out this feature-learning process is still shallow. Because of this, most relevant academic research remains at the level of "explaining" the features a model has already learned, and it is difficult to obtain more data-efficient, more generalizable models by "intervening" in the learning process.

When we discuss the feature-learning process of neural networks, one of the most basic questions is: what features does a neural network learn from its input data? From the objective's point of view, feature learning is a task-driven "by-product" whose purpose is to minimize the training error. We would therefore intuitively expect a neural network to extract "task-relevant" features from the data, while the remaining "task-irrelevant" features amount to data noise. Since neural networks tend to "not learn more than necessary" (more precisely, they have a simplicity bias), they should tend not to learn such noise. This is also a common view in the current literature.

However, in our recent work accepted at ICML 2024, we found that this intuition is actually wrong! Specifically, we found that when nonlinear neural networks learn task-relevant features, they also tend to learn task-irrelevant features (we call this "feature contamination"), and this tendency makes it difficult for the networks to generalize under distribution shift. On the theoretical side, we prove that feature contamination occurs even in a simple two-layer ReLU network and that it is closely related to the class asymmetry of neuron activations in neural networks; on the experimental side, we give a series of evidence that feature contamination also exists in deep networks such as ResNet and Vision Transformer and adversely affects their generalization. Notably, the failure mode we discover is completely orthogonal to the mainstream analyses based on spurious correlations in the current out-of-distribution (OOD) generalization literature. From a broader perspective, therefore, our findings demonstrate the importance of the inductive biases of neural networks themselves for OOD generalization, and suggest that many of our intuitions about neural network feature learning and generalization may need to be rethought.

Below, we walk through the main content of the paper.

Research background

Generalization under changing data distributions (i.e., OOD generalization) is one of the key indicators of whether a machine learning system can be deployed in real-world environments. However, current neural networks often suffer significant performance losses in OOD scenarios. The mainstream explanation for OOD failure in the literature is spurious correlations in the representation: the model learns features that are correlated with the task objective but have no causal relationship with it. When the correlation between such features and the task objective changes under distribution shift, a model that relies on them for prediction cannot maintain its original performance.

This explanation is intuitive and natural, and it has become the main line guiding OOD algorithm research in recent years: by designing better objective functions and regularizers, one hopes the model will learn representations free of spurious correlations and thus achieve stronger generalization. Much work in recent years has followed this line and tried to improve OOD generalization through algorithm design. However, recent studies show that many algorithms with built-in theoretical guarantees yield very limited improvements on OOD generalization tasks built from real data. Why does this happen? We believe the current difficulty in OOD generalization research may stem from two limitations of existing analyses:

  • Most existing research considers only the failure mode caused by spurious correlations;
  • Most existing research is limited to linear models and does not consider the nonlinearity of neural networks or the inductive bias of SGD, so existing analytical results do not necessarily apply to the neural networks we actually use.

In other words, current explanations and theoretical models of OOD generalization may not accurately reflect real-world distribution shift. We therefore believe that taking the inductive biases of neural networks and SGD into account is necessary for understanding OOD generalization with deep neural networks.

Experiment

First, we designed experiments to estimate the "performance upper bound" achievable by current OOD generalization algorithms based on representation-learning objectives. Guided by spurious-correlation theory, existing work mainly tries to constrain the model to learn OOD-generalizable representations by designing auxiliary representation-learning objective functions. To study whether optimizing such an objective can actually extract the desired representation, we designed an idealized scenario:

  • First, during training, we let the model explicitly fit the representations extracted by a teacher model that is known to generalize OOD, i.e., representation distillation. In the experiments, the teacher can be a large-scale pre-trained model such as CLIP. To control variables, we make the student model's architecture exactly identical to the teacher's.
  • Second, we train linear classifiers (linear probing) on the training set, on top of the representations given by the teacher model and the student model respectively.
  • Finally, we test the linear classifiers based on the teacher model and the student model on the in-distribution test set and the OOD test set respectively, to measure the OOD generalization of the representations extracted by the two models. (A minimal sketch of this pipeline follows the list.)
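For concreteness, here is a minimal sketch of this distillation-plus-linear-probing protocol, assuming PyTorch and scikit-learn. The encoders and data loaders are hypothetical stand-ins (the actual experiments use CLIP-scale models); only the three-step structure mirrors the setup above.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in encoders; the paper's experiments use CLIP-scale models.
# Student and teacher share the same architecture, as in the controlled setup.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is frozen

def distill(student, teacher, loader, epochs=10, lr=1e-3):
    """Step 1: fit the student's representation directly to the teacher's."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            loss = nn.functional.mse_loss(student(x), teacher(x))
            opt.zero_grad()
            loss.backward()
            opt.step()

@torch.no_grad()
def features(encoder, loader):
    feats, labels = zip(*[(encoder(x), y) for x, y in loader])
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

def probe_accuracy(encoder, train_loader, test_loader):
    """Steps 2-3: train a linear probe on frozen features, then evaluate."""
    Xtr, ytr = features(encoder, train_loader)
    Xte, yte = features(encoder, test_loader)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return clf.score(Xte, yte)

# Usage (train_loader / ood_loader assumed):
# distill(student, teacher, train_loader)
# print(probe_accuracy(teacher, train_loader, ood_loader))  # teacher probe
# print(probe_accuracy(student, train_loader, ood_loader))  # student probe
```

The gap the article describes is precisely the difference between the last two numbers.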

[Figure: in-distribution and OOD linear-probe accuracy for the standard model (blue), the distilled student model (orange), and the teacher model (purple).]

The experimental results are shown in the figure above, from which we draw two main findings:

  • Compared with a standard model (blue) that does not fit the teacher's representations during training, the linear classifier based on the student model (orange) does have better OOD generalization;
  • However, the OOD generalization of the linear classifier based on the student model (orange) still lags significantly behind that of the linear classifier based on the teacher model (purple).

This raises a natural question: since the student directly fits the teacher's representations, where does the generalization gap between the student and the teacher come from? We found this experimental phenomenon difficult to explain with existing theories:

  • First, the gap cannot be explained directly by spurious-correlation theory: since the student's and the teacher's representations are essentially the same on the training set, linear classifiers based on the two representations should be similarly affected by spurious features during training and should not exhibit such a large gap;
  • Another possible explanation is that the teacher model (e.g., CLIP) may have "seen" many OOD samples during its own pre-training, and can therefore extract features for OOD samples that are absent from the training set. However, recent research shows that even when all samples similar to the OOD test samples are removed from CLIP's pre-training data, CLIP still generalizes strongly OOD [1]. This perspective alone is thus insufficient to explain the teacher-student gap.

In short, we believe existing analyses are insufficient to explain the OOD generalization gap we actually observe in experiments. At the same time, since "directly fitting an OOD-generalizable representation" does not guarantee an OOD-generalizable model, we must consider not only the "objective" of representation learning but also its "process", namely the inductive bias induced by the feature-learning dynamics of neural networks. Although directly analyzing the feature-learning process of deep networks is theoretically very difficult, we found that even a two-layer ReLU network exhibits an interesting feature-learning tendency, "feature contamination", and that this tendency is directly related to the OOD generalization of neural networks.

Theory

In this section, we prove the existence of the "feature contamination" phenomenon for a binary classification problem with a two-layer ReLU network, and analyze where it comes from. Specifically, we assume the network's input is a linear combination of two kinds of features: "core features" and "background features". The distribution of the core features depends on the class label (think of the object to be classified in an image classification problem), while the distribution of the background features is independent of the label (think of the image background and other elements). To rule out interference from other factors, we additionally make the following assumptions about the two kinds of features (a data-generating sketch is given after the list):

  • The background features are uncorrelated with the label (this rules out the failure mode caused by spurious correlations).
  • The core features alone allow 100% prediction accuracy on the label (this rules out the failure mode caused by the training set lacking sufficient features).
  • The core features and background features lie in orthogonal subspaces (this rules out the failure mode caused by features that are hard to disentangle).
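As a concrete illustration, here is a minimal synthetic data generator satisfying the three assumptions above, written in NumPy. The dimensions, scales, and noise level are our own illustrative choices, not the exact construction in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_core, d_bg = 8, 8          # core / background feature counts (illustrative)
d = d_core + d_bg

# Orthonormal feature directions m_j: here simply the coordinate axes, so the
# core subspace (first d_core axes) is orthogonal to the background subspace.
M = np.eye(d)

def sample(n):
    y = rng.integers(0, 2, size=n) * 2 - 1                  # labels in {-1, +1}
    # Core coefficients: label-dependent mean, so y is ~100% predictable from them.
    core = y[:, None] + 0.1 * rng.standard_normal((n, d_core))
    # Background coefficients: the same distribution for both classes.
    bg = 1.0 + 0.1 * rng.standard_normal((n, d_bg))
    x = np.concatenate([core, bg], axis=1) @ M.T            # linear combination
    return x, y

X, y = sample(1000)
```

An OOD test set can then be produced by shifting only the background coefficients (e.g., flipping their mean from +1 to -1) while leaving the core features untouched.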

We found that even under these conditions, a neural network still learns the completely task-irrelevant background features alongside the core features. Because the two kinds of features are coupled in the network's weight space, a distribution shift of the background features also increases the network's error, degrading its OOD generalization. We therefore call this feature-learning preference of neural networks "feature contamination". Below, we explain in detail why feature contamination arises. A schematic of the overall analysis is shown below:

[Figure: schematic of the overall analysis.]

The key point of our analysis is that feature contamination is related to the fact that neurons in a neural network often have asymmetric activations across classes. Specifically, we can prove that after sufficiently many SGD iterations, at least a substantial fraction of the neurons in the network tend to remain positively correlated with samples of one class (we call these the neuron's positive samples, and denote their class by y_pos) and negatively correlated with samples of the other class (the neuron's negative samples, with class denoted y_neg). This makes the activations of these neurons class-asymmetric, as stated in Theorem 4.1:

[Theorem 4.1 (class-asymmetric neuron activations); see the paper for the formal statement.]
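This asymmetry is easy to probe empirically. The sketch below compares each hidden neuron's mean ReLU activation on the two classes; the weights here are random placeholders, whereas in practice one would pass in the weights of a trained network.

```python
import numpy as np

def activation_asymmetry(W, X, y):
    """Per-neuron mean ReLU activation on each class.

    A neuron is class-asymmetric when it activates mostly on one class
    (its 'positive samples') and stays near zero on the other.
    """
    H = np.maximum(X @ W.T, 0.0)            # (n_samples, n_neurons)
    return H[y == +1].mean(axis=0), H[y == -1].mean(axis=0)

# Illustrative usage with placeholder weights and Gaussian inputs:
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((32, 16))    # 32 hidden neurons, input dim 16
X = rng.standard_normal((1000, 16))
y = rng.integers(0, 2, size=1000) * 2 - 1
mean_pos, mean_neg = activation_asymmetry(W, X, y)
print("mean per-neuron activation gap:", np.abs(mean_pos - mean_neg).mean())
```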

How does such class asymmetry affect the network's feature-learning process? We first note that for the k-th hidden neuron, its weight vector w_k after the t-th iteration can be decomposed as:

[Equation image; per the surrounding text, the decomposition projects the weight onto the feature directions, roughly w_k^(t) = Σ_{j ∈ S_core ∪ S_bg} ⟨w_k^(t), m_j⟩ m_j, plus any component orthogonal to all the m_j.]

Here, S_core and S_bg denote the sets of core features and background features respectively, and each m_j corresponds to one core or background feature. The equation shows that a neuron's weight can be decomposed into its projections onto the different features (here we assume the m_j are mutually orthogonal unit vectors). Furthermore, we can prove that the projection of the negative gradient of w_k onto each background feature m_j, j ∈ S_bg, satisfies:

[Equation image: the projection of the negative gradient onto each background feature m_j, j ∈ S_bg; see the paper for the exact expression.]

For a neuron with class-asymmetric activations, Theorem 4.1 implies that its gradient is determined mainly by its positive samples y = y_pos and is almost independent of its negative samples y = y_neg. Consequently, the core features and the background features present in the positive samples both receive positive gradient projections, and this process is independent of any correlation between the features and the label.

As stated in Theorem 4.2, we prove that after sufficiently many SGD iterations, the accumulation of these gradient projections causes the features learned by a neuron to contain both the core features and the coupled background features:

[Theorem 4.2 (accumulated gradient projections make the learned weights couple core and background features); see the paper for the formal statement.]
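The dynamic described above can be reproduced numerically. The following self-contained sketch trains a two-layer ReLU network (with a fixed output layer) by full-batch SGD on data satisfying the earlier assumptions, then reports the average weight projections onto the core and background directions; all hyperparameters are illustrative choices rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d_core, d_bg, width, n = 8, 8, 64, 2000
d = d_core + d_bg

def sample(n):
    y = rng.integers(0, 2, size=n) * 2 - 1
    core = y[:, None] + 0.1 * rng.standard_normal((n, d_core))   # label-dependent
    bg = 1.0 + 0.1 * rng.standard_normal((n, d_bg))              # label-independent
    return np.concatenate([core, bg], axis=1), y

X, y = sample(n)
W = 0.01 * rng.standard_normal((width, d))        # trainable hidden weights
a = rng.choice([-1.0, 1.0], size=width) / width   # fixed output weights

for step in range(2000):
    H = np.maximum(X @ W.T, 0.0)                  # ReLU activations, (n, width)
    f = H @ a                                     # network output, (n,)
    # Logistic loss l = log(1 + exp(-y f))  =>  dl/df = -y / (1 + exp(y f))
    g = -y / (1.0 + np.exp(y * f))
    mask = (H > 0).astype(float)                  # ReLU gate
    grad_W = a[:, None] * ((mask * g[:, None]).T @ X) / n
    W -= 0.5 * grad_W

# Feature directions are the coordinate axes here, so the projections <w_k, m_j>
# are just weight coordinates. Background projections staying far from zero
# is the signature of feature contamination.
print("mean |<w_k, m_core>| =", np.abs(W[:, :d_core]).mean())
print("mean |<w_k, m_bg>|   =", np.abs(W[:, d_core:]).mean())
```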

Because the core and background features are coupled in the neurons' weights, a negative distribution shift of the background features lowers the neurons' activations and causes additional OOD error. As stated in Theorem 4.3, we quantitatively characterize the impact of feature contamination on the ID and OOD generalization risks:

[Theorem 4.3 (impact of feature contamination on the ID and OOD generalization risks); see the paper for the formal statement.]
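To see intuitively how this coupling turns a background shift into OOD error, consider a single neuron. The back-of-the-envelope calculation below is our simplification, not a restatement of Theorem 4.3; it assumes orthonormal directions m_core and m_bg.

```latex
% Learned weight couples a core and a background component:
w_k = \alpha\, m_{\mathrm{core}} + \beta\, m_{\mathrm{bg}}, \qquad \alpha, \beta > 0.
% In-distribution input carrying both features:
x = m_{\mathrm{core}} + m_{\mathrm{bg}} \;\Rightarrow\;
\mathrm{ReLU}(\langle w_k, x \rangle) = \alpha + \beta.
% OOD input with a negatively shifted background feature:
x' = m_{\mathrm{core}} - m_{\mathrm{bg}} \;\Rightarrow\;
\mathrm{ReLU}(\langle w_k, x' \rangle) = \max(\alpha - \beta,\, 0).
```

The activation drops by at least β even though the core feature in the input is unchanged, so any classifier calibrated to the in-distribution activation level incurs extra OOD error.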

Meanwhile, to further show that feature contamination stems from the nonlinearity of the network's activation function, we prove that once the nonlinearity is removed, feature contamination no longer occurs:

[Theorem image: in the linear (nonlinearity-removed) network, feature contamination no longer occurs; see the paper for the formal statement.]
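A quick sanity check under the assumptions of the numerical sketch above: if the ReLU `np.maximum(X @ W.T, 0.0)` is replaced by the identity map `X @ W.T` (and the gate `mask` by all-ones), the per-class gradient contributions on the background directions cancel in expectation, since the background mean is the same for both classes while the loss gradient changes sign with the label; the background projections should then stay near zero, consistent with this result.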

As shown in the figure below, we verified our theoretical results through numerical experiments. Moreover, beyond the two-layer-ReLU-network-plus-SGD setting, we extended our conclusions to more general settings, including other activation functions and optimizers with adaptive step sizes; the results, shown in Figure 3(d), indicate that feature contamination is also prevalent in these more general settings.

[Figure 3: numerical verification of the theory; panel (d) covers other activation functions and adaptive-step-size optimizers.]

We have also provided further experimental evidence and feature visualizations showing that feature contamination also occurs in the deep networks we use daily, such as ResNet and Vision Transformer, and that it can explain the OOD generalization gap observed in our experiments. Readers interested in this part can refer to Chapter 5 of our original paper.

Summary and discussion

Finally, we list some directions that we believe are important for future in-depth research, and we welcome anyone interested to discuss them with us:

  • Deeper networks: Although we have experimentally shown that deep networks also suffer from feature contamination, our theoretical analysis so far covers only a two-layer ReLU network. We suspect that feature contamination may be a more general concept, with class-asymmetric neuron activation being only one of its causes. Analyzing deeper networks or more complex architectures (e.g., ones with normalization layers) may reveal further causes of feature contamination and suggest targeted solutions.
  • The role of pre-training: The theoretical analysis in this paper only considers training from scratch, yet the models we actually use are often pre-trained, and there is ample experimental evidence that pre-training improves OOD generalization. Is this improvement essentially related to mitigating feature contamination, and if so, how does pre-training achieve it?
  • How to solve feature contamination: Although our work identifies the feature contamination problem, it does not yet give a definitive solution. However, our follow-up work shows that similar problems also arise when fine-tuning large models, and that some gradient-adjustment-based methods can indeed alleviate the problem and thereby significantly improve the generalization of fine-tuned models. We will release this follow-up work in the future; stay tuned.

About the author | Zhang Tianren, the author of this article, is a PhD candidate in the Department of Automation at Tsinghua University, where he also received his bachelor's degree. His advisor is Professor Chen Feng. His doctoral research focuses on the theory and algorithms of representation learning and generalization in machine learning, and he has published in top machine learning conferences and journals including ICML, NeurIPS, ICLR, and IEEE TPAMI.

Author affiliation | Tsinghua University VIPLAB

Contact email | zhangtr22@mails.tsinghua.edu.cn

References

[1] Mayilvahanan, P., Wiedemer, T., Rusak, E., Bethge, M., and Brendel, W. Does CLIP's generalization performance mainly stem from high train-test similarity? In International Conference on Learning Representations, 2024.
