Table of Contents
Key Concepts
Feature circuits
Superposition and sparse autoencoders
Case Study: Subject-Verb Agreement
Building the feature circuit
Toy model
GPT2-Small
Conclusion
References

Formulation of Feature Circuits with Sparse Autoencoders in LLM

Feb 26, 2025, 01:46 AM

Large Language Models (LLMs) have made remarkable progress and can perform a variety of tasks, from generating human-like text to answering questions. However, understanding how these models work internally remains challenging, in part because of a phenomenon called superposition, where multiple features are mixed into a single neuron, making it very difficult to extract human-understandable representations from the original model structure. This is why methods like sparse autoencoders can help disentangle features and improve interpretability.

In this blog post, we will use a sparse autoencoder to look for feature circuits in a particularly interesting case, subject-verb agreement, and understand how the model components contribute to the task.

Key Concepts

Feature circuits

In the context of neural networks, a feature circuit describes how the network learns to combine input features to form complex patterns at higher levels. We use the metaphor of a "circuit" because this processing is reminiscent of how signals are processed and combined in electronic circuits. Feature circuits form gradually through the connections between neurons and layers: each neuron or layer transforms its input features, and their interactions yield feature combinations that work together to make the final prediction.

The following is an example of a feature circuit: in many vision networks, we can find "a circuit": a family of units detecting curves at different orientations. These curve detectors are implemented primarily from earlier, less sophisticated curve detectors and line detectors, and they are in turn used in the next layer to build 3D geometry and complex shape detectors [1].

In the following sections, we will examine a feature circuit for the subject-verb agreement task in an LLM.

Superposition and sparse autoencoders

In the context of machine learning, we sometimes observe superposition: the phenomenon where a single neuron in the model represents multiple overlapping features rather than a single, distinct one. For example, InceptionV1 contains a neuron that responds to cat faces, fronts of cars, and cat legs.
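As a rough toy illustration of superposition (the vectors below are invented and have nothing to do with the actual InceptionV1 neuron), a single neuron can fire for two unrelated input patterns when its weight vector overlaps with both:

<code>import torch

# Toy illustration: one neuron whose weights overlap two unrelated "features",
# so it activates for both inputs (a polysemantic neuron). All numbers are made up.
cat_face  = torch.tensor([1., 0., 0., 1.])
car_front = torch.tensor([0., 1., 1., 0.])
neuron_w  = 0.9 * cat_face + 0.8 * car_front   # one weight vector entangles both features

print(float(neuron_w @ cat_face))   # strong activation for the cat-face pattern
print(float(neuron_w @ car_front))  # strong activation for the car-front pattern</code>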

This is where the sparse autoencoder (SAE) comes in.

SAEs help us disentangle the network's activations into a sparse set of features. These sparse features are often human-understandable, allowing us to better understand the model. By applying an SAE to the hidden-layer activations of an LLM, we can isolate the features that contribute to the model's output.

You can find details on how SAE works in my previous blog post.
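As a shapes-only sketch of the idea (the weights below are untrained and the dimensions are made up), an SAE maps a dense activation vector into a larger, mostly-zero feature vector and then reconstructs the original activation from it:

<code>import torch
import torch.nn as nn

torch.manual_seed(0)
h = torch.randn(1, 8)               # a dense hidden activation from some layer
encoder = nn.Linear(8, 32)          # expand into 32 candidate features
decoder = nn.Linear(32, 8)          # map features back to activation space

features = torch.relu(encoder(h))   # after training, only a few entries stay non-zero
reconstruction = decoder(features)  # after training, this approximates h
print(features.shape, reconstruction.shape)</code>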

Case Study: Subject-Verb Agreement

Subject-verb agreement

Subject-verb agreement is a basic grammatical rule in English: the subject and the verb of a sentence must agree in number, i.e., both singular or both plural. For example:

  • "The cat runs." (singular subject, singular verb)
  • "The cats run." (plural subject, plural verb)

This rule is simple for humans, yet essential for tasks such as text generation, translation, and question answering. But how do we know whether an LLM has really learned it?

We will now explore how an LLM forms a feature circuit for this task.

Building the feature circuit

Let's now walk through the process of building the feature circuit. We will proceed in four steps:

  1. We first feed sentences into the model. For this case study, we consider the following sentences:
    • "The cat runs." (singular subject)
    • "The cats run." (plural subject)
  2. We run the model on these sentences to obtain hidden activations. These activations represent how the model processes the sentences at each layer.
  3. We pass the activations to the SAE to "decompress" them into features.
  4. We construct the feature circuit as a computational graph:
    • Input nodes represent the singular and plural sentences.
    • Hidden nodes represent the model layers that process the input.
    • Sparse nodes represent the features obtained from the SAE.
    • The output node represents the final decision, in this case: runs or run.

Toy model

We first build a toy language model. The code below may not have real linguistic meaning, but it illustrates the idea: a simple two-layer neural network.

For subject-verb agreement, the model should:

  • Take as input an encoding of a sentence with a singular or plural subject.
  • Convert this information into an abstract representation in the hidden layer.
  • Select the correct verb form as output.
<code># ====== Define the base model (simulating subject-verb agreement) ======
import torch
import torch.nn as nn

class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)  # 2 inputs -> 4 hidden activations
        self.output = nn.Linear(4, 2)  # 4 hidden -> 2 outputs (runs/run)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))  # compute hidden activations
        return self.output(x)          # predict the verb form</code>

It is not clear what is happening inside the hidden layer, so we introduce the following sparse autoencoder:

<code># ====== Define the sparse autoencoder (SAE) ======
class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # expand into sparse features
        self.decoder = nn.Linear(hidden_dim, input_dim)  # reconstruct the original activations
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))  # sparse activations
        decoded = self.decoder(encoded)       # reconstructed activations
        return encoded, decoded</code>

We train the base model SubjectVerbAgreementNN and the sparse autoencoder on sentences designed to represent singular and plural verb forms, such as "The cat runs" and "The babies run". As before, since these are toy models, the data may not carry real linguistic meaning.
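The training loop is not shown in the original snippet; below is a minimal sketch of how the two models might be trained together, assuming a one-hot input encoding ([1, 0] for a singular subject, [0, 1] for a plural one), an L1 sparsity penalty on the SAE features, and illustrative hyperparameters:

<code># Minimal, illustrative training sketch (data encoding and hyperparameters are assumptions)
import torch
import torch.nn as nn

model = SubjectVerbAgreementNN()
sae = SparseAutoencoder(input_dim=4, hidden_dim=8)   # 4 hidden activations -> 8 sparse features

# Toy data: [1, 0] = singular subject ("The cat ..."), [0, 1] = plural subject ("The babies ...")
X = torch.tensor([[1., 0.], [0., 1.]] * 50)
y = torch.tensor([0, 1] * 50)                        # 0 = "runs", 1 = "run"

opt = torch.optim.Adam(list(model.parameters()) + list(sae.parameters()), lr=1e-2)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

for _ in range(200):
    hidden = model.relu(model.hidden(X))             # hidden activations of the base model
    logits = model.output(hidden)                    # verb prediction
    encoded, decoded = sae(hidden.detach())          # sparse features + reconstruction
    loss = (ce(logits, y)                            # agreement loss
            + mse(decoded, hidden.detach())          # SAE reconstruction loss
            + 1e-3 * encoded.abs().mean())           # L1 sparsity penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print(model(torch.tensor([[1., 0.]])).argmax(dim=-1))  # expected: tensor([0]) -> "runs"</code>

With this joint objective, the base model learns the agreement task while the SAE learns a sparse reconstruction of its hidden activations.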

Now we visualize the feature circuit. As mentioned earlier, a feature circuit is a group of units that process specific features. In our model, the circuit includes:

  1. The hidden layer, which converts linguistic attributes into an abstract representation.
  2. The SAE, whose independent features contribute directly to the subject-verb agreement task.

As shown in the figure, we visualize the feature circuit as a graph:

  • The hidden activations and the encoder outputs are both nodes of the graph.
  • We also have an output node for the correct verb.
  • The edges are weighted by activation strength, showing which paths matter most in the subject-verb agreement decision. For example, you can see that the path from H3 to F2 plays an important role (a construction sketch follows this list).
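The exact plotting code is not shown here; below is a minimal sketch of how such a circuit graph could be assembled with networkx, reusing the trained toy model and SAE from above (the node names H1–H4 and F1–F8, the edge-weight proxy, and the 0.1 threshold are illustrative assumptions):

<code># Illustrative sketch: assemble the feature-circuit graph for the singular-subject input
import networkx as nx
import torch

G = nx.DiGraph()
x = torch.tensor([[1., 0.]])                         # singular-subject input
hidden = model.relu(model.hidden(x)).detach()        # hidden activations H1..H4
encoded, _ = sae(hidden)
encoded = encoded.detach()                           # sparse features F1..F8
probs = torch.softmax(model.output(hidden), dim=-1)
verb = "runs" if probs[0, 0] > probs[0, 1] else "run"

for i, h in enumerate(hidden[0]):
    G.add_edge("input", f"H{i+1}", weight=float(h))
    for j, f in enumerate(encoded[0]):
        w = float(h * f)                             # crude proxy for path importance
        if w > 0.1:
            G.add_edge(f"H{i+1}", f"F{j+1}", weight=w)

for j, f in enumerate(encoded[0]):
    if float(f) > 0.1:
        G.add_edge(f"F{j+1}", verb, weight=float(f))

print(G.edges(data=True))                            # edges weighted by activation strength</code>

Plotting G (for example with nx.draw) then yields a diagram like the one described above, where edge thickness can be mapped to the weights.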

GPT2-Small

For a real case, we run similar code on GPT2-small and show a feature circuit diagram representing the decision to select the singular verb.
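The full GPT2-small pipeline is longer than the toy example; the sketch below shows one way to collect hidden activations and compare the singular/plural verb logits using the Hugging Face transformers library (the layer index, prompt, and token handling are illustrative assumptions, and an SAE for that layer would have to be trained separately):

<code># Illustrative sketch: hidden activations and verb logits from GPT2-small
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
gpt2.eval()

inputs = tok("The cat", return_tensors="pt")
with torch.no_grad():
    out = gpt2(**inputs, output_hidden_states=True)

layer8 = out.hidden_states[8][0, -1]        # activation at layer 8, last token (assumed layer)
# features, _ = sae_gpt2(layer8)            # an SAE trained on this layer's activations would go here

logits = out.logits[0, -1]
runs_id = tok.encode(" runs")[0]
run_id = tok.encode(" run")[0]
print("logit(' runs') =", float(logits[runs_id]), " logit(' run') =", float(logits[run_id]))</code>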

Conclusion

Feature circuits help us understand how different parts of a complex LLM contribute to the final output. We have shown that it is possible to form feature circuits with SAEs for the subject-verb agreement task.

However, we must admit that this approach still requires some human intervention, because we do not always know whether a circuit can really be formed without careful design.

References

[1] Chris Olah et al., "Zoom In: An Introduction to Circuits", Distill, 2020.
