After one day of training on a single GPU, Transformer can achieve 99% accuracy in adding 100-digit numbers.

Jun 13, 2024, 02:06 PM
Multiplication and sorting also work.

Since it was proposed in 2017, the Transformer has become the mainstream architecture for large AI models and has firmly held center stage.

However, as all researchers have to admit, the Transformer performs extremely poorly on arithmetic tasks, even simple addition. This flaw largely stems from the Transformer's inability to track the exact position of each digit within a long span of digits.

To solve this problem, researchers from the University of Maryland, CMU, and other institutions took up the challenge. They addressed it by adding to each digit an embedding that encodes the digit's position relative to the start of its number. The study found that just one day of training on 20-digit numbers with a single GPU is enough to reach state-of-the-art performance, with up to 99% accuracy on 100-digit addition problems.

Paper address: https://arxiv.org/pdf/2405.17399

Project address: https://github.com/mcleish7/arithmetic

Title: Transformers Can Do Arithmetic with the Right Embeddings

Specifically, the researchers show that a simple modification to the data representation can resolve this shortcoming. They propose Abacus embeddings to encode the position of each digit token within the current number. Used in conjunction with standard positional embeddings, Abacus embeddings yield significant improvements in Transformer accuracy on arithmetic tasks, such that models trained on operands with at most 20 digits generalize to problems with 120-digit operands. This represents a 6x scaling factor over the training length, compared with the previous state-of-the-art factor of only 2.5x. To the authors' knowledge, this is the longest learned addition demonstrated to date.

Beyond optimizing the Transformer's arithmetic performance and generalization, the paper also explores several other ways to improve performance. The authors find that inserting skip connections between the input injection layer and each decoder layer reduces generalization error by 50% relative to the Abacus-embedding baseline. The paper also finds that a looped Transformer architecture used in conjunction with the embeddings achieves almost perfect generalization on the addition problem.

The contributions of this paper can be summarized as follows:

  • This paper proposes a new positional embedding, called the Abacus embedding, that better captures the significance of each digit, thereby achieving near-perfect in-distribution generalization;

  • The study shows that combining Abacus embeddings with input injection and a looped transformer further improves performance: out-of-distribution accuracy rises from 92.9% to 99.1%, an 87% reduction in error compared with using the embeddings in the standard architecture alone;

  • The researchers extend these findings to more complex problems, including multiplication and sorting, demonstrating length generalization in these domains as well.

Achieving length generalization on addition

The authors study a series of methods aimed at improving the arithmetic performance of language models trained from scratch. They focus on two hypotheses: (1) the positional information of individual digits within a number is being lost; and (2) recurrence can improve the Transformer architecture's reasoning ability on multi-step arithmetic problems. The authors briefly describe the training and evaluation setup before detailing each improvement.

Experimental setup

The authors trained a decoder-only causal language model to solve the addition problem.

They considered two standard transformer architectures. First, a standard autoregressive transformer model in which multiple decoder layers are stacked in a feed-forward fashion. Second, they augment this standard model with input injection, which adds the input embeddings to the input of each decoder layer. The authors depict these architectures in Figure 20.
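The following is a minimal PyTorch sketch of the two variants just described; the layer sizes and class names are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class DecoderOnlyStack(nn.Module):
    """Plain decoder-only stack, with optional input injection (illustrative sketch)."""
    def __init__(self, num_layers=16, d_model=512, n_heads=8, input_injection=False):
        super().__init__()
        # An encoder layer driven with a causal mask behaves as a decoder-only layer.
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(num_layers)
        )
        self.input_injection = input_injection

    def forward(self, x):  # x: (batch, seq, d_model) token embeddings
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        x0 = x  # keep a copy of the input embeddings
        for layer in self.layers:
            if self.input_injection:
                x = x + x0  # re-add the input embeddings before every layer
            x = layer(x, src_mask=mask, is_causal=True)
        return x
```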

Abacus embeddings help align digits

Through prior research and preliminary experiments, the authors found that even when the input numbers are presented least-significant digit first and the training data is stratified and abundant (millions of examples), standard transformers struggle to learn multi-digit addition. They also observed that when humans perform long addition, they first align digits of the same significance into columns. The authors' first hypothesis is therefore that the significance of each digit is not easily represented by the transformer, and that this subproblem poses a greater obstacle than the actual addition itself.

To address the transformer's limitations in representing positional information, the authors designed a special positional embedding that encodes the position of each digit relative to the start of the current number. The authors call these Abacus embeddings. They apply the same positional embedding to all digits of the same significance, providing an explicit signal the model can use to align digits, as shown in Figure 2.
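As a concrete illustration, the sketch below is a guess at the indexing scheme based on the description above (not the authors' implementation): every digit token receives an index counting from the first digit of its number, so digits of equal significance in different operands share an index.

```python
def abacus_positions(tokens):
    """Index each digit from the start of its number; non-digit tokens reset to 0."""
    positions, offset = [], 0
    for tok in tokens:
        offset = offset + 1 if tok.isdigit() else 0
        positions.append(offset)
    return positions

# Digits of equal significance in both operands share an index, which is the
# explicit alignment signal the model can exploit:
print(abacus_positions(list("123+456=579")))
# [1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]
```

These indices would then select rows of a learned embedding table that is added to the token embeddings, alongside any standard positional embedding.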

Abacus embeddings solve the addition problem

For standard transformer architectures, Abacus embeddings improve generalization performance to 100 digits and beyond. In Figure 3 (left), the authors highlight the comparative advantage of Abacus embeddings over standard transformer architectures and embeddings on addition, averaging accuracy over all cases across three models.

Figure 1 also shows accuracy results for standard transformer models trained with FIRE and Abacus embeddings, tested both in-domain (ID) and out-of-domain (OOD).

Loops in the transformer improve performance

Having addressed the positional embedding problem, the authors next explored whether a looped architecture can further improve the transformer's ability to perform multi-digit addition. They use the term "recurrent block" for a set of decoder layers with distinct weights, and "recurrence" for the number of times the recurrent block is repeated. The term effective depth refers to the number of layers used in a transformer, whether or not their weights are unique. Unless otherwise stated, they use a maximally looped architecture, which loops a single unique layer to reach the effective depth. They also use input injection and residual connections to propagate a copy of the input to each layer in the network.
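A minimal sketch of this looped setup, reusing the idea from the earlier DecoderOnlyStack snippet (again an illustrative assumption, not the authors' code): the recurrent block's weights are shared across every recurrence, and input injection re-adds the input embeddings on each pass.

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """A block of `layers_per_block` decoder layers applied `recurrences` times."""
    def __init__(self, layers_per_block=1, recurrences=16, d_model=512, n_heads=8):
        super().__init__()
        # Effective depth = layers_per_block * recurrences (e.g. 1 x 16).
        self.block = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(layers_per_block)
        )
        self.recurrences = recurrences

    def forward(self, x, recurrences=None):
        n = recurrences or self.recurrences
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        x0, h = x, x  # x0: copy of the input embeddings for input injection
        for _ in range(n):        # the same weights are reused on every pass
            h = h + x0            # input injection
            for layer in self.block:
                h = layer(h, src_mask=mask, is_causal=True)
        return h
```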

Advantages of Loops

In Figure 3 (right), the authors compare all architecture variants trained with FIRE and NoPE embeddings on addition with operands of up to 40 digits. Despite having only about one-tenth as many parameters as the other models, the looped transformer (with input injection and progressive loss) achieves the best out-of-distribution performance with either positional embedding. In Figure 8, the authors demonstrate the robustness of this result across a variety of training data sizes.

For recurrent models, the number of recurrences can be varied on each forward pass during training. This tends to improve the model's generalization to harder tasks at test time and is known as progressive loss computation. The loss function is a convex combination of the loss values from two forward passes, one using the nominal recurrence count (16 for a 1 × 16 model) and the other using a randomly chosen smaller count.
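A sketch of that progressive loss, assuming a model like the LoopedTransformer above whose forward pass accepts a recurrence count and returns per-token logits (the equal mixing weight and the sampling rule are assumptions):

```python
import random
import torch.nn.functional as F

def progressive_loss(model, x, targets, max_recurrences=16, alpha=0.5):
    """Convex combination of the loss at the full and at a random smaller recurrence count."""
    k = random.randint(1, max_recurrences - 1)  # randomly chosen smaller count
    loss_full = F.cross_entropy(
        model(x, recurrences=max_recurrences).transpose(1, 2), targets)
    loss_rand = F.cross_entropy(
        model(x, recurrences=k).transpose(1, 2), targets)
    return alpha * loss_full + (1 - alpha) * loss_rand
```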

Next, the authors explore the effect of changing the size of the recurrent block while keeping the effective depth fixed. They repeatedly halve the number of layers in the block and double the recurrence count, going from a model with 16 layers in the block and a single recurrence (16 × 1, the standard transformer) to a model with a single layer in the block and 16 recurrences (1 × 16).
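In terms of the hypothetical LoopedTransformer sketch above, the configurations compared here all share an effective depth of 16:

```python
standard = LoopedTransformer(layers_per_block=16, recurrences=1)   # 16 x 1
half     = LoopedTransformer(layers_per_block=8,  recurrences=2)   # 8 x 2
maximal  = LoopedTransformer(layers_per_block=1,  recurrences=16)  # 1 x 16
```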

Analyzing these results in Figure 4, the authors found that in some cases combining loops with Abacus embeddings can further improve performance. Specifically, on OOD problems, the model with two recurrences (8 × 2) achieves half the error of the purely non-recurrent model (16 × 1), and on OOD problems with more than 100 digits its accuracy also improves slightly.

Finally, in Appendix A.7.3, the authors vary the effective depth of the model to analyze the effect of parameter count on this task, covering Abacus, FIRE, and NoPE embeddings. While the experiments in Figure 4 are a fair comparison across depths, the purely standard transformer model has many more parameters than the corresponding recurrent models. In Table 3 of the appendix, the authors report parameter counts to the nearest million.

Experiments

The researchers studied not only addition problems but also multiplication and sorting.

Integer multiplication

Figure 5 shows that the model with Abacus embeddings surpasses previous work on in-distribution multiplication of up to 15-digit numbers, without requiring the operands to be zero-padded to the same length. In particular, the study highlights that combining Abacus embeddings with FIRE also improves accuracy on the hardest problems in the distribution (bottom right) compared with a baseline using FIRE alone.

Array sorting

Table 1 shows the performance of a standard transformer (eight layers) trained with different embeddings: FIRE, Abacus, and their combination. The results show that the combined embedding approach enhances the model's generalization ability.

As shown in Table 2, the researchers observed mixed results when pairing the Abacus + FIRE embedding combination with different model architectures (effective depth of 8).

Abacus and related embeddings

Figure 6 illustrates the real potential of integrating Abacus embeddings into more general systems, showing that Abacus embeddings combined with FIRE unlock problem-solving capabilities far beyond what FIRE embeddings achieve alone.

For more research details, please refer to the original paper.
