
Human preference is the yardstick! SPPO alignment lets large language models play against each other and against themselves

Jun 06, 2024, 06:32 PM

The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

Richard Sutton wrote in "The Bitter Lesson": "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."

Self-play is exactly such a method: it combines search and learning to fully exploit and scale up computation.

At the beginning of this year, Professor Gu Quanquan's team at the University of California, Los Angeles (UCLA) proposed Self-Play Fine-Tuning (SPIN), which substantially improves an LLM's capabilities through self-play alone, without any additional fine-tuning data.

Recently, Professor Gu Quanquan's team and Professor Yiming Yang's team at Carnegie Mellon University (CMU) jointly developed an alignment technique called Self-Play Preference Optimization (SPPO). The new method optimizes the behavior of large language models through a self-play framework so that they better match human preferences, once again letting the model spar with itself.


• Paper title: Self-Play Preference Optimization for Language Model Alignment
• Paper link: https://arxiv.org/pdf/2405.00675.pdf

Technical background and challenges
Large language models (LLMs) are becoming an important driving force in artificial intelligence, performing well across a wide range of tasks thanks to their strong text generation and understanding capabilities. Impressive as these capabilities are, making the models' output behavior match the needs of real applications usually requires fine-tuning through an alignment process.

The key to this process is adjusting the model so that it better reflects human preferences and behavioral norms. Common methods include reinforcement learning from human feedback (RLHF) and Direct Preference Optimization (DPO).
Reinforcement learning from human feedback (RLHF) relies on explicitly maintaining a reward model to adjust and refine the large language model. For example, InstructGPT first trains a reward function that follows the Bradley-Terry model on human preference data, and then uses a reinforcement learning algorithm such as Proximal Policy Optimization (PPO) to optimize the large language model. Last year, researchers proposed Direct Preference Optimization (DPO).

Unlike RLHF, which maintains an explicit reward model, DPO implicitly assumes the Bradley-Terry model and can be used to optimize the large language model directly. Existing work has attempted to push large models further by applying DPO over multiple iterations (Figure 1). A sketch contrasting the two objectives follows.
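To make the contrast concrete, here is a minimal PyTorch-style sketch (ours, not from the paper) of the two objectives: the Bradley-Terry reward-model loss used in the RLHF pipeline, and the DPO loss that folds the reward into the policy itself. The tensor arguments and the `beta` hyperparameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bradley_terry_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """RLHF step 1: train a reward model so that P(chosen > rejected) = sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def dpo_loss(policy_chosen_logps: torch.Tensor, policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO: skip the explicit reward model; the implicit reward is beta * log(pi_theta / pi_ref)."""
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```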

Figure 1. Iterative optimization methods based on the Bradley-Terry model lack theoretical understanding and guarantees

Parametric models such as Bradley-Terry provide a numerical score for each choice. While these models provide reasonable approximations of human preferences, they fail to fully capture the complexity of human behavior.

These models typically assume that preferences between options are monotonic and transitive, whereas empirical evidence often shows that human decision-making is inconsistent and nonlinear. Tversky's research, for example, observed that human decisions are influenced by multiple factors and exhibit inconsistencies. The toy example below illustrates the problem.
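As a toy illustration (ours, not from the paper): if preferences among three responses are cyclic, like rock-paper-scissors, no single Bradley-Terry score per response can reproduce the observed pairwise win rates, because scalar scores force transitivity. The numbers below are made up for the example.

```python
import numpy as np

# Hypothetical pairwise preferences: P[i, j] = probability that response i is preferred over response j.
# A cycle: A beats B, B beats C, and C beats A, each 80% of the time.
P = np.array([
    [0.5, 0.8, 0.2],   # A
    [0.2, 0.5, 0.8],   # B
    [0.8, 0.2, 0.5],   # C
])

# Bradley-Terry needs scores s with P[i, j] = sigmoid(s[i] - s[j]), which implies
# transitivity (A > B and B > C would force A > C). By symmetry, the best fit here
# assigns every response the same score, predicting 0.5 everywhere and erasing the
# structure of the preferences.
print(P.mean(axis=1))  # each response wins only 50% on average, despite strong pairwise preferences
```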

Theoretical basis and method of SPPO
Figure 2. Two imagined language models playing against each other round after round

Against this backdrop, the authors propose SPPO, a new self-play framework that not only comes with provable guarantees for solving two-player constant-sum games, but can also be scaled up to efficiently fine-tune large language models.

Specifically, the paper formally casts the RLHF problem as a two-player constant-sum game (Figure 2). The goal of this work is to identify the Nash equilibrium strategy, which on average provides a response that is preferred at least as often as that of any other strategy.
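In our notation (a transcription of the standard setup, not copied from the paper), the game and its equilibrium can be written as follows, where the payoff of one policy against another is its expected win rate under the preference model:

```latex
% Payoff of policy \pi against policy \pi': the expected probability that its
% response is preferred, averaged over prompts x and responses y, y'.
P(\pi \succ \pi') = \mathbb{E}_{x \sim \mathcal{X}}\,
  \mathbb{E}_{y \sim \pi(\cdot \mid x),\; y' \sim \pi'(\cdot \mid x)}
  \big[\mathbb{P}(y \succ y' \mid x)\big],
\qquad
P(\pi \succ \pi') + P(\pi' \succ \pi) = 1 \quad \text{(constant sum).}

% The Nash equilibrium strategy maximizes its worst-case win rate; at equilibrium
% it is preferred over any other policy at least half of the time.
\pi^{*} = \arg\max_{\pi}\, \min_{\pi'}\; P(\pi \succ \pi').
```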

To approximately identify the Nash equilibrium strategy, the authors adopt the classic online adaptive algorithm with multiplicative weights as the high-level framework for solving the two-player game.

Within each step of this framework, the algorithm approximates the multiplicative-weights update through a self-play mechanism: in each round, the large language model is fine-tuned against its previous-round self, using synthetic data generated by the model together with annotations from a preference model.

Specifically, in each round the large language model generates several responses for each prompt; based on the preference model's annotations, the algorithm estimates the win rate of each response; it then fine-tunes the model's parameters so that responses with high win rates become more likely to be generated (Figure 3). A simplified sketch of one such round follows.
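Below is a heavily simplified sketch of one SPPO round as we read the description above; it is not the authors' code. It samples K responses per prompt from the previous-round model, estimates each response's win rate with a preference model, and regresses the log-probability ratio between the new and previous model onto the scaled, centered win rate. The callables, the value of `eta`, and the exact loss form are assumptions for illustration.

```python
from typing import Callable, List
import torch

def sppo_round_loss(
    prompts: List[str],
    sample_from_prev: Callable[[str, int], List[str]],   # K responses y ~ pi_t(. | x), previous-round model
    win_rate_vs_prev: Callable[[str, str], float],        # estimated P(y beats pi_t | x) from the preference model
    logp_current: Callable[[str, str], torch.Tensor],     # log pi_theta(y | x), differentiable
    logp_prev: Callable[[str, str], torch.Tensor],        # log pi_t(y | x), frozen
    K: int = 5,
    eta: float = 1.0,   # step-size / scaling hyperparameter of the multiplicative-weights update
) -> torch.Tensor:
    """One round of self-play: fit pi_theta so that log(pi_theta / pi_t) tracks eta * (win rate - 1/2)."""
    losses = []
    for x in prompts:
        for y in sample_from_prev(x, K):          # 1. generate candidate responses from the previous-round self
            w = win_rate_vs_prev(x, y)            # 2. win rate of y against the previous-round model
            log_ratio = logp_current(x, y) - logp_prev(x, y).detach()
            losses.append((log_ratio - eta * (w - 0.5)) ** 2)   # 3. high-win-rate responses get pushed up
    return torch.stack(losses).mean()             # minimize with a standard optimizer, then start the next round
```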


Figure 3. The goal of the self-play algorithm: the language model fine-tunes itself to outperform its previous-round self

Experimental design and results

In the experiments, the research team used Mistral-7B as the base model and 60,000 prompts from the UltraFeedback dataset for unsupervised training. They found that through self-play the model significantly improved its performance on multiple evaluation benchmarks, such as AlpacaEval 2.0 and MT-Bench, which are widely used to assess the quality and relevance of model-generated text.

With SPPO, the model improves not only in the fluency and accuracy of its generated text but, more importantly, in how well it conforms to human values and preferences.


Figure 4. The SPPO model improves significantly on AlpacaEval 2.0, outperforming baseline methods such as iterative DPO

On AlpacaEval 2.0 (Figure 4), the length-controlled win rate of the SPPO-optimized model rose from 17.11% for the base model to 28.53%, a marked improvement in capturing human preferences. After three rounds of SPPO, the model is significantly better on AlpacaEval 2.0 than multi-round iterative DPO, IPO, and the Self-Rewarding language model (Self-Rewarding LM).

In addition, the model’s performance on MT-Bench also exceeded that of traditional models tuned through human feedback. This demonstrates the effectiveness of SPPO in automatically adapting model behavior to complex tasks.

Conclusion and future prospects

Self-Play Preference Optimization (SPPO) offers large language models a new optimization path: it not only improves the quality of model generations but, more importantly, improves the model's alignment with human preferences.

As the technology continues to develop and mature, SPPO and its derivatives are expected to play a greater role in the sustainable development and social application of artificial intelligence, paving the way for more intelligent and responsible AI systems.
