


New work from CMU & Tsinghua: letting LLMs synthesize their own training data, with large performance gains on specific tasks.
Although large language models (LLMs) perform well across many natural language processing tasks, their results on specific tasks are often unsatisfactory. To improve a model's performance on a specific natural language task, existing methods mainly rely on high-quality manually annotated data. Collecting such data is time-consuming and laborious, and especially difficult for tasks where data are scarce.
To address this problem, some research attempts to generate training data with a powerful teacher model in order to improve a student model's performance on specific tasks. However, this approach still faces many challenges in cost, scalability, and legal compliance. When high-quality human supervision signals cannot be obtained continuously, how to keep iterating and improving the model becomes an urgent problem.
A research team from Carnegie Mellon University and Tsinghua University proposed the SELF-GUIDE method, in which the language model itself generates a task-specific dataset and is then fine-tuned on that dataset, significantly improving the model's ability on a specific task without relying on large amounts of external high-quality data or a more powerful teacher model. Specifically, starting from roughly three external input examples, SELF-GUIDE uses a multi-stage generation and filtering mechanism to produce synthetic data and fine-tunes the model on it, so that the model performs better on the target task.
Method
Specifically, the research team decomposed the SELF-GUIDE method into three main stages: input data generation, output data generation and quality optimization.
Input data generation
In designing and implementing the SELF-GUIDE framework, the researchers first specified different prompt templates according to task type (generative or classification). For generative tasks, SELF-GUIDE uses a relatively simple prompt template. For classification tasks, it adopts a different strategy: it first randomly selects a label from the label space and uses it as a pseudo-label to condition generation; after a pseudo-label is selected, it uses a more elaborate conditional generation template to guide the model to produce input content corresponding to that pseudo-label.
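As a rough illustration of this step, here is a minimal Python sketch; the template strings and the task-type check are assumptions for illustration, not the paper's exact prompts.

```python
import random

# Hypothetical prompt templates -- the paper's exact wording differs.
GENERATIVE_TEMPLATE = (
    "Task: {instruction}\n"
    "Examples:\n{examples}\n"
    "Generate one new input for this task:\n"
)
CLASSIFICATION_TEMPLATE = (
    "Task: {instruction}\n"
    "Examples:\n{examples}\n"
    "Generate one new input whose correct label is \"{pseudo_label}\":\n"
)

def build_input_prompt(instruction, examples, label_space=None):
    """Pick a template by task type; for classification tasks, condition
    generation on a pseudo-label sampled uniformly from the label space."""
    if not label_space:  # generative task
        return GENERATIVE_TEMPLATE.format(
            instruction=instruction, examples=examples)
    pseudo_label = random.choice(label_space)  # random pseudo-label
    return CLASSIFICATION_TEMPLATE.format(
        instruction=instruction, examples=examples, pseudo_label=pseudo_label)
```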
After a template is selected and the few-shot examples are filled in, the complete prompt is passed to the LLM to generate input data. After each round of prompting, the newly generated inputs are added to an input library. A subset of inputs is randomly sampled from this library and merged with the inputs from the initial examples to form new prompts, gradually expanding the set of LLM-generated inputs while reducing duplication. SELF-GUIDE performs only one round of input generation, followed by a quality-optimization phase in which rule-based filters remove low-quality inputs.
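A sketch of how the input library might grow under this scheme (the `call_llm` helper, the number of prompting rounds, and the sampling size are illustrative assumptions):

```python
import random

def generate_inputs(call_llm, instruction, seed_inputs,
                    num_prompts=100, k_sampled=2, temperature=1.0):
    """Repeatedly prompt the LLM, mixing the seed examples with a random
    sample of previously generated inputs to curb duplication."""
    library = []
    for _ in range(num_prompts):
        sampled = random.sample(library, min(len(library), k_sampled))
        examples = "\n".join(seed_inputs + sampled)
        prompt = build_input_prompt(instruction, examples)  # from the sketch above
        new_input = call_llm(prompt, temperature=temperature).strip()
        if new_input and new_input not in library:
            library.append(new_input)
    return library
```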
Figure 3: This figure illustrates the process by which SELF-GUIDE completes a classification task. For classification-task data, SELF-GUIDE first generates a pseudo-label, then generates the corresponding input, and finally regenerates the true label.
Output data generation
The output data generation phase uses the typical in-context learning approach: the researchers provide the model with the task instruction and the original examples, and have the model label each input produced in the input generation phase. After all outputs are obtained, another round of rule-based filtering is applied to select the final synthetic dataset.
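A minimal sketch of this annotation step (again with a hypothetical `call_llm` helper; the low temperature here anticipates the quality-optimization strategy described below):

```python
def annotate_outputs(call_llm, instruction, seed_pairs, generated_inputs,
                     temperature=0.0):
    """Label each generated input via in-context learning, using the task
    instruction and the original seed examples as demonstrations."""
    demos = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in seed_pairs)
    dataset = []
    for x in generated_inputs:
        prompt = f"Task: {instruction}\n\n{demos}\n\nInput: {x}\nOutput:"
        y = call_llm(prompt, temperature=temperature).strip()
        dataset.append((x, y))
    return dataset  # filtered afterwards by the rule-based filters below
```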
Quality optimization
The quality of the generated data is critical to the success of downstream training. SELF-GUIDE adopts two strategies to improve quality: adjusting the generation parameters to improve generation quality, and filtering out low-quality samples with rule-based filters.
Adjusting the temperature: adjusting the temperature is a common strategy for balancing diversity and quality. The SELF-GUIDE framework uses a higher temperature in the input generation stage to encourage diversity, and a lower temperature in the other stages to obtain the highest-probability outputs and ensure overall data quality. However, temperature adjustment alone is not enough to achieve the desired balance, so SELF-GUIDE also performs rule-based data filtering twice: once after input generation and once after output annotation.
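As an illustration, the stage-specific decoding settings might be kept in a small table like the one below; the concrete values are assumptions, since the paper tunes them with the "one parameter fits all" search described later.

```python
# Illustrative stage-specific decoding settings (values are assumptions).
GENERATION_PARAMS = {
    "input_generation":  {"temperature": 1.0},  # higher: encourage diversity
    "output_annotation": {"temperature": 0.0},  # lower: highest-probability labels
}

# Usage: call_llm(prompt, **GENERATION_PARAMS["input_generation"])
```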
Noise filter: the researchers manually compiled a list of noise terms, including common greetings and noise characters (for example, the '"' character appearing in generated content). If any of these noise terms appear in the input or output of a generated example, the example is discarded.
Length filter: although the lengths of the generated samples may be biased, the researchers believe the few-shot example samples are representative of the target task's length distribution. Assuming sample lengths follow a normal distribution, they compute the mean μ and standard deviation σ of the example samples' lengths, and require the input and output lengths of generated samples to fall within the interval (μ − 2σ, μ + 2σ).
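Taken together, the two filters might look like the following sketch (the noise-term list is a stand-in for the researchers' handcrafted one):

```python
import statistics

# Stand-in noise-term list; the paper uses a handcrafted one.
NOISE_TERMS = ["Sure, here is", "As an AI language model", "\u201c"]

def length_bounds(lengths):
    """(mu - 2*sigma, mu + 2*sigma) bounds estimated from the few-shot
    example lengths, assumed to be normally distributed."""
    mu = statistics.mean(lengths)
    sigma = statistics.stdev(lengths)
    return mu - 2 * sigma, mu + 2 * sigma

def keep_example(x, y, input_bounds, output_bounds):
    """Drop examples containing noise terms or out-of-range lengths."""
    if any(term in x or term in y for term in NOISE_TERMS):
        return False
    lo_in, hi_in = input_bounds
    lo_out, hi_out = output_bounds
    return lo_in < len(x) < hi_in and lo_out < len(y) < hi_out
```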
One parameter fits all: for SELF-GUIDE to generate training data that matches the target distribution specified by the instruction and examples, a range of hyperparameters must be optimized: the number of generated inputs and outputs, the temperature used for input generation, the temperature used for output generation, fine-tuning parameters, and so on. The researchers split the experimental tasks into two parts: for one part, all data may be used to tune the generation parameters (these are called validation tasks); the other part is used only for testing, and its data may not be used for parameter tuning.

The researchers searched for the parameter setting that maximizes the worst-case task performance on the validation tasks, then fixed it when evaluating SELF-GUIDE (a sketch of this selection rule follows the baseline list below). Concretely, they randomly selected half of the tasks from the Super-NaturalInstructions V2 benchmark for tuning and used the other half for evaluation. For input generation, output generation, and fine-tuning, they followed the same setup as the Super-NaturalInstructions benchmark, using exact match as the metric for classification tasks and ROUGE-L for generative tasks. To demonstrate the effectiveness of SELF-GUIDE, the researchers compared it with other instruction-following and in-context learning methods:
1. Few-Shot ICL: as the main baseline, the researchers compared against directly prompting the language model with few-shot examples.
2. Self-ICL: uses self-generated examples to improve instruction following. The researchers built on the Self-ICL work with one modification: instead of using a fixed number of examples, the model self-generates as many examples as possible to fill the prompt.
3. Few-shot fine-tuning: directly fine-tunes the model on the small number of input examples available.
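The worst-case selection rule mentioned above can be sketched as follows (the `evaluate` callback, standing in for the full generate-filter-finetune-score pipeline, is an assumption):

```python
def select_config(candidate_configs, validation_tasks, evaluate):
    """'One parameter fits all': pick the hyperparameter configuration
    that maximizes the worst task score across the validation tasks."""
    def worst_score(cfg):
        return min(evaluate(cfg, task) for task in validation_tasks)
    return max(candidate_configs, key=worst_score)
```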
The main experimental results of the SELF-GUIDE paper are as follows: on the benchmark's evaluation metrics, the absolute improvement reached 14.5% on classification tasks and 17.9% on generative tasks. These results show that even when data are extremely limited, SELF-GUIDE is highly effective at steering an LLM toward task-specific specialization, highlighting the potential of self-generated data for adapting LLMs to specific tasks at scale. For more detailed experimental results and ablation studies, please refer to the original paper.
Figure 4: For each task type (classification and generative), the researchers randomly split the tasks into two halves: one half was used to tune the parameters of the "one parameter fits all" strategy, and the other half was used to test SELF-GUIDE's performance with those tuned parameters. The same decoding parameters and prompt templates were used to evaluate the model before and after applying SELF-GUIDE.
Summary
The SELF-GUIDE framework encourages a model to autonomously generate training data and fine-tune itself on that data. Experimental results show that this method has great potential for improving a large language model's specialized ability on specific tasks. Especially when data are limited, SELF-GUIDE can effectively alleviate the shortage of training data. It also offers a reference point for exploring techniques for autonomous model adaptation and continual learning. The researchers hope this work will advance the development of AI systems with autonomous alignment and self-improvement mechanisms, making them more consistent with human intent.