
Code Llama's coding capabilities soar: a fine-tuned version beats GPT-4 on HumanEval, released just one day later

Aug 26, 2023, 09:01 PM
Tags: theory, fine-tuning, Code Llama

Yesterday, Meta open-sourced Code Llama, a foundation model specialized for code generation that is free for both research and commercial use.

The Code Llama family comes in three sizes: 7B, 13B, and 34B parameters. It supports multiple programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.

The Code Llama versions provided by Meta include:
  • Code Llama, the base code model;
  • Code Llama-Python, a version fine-tuned for Python;
  • Code Llama-Instruct, a version fine-tuned for natural language instructions.

In terms of performance, the single-attempt pass rate (pass@1) of the various Code Llama versions on the HumanEval and MBPP datasets exceeds that of GPT-3.5.

In addition, the pass@1 of Code Llama's "Unnatural" 34B version on the HumanEval dataset is close to GPT-4's (62.2% vs. 67.0%). Meta did not release this version; its significant performance improvement came from training on a small amount of high-quality coding data.

Image source: https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/

Just one day later, researchers from Phind, an organization that aims to build an AI search engine for developers, took on GPT-4: they used a fine-tuned Code Llama-34B to beat GPT-4 on the HumanEval evaluation.

Phind co-founder Michael Royzen said: "This is just an early experiment, aiming to reproduce (and surpass) the 'Unnatural Code Llama' results in the Meta paper. In the future, we will have an expert portfolio of different CodeLlama models that I think will be competitive in real-world workflows."


Both models have been open-sourced. The researchers released them on Hugging Face:


  • Phind-CodeLlama-34B-v1: https://huggingface.co/Phind/Phind-CodeLlama-34B-v1
  • Phind-CodeLlama-34B-Python-v1: https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1
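For readers who want to try the checkpoints, below is a minimal sketch of loading one of them with the Hugging Face transformers library. The prompt and generation settings are illustrative assumptions, not Phind's documented usage, and a 34B model in float16 needs on the order of 70 GB of GPU memory (or quantization).

```python
# Minimal sketch: loading Phind-CodeLlama-34B-v1 with Hugging Face transformers.
# Assumes transformers and accelerate are installed and that enough GPU memory
# is available; the prompt below is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Phind/Phind-CodeLlama-34B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory versus float32
    device_map="auto",          # spread layers across available GPUs
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```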
Next, let's look at how this research was carried out.

Fine-tuned Code Llama-34B beats GPT-4

Let's look at the results first. Phind fine-tuned Code Llama-34B and Code Llama-34B-Python on an internal dataset, producing two models: Phind-CodeLlama-34B-v1 and Phind-CodeLlama-34B-Python-v1, respectively.

The two resulting models achieved 67.6% and 69.5% pass@1 on HumanEval, respectively.

For comparison, CodeLlama-34B's pass@1 is 48.8%, and CodeLlama-34B-Python's is 53.7%.

GPT-4's pass@1 on HumanEval is 67% (the figure OpenAI published in the "GPT-4 Technical Report" in March this year).
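As a reminder of what pass@1 measures: HumanEval scores are typically computed with the unbiased pass@k estimator from OpenAI's original HumanEval (Codex) paper. Below is a minimal sketch of that estimator; the function name and the example numbers are illustrative.

```python
# Minimal sketch of the unbiased pass@k estimator from the HumanEval (Codex)
# paper: pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated for a problem, c: samples that pass all tests,
    k: evaluation budget. Returns the estimated probability that at least
    one of k samples is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative example: 200 samples for a problem, 120 pass -> pass@1 = 0.6
print(pass_at_k(n=200, c=120, k=1))
```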

Image source: https://ai.meta.com/blog/code-llama-large-language-model-coding/


Image source: https://cdn.openai.com/papers/gpt-4.pdf

Fine-tuning naturally starts with a dataset. The study fine-tuned Code Llama-34B and Code Llama-34B-Python on a proprietary dataset of roughly 80,000 high-quality programming problems and solutions.

Unlike HumanEval's data structure, this dataset consists of instruction-answer pairs rather than code-completion examples. The Phind models were then trained for two epochs, about 160,000 examples in total. The researchers said LoRA was not used; both models were natively fine-tuned.
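Since the Phind dataset is proprietary, its exact schema is unknown; the following is a purely hypothetical illustration of what one instruction-answer pair of this kind might look like.

```python
# Purely hypothetical illustration of an instruction-answer training pair;
# the actual Phind dataset is proprietary, so the field names and content
# shown here are assumptions.
example = {
    "instruction": "Write a Python function that checks whether a string "
                   "is a palindrome, ignoring case.",
    "answer": (
        "def is_palindrome(s: str) -> bool:\n"
        "    s = s.lower()\n"
        "    return s == s[::-1]\n"
    ),
}
```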

The study also used DeepSpeed ZeRO 3 and Flash Attention 2, training the models in three hours on 32 A100-80GB GPUs with a sequence length of 4,096 tokens.
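Phind has not published its training code, so the sketch below is only an assumed reconstruction of how such a run could be set up with the Hugging Face Trainer. The DeepSpeed config path, batch size, and dataset are placeholders; only the two epochs, ZeRO 3, Flash Attention 2, and 4,096-token sequence length come from the article.

```python
# Assumed reconstruction, not Phind's actual code: fine-tuning Code Llama-34B
# with DeepSpeed ZeRO 3 and Flash Attention 2 via the Hugging Face Trainer.
# "ds_zero3_config.json" is a placeholder path to a ZeRO stage-3 config file.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-34b-hf",
    attn_implementation="flash_attention_2",  # requires the flash-attn package
)

args = TrainingArguments(
    output_dir="phind-style-finetune",
    num_train_epochs=2,               # two epochs, as described above
    per_device_train_batch_size=1,    # placeholder; the real value is unknown
    deepspeed="ds_zero3_config.json", # ZeRO 3 shards params/grads/optimizer
    bf16=True,
)

# train_dataset would hold instruction-answer pairs packed to 4,096 tokens.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```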

In addition, the study applied OpenAI's decontamination method to the dataset to make the results more credible.

As we all know, even a model as powerful as GPT-4 faces the problem of data contamination; in plain terms, a model may have been trained on the very data used to evaluate it.

This problem is thorny for LLMs. To make an evaluation scientifically credible, researchers must check whether the test problems appear in the model's training data. If they do, the model can memorize those problems and will obviously perform better on them during evaluation.

It is like a person who already knows the questions before sitting the exam.

To address this problem, OpenAI disclosed in the public "GPT-4 Technical Report" its strategy for quantifying and assessing this kind of data contamination.

Specifically, OpenAI measures cross-contamination between the evaluation dataset and the pre-training data using substring matching. Both evaluation and training data are preprocessed by removing all spaces and symbols, keeping only characters (including numbers).

For each evaluation example, OpenAI randomly selects three 50-character substrings (if an example has fewer than 50 characters, the entire example is used). A match is declared if any of the three sampled evaluation substrings is a substring of the processed training example.
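A minimal sketch of this substring-matching check is shown below; the normalization rule and all function names are our reconstruction from the paper's description, not OpenAI's actual code.

```python
# Sketch of the substring-matching contamination check described above;
# a reconstruction from the paper's description, not OpenAI's actual code.
import random

def normalize(text: str) -> str:
    # Drop spaces and symbols, keeping only letters and digits.
    return "".join(ch for ch in text if ch.isalnum())

def is_contaminated(eval_example: str, training_docs: list[str],
                    n_samples: int = 3, sub_len: int = 50) -> bool:
    processed = normalize(eval_example)
    docs = [normalize(d) for d in training_docs]
    if len(processed) <= sub_len:
        # Fewer than 50 characters: use the entire example.
        samples = [processed]
    else:
        # Randomly sample three 50-character substrings.
        starts = [random.randrange(len(processed) - sub_len + 1)
                  for _ in range(n_samples)]
        samples = [processed[s:s + sub_len] for s in starts]
    # A match against any training document marks the example as contaminated.
    return any(sub in doc for sub in samples for doc in docs)
```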

This yields a list of contaminated examples, which OpenAI discards before rerunning the evaluation to obtain an uncontaminated score. The filtering method has limitations, however: substring matching can produce false negatives (when evaluation and training data differ only slightly) as well as false positives. Moreover, OpenAI uses only part of the information in each evaluation example, checking only the question, context, or equivalent data while ignoring answers, responses, or equivalent data; in some cases, multiple-choice options are also excluded. These exclusions may lead to an increase in false positives.

Interested readers can refer to the paper for more details on this part.

Paper address: https://cdn.openai.com/papers/gpt-4.pdf

However, the HumanEval score Phind used for GPT-4 in its benchmark has drawn some controversy. Some point out that GPT-4's latest HumanEval score has reached 85%. Phind replied that the study behind that score did not include a contamination analysis, so it is impossible to know whether GPT-4 had seen HumanEval's test data by the time of that newer round of testing. Given recent research suggesting that "GPT-4 is getting dumber", using the figure from the original technical report is the safer choice.


Still, given the complexity of evaluating large models, whether these results reflect the models' true capabilities remains open to debate. You can download the models and try them for yourself.

Reference links:

https://benjaminmarie.com/the-decontaminated-evaluation-of-gpt-4/

https://www.phind.com/blog/code-llama-beats-gpt4
