
Choosing the smartest AI in the Olympiad: Claude-3.5-Sonnet vs. GPT-4o?

Jun 24, 2024, 05:01 PM · GAIR Lab

The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 reports covering top laboratories from major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The research team is from the Generative Artificial Intelligence Research Lab (GAIR Lab) at Shanghai Jiao Tong University, whose main research direction is large model evaluation.
Team homepage: https://plms.ai/

AI technology is evolving at breakneck speed. Anthropic's newly released Claude-3.5-Sonnet has sparked widespread discussion by setting new industry benchmarks on knowledge-intensive tasks: has Claude-3.5-Sonnet replaced OpenAI's GPT-4o as the world's "Most Intelligent AI"? The challenge in answering this question is that we first need an intelligence benchmark hard enough to differentiate today's most capable AI models.

OlympicArena [1], launched by the Generative Artificial Intelligence Research Lab (GAIR Lab) at Shanghai Jiao Tong University, meets this need.

Olympiad-level academic competitions are not only an extreme test of human ("carbon-based intelligence") mental agility, knowledge mastery, and logical reasoning; they are also an excellent training ground for AI ("silicon-based intelligence") and an important yardstick for measuring the distance between AI and "superintelligence". OlympicArena is an AI Olympic arena in the true sense: here, AI must demonstrate not only the depth of its traditional subject knowledge (top-level competitions in mathematics, physics, biology, chemistry, geography, and more), but also compete against other models in cognitive reasoning ability.

Recently, the same research team proposed, for the first time, using an "Olympic medal table" approach: ranking AI models by their overall performance across the disciplines of OlympicArena to select the most intelligent AI to date. In this round of competition, the team focused on analyzing and comparing two recently released advanced models, Claude-3.5-Sonnet and Gemini-1.5-Pro, alongside OpenAI's GPT-4 series (e.g., GPT-4o). In this way, the team hopes to evaluate and advance AI technology more effectively.

[Image: OlympicArena medal table]

Note: the research team first ranks models by the number of gold medals; if gold-medal counts are tied, models are ranked by overall performance score.

The experimental results show:

• Claude-3.5-Sonnet's overall performance is on par with GPT-4o's, and it even surpasses GPT-4o in some subjects (e.g., physics, chemistry, and biology).
• Gemini-1.5-Pro and GPT-4V rank just behind GPT-4o and Claude-3.5-Sonnet, but a clear performance gap separates the two pairs.
• AI models from the open-source community lag significantly behind these proprietary models.
• All models' performance on this benchmark remains unsatisfactory, showing that there is still a long way to go on the road to superintelligence.
• Project homepage: https://gair-nlp.github.io/OlympicArena/

Experimental settings

The research team used the OlympicArena test set for evaluation. The answers to this test set are not public, which helps prevent data leakage and reflects the models' true performance. Both multimodal large models (LMMs) and text-only large language models (LLMs) were tested; when testing LLMs, only text is provided as input, with no image-related information. All evaluations use zero-shot chain-of-thought (CoT) prompting.
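
As a minimal sketch of what a zero-shot CoT evaluation prompt can look like (the template below is a common generic pattern, not the team's actual prompt; the function name and wording are hypothetical):

```python
# Hypothetical zero-shot chain-of-thought prompt template, for illustration;
# the OlympicArena evaluation's actual prompts may differ.
def build_zero_shot_cot_prompt(problem_text: str) -> str:
    return (
        "Solve the following competition problem. "
        "Let's think step by step, then give the final answer "
        "on its own line, prefixed with 'Answer:'.\n\n"
        f"Problem: {problem_text}"
    )

# For text-only LLMs, any image content is omitted and only the problem
# text is passed in, matching the text-only setting described above.
print(build_zero_shot_cot_prompt("Compute the sum of the first 100 positive integers."))
```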

Evaluated models

The research team evaluated a series of open- and closed-source multimodal large models (LMMs) and text-only large language models (LLMs). For LMMs, closed-source models such as GPT-4o, GPT-4V, Claude-3-Sonnet, Gemini Pro Vision, and Qwen-VL-Max were selected, along with open-source models such as LLaVA-NeXT-34B, InternVL-Chat-V1.5, Yi-VL-34B, and Qwen-VL-Chat. For LLMs, open-source models such as Qwen-7B-Chat, Qwen1.5-32B-Chat, Yi-34B-Chat, and InternLM2-Chat-20B were mainly evaluated.

In addition, the research team specifically included the newly released Claude-3.5-Sonnet and Gemini-1.5-Pro and compared them with the powerful GPT-4o and GPT-4V, so as to reflect the latest model performance.

Evaluation Method

Metrics: given that all problems can be evaluated by rule-based matching, the research team used accuracy for non-programming tasks and the unbiased pass@k metric for programming tasks, defined as follows:

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$

In this evaluation, k = 1 and n = 5, where n is the number of samples generated per problem and c is the number of those samples that pass all test cases.
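
A minimal sketch of this unbiased estimator (the per-problem counts in the example are illustrative, not results from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator:
    #   n = total samples generated per problem
    #   c = samples that pass all test cases
    #   k = number of attempts scored
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example with the paper's setting (k = 1, n = 5): a problem where
# 2 of the 5 generated programs pass all test cases.
print(pass_at_k(n=5, c=2, k=1))  # 0.4
```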

OlympicArena Medal Table:

Similar to the medal system of the Olympic Games, the medal table is a pioneering ranking mechanism designed specifically to evaluate AI models across academic fields. It awards medals to the models that achieve the top three results in any given discipline, providing a clear, competitive framework for comparison. Models are first sorted by the number of gold medals; ties are broken by overall performance score. This gives an intuitive, concise way to identify the leading models in each field, making it easier for researchers and developers to understand the strengths and weaknesses of different models.
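
A minimal sketch of this tie-broken ranking rule (the model names and medal counts below are hypothetical, for illustration only):

```python
from typing import NamedTuple

class Entry(NamedTuple):
    model: str
    gold: int
    overall_score: float  # overall performance score across disciplines

# Hypothetical standings, not the paper's results.
standings = [
    Entry("model-A", gold=3, overall_score=52.7),
    Entry("model-B", gold=3, overall_score=55.1),
    Entry("model-C", gold=1, overall_score=48.9),
]

# Sort by gold medals first; break ties with the overall score.
ranked = sorted(standings, key=lambda e: (e.gold, e.overall_score), reverse=True)
for rank, e in enumerate(ranked, start=1):
    print(rank, e.model, e.gold, e.overall_score)
# model-B outranks model-A: equal golds, higher overall score.
```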

Fine-grained assessment:
The research team also performs fine-grained, accuracy-based assessments broken down by discipline, modality, language, and type of logical and visual reasoning ability; a sketch of this kind of grouping follows.
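
A minimal sketch of such grouped-accuracy aggregation (record fields and values are hypothetical, for illustration only):

```python
from collections import defaultdict

# Each record marks whether a model answered one problem correctly,
# tagged with hypothetical metadata fields.
results = [
    {"subject": "Math", "language": "EN", "modality": "text-only", "correct": True},
    {"subject": "Math", "language": "ZH", "modality": "text-only", "correct": False},
    {"subject": "Biology", "language": "EN", "modality": "multimodal", "correct": True},
]

def accuracy_by(records: list, key: str) -> dict:
    # Group records by the chosen metadata field and compute accuracy per group.
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += int(r["correct"])
    return {group: hits[group] / totals[group] for group in totals}

print(accuracy_by(results, "subject"))   # per-discipline accuracy
print(accuracy_by(results, "language"))  # per-language accuracy
print(accuracy_by(results, "modality"))  # per-modality accuracy
```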

Results and Analysis

The analysis focuses mainly on Claude-3.5-Sonnet and GPT-4o, with a partial discussion of Gemini-1.5-Pro's performance.

Overall results


Table: performance of each model on different subjects.

Claude-3.5-Sonnet's performance is strong, nearly matching GPT-4o; the overall accuracy difference between the two is only about 1%. The newly released Gemini-1.5-Pro has also shown considerable strength, outperforming GPT-4V (OpenAI's current second most powerful model) in most disciplines.

It is worth noting that, at the time of writing, the earliest of these three models was released only a month ago, reflecting the rapid development in this field.

Fine-grained analysis by discipline

GPT-4o vs. Claude-3.5-Sonnet:

Although GPT-4o and Claude-3.5-Sonnet perform similarly overall, the two models show different subject strengths. GPT-4o demonstrates superior capability on traditional deductive and inductive reasoning tasks, especially in mathematics and computer science. Claude-3.5-Sonnet performs well in subjects such as physics, chemistry, and biology; in biology in particular, it exceeds GPT-4o by 3%.

GPT-4V vs. Gemini-1.5-Pro:

A similar phenomenon can be observed when comparing Gemini-1.5-Pro with GPT-4V. Gemini-1.5-Pro significantly outperforms GPT-4V in physics, chemistry, and biology. In mathematics and computer science, however, Gemini-1.5-Pro's advantage is not obvious, and it is even inferior to GPT-4V.

From these two sets of comparisons, it can be seen that:

OpenAI's GPT series stands out in traditional mathematical reasoning and programming. This suggests the GPT-series models have been rigorously trained for tasks that demand heavy deductive reasoning and algorithmic thinking.

In contrast, models such as Claude-3.5-Sonnet and Gemini-1.5-Pro demonstrate competitive performance in subjects that require combining knowledge with reasoning, such as physics, chemistry, and biology. This reflects the areas of expertise and likely training focus of the different models, and points to possible trade-offs between reasoning-intensive tasks and knowledge-integration tasks.

Fine-grained analysis of reasoning types

Table: performance of each model on logical reasoning abilities, which include deductive reasoning (DED), inductive reasoning (IND), abductive reasoning (ABD), analogical reasoning (ANA), causal reasoning (CAE), critical thinking (CT), decompositional reasoning (DEC), and quantitative reasoning (QUA).

Comparison between GPT-4o and Claude-3.5-Sonnet in logical reasoning:

As the experimental results in the table show, GPT-4o outperforms Claude-3.5-Sonnet on most logical reasoning abilities, including deductive reasoning, inductive reasoning, abductive reasoning, analogical reasoning, and critical thinking. However, Claude-3.5-Sonnet surpasses GPT-4o in causal reasoning, decompositional reasoning, and quantitative reasoning. Overall, the two models are comparable, though GPT-4o holds a slight edge in most categories.

Table: performance of each model on visual reasoning abilities, which include pattern recognition (PR), spatial reasoning (SPA), diagrammatic reasoning (DIA), symbolic interpretation (SYB), and visual comparison (COM).

GPT-4o vs. Claude-3.5-Sonnet in visual reasoning:

As the experimental results in the table show, Claude-3.5-Sonnet leads in pattern recognition and diagrammatic reasoning, demonstrating its competitiveness in recognizing patterns and interpreting diagrams. The two models perform comparably on symbolic interpretation, indicating comparable ability to understand and process symbolic information. However, GPT-4o outperforms Claude-3.5-Sonnet in spatial reasoning and visual comparison, showing its advantage on tasks that require understanding spatial relationships and comparing visual data.

Analyzing disciplines and reasoning types together, the research team found that:

• Mathematics and computer programming emphasize complex deductive reasoning and rule-based derivation of general conclusions, and tend to rely less on pre-existing knowledge. In contrast, disciplines like chemistry and biology often require large knowledge bases in order to reason from known information about causal relationships and phenomena. This suggests that while mathematical and programming ability remains a valid indicator of a model's reasoning capacity, other disciplines better test a model's ability to reason and analyze problems based on its internal knowledge.
• The characteristics of different disciplines underline the importance of customized training datasets. For example, to improve performance in knowledge-intensive subjects such as chemistry and biology, a model needs extensive exposure to domain-specific data during training. In contrast, for subjects that demand strong logic and deductive reasoning, such as mathematics and computer science, models can benefit from training focused on purely logical reasoning.
• Furthermore, the distinction between reasoning ability and knowledge application points to the models' potential for cross-disciplinary use. Models with strong deductive reasoning can assist fields that require systematic problem-solving, such as scientific research, while knowledge-rich models are valuable in disciplines that rely heavily on existing information, such as medicine and environmental science. Understanding these nuances helps develop models that are both more specialized and more versatile.

Fine-grained analysis of language types

Table: performance of each model on problems in different languages.

The above table shows the models' performance in different languages. The research team found that most models were more accurate on English than on Chinese, with the gap particularly pronounced among the top-ranked models. Several reasons are speculated:

• Although these models contain a large amount of Chinese training data and have cross-language generalization ability, their training data is still predominantly English.
• The Chinese questions may simply be harder than the English ones; in subjects like physics and chemistry in particular, Chinese Olympiad problems are more difficult.
• These models struggle to recognize characters in multimodal images, and the problem is even more pronounced for Chinese text.

However, the research team also found that some models developed by Chinese vendors, or fine-tuned from base models with strong Chinese support, perform better in Chinese than in English, such as Qwen1.5-32B-Chat, Qwen-VL-Max, Yi-34B-Chat, and Qwen-7B-Chat. Other models, such as InternLM2-Chat-20B and Yi-VL-34B, still perform better in English, but their accuracy gap between English and Chinese is much smaller than that of the top-ranked closed-source models. This shows that optimizing models for Chinese data, and indeed for more of the world's languages, still deserves significant attention.

Fine-grained analysis of modalities

Table: performance of each model on problems of different modalities.

The above table shows the models' performance across modalities. GPT-4o outperforms Claude-3.5-Sonnet on both text-only and multimodal tasks, with a more pronounced lead on text-only tasks. Gemini-1.5-Pro, for its part, performs better than GPT-4V on both text-only and multimodal tasks. These observations indicate that even the strongest current models achieve higher accuracy on text-only tasks than on multimodal tasks, showing that models still have considerable room for improvement in using multimodal information to solve complex reasoning problems.

Conclusion

In this evaluation, the research team focused mainly on the latest models, Claude-3.5-Sonnet and Gemini-1.5-Pro, comparing them with OpenAI's GPT-4o and GPT-4V. The team also designed a novel ranking system for large models, the OlympicArena Medal Table, to compare the capabilities of different models clearly. The team found that GPT-4o excels in subjects such as mathematics and computer science, showing strong complex deductive reasoning and the ability to derive general conclusions from rules, while Claude-3.5-Sonnet is better at reasoning from established causal relationships and phenomena. The team also observed that these models perform better on English-language problems and still have significant room for improvement in multimodal capabilities. Understanding these nuances can help develop more specialized models that better serve the diverse needs of different academic and professional fields.

As the quadrennial Olympic Games approach, we cannot help but imagine: if artificial intelligence could also compete, what a showdown of wisdom and technology that would be. No longer merely a physical contest, the addition of AI would open a new exploration of the limits of intelligence. We look forward to more AI contenders joining this intellectual Olympics.

Reference:
[1] Huang et al., OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI. https://arxiv.org/abs/2406.12753v1
