


Latest PNAS research: with an 81% problem-solving rate, the neural network Codex opens the door to advanced mathematics
Recently, a new study published in PNAS once again raised the bar for what neural networks can do. This time, a neural network was used to solve advanced mathematics problems, and difficult ones at that: problems from MIT mathematics courses!
In the new study, the research team showed that OpenAI's Codex model can perform program synthesis to solve mathematical problems at scale, automatically solving 81% of a dataset of university mathematics course problems through few-shot learning and achieving human-level performance on these tasks.
Original link: https://www.pnas.org/doi/10.1073/pnas.2123433119
This research overturns the common consensus that neural networks cannot solve advanced mathematics problems. The team points out that Codex achieves this capability thanks to a key innovation: past, unsuccessful attempts used models pre-trained only on text, whereas Codex is not only pre-trained on text but also fine-tuned on code.
The question dataset was drawn from six mathematics courses at MIT and one at Columbia University, with 25 questions randomly selected from each of the seven courses: MIT's Single Variable Calculus, Multivariable Calculus, Differential Equations, Introduction to Probability and Statistics, Linear Algebra, and Mathematics for Computer Science, plus Columbia University's COMS3251 Computational Linear Algebra.
At the same time, the research team used MATH, a recent benchmark of advanced mathematics problems for evaluating mathematical reasoning, to test OpenAI Codex's capabilities. The MATH problems were drawn from six major topics, 15 problems each: Prealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number Theory, and Precalculus.
Caption: Course question data set and MATH benchmark used in the study
In total, Codex solved all 265 problems across the course dataset and the MATH dataset, 213 of them fully automatically.
1 Where the Innovation Lies
Since the release of the Transformer, Transformer-based language models have achieved great success on a wide range of natural language processing (NLP) tasks, including zero-shot and few-shot language tasks. But because these models were pre-trained only on text, they were largely unable to solve mathematical problems; GPT-3 is a typical example.
Later, few-shot learning and chain-of-thought (CoT) prompting improved GPT-3's mathematical reasoning ability; however, without code, GPT-3 remained powerless on university-level math problems and the MATH benchmark even with few-shot learning and CoT prompting.
Past research on solving mathematical problems achieved some success at relatively simple levels. For example, techniques that validate or predict expression trees based on co-training outputs can solve elementary-school math problems from benchmarks such as MAWPS and Math23k with over 81% accuracy, but they cannot handle high-school, olympiad, or university-level courses. Co-training combined with graph neural networks (GNNs) to predict arithmetic expression trees can solve university-level machine-learning problems with up to 95% accuracy, but that work was limited to numeric answers, overfit its training data, and did not generalize to other courses.
One of the biggest innovations of this work is that a Transformer model, Codex, is not only pre-trained on text but also fine-tuned on code, enabling it to generate programs that solve mathematical problems at scale.
The research team randomly selected question samples that did not require input images or proofs for testing. Among them, a language model pre-trained only on text (GPT-3 text-davinci-002) automatically solved only 18% of the course problems and 25.5% of the MATH benchmark problems.
In contrast, program synthesis using zero-shot learning with a neural network pre-trained on text and fine-tuned on code (OpenAI Codex code-davinci-002) automatically solved 71% of the course problems and 72.2% of the MATH benchmark problems.
Using the same Codex network with few-shot learning pushed the automatic solve rate to 81% of the course problems and 81.1% of the MATH benchmark problems. The remaining 19% of course problems and 18.9% of MATH benchmark problems that could not be solved automatically were finally solved with manual prompting.
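For intuition, the zero-shot workflow might look roughly like the sketch below, written against the legacy OpenAI completions API (Codex models have since been retired, so this is a historical illustration; the prompt format, decoding parameters, and the `solve_zero_shot` helper are assumptions, not the paper's exact setup):

```python
# Hypothetical sketch of zero-shot program synthesis with Codex,
# using the legacy OpenAI Python API (openai<1.0). Prompt format and
# decoding parameters are illustrative, not the paper's exact settings.
import openai

def solve_zero_shot(question: str) -> str:
    # Wrap the question as a docstring and nudge the model toward SymPy,
    # mirroring the "Use SymPy" prefix described in the paper.
    prompt = f'"""{question}"""\n# Use SymPy.\n'
    response = openai.Completion.create(
        engine="code-davinci-002",  # Codex model named in the study
        prompt=prompt,
        max_tokens=256,
        temperature=0,              # greedy decoding for reproducibility
    )
    return response["choices"][0]["text"]

# The returned program is then executed to obtain the answer.
program = solve_zero_shot("What is the derivative of x**2 * sin(x)?")
print(program)
```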
The addition of few-shot learning is the second major innovation of this research. When zero-shot learning fails to answer a question, few-shot learning is performed using (question, code) pairs (a minimal sketch of the selection step follows the list):
1) Use OpenAI's text-similarity-babbage-001 embedding engine to embed all questions;
2) Use cosine similarity between the embeddings to find the solved questions from the same course that are most similar to the unsolved question;
3) Use the most similar questions and their corresponding code as few-shot examples.
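Here is a minimal sketch of that nearest-neighbor selection, assuming each question has already been embedded (function and variable names are illustrative, not from the paper's code):

```python
# Few-shot example selection by embedding similarity. Assumes each
# question has been embedded (the paper used OpenAI's
# text-similarity-babbage-001 engine); names are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_few_shot_examples(unsolved_emb: np.ndarray, solved: list, k: int = 3):
    """solved: list of (question, code, embedding) tuples for questions
    from the same course that were already solved. Returns the k most
    similar (question, code) pairs to prepend as few-shot examples."""
    ranked = sorted(solved,
                    key=lambda item: cosine_similarity(unsolved_emb, item[2]),
                    reverse=True)
    return [(question, code) for question, code, _ in ranked[:k]]
```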
Illustration: Comparison of automatic problem-solving rates of 4 methods
The figure compares the automatic solve rates of four methods: Codex with zero-shot and few-shot learning, and GPT-3 with zero-shot and few-shot learning. As the figure shows, few-shot Codex (the orange bars) achieves an excellent automatic solve rate, outperforming the other three methods in essentially every mathematical area.
The third major innovation of this research is a pipeline that both solves mathematical problems and explains why the solutions are what they are. The figure below shows the pipeline's execution flow on an MIT mathematics course.
Take a problem from 18.01 Single Variable Calculus as an example: given the problem and the automatically generated prefix "Use SymPy", Codex is prompted and outputs a program. Running the program produces an equation with the correct answer. The program is then automatically fed back to Codex as a prompt, which generates an explanation of the code.
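For a concrete picture, here is a plausible example of the kind of SymPy program such a prompt might elicit; this specific problem and code are illustrative, not taken from the paper:

```python
# Illustrative example of a generated SymPy program for an 18.01-style
# problem: "Find the critical points of f(x) = x**3 - 3*x."
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3*x
f_prime = sp.diff(f, x)                      # f'(x) = 3*x**2 - 3
critical_points = sp.solve(sp.Eq(f_prime, 0), x)
print(critical_points)                       # [-1, 1]
```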
2 Beyond Problem Solving
In addition to solving math problems and explaining answers, Codex was also used to generate new questions for each course.
To evaluate the generated questions, the team surveyed MIT students who had taken these courses or courses at the same level, comparing the quality and difficulty of machine-generated and human-written questions.
For each of the six MIT courses, 5 human-written questions and 5 model-generated questions were mixed and presented in random order. For each of the 60 questions, participating students were asked to answer three survey questions:
1) Do you think this question was human-written or machine-generated?
2) Do you think this question is appropriate or inappropriate for the course?
3) On a scale of 1 (easiest) to 5 (hardest), how would you rate the difficulty of this question?
The returned questionnaires are summarized as follows:
- Machine-generated and human-written questions are similar in difficulty.
- Human-written questions are rated as more appropriate for the course than machine-generated ones.
- Human-written questions were rarely misidentified, while machine-generated questions were judged to be machine-generated or human-written in roughly equal measure.
The fact that students could not reliably tell machine-generated questions apart indicates that Codex has reached human-level performance in generating new content.
However, there are still problems the model cannot solve: it cannot answer questions posed as images or in other non-text forms, questions whose solutions require proofs, or computationally intractable problems such as factoring very large numbers. That last category, though, should not appear in any mathematics coursework, because real students cannot answer it either.