


GPT-4 passed the Turing test with a 54% pass rate! New UCSD study: humans cannot tell GPT-4 from humans
Can GPT-4 pass the Turing test?
When a powerful enough model is born, people often use the Turing test to measure the intelligence of this LLM.
Recently, researchers from the Department of Cognitive Science at UCSD discovered that:
In the Turing test, people simply cannot tell the difference between GPT-4 and humans!
Paper address: https://arxiv.org/pdf/2405.08007
In the Turing test, GPT-4 was judged to be human 54% of the time.
According to the experimental results, this is the first time a system has been empirically shown to pass an "interactive" two-person Turing test.
Researcher Cameron R. Jones recruited 500 volunteers and divided them into five roles: four kinds of evaluators, who interrogated GPT-4, GPT-3.5, ELIZA, or a human respectively, and a fifth role that "played" the human witness, hiding on the other side of the screen and waiting to be identified by the evaluators.
The following is an excerpt from the game. Can you tell which dialog box is human?
Figure 1: Part of the conversation between the human interrogator (green) and the witness (grey)
In fact, among these four conversations, one was with a human witness, and the rest were with artificial intelligence.
The first controlled Turing test
In the past 74 years, people have made many attempts at the Turing test, but very few controlled experiments have been conducted.
The famous Loebner Prize is an annual competition, yet from its first edition in 1990 until 2020, no system ever passed the test.
A recent large-scale study found that humans could identify a range of LLMs with 60% accuracy in a 2-minute online conversation.
To date, there have been no "controlled" experimental demonstrations where any machine has passed the test.
To test whether people can see through the deception of current AI systems, the researchers used GPT-4 to conduct a randomized, controlled two-person Turing test.
The test requirements are simple: human participants hold a 5-minute conversation with either a human or an AI, then judge whether the interlocutor is human.
As mentioned at the beginning, the researchers evaluated three systems: GPT-4, GPT-3.5, and ELIZA.
For the two LLMs, the researchers adapted the best-performing prompts from exploratory studies of GPT-4 and GPT-3.5 to instruct the models on how to respond to messages.
Researchers say a core challenge in interpreting the Turing test is that a system's superior performance may stem from two reasons:
- The first is an excellent imitation of humans.
- The second is that the judge is too gullible.
The latter situation is often called the ELIZA effect: participants will humanize even simple systems.
The ELIZA system is very familiar to everyone.
It was a simple rules-based chatbot from the 1960s, but users found themselves attributing human characteristics to it as well.
Therefore, the researchers used ELIZA as a baseline in the experiment, both to empirically quantify the ELIZA effect and to serve as a manipulation check.
So, what are the specific results of this Turing test?
GPT-4 pass rate is 54%
First look at an important indicator: pass rate.
The researchers define the pass rate as the proportion of games in which the witness is judged to be human by the evaluator.
GPT-4's pass rate was 54%, exceeding GPT-3.5 (50%) and the ELIZA baseline (22%), but lower than that of human witnesses (67%).
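As a minimal sketch of this definition (with made-up verdicts, not the study's data), the pass rate is just the fraction of games in which the witness was judged human:

```python
def pass_rate(verdicts):
    """Fraction of games in which the witness was judged to be human.

    verdicts: list of booleans, True = evaluator judged "human".
    """
    return sum(verdicts) / len(verdicts)

# Hypothetical verdicts for illustration only: 54 of 100 games judged
# "human" reproduces GPT-4's reported 54% pass rate.
verdicts = [True] * 54 + [False] * 46
print(pass_rate(verdicts))  # 0.54
```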
Pass rates (left) and evaluator confidence (right). Each point in the right-hand plot represents one game.
This result is actually not surprising and is basically consistent with the assumptions in the initial stage of the research.
Before the experiment, the researchers made several hypotheses. The first was that humans' ability to detect GPT-4 would not exceed chance (50%).
To test this, the researchers fit an intercept-only logistic regression model to predict the pass rate.
The intercept did not differ significantly from 0 (z = 0.14, p = 0.482), indicating that GPT-4's pass rate is statistically indistinguishable from 50%. A Bayesian model estimated a 95% credible interval for the GPT-4 intercept of −0.24 to 0.55 (equivalent to a pass rate of 44% to 63%).
This means that when humans face GPT-4, the probability of successful discrimination is about the same as tossing a coin!
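These numbers hang together arithmetically. A quick check (standard logistic-regression algebra, not code from the paper): in an intercept-only model the intercept is the log-odds of the pass rate, and applying the logistic function to the interval endpoints recovers the reported 44%–63% range:

```python
from math import log, exp

def logit(p):
    """Probability -> log-odds."""
    return log(p / (1 - p))

def sigmoid(x):
    """Log-odds -> probability (inverse of logit)."""
    return 1 / (1 + exp(-x))

# A 54% pass rate corresponds to an intercept near zero on the log-odds scale.
print(round(logit(0.54), 2))     # 0.16

# The reported 95% interval for the intercept, -0.24 to 0.55, maps back
# to pass rates of roughly 44% and 63%.
print(round(sigmoid(-0.24), 2))  # 0.44
print(round(sigmoid(0.55), 2))   # 0.63
```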
The results also confirmed the researchers' second and third hypotheses: the human pass rate was significantly higher than GPT-4's (z = 2.42, p = 0.017), and GPT-4's pass rate was significantly higher than ELIZA's (z = 4.11, p < 0.001).
However, contrary to the researchers' expectations, GPT-4 and GPT-3.5 performed almost identically (z = 0.497, p = 0.619).
In terms of "pretending to be human", it's a tie.
Also unexpectedly, participants who reported a better understanding of how LLMs work were no more successful at identifying them (z = −0.023, p = 0.869).
In exploratory analyses, the researchers found that age had a negative effect on interrogator accuracy (z = −2.24, p = 0.025), but other demographic variables (including gender, education level, and frequency of interaction with chatbots) had no significant effect.
Why is this the result? Let us analyze the reasons behind it.
To understand what influences evaluators' decisions, the researchers categorized the strategies evaluators used and the reasons they gave for their decisions.
The most common strategy (36%) was small talk: chatting with the witness about personal details or daily activities.
The second most common category was social and emotional probing (25%), for example asking about the witness's opinions and experiences and watching for a sense of humor.
Other strategies focused on knowledge and reasoning (13%), including questions about mathematics, logic, and general knowledge.
Strategies used by evaluators (left) and their average accuracy (right)
To compare the effectiveness of each strategy, the researchers calculated the average accuracy achieved with it.
Evaluators most often reached the correct answer when they asked about human experience (75% accuracy), posed logic and math questions (78%), or directly "accused" the other party of being an AI (80%).
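Computing per-strategy average accuracy is a simple group-by. A sketch with hypothetical records (the strategy labels follow the article; the outcomes are invented for illustration, not the study's data):

```python
from collections import defaultdict

# (strategy, evaluator_was_correct) pairs -- hypothetical data.
games = [
    ("human experience", True), ("human experience", True),
    ("human experience", True), ("human experience", False),
    ("logic & math", True), ("logic & math", False),
    ("small talk", True), ("small talk", False),
]

counts = defaultdict(lambda: [0, 0])  # strategy -> [correct, total]
for strategy, correct in games:
    counts[strategy][0] += int(correct)
    counts[strategy][1] += 1

accuracy = {s: c / n for s, (c, n) in counts.items()}
print(accuracy["human experience"])  # 0.75
```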
In the games, an AI usually gave itself away because the evaluator felt it was forced into a role, was either too casual, or lacked personality.
ELIZA, meanwhile, was easily exposed, mainly because of its unnatural responses and lack of contextual awareness.
Both the strategies the volunteers used and the reasons they gave for their decisions show that they paid more attention to language style and social-emotional factors than to more traditional intellectual categories (knowledge and reasoning).
This seems to reflect an underlying assumption of the volunteers—social intelligence is the most inimitable human characteristic for AI.
Turing test, passed
How to determine when the system has passed the Turing test?
Turing predicted in his 1950 paper that, by the end of the century, AI systems would play the imitation game so well that after a 5-minute exchange the examiner would have no more than a 70% chance of making a correct identification.
This prediction has led many researchers to treat a 30% pass rate for the AI as the target of discussion.
However, this benchmark seems arbitrary, and it is unclear whether Turing intended it as a definition of success.
In fact, a 50% baseline is more reasonable, because it better demonstrates that human evaluators are no better than chance at identifying the AI.
This definition is particularly relevant to the goal of discovering whether users can reliably identify other humans during online interactions.
In the researchers' pre-registered analysis, this issue was addressed by using ELIZA as a manipulation-check baseline.
Only if the analysis showed that ELIZA's pass rate was below chance while GPT-4's was not could the model be judged to have passed the test.
According to this definition, GPT-4 clearly passed this version of the Turing test.
In addition, the participants' confidence scores and stated reasons show that they were not guessing randomly: judgments that GPT-4 was human were made with 73% average confidence (see Figure 2).
Netizen discussion
Some netizens said that if we want to equate the Turing test with intelligence, it is not a good test. Still, the fact that AI can almost always "fool" humans is somewhat concerning.
Others said they doubted this kind of test: because GPT-4 outperforms most humans, it should be easy to tell who is human and who is the AI.
The researchers said this is indeed a problem they encountered; for example, GPT-4's knowledge base is "too rich" and it commands too many languages. They explicitly prompted the model to avoid giving this away, which was effective to a certain extent.
The above is the detailed content of "GPT-4 passed the Turing test with a 54% pass rate! New UCSD study: humans cannot tell GPT-4 from humans". For more information, please follow other related articles on the PHP Chinese website!
