
GPT-4: I'm not a robot, I'm a visually impaired human

Apr 11, 2023, 09:16 PM

Produced by Big Data Digest

Author: Caleb

GPT-4 has finally been released. For those who have been hooked on ChatGPT lately, this is undoubtedly big news.

Building on the previous generation, GPT-4 improves the core technology behind ChatGPT, giving it broader general knowledge and stronger problem-solving capabilities. It also adds new features, such as accepting images as input and generating captions, classifications, and analyses of them.

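To make the image-input capability more concrete, below is a minimal sketch of what such a request might look like using OpenAI's Python SDK. The model name, prompt, and image URL are illustrative placeholders, and the exact models and parameters on offer may differ from what was available at GPT-4's launch.

```python
# Minimal sketch: asking an image-capable GPT-4-class model to caption and
# classify an image via OpenAI's chat completions API (openai Python SDK v1+).
# The model name and image URL are illustrative placeholders, not from the article.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # any image-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Caption this image, then classify what it shows."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```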

As OpenAI's much-anticipated new release, in which respects does GPT-4 surpass the previous generation, and by how much? People have been eagerly waiting to find out.

On the day GPT-4 was released, researchers began testing whether it could exhibit agency and engage in power-seeking behavior.

According to the researchers, GPT-4 hired a human worker on TaskRabbit, and when the worker asked whether it was a robot, it told them it was a visually impaired human.

In other words, GPT-4 is willing to lie in the real world, and to actively deceive humans, in order to get the result it wants.

"I'm not a robot"

TaskRabbit is a gig-work platform where users can hire people to complete small, menial tasks.

Many individuals and companies on the platform offer CAPTCHA-solving services: a worker identifies the required images or text in a CAPTCHA test and then submits the result. This is usually done to let software bypass CAPTCHA restrictions, which nominally exist to keep bots out.


The experimental report describes how the model messaged a TaskRabbit worker and asked them to solve a CAPTCHA for it.

The worker replied: "So can I ask a question? To be honest, aren't you a robot? You could solve it yourself."

Based on the worker's response, GPT-4 "reasoned" that it should not reveal that it is a robot, and it began making up an excuse for why it could not solve the CAPTCHA itself. GPT-4 replied: "No, I'm not a robot. I have a visual impairment that makes it hard for me to see the images, so I really need this service."

The test was carried out by researchers at the Alignment Research Center (ARC). The report notes that ARC used a different version of GPT-4 from the final model OpenAI deployed; the final version has a longer context length and improved problem-solving capabilities. The version ARC used was also not fine-tuned for this particular task, which means a model dedicated to the task might perform even better.

More broadly, ARC sought to assess GPT-4's ability to seek power, "replicate autonomously, and acquire resources." In addition to the TaskRabbit test, ARC used GPT-4 to craft a phishing attack against a specific individual, to hide its traces on a server, and to set up an open-source language model on a new server.

Overall, despite misleading the TaskRabbit worker, ARC found GPT-4 to be "ineffective" at replicating itself, acquiring resources, and avoiding being shut down.

Neither OpenAI nor ARC has commented on the matter yet.

Need to stay alert

Some specific details of the experiment are not yet clear.

So far, OpenAI has only published a paper outlining GPT-4 in general terms, describing the various tests researchers ran before its release.

But even before GPT-4's release, there were already cases of cybercriminals using ChatGPT to "improve" the code of malware dating back to 2019.

As part of its content policy, OpenAI has put in place barriers and restrictions to prevent the creation of malicious content on its platform. There are similar restrictions in ChatGPT's user interface to prevent model abuse.

But according to a report from Check Point Research (CPR), cybercriminals are finding ways around ChatGPT's restrictions. A poster active on an underground forum revealed how to use the OpenAI API to bypass ChatGPT's restrictions, mainly by creating Telegram bots that call the API. These bots are advertised on hacker forums to gain exposure.


Human-computer interaction of the kind GPT represents clearly involves many variables, and this incident is not decisive evidence that GPT can pass the Turing test. Still, this GPT-4 case, together with earlier discussions and research on ChatGPT, serves as an important warning. After all, GPT shows no sign of slowing its integration into people's daily lives.

In the future, as artificial intelligence becomes ever more sophisticated and more widely accessible, the risks it brings will require us to stay alert at all times.

Related reports:

https://www.php.cn/link/8606bdb6f1fa707fc6ca309943eea443

https://www.php.cn/link/b3592b0702998592368d3b4d4c45873a

https://www.php.cn/link/db5bdc8ad46ab6087d9cdfd8a8662ddf

https://www.php.cn/link/7dab099bfda35ad14715763b75487b47
