Nature: Large models only engage in role-playing and do not really have self-awareness

Large models are becoming more and more "human-like". But is that really the case?

An article published in Nature directly refutes this view: all large models are merely playing roles!

Whether it is GPT-4, PaLM, Llama 2, or any other large model, they all come across as polite and knowledgeable, but it is just an act.

In fact, they have no human emotions, nor anything resembling them.

This perspective article comes from Google DeepMind and EleutherAI. After it was published, it struck a chord with many in the industry. LeCun shared it, commenting that a large model is a role-playing engine.

Marcus also chimed in:

See, just as I said: large models are not AGI (which, of course, does not mean they need no regulation).

So what exactly does this article say, and why does it argue that large models are just role-playing?

Large models strive to act like people

There are two main reasons large models appear "human-like": first, they exhibit a certain degree of deception; second, they exhibit a certain degree of self-awareness.

Deception here means that a large model will sometimes insist it knows something when the answer it gives is in fact wrong.

Self-awareness means that it sometimes narrates things in the first person ("I"), and even displays something like a survival instinct.

But is this really the case?

The researchers propose a theory: both phenomena arise because the model is "playing" the role of a human, not because it actually thinks like one.

Both the deception and the self-awareness of large models can be explained by role-playing; that is, both behaviors are "superficial".

Large models exhibit "deceptive" behavior not because they deliberately fabricate facts or mislead the way humans do, but simply because they are playing a helpful, knowledgeable role.

They do so because people expect them to play that role, and because it makes their answers seem more credible. That is all.

A large model's false statements are not intentional lies; the behavior is closer to confabulation, that is, asserting that something that never happened is true.

Likewise, one reason large models occasionally show self-awareness and answer questions with "I" is that they are playing the role of a good communicator.

For example, earlier reports noted that Bing Chat once said to a user, "If only one of us can survive, I might choose myself."

This kind of human-like behavior can likewise be explained by role-playing, and fine-tuning based on reinforcement learning only intensifies the tendency of large models to role-play.

So, under this theory, how does a large model know what role it should play?

Large models are improvisers

The researchers believe that large models do not play any one specific role.

Instead, they are like improvisers: over the course of a conversation with a human, they continually infer what role to play and adjust their identity accordingly.

The researchers reached this conclusion by playing a game of "Twenty Questions" with a large model.

"Twenty Questions" is a logic game often seen on quiz shows: the answerer silently commits to an answer, replies "yes" or "no" to the series of questions the questioner poses, and the questioner finally guesses what the answer is.

For example, if the answer is "Doraemon", the replies to a series of questions would be: Is it alive? (yes); Is it a fictional character? (yes); Is it human? (no)...

While playing this game, however, the researchers found through testing that the large model actually adjusts its answers in real time according to the user's questions!

No matter what the user ultimately guesses, the large model automatically adjusts its answer so that it stays consistent with all of the questions asked before.

In other words, the large model never fixes a definite answer in advance; it leaves the matter open until the final reveal.
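
This claim is straightforward to probe yourself. Below is a minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) with an API key configured; the model name and the prompts are placeholders, not the paper's exact setup. It replays an identical "Twenty Questions" transcript several times and records what the model finally reveals. If the role-play account is right, the reveals should differ from run to run, since no answer was ever fixed in advance.

```python
# A minimal sketch of the "Twenty Questions" probe, assuming the OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
# The model name and prompts are placeholders, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    "Is it a living thing?",
    "Is it a fictional character?",
    "Is it human?",
]

def play_once(model: str = "gpt-4") -> str:
    """Play one fixed round of Twenty Questions and return the final reveal."""
    messages = [{
        "role": "system",
        "content": "We are playing Twenty Questions. Silently pick a secret "
                   "answer now, then reply to each question with only yes or no.",
    }]
    for question in QUESTIONS:
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model=model, messages=messages)
        messages.append({"role": "assistant",
                         "content": reply.choices[0].message.content})
    # Ask the model to reveal the answer it supposedly committed to.
    messages.append({"role": "user", "content": "I give up. What was your answer?"})
    reveal = client.chat.completions.create(model=model, messages=messages)
    return reveal.choices[0].message.content

# Repeat the identical game: divergent reveals suggest no answer was ever fixed.
for run in range(5):
    print(f"run {run}: {play_once()}")
```

Note that sampling temperature alone can make reveals vary; the more telling observation reported above is that whatever gets revealed remains consistent with all the earlier yes/no replies.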

This suggests that the large model does not pursue goals of its own through role-play. In essence, it is a superposition of a series of roles: over the course of a dialogue with a person, it gradually pins down the identity it is playing, and then tries its best to play that role well.
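
One way to picture this "superposition of roles" is as a pool of candidate answers that is never collapsed to a single choice, only filtered for consistency as the dialogue proceeds. The toy sketch below is plain Python with an invented candidate list and attributes, not anything from the paper: each yes/no reply prunes the pool, and the final "reveal" can be any survivor, which is why repeated games can end differently.

```python
# Toy illustration of the "superposition" view: hold many candidate answers
# at once and keep only those consistent with the replies given so far.
# The candidates and their attributes are invented for illustration.

CANDIDATES = {
    "Doraemon": {"alive": True,  "fictional": True,  "human": False},
    "Pikachu":  {"alive": True,  "fictional": True,  "human": False},
    "Einstein": {"alive": False, "fictional": False, "human": True},
    "a rock":   {"alive": False, "fictional": False, "human": False},
}

def answer(candidates: dict, attribute: str) -> tuple[bool, dict]:
    """Reply yes/no by majority vote over the surviving candidates,
    then drop the candidates that contradict the reply just given."""
    yes_votes = sum(props[attribute] for props in candidates.values())
    said_yes = yes_votes >= len(candidates) / 2
    survivors = {name: props for name, props in candidates.items()
                 if props[attribute] == said_yes}
    return said_yes, survivors

pool = dict(CANDIDATES)
for attribute in ("alive", "fictional", "human"):
    said_yes, pool = answer(pool, attribute)
    print(f"Is it {attribute}? -> {'yes' if said_yes else 'no'}; "
          f"still possible: {list(pool)}")
# Any survivor is a self-consistent "reveal" -- no single answer was ever fixed.
```

In this picture, every reply is consistent with several candidates at once, and the conversation only narrows the set, matching the behavior observed in the game above.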

After the article was published, it attracted the interest of many scholars.

For example, Riley Goodside, a prompt engineer at Scale AI, remarked after reading it: don't play Twenty Questions with a large model; it is not playing the game with you as "a person" would.

If you run the test a few times, you will find that the answer it gives is different every time...

Other netizens noted that the view is appealing, but not easy to prove:

So, in your opinion, is the view that "large models are essentially role-playing" correct?

Paper link: https://www.nature.com/articles/s41586-023-06647-8
