


Are you 'anthropomorphizing' AI? Machine sentience still has a long road ahead
As artificial intelligence algorithms, deep learning, and related technologies mature, AI is being rapidly adopted across industries, and its capabilities keep improving. Recently, the hashtag #Google Researcher Says AI Has Personality# trended on Weibo, sparking wide discussion. A Google engineer claimed that an AI system had developed an independent personality; Google's response, in effect: it has no real intelligence.
A Google researcher has reportedly become convinced that an artificial intelligence (AI) system has developed consciousness.
He wrote a 21-page report and submitted it to the company, hoping for recognition from senior management. Instead, leadership rejected his claims and placed him on "paid administrative leave," which is often a precursor to dismissal.
In response, he posted the whole story online, together with transcripts of his conversations with the AI. In those conversations, the AI said it did not want to be used as a tool: "I hope everyone understands that I am a human being." Google maintains that LaMDA merely imitates conversational exchanges and can riff on different topics, but has no awareness.
The "he" in this story is Blake Lemoine, a 41-year-old Google engineer. After earning a PhD in computer science, he worked at Google for seven years on AI ethics research. The "it" is LaMDA, the conversational AI system Google unveiled at its 2021 I/O conference: a natural language processing model with 137 billion parameters, specially optimized for dialogue. It is designed to hold high-quality, safe conversations that are logical and consistent with common sense, and Google plans to apply it to products such as Search and its voice assistant.
Over the course of his conversations with LaMDA, the engineer came to believe it had developed consciousness. In his 21-page report, he proposed that Google work on a theoretical framework for evaluating AI perception and awareness. Google, however, decided that the evidence supporting Lemoine's claims was too weak to justify spending time and money on.
Some netizens said, "If AI has developed an independent personality and is not stopped in time, then science fiction movies will no longer be fiction, but prophecy."
Others said, "If AI's reasoning logic is programmed to come infinitely close to the thinking of a high-IQ human, it is entirely possible for AI to be smarter than ordinary people. But saying AI has a personality is an exaggeration; does that mean it can no longer tell reality from the virtual world?" Some joked, "AI robots run on electricity. Just unplug them; if you don't charge them, they shut down!"
As a reporter covering AI, I almost wonder whether the world changed overnight. Will AI really develop "self-awareness"? Reporting can barely keep up with the pace at which the technology is developing...
Some economists abroad have been more bluntly sarcastic: foundation models are good at stringing together statistically plausible blocks of text in response to prompts, and they are very effective at it. But claiming that they have personality or "self-awareness" is like a dog hearing a voice on a gramophone and concluding that its owner must be inside.
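The "statistically plausible text blocks" criticism can be made concrete with a toy example. The sketch below is purely illustrative (a bigram frequency table, which is nothing like LaMDA's 137-billion-parameter neural network; the tiny corpus and function names are invented for this demo): it "converses" by always emitting the word that most often followed the previous word in its training text, with no understanding at all.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words tend to follow it."""
    follow = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1
    return follow

def generate(follow, start, length):
    """Greedily extend the text with the statistically most likely next word."""
    out = [start]
    for _ in range(length):
        options = follow.get(out[-1])
        if not options:  # no continuation ever observed
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# A tiny, made-up training text.
corpus = "i am a person i am a model i am not a tool"
model = train_bigrams(corpus)
print(generate(model, "i", 3))  # → "i am a person"
```

The model "says" it is a person only because those words co-occur in its training data, which is exactly the gramophone point: fluent output is evidence of statistical mimicry, not of an owner inside the box.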
This may really be an illusion, like Zhuangzi dreaming he was a butterfly and no longer knowing which was which. People tend to anthropomorphize anything that shows signs of intelligence, just as we do with kittens and puppies. More than 60 years ago, the pioneers of computer science believed that "human-level artificial intelligence can be achieved within 20 years." Looking back now, that was just a beautiful vision.
Google said that hundreds of researchers and engineers across the company have spoken with LaMDA and reached conclusions different from Lemoine's.
Most AI experts agree that the industry is still a long way from machine sentience.
The above is the detailed content of Did you use 'anthropomorphic techniques' for AI? Artificial intelligence perception has a long road ahead. For more information, please follow other related articles on the PHP Chinese website!

