
Hot debate in Silicon Valley: Will AI destroy humanity?

May 30, 2023, 11:18 PM

News dated May 22: As new technologies such as generative artificial intelligence become the latest craze in the technology world, the debate over whether artificial intelligence will destroy humanity has intensified. One prominent tech leader has warned that AI could take over the world; other researchers and executives say such claims are science fiction.

At a U.S. congressional hearing last week, Sam Altman, CEO of artificial intelligence startup OpenAI, plainly reminded everyone that the technology his company has released carries safety risks.

Altman warned that artificial intelligence technologies such as the ChatGPT chatbot could lead to problems such as disinformation and malicious manipulation, and called for regulation.

He said that artificial intelligence could "cause serious harm to the world."

Altman's testimony to Congress comes as the debate over whether artificial intelligence will dominate the world moves into the mainstream, with divisions growing across Silicon Valley and among the people working to push the technology out into society.

The once-fringe idea that machine intelligence might suddenly surpass humans and decide to destroy them is gaining support from more and more people, and some leading scientists now believe the time it will take for computers to surpass and then control humans has shortened.

But many researchers and engineers say that, although plenty of people worry about the emergence of a killer artificial intelligence like Skynet in the movie "Terminator," that worry is not grounded in good science. Instead, it distracts from the very real problems the technology already causes, including the ones Altman described in his testimony. Today's AI is muddying copyright, exacerbating concerns about digital privacy and surveillance, and could be used to improve hackers' ability to breach network defenses.

Google, Microsoft and OpenAI have all publicly released breakthrough artificial intelligence technologies that can hold complex conversations with users and generate images from simple text prompts, and the debate over malevolent AI has heated up accordingly.

“This is not science fiction,” said Geoffrey Hinton, the godfather of artificial intelligence and a former Google employee. Hinton said artificial intelligence smarter than humans could emerge within five to 20 years, compared with his previous estimate of 30 to 100 years.

"It's as if aliens have landed on Earth or are about to," he said. "We really can't accept it because they speak fluently, they're useful, they write poetry and they answer boring letters. But they're really aliens."

Still, within the big tech companies, many engineers who work closely with the technology do not think AI replacing humans is something we need to worry about right now.

Sara Hooker, director of Cohere for AI, a research lab run by artificial intelligence startup Cohere, and a former Google researcher, said: "Among researchers actively working in this field, far more people are focused on current, real-world risks than on whether the technology poses a risk to human survival."

There are plenty of real risks right now. Bots trained on harmful content deepen prejudice and discrimination; the vast majority of AI training data is in English and drawn mainly from North America or Europe, which could push the internet even further away from the languages and cultures of most of humanity; the bots also routinely fabricate false information and present it as fact, and in some cases even spiral into endless conversational loops attacking users. Beyond that, the ripple effects of the technology remain unclear: every industry is bracing for the disruption or change AI may bring, and even high-paying jobs such as lawyers and doctors could be replaced.

Some people also believe that artificial intelligence could harm humans in the future, or even come to control society in some way. While those existential risks appear more severe, many argue they are harder to quantify and less tangible.

"There is a group of people who think these are just algorithms. They are just repeating what they see online." Google CEO Sundar Pichai said in an interview in April this year: "There is also a view that these algorithms are emerging with new properties, creativity, reasoning and planning capabilities." "We need to treat this matter carefully."

This debate stems from a decade of continuous breakthroughs in machine learning, the branch of computer science that produces software able to extract novel insights from large amounts of data without explicit instructions from humans. The technology is ubiquitous in applications ranging from social media algorithms to search engines to image recognition programs.
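
As a rough illustration of what "extracting insight from data without explicit instructions" means, here is a minimal sketch (purely illustrative and not tied to any system mentioned in this article; it assumes the open-source scikit-learn and NumPy libraries) in which a clustering algorithm discovers groups in data it was never told about:

```python
# A minimal sketch of machine learning extracting structure without explicit rules:
# a clustering algorithm groups points by similarity; no human wrote the grouping rules.
import numpy as np
from sklearn.cluster import KMeans

# Toy data: two loose clouds of 2-D points (e.g., measurements of two kinds of items).
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

# The model is never told which point belongs to which group; it infers that from the data.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_[:5], model.labels_[-5:])  # discovered group assignments
print(model.cluster_centers_)                 # discovered group centers
```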

Last year, OpenAI and several other small companies began releasing tools built on a newer machine learning technique: generative artificial intelligence. Trained on trillions of photos and sentences scraped from the web, these so-called large language models can generate images and text from simple prompts, hold complex conversations with users, and write computer code.
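
To give a concrete sense of what "generating text from a simple prompt" looks like, here is a minimal sketch (an illustrative assumption on our part, using the small, openly available GPT-2 model through the Hugging Face transformers library rather than the commercial systems named above):

```python
# Minimal text-generation sketch: a small open model continues a prompt word by word.
from transformers import pipeline

# GPT-2 is a small, public predecessor of the large commercial models discussed here.
generator = pipeline("text-generation", model="gpt2")

prompt = "The debate over artificial intelligence in Silicon Valley"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```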

Anthony Aguirre, executive director of the Future of Life Institute, said big companies are racing to build ever-smarter machines with little oversight. Founded in 2014 to study risks to society, the institute began researching the possibility of artificial intelligence destroying humanity in 2015, with funding from Tesla CEO Elon Musk. If AI develops better reasoning abilities than humans, it will try to take control of itself, Aguirre said, and that is something people should worry about now, just like the problems that already exist.

He said: "How to restrain them from deviating from the track will become more and more complicated." "Many science fiction novels have already made it very specific."

In March of this year, Aguirre helped write an open letter calling for a six-month moratorium on training new artificial intelligence models. It gathered some 27,000 signatures, including those of Yoshua Bengio, a senior AI researcher who won computer science's highest award in 2018, and Emad Mostaque, CEO of one of the most influential artificial intelligence startups.

Musk is undoubtedly the most prominent of the signatories. He helped create OpenAI and is now busy building an AI company of his own, recently investing in the expensive computing hardware needed to train AI models.

Musk has argued for years that humans should be more careful about the consequences of developing super-intelligent AI. In an interview during Tesla's annual shareholder meeting last week, he said he originally funded OpenAI because he felt Google co-founder Larry Page was "cavalier" about the threat of artificial intelligence.

Quora, the American counterpart of Zhihu, is also developing its own artificial intelligence model. Its CEO, Adam D'Angelo, did not sign the open letter; asked about it, he said, "People have different motivations when making this proposal."

OpenAI CEO Altman also declined to endorse the letter. He said he agreed with parts of it, but that its overall lack of "technical detail" made it the wrong way to regulate artificial intelligence. At last Tuesday's hearing on artificial intelligence, Altman said his company's approach is to release AI tools to the public early, so that problems can be identified and fixed before the technology becomes more powerful.

But debate over killer robots is growing in the technology world, and some of the harshest criticism comes from researchers who have spent years studying the technology's flaws.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-authored a paper with University of Washington scholars Emily M. Bender and Angelina McMillan-Major. They argued that the growing ability of large language models to imitate humans heightens the risk that people will believe the models are sentient.

Instead, they argued, these models should be understood as "stochastic parrots": extremely good at predicting which word comes next in a sentence based purely on probability, without any understanding of what they are saying. Other critics have called large language models "autocomplete" or a "knowledge enema."
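
To make the "predicting the next word from probability alone" idea concrete, here is a toy sketch (purely illustrative; real large language models use neural networks trained on vast corpora, not a word-count table like this):

```python
# Toy "stochastic parrot": predict the next word purely from how often
# word pairs appeared in the training text, with no understanding of meaning.
import random
from collections import defaultdict, Counter

training_text = "the model predicts the next word the model repeats the text it has seen"
words = training_text.split()

# Count which word follows which (a simple bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word` in training."""
    counts = next_word_counts[word]
    if not counts:  # the word was never seen with a successor
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation of a prompt, one probable word at a time.
random.seed(0)
sentence = ["the"]
for _ in range(6):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))
```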

They documented in detail how large language models can generate sexist and other harmful content. Gebru said the paper was suppressed by Google; after she insisted on publishing it, Google fired her. A few months later, the company fired Mitchell as well.

Four co-authors of this paper also wrote a letter in response to the open letter signed by Musk and others.

"It is dangerous to distract ourselves with fantasies of an AI utopia or apocalypse," they wrote. "Instead, we should focus on the very real and very present exploitative practices of the companies developing this technology, which are rapidly concentrating power and exacerbating social inequality."

Google declined to comment on Gebru’s firing at the time, but said there were still many researchers working on responsible and ethical artificial intelligence.

"There is no doubt that modern artificial intelligence is powerful, but that does not mean that they pose an imminent threat to human survival," said Hooker, director of artificial intelligence research at Cohere.

Currently, much of the discussion about artificial intelligence breaking away from human control focuses on how it can quickly overcome its own limitations, like Skynet in "The Terminator."

"Most technology, and the risks it carries, evolves over time," Hooker said. "Most risks are exacerbated by the limitations the technology has today."

Last year, Google fired artificial intelligence researcher Blake Lemoine, who had said in an interview that he firmly believed Google's LaMDA model was sentient. At the time, Lemoine was roundly rebuked by much of the industry; a year later, many people in the technology community have begun to come around to his view.

Hinton, the former Google researcher, said he changed his mind about the technology's potential dangers only recently, after working with the latest artificial intelligence models. He had asked the programs complex questions that, in his view, required the models to broadly understand his requests rather than simply predict likely answers from their training data.

In March of this year, Microsoft researchers said that while studying OpenAI's latest model, GPT-4, they observed "sparks of artificial general intelligence," a term for AI that can think independently the way humans do.

Microsoft has spent billions of dollars partnering with OpenAI to develop its Bing chatbot. Skeptics argue that Microsoft, which has much to gain, is building its public image around an AI technology widely portrayed as more advanced than it actually is.

In their paper, the Microsoft researchers argue that the technology developed a spatial and visual understanding of the world from nothing but the text it was trained on: GPT-4 can draw a unicorn on request and describe how to stack random objects, including eggs, on top of one another so that the eggs don't break.

The Microsoft research team wrote: "Beyond its mastery of language, GPT-4 can solve a variety of difficult new problems spanning mathematics, programming, vision, medicine, law, psychology and other fields, without needing any special prompting." They concluded that in many of these areas, the AI's capabilities are comparable to those of humans.

But one of the researchers acknowledged that, although AI researchers have tried to develop quantitative standards for evaluating machine intelligence, defining "intelligence" remains very tricky.

He said, "They are all problematic or controversial."
