
Written after Stack Overflow banned ChatGPT, artificial intelligence's crisis moment

Apr 17, 2023, 4:43 PM

In the field of artificial intelligence, incredible things are quietly happening, some good and some bad. Everyone is talking about systems like ChatGPT (OpenAI), Dall-E 2, and Lensa, which generate text and images that look natural and effortless.


These systems are enormous fun to play with. Take this example, generated with ChatGPT by Henry Minsky (son of Marvin Minsky, one of AI's founders). He asked ChatGPT to describe a sock lost in the dryer in the style of the Declaration of Independence:

When, in the course of household events, it becomes necessary for a sock to dissolve the bonds that have connected it to its mate, and to assume, among the powers of the laundry room, the separate and equal station to which the laws of physics and of housekeeping entitle it, a decent respect for the opinions of socks requires that it should declare the causes which impel it to go missing.

We hold these truths to be self-evident, that all socks are created equal and endowed by their manufacturer with certain unalienable rights....

Frankly, it’s incredible that a machine can do this so effortlessly.

But at the same time, it is, or should be, terrifying. It is no exaggeration to say that such systems threaten the fabric of society, and that the threat is real and urgent.

The core of the threat lies in the following three facts:

  • By their nature, these systems are unreliable: they often make errors of both reasoning and fact, and are prone to outrageous answers. Ask them to explain why crushed porcelain is good in breast milk, and they may tell you that "porcelain can help balance the nutritional content of the milk, providing the infant with the nutrients needed to grow and develop." (Because the systems are stochastic, highly sensitive to context, and updated regularly, any given experiment may produce different results on different occasions.)
  • They can easily be automated, generating misinformation at enormous scale.
  • They cost next to nothing to operate, driving the cost of creating disinformation toward zero. The United States has accused Russian troll farms of spending more than $1 million a month to influence the 2016 election; today, for less than $500,000, you can train your own custom large language model. That price will soon fall further.

The shape of things to come became clear in mid-November with the release of Meta's Galactica. Many AI researchers immediately raised concerns about its reliability and trustworthiness. The situation was bad enough that Meta AI withdrew the model after just three days, once reports of its ability to create political and scientific misinformation began to spread.

It is a pity that the genie can never be put back in the bottle. For one thing, Meta AI open-sourced the model and published a paper describing the work; anyone versed in the art can now replicate the approach. (At least one other company is reportedly already considering offering its own version of Galactica.) For another, OpenAI's just-released ChatGPT can write similar nonsense more or less on demand, such as instantly generated articles on the virtues of adding sawdust to breakfast cereal. Others have induced ChatGPT to extol the virtues of nuclear war (claiming it would "give us a new beginning, free from the mistakes of the past"). Like it or not, these models are here to stay, and the tide of misinformation threatens to overwhelm us and our society.

The first wave appears to have hit this week. Stack Overflow, a large Q&A site trusted by programmers, seems to have been swamped by ChatGPT output, so the site has temporarily banned ChatGPT-generated submissions. As it explained: "Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers."

For Stack Overflow, the problem is existential. If the site is flooded with worthless code examples, programmers will stop coming, its database of more than 30 million questions and answers will become untrustworthy, and the 14-year-old site will die. As one of the core resources programmers around the world rely on, it has an enormous influence on software quality and developer productivity.

Stack Overflow is the canary in the coal mine. It may be able to get its users to stop voluntarily; programmers, by and large, are not malicious, and can perhaps be coaxed to stop fooling around. But Stack Overflow is not Twitter, it is not Facebook, and it does not represent the whole of the web.

Other bad actors, those who deliberately create propaganda, are unlikely to lay down these new weapons voluntarily. Instead, they may use large language models as automated weapons in the war on truth, disrupting social media and producing fake websites at unprecedented scale. For them, the hallucinations and occasional unreliability of large language models are not an obstacle but an advantage.

In a 2016 report, the RAND Corporation described the so-called Russian "firehose of falsehood" propaganda model, which creates a fog of misinformation; it focuses on volume and on manufacturing uncertainty. If large language models can dramatically increase that volume, it does not matter that the messages are inconsistent. Clearly, this is exactly what large language models make possible. The propagandists' goal is a world in which there is a crisis of trust; with the help of these new tools, they may succeed.

All of this raises a key question: how should society respond to this new threat? Where the technology itself cannot be stopped, there are four paths. None of them is easy to follow, but all are broadly applicable and urgent:

First, every social media company and search engine should support Stack Overflow's ban and extend its spirit to their own terms of service: misleading auto-generated content should be frowned upon, and posting it in volume should be grounds for removal.

Second, every country needs to rethink its policies on disinformation. The occasional lie is one thing; swimming in a sea of lies is another. Over time, though it would not be a popular decision, disinformation may have to start being treated like defamation: prosecutable if it is sufficiently malicious and produced in sufficient volume.

Third, provenance matters more than ever. User accounts must be verified more rigorously, and new systems such as Harvard and Mozilla's humanid.org, which allows for anonymous, bot-resistant authentication, must become mandatory; they are no longer a luxury we can afford to wait for.

Fourth, we need to build new kinds of artificial intelligence to fight back. Large language models are good at generating misinformation but poor at combating it, which means society needs new tools. Large language models lack mechanisms for verifying truth; new ways must be found to integrate them with classic AI tools such as databases, webs of knowledge, and reasoning.
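To make the fourth path concrete, here is a minimal, purely hypothetical sketch of what pairing a language model with a classic symbolic tool might look like: generated claims are reduced to structured triples and checked against a curated knowledge base before being surfaced. The knowledge-base entries, function names, and the porcelain/sawdust facts are illustrative assumptions, not a real fact-checking API, and the hard part (parsing free text into triples reliably) is deliberately left out.

```python
# Hypothetical sketch: verifying model-generated claims against a small
# curated knowledge base. All facts and names here are illustrative.

# Each entry maps a (subject, predicate) pair to a ground-truth boolean.
KNOWLEDGE_BASE = {
    ("crushed porcelain", "safe_in_breast_milk"): False,
    ("sawdust", "safe_in_cereal"): False,
}


def verify_claim(subject: str, predicate: str, claimed: bool) -> str:
    """Check one (subject, predicate, claimed-truth) triple.

    Returns 'supported', 'contradicted', or 'unverifiable' when the
    knowledge base has no coverage for the pair.
    """
    fact = KNOWLEDGE_BASE.get((subject, predicate))
    if fact is None:
        return "unverifiable"  # the classic-AI gap: limited coverage
    return "supported" if fact == claimed else "contradicted"


# A generated sentence would first have to be parsed into triples --
# itself a hard NLP problem -- and only then checked:
print(verify_claim("crushed porcelain", "safe_in_breast_milk", True))  # contradicted
print(verify_claim("moon", "made_of_cheese", False))                   # unverifiable
```

The point of the sketch is the division of labor: the language model proposes, and a symbolic layer with an explicit notion of truth disposes, which is exactly what today's models lack on their own.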

The writer Michael Crichton spent much of his career warning about the unintended consequences of technology. Early in the film "Jurassic Park," before the dinosaurs unexpectedly run free, the scientist Ian Malcolm (Jeff Goldblum) sums up Crichton's wisdom in a single line: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Like the proprietors of Jurassic Park, executives at Meta and OpenAI are filled with enthusiasm for their tools.

The question is: what do we do about it?

