


ChatGPT and its peers will not take over human work anytime soon: they are error-prone, and AI does not work for free.
The release of large-scale models such as ChatGPT has left many people stressed and worried that AI will soon take over their jobs. OpenAI itself has published a study suggesting that ChatGPT's impact spans all income levels, and that high-income jobs may face the greater risk. What are the facts?
Should we automate all jobs, even the satisfying ones?
This is one of several questions recently raised by the Future of Life Institute, which has called for a moratorium on large-scale artificial intelligence experiments. More than 10,000 people, including Elon Musk, Steve Wozniak and Andrew Yang, have signed the initiative. There may be some hype in it, but it still sounds serious. Setting aside whether automating all work is desirable, is it even possible?
Douglas Kim, a researcher at the MIT Connection Science Institute, said he thinks the real obstacle is that the general-purpose AI capabilities we have seen from OpenAI and Google Bard are in a position similar to the early days of the Internet or of cloud infrastructure services: they are not yet ready, as noted, for widespread use by hundreds of millions of workers.
Even researchers can’t keep up with the pace of AI innovation
Douglas Kim points out that while revolutionary technologies can spread quickly, they usually do not become widely available until they have proven useful and easy to use. He noted that generative AI will need specific business applications to move beyond its core audience of early adopters.
Matthew Kirk, head of AI at Augment, holds a similar view: "I think what is happening in the AI industry resembles the early days of the Internet, when opinions were all over the place and there were no standards. It takes time and cooperation for people to settle on the standards they will follow. Even something as mundane as measuring time is very complex."
Standardization is a pain point in the development of artificial intelligence. The methods used to train the models and fine-tune the results are confidential, making fundamental questions about how they work difficult to answer. OpenAI has been touting GPT-4's ability to pass numerous standardized tests — but does the model truly understand the tests, or is it simply trained to reproduce the correct answers? What does this mean for its ability to handle novel tasks? Researchers cannot agree on the answer, or even on the methods that would settle it.
Chart: standardized test scores of GPT-3.5 compared with GPT-4. OpenAI's GPT-4 scores well on many standardized tests, but does it truly understand them, or has it simply been trained on the correct answers?
Even if standards can be agreed upon, the physical hardware needed to design and run widely used AI-powered tools built on large language models (LLMs) such as GPT-4, or on other generative AI systems, can also be a challenge. Lucas A. Wilson, head of global research infrastructure at Optiver, believes the AI industry is in an arms race to produce the most complex LLMs possible, which in turn rapidly increases the computational resources required to train them.
Like humans, AI doesn't work for free
At the same time, developers must find ways to work around limitations. Training a powerful large language model (LLM) from scratch can open unique opportunities, but that option is only available to large, well-funded organizations. It is much cheaper to build a service on top of an existing model (for example, OpenAI prices API access to ChatGPT-3.5 Turbo at roughly $0.0027 per 1,000 English words). But when an AI-driven service becomes popular, costs still climb. Either way, rolling out AI that can be used without restriction is unrealistic, and developers will be forced to make difficult choices.
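To make that arithmetic concrete, here is a minimal sketch of how per-word pricing adds up. The price is the figure cited above; the usage numbers and function are hypothetical assumptions for illustration, not anything from the article:

```python
# Rough estimate of the monthly bill for a service built on a metered LLM API.
# PRICE_PER_1000_WORDS uses the ~$0.0027 per 1,000 English words figure cited above;
# the usage pattern below is a hypothetical assumption for illustration only.

PRICE_PER_1000_WORDS = 0.0027  # USD


def monthly_api_cost(users: int, requests_per_user_per_day: int,
                     words_per_request: int, days: int = 30) -> float:
    """Estimate the monthly API bill in USD for the assumed usage pattern."""
    total_words = users * requests_per_user_per_day * words_per_request * days
    return total_words / 1000 * PRICE_PER_1000_WORDS


# Example: 10,000 users, 20 requests a day, about 500 words per request.
print(f"${monthly_api_cost(10_000, 20, 500):,.2f}")  # roughly $8,100 per month
```

Fractions of a cent per thousand words sounds negligible, but the bill scales linearly with usage, which is one reason popular AI-driven services meter or cap what users can generate.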
Hilary Mason, CEO and co-founder of Hidden Door, a startup building an AI platform for creating narrative games, said: "Generally speaking, startups built on AI should be very cautious about dependencies on any specific vendor's application programming interface (API). We can also build architectures that are not tied to particular GPU hardware, but that takes considerable experience."
Hidden Door is developing software that helps users create unique narrative experiences with artificial intelligence. The screenshot shows its AI-powered tool for generating narrative games, in which users can choose from a range of included characters and prompts.
Most services built on generative AI put a fixed cap on how much content they can generate each month. Fees for these professional services can raise costs for businesses, slowing the pace at which people's work tasks are automated. Even OpenAI, with its massive resources, limits paid ChatGPT users based on current load: as of this writing, the cap is 25 GPT-4 queries every 3 hours. That is a serious problem for anyone who wants to rely on ChatGPT for their work.
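As a rough illustration of what living with such a cap looks like in practice, here is a minimal sketch of a client-side guard for a rolling "25 queries per 3 hours" limit. The cap value comes from the article; the class and method names are hypothetical, not part of any official client:

```python
# Minimal client-side guard for a "25 queries per 3 hours" style cap.
# The cap value is the one cited in the article; the structure and names
# are illustrative assumptions, not any vendor's actual API.
import time
from collections import deque


class QueryBudget:
    def __init__(self, max_queries: int = 25, window_seconds: int = 3 * 60 * 60):
        self.max_queries = max_queries
        self.window_seconds = window_seconds
        self._timestamps = deque()  # times of recent queries

    def try_acquire(self) -> bool:
        """Return True if another query is allowed right now, else False."""
        now = time.monotonic()
        # Drop timestamps that have fallen out of the rolling window.
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_queries:
            return False
        self._timestamps.append(now)
        return True


budget = QueryBudget()
if budget.try_acquire():
    pass  # safe to send the next GPT-4 query
else:
    print("Cap reached; wait before sending more queries.")
```

A guard like this does not lift the limit, but it surfaces the cap to the rest of the application instead of letting it show up as surprise failures in the middle of a task.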
Developers of AI-powered tools also face a challenge as old as computers themselves: designing a good user interface. A powerful LLM that can handle many tasks should be an unparalleled tool, but if the person using it doesn't know where to start, its capabilities won't matter. Kirk noted that while ChatGPT is easy to use, the open-endedness of interacting with an AI via chat can prove overwhelming when users need to focus on a specific task.
Kirk said: "I know from past experience that making tools completely open-ended tends to confuse users rather than help them. You can think of it as an endless hallway of doors: most people get confused and stuck there. We still have a lot of work to do to figure out which doors are best for users." Mason made a similar observation, adding: "Just as ChatGPT is essentially a UX optimization of GPT-3, I think we have only just begun to invent the UI metaphors needed to use AI models effectively in products."
Training to use AI is a job in itself
Hallucination, a problem peculiar to LLMs, has long been controversial, and it seriously hinders efforts to build AI tools for sensitive and important work. LLMs have an uncanny ability to generate original text, tell jokes, and invent stories about fictional characters. But when precision and accuracy are critical to the task, that skill becomes a liability, because LLMs often present non-existent sources or incorrect statements as fact.
Kim said: "In some highly regulated industries (banking, insurance, health care), it is difficult for specific company functions to reconcile very strict data privacy and anti-discrimination rules with other regulatory requirements. In these regulated industries, you can't have an AI make the kind of mistakes you could get away with in a course paper."
Businesses may soon be scrambling to hire employees with expertise in AI tools. The AI safety and research company Anthropic recently made headlines with a job ad for a prompt engineer and librarian, specifying that, among other duties, the candidate would build "a library of high-quality prompts or prompt chains to accomplish a variety of tasks." The salary: $175,000 to $335,000.
However, Wilson sees a tension between the expertise required to use AI tools effectively and the efficiencies AI promises to deliver.
“How do you recruit people to do the new job of training LLMs, freeing up employees who are already focused on more complex or abstract work tasks?” Wilson asked. "I haven't seen a clear answer yet."
Despite these issues, augmenting your work with artificial intelligence may still be worthwhile. That was clearly true of the computer revolution: while many people needed training to use tools like Word and Excel, few would argue that typewriters or paper ledgers were better alternatives. The future the Future of Life Institute's letter worries about, in which all jobs, even satisfying ones, are replaced by automation, is still some way off. But the artificial intelligence revolution is beginning now, and the picture will keep unfolding over the next ten years.
