Table of Contents
Opening
Introduction to Artificial Intelligence
What is GPT-3 and where does it come from?
How to use GPT-3
Submitting a Prompt
Preset
Model
Davinci
Curie
Babbage
Ada
Settings
History
Fees and Tokens
Conclusion
Translator introduction

GPT-3: Artificial intelligence that can write

Apr 11, 2023

Translator | Cui Hao

Reviewer | Sun Shujuan

Opening

Artificial Intelligence (AI) may be in its early stages of development, but it has the potential to revolutionize the way humans interact with technology.

Introduction to Artificial Intelligence

When it comes to artificial intelligence, there are currently two main views. Some believe that AI will eventually surpass human intelligence, while others believe that AI will always serve humanity. There's one thing both sides can agree on: Artificial intelligence is developing at an ever-increasing pace.

A simple, general description is that artificial intelligence is the process of programming a computer to make decisions on its own. This can be achieved in a variety of ways, but most commonly through the use of algorithms. An algorithm is a set of rules or instructions that can be followed to solve a problem. In the case of artificial intelligence, algorithms are used to teach computers how to make decisions.

In the past, artificial intelligence was mainly used for simple tasks, such as playing chess or solving math problems. Now it is being used for more complex tasks such as facial recognition, natural language processing, and even autonomous driving. As artificial intelligence continues to develop, we don't know what capabilities it will have in the future. As AI capabilities rapidly expand, it is important to understand what it is, how it works, and its potential impact.

The benefits brought by artificial intelligence are huge. With the ability to make decisions on its own, AI has the potential to make countless industries more efficient and to create opportunities for all kinds of people. In this article, we will talk about GPT-3.

What is GPT-3 and where does it come from?

GPT-3 was created by OpenAI, a pioneering AI research company based in San Francisco. They define their goal as "ensuring that artificial intelligence benefits all of humanity." Their vision for artificial intelligence is clear: not an AI limited to specialized tasks, but one that can perform a wide variety of tasks, like humans.

A few months ago, OpenAI released its new language model, GPT-3, to all users. GPT-3 stands for Generative Pre-trained Transformer 3, and it generates text from a premise called a prompt. Simply put, it has high-level "auto-complete" capabilities. For example, you only need to provide two or three sentences on a given topic, and GPT-3 will do the rest. You can also hold conversations with it, and the answers GPT-3 gives will take into account the context of previous questions and answers.
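Under the hood, the Playground talks to OpenAI's Completions endpoint. As a rough sketch only (the helper function and its defaults are illustrative, not an official client), the request body for such a call looks something like this:

```python
import json

# Endpoint documented by OpenAI for (legacy) text completions.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="davinci",
                             max_tokens=64, temperature=0.7):
    """Assemble the JSON body for a completion call.

    Field names follow the OpenAI API documentation; the helper
    itself is a hypothetical convenience, not part of any SDK.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,   # upper bound on generated tokens
        "temperature": temperature, # 0 = deterministic, 1 = most creative
    }

body = json.dumps(build_completion_request("Write two sentences about AI."))
```

Sending this body (with an `Authorization: Bearer <API key>` header) is all the Playground's Submit button does behind the scenes.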

It should be emphasized that each answer GPT-3 provides is only one possibility, not the only possible answer. Furthermore, if you test the same premise multiple times, it may give a different or even contradictory answer. It is a model that returns an answer based on what has been said so far, connecting that to everything it knows in order to produce the most plausible response. This means it is under no obligation to answer with real data, which is something we must take into account. Users can still supply relevant working data, but GPT-3 needs to weigh that data against the contextual information. The more comprehensive the context, the more reasonable the answer, and vice versa.

OpenAI's GPT-3 language model is pre-trained, and that training involved studying enormous amounts of information from the Internet. GPT-3 was fed publicly available books, the entire content of Wikipedia, and millions of web pages and scientific papers. In short, it incorporates the most important human knowledge we have published on the web throughout history.

After ingesting and analyzing this information, the language model encoded its connections in a roughly 700 GB model spread across 48 GPUs of 16 GB each. To put this dimension in perspective, OpenAI's previous model, GPT-2, was 40 GB in size and analyzed 45 million web pages. The difference is huge: GPT-2 has 1.5 billion parameters, while GPT-3 has 175 billion parameters.

Let's do a test, shall we? I asked GPT-3 how to define itself, and the result is as follows:

[Image: GPT-3's answer describing itself in the Playground]

How to use GPT-3

The only thing we have to do to use GPT-3 and test it is go to their website, register, and add some personal information. During the process you will be asked what you will use the artificial intelligence for; for these examples, I chose the "Personal Use" option.

I would like to point out that, in my experience, it works better in an English context. That doesn't mean it doesn't work well in other languages; in fact, it does very well in Spanish, but I prefer the results it gives in English, which is why the tests and results from here on are shown in English.

GPT-3 gives us a free gift when we sign up. Once you register with your email and phone number, you get $18 to use completely free, with no need to enter a payment method. Although it may not seem like much, $18 is actually quite a lot. To give you an idea, I have been testing the AI for five hours and it has only cost me $1. I will explain the prices later so we can understand this better.

Once we enter the website, we have to go to the Playground section. This is where all the magic happens.

[Image: the OpenAI Playground interface]

Submitting a Prompt

First of all, the most eye-catching thing on the page is the big text box. This is where we input prompts to the AI (remember, these are our requests and/or instructions). It is as simple as entering something, in this case a question, and clicking the Submit button below to have GPT-3 answer us and write what we have asked for.

[Image: a prompt and the completion GPT-3 wrote for it]

Preset

Presets are ready-made configurations for different tasks that can be loaded at any time. They can be found in the upper right corner of the text box. A few are shown there, and clicking "More Examples" opens a new screen with the entire list. When a preset is selected, the contents of the text area are replaced with its default text, and the settings in the right sidebar are updated as well. For example, to use the "Grammar Correction" preset, we should follow the following structure for best results.

[Image: the "Grammar Correction" preset in the Playground]

Model

The large data set used to train GPT-3 is the main reason it is so powerful. However, bigger doesn't always mean better. For that reason, OpenAI provides four main models. There are other models as well, but we are advised to use the latest versions, which is what we are using here.

The available models are called Davinci, Curie, Babbage, and Ada. Of the four, Davinci is the largest and most capable, as it can handle any task the other engines perform.

We will give an overview of each model and the types of tasks it suits best. Keep in mind that while the smaller engines were not trained on as much data, they are still general-purpose models that are perfectly viable and convenient for certain tasks.

Davinci

As mentioned above, Davinci is the most capable model and can do everything the other models can do, usually with fewer instructions. Davinci can solve logic problems, determine cause and effect, understand the intent of a text, produce creative content, explain character motivations, and handle complex summarization tasks.

Curie

This model attempts to balance computing power and speed. It can do anything Ada or Babbage can do, but it can also handle more complex classification tasks and more nuanced tasks such as summarization, sentiment analysis, chatbot applications, and question answering.

Babbage

Slightly more capable than Ada, but not as fast. Babbage can perform all the same tasks as Ada, while also handling slightly more complex classification tasks, making it well suited for semantic search tasks that score how well a document matches a search query.

Ada

Finally, this is usually the fastest and cheapest model. It is best suited for less nuanced tasks, such as parsing text, reformatting text, and simpler classification tasks. The more context you provide Ada, the better it performs.

Settings

Besides the model, there are other parameters we can adjust to get the best response to our prompts.

One of the most important settings controlling the output of the GPT-3 engine is Temperature. This setting controls the randomness of the generated text. At a value of 0, the engine is deterministic, meaning that for a given input it will always produce the same output. At a value of 1, the engine takes the most risks and uses the most creativity.
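A toy sketch can make this concrete. The function below is purely illustrative (the real engine works over its own vocabulary and scores, not these made-up numbers): dividing the scores by the temperature makes the distribution sharper as the temperature approaches 0, until the highest-scoring token always wins.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Pick a token index from raw scores, reshaped by temperature.

    Toy illustration of the Temperature setting: 0 is deterministic
    (argmax), higher values flatten the distribution and add risk.
    """
    if temperature == 0:
        # Deterministic: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max: numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)          # weighted random draw
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(logits) - 1
```

At temperature 0 the same prompt always yields the same choice; at 1 the draw follows the model's full probability distribution.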

You may have noticed in your own tests that GPT-3 sometimes stops in the middle of a sentence. To control the maximum amount of text it is allowed to generate, you can use the "max-length" setting, specified in tokens. We will explain what a token is later.

The "Top P" parameter also controls the randomness and creativity of GPT-3's text, but in this case by restricting sampling to the tokens (words) that fall within a cumulative probability range we set (0.1 would be the top 10%). The OpenAI documentation recommends adjusting only one of Temperature and Top P, so when using one, make sure the other is set to 1.
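A minimal sketch of that filtering step, under the usual "nucleus sampling" interpretation (the function and the probability lists are made up for illustration): keep only the smallest set of highest-probability tokens whose combined probability reaches the Top P threshold, then sample among them.

```python
def top_p_filter(probs, top_p):
    """Return the indices kept by a Top P (nucleus) cutoff.

    Tokens are taken in descending probability order until their
    cumulative probability reaches top_p; everything else is discarded.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:   # nucleus is complete
            break
    return sorted(kept)
```

With `top_p=0.1` over probabilities `[0.7, 0.2, 0.1]`, only the first token survives, which is why a low Top P behaves almost deterministically.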

On the other hand, we have two parameters for penalizing the answers GPT-3 gives. One is the "frequency penalty", which controls the model's tendency to repeat itself: it reduces the probability of generating a word in proportion to how many times that word has already appeared in the prediction.

The second is the "presence penalty", which encourages the model to make novel predictions: if a word has already appeared in the predicted text, its probability is reduced by a fixed amount. Unlike the frequency penalty, the presence penalty does not depend on how often the word has appeared in past predictions.
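As I understand the API documentation, the two penalties can be sketched as adjustments to each token's score before sampling (the function and numbers below are illustrative): the frequency penalty scales with the repetition count, while the presence penalty is a one-off deduction for any token that has appeared at all.

```python
def apply_penalties(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Adjust token scores using frequency and presence penalties.

    counts[i] is how many times token i has already appeared in the
    generated text. Sketch of the adjustment described in the API docs.
    """
    adjusted = []
    for logit, count in zip(logits, counts):
        logit -= count * frequency_penalty                        # grows per repeat
        logit -= (1.0 if count > 0 else 0.0) * presence_penalty   # flat, once seen
        adjusted.append(logit)
    return adjusted
```

A token repeated three times is penalized three times as hard by the frequency penalty, but only once by the presence penalty.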

Finally, we have the "Best of" parameter, which generates several answers to a query; the Playground then picks the best one to show us. GPT-3 warns that generating several complete answers to a prompt consumes more tokens.

History

To round out this section, the third icon next to the "Submit" button displays all the requests we have submitted to GPT-3. Here you can find the prompts behind the best-performing responses.

Fees and Tokens

OpenAI also provides a way to keep using the platform once the free $18 credit is exhausted. It is not a monthly subscription or anything like that: the price is tied directly to usage. In other words, you are charged per token. In this context, a token is the unit the output is billed in, and can be anything from a single character to a whole word. It is therefore difficult to know in advance exactly how much each use of the AI will cost, but given that tokens usually cost fractions of a cent, a little experimentation quickly shows how much everything costs.
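As an illustration only (the per-1,000-token prices below are the ones I understood to be listed at the time of writing and may well change, so always check OpenAI's pricing page), the cost of a call can be estimated like this:

```python
# Assumed per-1,000-token prices in dollars; verify against the
# official pricing page before relying on them.
PRICE_PER_1K = {"davinci": 0.02, "curie": 0.002, "babbage": 0.0005, "ada": 0.0004}

def estimate_cost(tokens, model="davinci"):
    """Rough dollar cost for a given number of billed tokens."""
    return tokens / 1000 * PRICE_PER_1K[model]
```

At these rates, a 1,000-token Davinci completion costs about two cents, which is why five hours of casual testing only used a dollar of the free credit.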

Although OpenAI shows us only a dozen or so examples of GPT-3 usage, we can see the tokens spent on each one, which helps us better understand how billing works.

These are the versions and their respective prices.

[Image: table of the models and their respective prices]

To get an idea of how much a certain number of words might cost, or of how the tokenization works, there is a tool for exactly that, called the Tokenizer.

It tells us that the GPT family of models processes text using tokens, which are common sequences of characters found in text. The models understand the statistical relationships between these tokens and excel at producing the next token in a sequence.
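For a quick estimate without opening the tool, a commonly cited rule of thumb is roughly four characters of English text per token. This is only a heuristic (the exact count depends on the tokenizer), but it is good enough for ballpark cost planning:

```python
def rough_token_count(text):
    """Very rough token estimate: ~4 characters per token of English.

    Heuristic only; for exact counts use OpenAI's Tokenizer tool.
    A non-empty request is never fewer than one token.
    """
    return max(1, round(len(text) / 4))
```

So a 400-character prompt is on the order of 100 tokens, and with Davinci's pricing that works out to a fraction of a cent.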

Finally, here is a simple example of how much the same prompt would cost us.

[Image: the Tokenizer showing the tokens and cost of an example]

Conclusion

From my point of view, GPT-3 is something users must learn to use correctly. GPT-3 does not necessarily give correct data, which means that if you want to use it for work, answering questions, or homework, you have to provide good context for the answers it gives you to come close to the results you want.

Some people worry about whether GPT-3 will change education, or whether some of today's writing-related jobs will disappear because of it. In my humble opinion, this is going to happen: sooner or later, we will all be replaced by artificial intelligence. This example concerns writing, but the same is happening in programming, painting, audio, and more.

On the other hand, it opens up many more possibilities for many, many jobs and projects, both personal and professional. For example, have you ever wanted to write a horror story? That can be done from the same examples list that contains the grammar-correction preset.

Having said all that, what I want to stress is that we are at an early stage of artificial intelligence. It still has a lot of growing and improving to do, but that doesn't mean it hasn't already arrived. As we learn to use artificial intelligence, we also need to keep training it to get the best responses.

Translator introduction

Cui Hao, 51CTO community editor and senior architect, has 18 years of software development and architecture experience, including 10 years of distributed architecture experience.

Original title: GPT-3 Playground: The AI That Can Write for You, by Isaac Alvarez
