


Is ChatGPT going to kill the data annotation industry? 20 times cheaper than humans and more accurate
Unexpectedly, the first people to be displaced by the rise of AI may be the very people who help train it.
Many NLP applications require manual annotation of large amounts of data for a variety of tasks, especially training classifiers or evaluating the performance of unsupervised models. Depending on the scale and complexity, these tasks may be performed by crowdsourced workers on platforms such as MTurk as well as trained annotators such as research assistants.
We know that large language models (LLMs) can exhibit "emergent" abilities once they reach a certain scale: they acquire new capabilities that were not foreseen at smaller scales. As the model driving the latest wave of AI, ChatGPT has exceeded expectations on many tasks, including labeling datasets, effectively helping to train models like itself.
Recently, researchers from the University of Zurich demonstrated that ChatGPT outperforms crowdworkers on platforms such as MTurk, and even trained research assistants, on several annotation tasks, including relevance, stance, topic, and frame detection.
Additionally, the researchers did the math: ChatGPT costs less than $0.003 per annotation — roughly 20 times cheaper than MTurk. These results show the potential of large language models to greatly improve the efficiency of text classification.
Paper link: https://arxiv.org/abs/2303.15056
Research Details
Many NLP applications require high-quality annotated data, especially for training classifiers or evaluating the performance of unsupervised models. For example, researchers sometimes need to filter noisy social media data for relevance, assign texts to different topic or conceptual categories, or measure their emotional stance. Regardless of the specific method used for these tasks (supervised, semi-supervised, or unsupervised learning), accurately labeled data is required to build a training set or use it as a gold standard to evaluate performance.
The usual way to handle this is to recruit research assistants or use crowdsourcing platforms like MTurk. When OpenAI built ChatGPT, it likewise outsourced the labeling of harmful content to a data annotation firm in Kenya and ran extensive annotation work before the official launch.
This report from the University of Zurich in Switzerland explores the potential of large language models (LLMs) for text annotation tasks, focusing on ChatGPT, released in November 2022. It shows that ChatGPT, used zero-shot (i.e., without any additional training), outperforms MTurk annotation on classification tasks at a small fraction of the cost of manual labor.
The researchers used a sample of 2,382 tweets collected in a previous study. The tweets had been labeled by trained annotators (research assistants) for five different tasks: relevance, stance, topic, and two frame-detection tasks. In the experiment, the researchers submitted the tasks to ChatGPT as zero-shot classification and, in parallel, to crowdworkers on MTurk, then evaluated ChatGPT's performance against two benchmarks: the accuracy of the human workers on the crowdsourcing platform, and the accuracy of the research assistant annotators.
It was found that on four out of five tasks, ChatGPT's zero-shot accuracy was higher than MTurk's. ChatGPT's intercoder agreement exceeded that of both MTurk workers and trained annotators on all tasks. Furthermore, in terms of cost, ChatGPT is much cheaper than MTurk: the five classification tasks cost about $68 on ChatGPT (25,264 annotations) and about $657 on MTurk (12,632 annotations).
That puts ChatGPT's cost per annotation at about $0.003, or three-tenths of a cent, roughly 20 times cheaper than MTurk, and with higher quality. At that price it becomes feasible to annotate far more samples or to create large training sets for supervised learning. Based on these tests, 100,000 annotations would cost approximately $300.
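The cost comparison follows directly from the totals reported above. A quick sketch of the arithmetic (all figures taken from the article):

```python
# Cost comparison, reproduced from the article's reported totals:
# five tasks cost ~$68 on ChatGPT (25,264 annotations, four runs per tweet)
# versus ~$657 on MTurk (12,632 annotations).
chatgpt_total, chatgpt_annotations = 68.0, 25_264
mturk_total, mturk_annotations = 657.0, 12_632

chatgpt_per = chatgpt_total / chatgpt_annotations  # ~$0.0027 per annotation
mturk_per = mturk_total / mturk_annotations        # ~$0.052 per annotation
ratio = mturk_per / chatgpt_per                    # ~19x cheaper

print(f"ChatGPT: ${chatgpt_per:.4f}/annotation, "
      f"MTurk: ${mturk_per:.4f}/annotation, ratio: {ratio:.1f}x")
```

The exact ratio works out to about 19x, consistent with the article's rounded "20 times cheaper" figure.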
The researchers say that while further work is needed to understand how ChatGPT and other LLMs perform in broader contexts, these results suggest that they have the potential to change the way researchers annotate data, and to disrupt part of the business model of platforms like MTurk.
Experimental Process
The researchers used a dataset of 2,382 tweets, manually annotated in previous studies on tasks related to content moderation. Specifically, trained annotators (research assistants) constructed gold standards for five conceptual tasks with varying numbers of categories: the relevance of a tweet to the issue of content moderation (relevant/irrelevant); stance on Section 230, a key piece of U.S. Internet legislation that forms part of the Communications Decency Act of 1996; topic identification (six categories); a first set of frames (content moderation as a problem, a solution, or neutral); and a second set of frames (fourteen categories).
The researchers then performed these exact same classifications using ChatGPT and crowdworkers recruited on MTurk. Four sets of annotations were collected from ChatGPT. To explore the effect of ChatGPT's temperature parameter, which controls the degree of randomness in the output, tweets were annotated both at the default value of 1 and at 0.2, which implies less randomness. For each temperature value, the researchers performed two sets of annotations in order to compute ChatGPT's intercoder agreement.
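Intercoder agreement between two annotation runs can be computed as simple percent agreement: the share of tweets that received the same label in both runs. A minimal sketch (the function name is illustrative, not from the paper):

```python
def percent_agreement(run_a: list, run_b: list) -> float:
    """Share of items labeled identically by two annotation runs
    (e.g., two ChatGPT passes at the same temperature)."""
    if len(run_a) != len(run_b):
        raise ValueError("runs must cover the same items")
    matches = sum(a == b for a, b in zip(run_a, run_b))
    return matches / len(run_a)

# Example: two runs agree on 3 of 4 tweets -> 0.75
agreement = percent_agreement(
    ["relevant", "irrelevant", "relevant", "relevant"],
    ["relevant", "irrelevant", "irrelevant", "relevant"],
)
```

Running the same prompt twice and comparing the runs this way is what yields the agreement figures (over 95% at temperature 0.2) discussed below.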
For the expert benchmark, the study recruited two political science graduate students to annotate the tweets for all five tasks. For each task, the coders were given the same set of instructions and asked to annotate the tweets independently, task by task. To compute the accuracy of ChatGPT and MTurk, the comparison only considered tweets on which both trained annotators agreed.
For MTurk, the goal was to select the best available group of workers, specifically by screening for those classified by Amazon as "MTurk Masters", with an approval rating above 90%, and located in the United States.
This study uses the "gpt-3.5-turbo" version of the ChatGPT API to classify tweets. Annotation took place between March 9 and March 20, 2023. For each annotation task, the researchers intentionally avoided adding any ChatGPT-specific prompts such as “let’s think step by step” to ensure comparability between ChatGPT and MTurk crowdworkers.
After testing several variations, the researchers settled on feeding tweets to ChatGPT one by one with a prompt of the form: "Here is the tweet I selected, please mark it for [task-specific instructions (e.g., one of the topics in the instructions)]." In addition, four ChatGPT responses were collected per tweet, and a new chat session was created for each tweet to ensure that ChatGPT's results were not influenced by annotation history.
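The setup described above can be sketched as code. This is not the authors' actual script: the function names, the exact prompt wording, and the relevance instruction are illustrative; only the model name ("gpt-3.5-turbo"), the per-tweet fresh session, and the temperature values come from the article.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI chat endpoint

def build_request(tweet: str, instruction: str, temperature: float = 0.2) -> dict:
    """Build one chat request. A fresh single-message conversation per tweet
    mirrors the paper's new-chat-session-per-tweet setup, so no annotation
    history can influence the result."""
    prompt = (
        f'Here is the tweet I selected: "{tweet}" '
        f"Please mark it for {instruction}."
    )
    return {
        "model": "gpt-3.5-turbo",
        "temperature": temperature,  # 0.2 = less random; 1 is the default
        "messages": [{"role": "user", "content": prompt}],
    }

def annotate(tweet: str, instruction: str, api_key: str,
             temperature: float = 0.2) -> str:
    """Send one zero-shot annotation request and return the raw label text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(tweet, instruction, temperature)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()

# Hypothetical usage (requires a valid API key):
# label = annotate("Example tweet text", "relevance to content moderation "
#                  "(relevant/irrelevant)", api_key="sk-...")
```

Calling `annotate` four times per tweet, at temperatures 1 and 0.2, reproduces the experimental design: two runs per temperature, used to compute intercoder agreement.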
Figure 1. ChatGPT's zero-shot text annotation performance compared with high-scoring annotators on MTurk. ChatGPT is more accurate than MTurk on four of the five tasks.
Among the four tasks where ChatGPT has the advantage in the figure above, its edge is slight in one case (relevance), where its performance is very similar to MTurk's. In the other three cases (frames I, frames II, and stance), ChatGPT outperforms MTurk by a factor of 2.2 to 3.4. Furthermore, considering the difficulty of the tasks, the number of classes, and the fact that the annotations are zero-shot, ChatGPT's accuracy is generally more than adequate.
For relevance, which has two categories (relevant/irrelevant), ChatGPT's accuracy is 72.8%, while for stance, with three categories (positive/negative/neutral), accuracy is 78.7%. Accuracy decreases as the number of categories increases, although the inherent difficulty of each task also plays a role.

Regarding intercoder agreement, Figure 1 shows that ChatGPT's performance is very high, exceeding 95% on all tasks when the temperature parameter is set to 0.2. These values are higher than for any humans, including the trained annotators. Even at the default temperature of 1 (which implies more randomness), intercoder agreement always exceeds 84%. The relationship between intercoder agreement and accuracy is positive but weak (Pearson correlation coefficient: 0.17). Although this correlation is based on only five data points, it suggests that lower temperature values may be more suitable for annotation tasks, since they appear to improve the consistency of the results without significantly reducing accuracy.
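The reported correlation between agreement and accuracy is an ordinary Pearson coefficient over the five task-level data points. A minimal sketch of that computation in pure Python (the function name and sample values are illustrative, not the paper's data):

```python
from math import sqrt

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: perfectly linear data gives r = 1.0
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

With only five points, as the article notes, any such coefficient should be read as suggestive rather than conclusive.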
It should be emphasized that these tests were demanding for ChatGPT. Content moderation is a complex topic that requires significant resources. Aside from stance, the conceptual categories were developed for specific research purposes. Moreover, some tasks involve a large number of categories, yet ChatGPT still achieved high accuracy.
Using models to annotate data is nothing new. In computer science research involving large-scale datasets, it is common to label a small number of samples manually and then scale up with machine learning. But now that it outperforms humans, we may come to place more trust in ChatGPT's judgments in the future.