Will synthetic data drive the future of AI/ML training?
There’s no doubt that collecting real data to train artificial intelligence and machine learning (AI/ML) models is time-consuming, expensive, and often fraught with risk. An even more common problem is that too little data, or biased data, can lead organizations astray. But what if you could simply generate new data instead, so-called synthetic data?
It sounds unlikely, but that’s exactly what Synthesis AI plans to do. The company has raised $17 million in Series A financing from venture capital firms including 468 Capital, Sorenson Ventures, Strawberry Creek Ventures, Bee Partners, PJC, iRobot Ventures, Boom Capital and Kubera Venture Capital.
The round is a strong vote of confidence, and the company plans to use the funding to expand its research and development into mixing real and synthetic data.
Yashar Behzadi, CEO of Synthesis AI, said in a statement: "Synthetic data is at an inflection point in adoption, and our goal is to further develop the technology and drive a paradigm shift in how computer vision systems are built. The industry will soon be fully designing and training computer vision models in virtual worlds, enabling more advanced and ethical artificial intelligence."
But what is synthetic data?
Synthetic data is created by humans rather than collected from the real world. Currently, many applications focus on visual data, such as data collected from computer vision systems. Still, there's no practical reason why synthetic data can't be created for other use cases, such as testing applications or improving algorithms for detecting fraud. They are somewhat like highly structured digital twins of physical records.
By providing massive, realistic data sets at scale, data scientists and analysts can theoretically skip the data collection process and go directly to testing or training.
That’s because much of the cost of creating a real-world dataset lies beyond just collecting the raw data. Take computer vision and self-driving cars as an example: automakers and researchers can attach cameras, radar and lidar sensors to vehicles to collect data, but the raw data on its own means nothing to AI/ML algorithms. An equally daunting challenge is manually labeling the data with the contextual information that helps the system make better decisions.
To put the challenge in context: think about a short route you drive regularly, with all its stop signs, intersections, parked cars and pedestrians, and then imagine having to label every potential hazard along the way. It’s a daunting task.
The core advantage of synthetic data is that, in theory, it can produce perfectly labeled datasets at the scale required to properly train AI/ML applications. That means data scientists can create data before any real-world data exists, or test their algorithms in situations where real data is difficult to obtain. Continuing with the self-driving car example, data scientists can create synthetic data to train cars to drive in harsh conditions, such as snow-covered roads, without having to send drivers north or into the mountains to collect data manually.
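To make the "perfect labels for free" idea concrete, here is a minimal sketch in Python. It renders trivial synthetic "scenes" as NumPy arrays (a bright square on a noisy dark background) and records the exact bounding box of each object at generation time, so every image arrives with exact ground truth by construction. The rendering, sizes and array shapes are illustrative assumptions, not a description of any real pipeline such as Synthesis AI’s.

```python
# Minimal sketch (illustrative only): a synthetic dataset whose labels are
# exact by construction, because the generator decides where each object goes.
import numpy as np

rng = np.random.default_rng(42)

def make_sample(size=64, obj=16):
    """Render one synthetic scene and return (image, bounding_box)."""
    img = np.zeros((size, size), dtype=np.float32)   # dark background
    x = int(rng.integers(0, size - obj))             # object position, chosen by us
    y = int(rng.integers(0, size - obj))
    img[y:y + obj, x:x + obj] = 1.0                  # bright square "object"
    img += rng.normal(0.0, 0.05, img.shape)          # sensor-style noise
    return img, (x, y, x + obj, y + obj)             # exact ground-truth box

# Build a dataset of arbitrary size with zero manual annotation.
images, boxes = zip(*(make_sample() for _ in range(1000)))
print(len(images), "images,", len(boxes), "perfect bounding boxes")
```

Real systems swap the toy renderer for a 3D engine, but the design point is the same: because the generator places every object itself, the manual annotation step disappears.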
However, synthetic data presents a chicken-and-egg problem, since it can only be created using… more data and more AI/ML algorithms. Generators start from a "seed" dataset and use it as the baseline for their synthetic creations, meaning the results will only ever be as good as the data you start with.
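As a hedged illustration of that caveat, the toy generator below estimates a mean and covariance from a deliberately skewed seed sample and draws new synthetic rows from the fitted distribution. The group sizes and feature values are made up for this example; the point is only that whatever the seed under-represents, the synthetic data will under-represent too.

```python
# Toy illustration (assumptions, not a production method): synthetic rows are
# sampled from statistics fitted on a seed dataset, so any skew in the seed
# carries straight through to the synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seed: 90% of rows come from group A, only 10% from group B.
seed_a = rng.normal(loc=[1.0, 2.0], scale=0.5, size=(900, 2))
seed_b = rng.normal(loc=[4.0, 0.5], scale=0.5, size=(100, 2))
seed = np.vstack([seed_a, seed_b])

# "Train" a naive generator: estimate mean and covariance from the seed.
mean = seed.mean(axis=0)
cov = np.cov(seed, rowvar=False)

# Sample synthetic data from the fitted distribution.
synthetic = rng.multivariate_normal(mean, cov, size=5000)

# The naive generator smooths the minority group away: few synthetic rows land
# anywhere near group B's region, so that group stays under-represented.
near_b = np.mean(np.linalg.norm(synthetic - np.array([4.0, 0.5]), axis=1) < 1.0)
print(f"share of synthetic rows near group B: {near_b:.1%}")
```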
(Intangible) benefits
What data scientist or researcher wouldn’t benefit from a seemingly endless supply of generated data? That core benefit, the ability to skip manually collecting real-world data, is just one of the ways synthetic data can accelerate AI/ML applications.
Because analysts and data scientists have tight control over the seed data, and can even go the extra mile to incorporate diversity or work with outside consultants to uncover and correct bias, they can hold their datasets to a higher standard. Synthesis AI, for example, is developing a system that monitors driver status and deliberately includes a diverse range of faces in its computer-generated synthetic dataset to ensure real-world applications work for everyone.
Privacy is another potential win. A company that drives millions of miles collecting real-world data for its self-driving cars is gathering a lot of data that many people consider personal, especially their faces. Big companies like Google and Apple have found ways to avoid these problems in their mapping software, but their approaches aren’t feasible for small AI/ML teams that just want to test their algorithms.
"Companies are also grappling with ethical issues related to model bias and consumer privacy in human-centered products. It's clear that building the next generation of computer vision requires a new paradigm," said the company's CEO Yashar Behzadi, founder and CEO, told the media. While synthetic data does rely on a seed to get started, it can be adapted and modified to help train AI/ML applications in edge cases that are difficult or dangerous to capture in real life. The companies behind self-driving cars hope to get good at identifying objects or people that are only partially visible, such as a stop sign hidden behind a truck or a pedestrian standing between two cars darting onto the road.
Given these wins, and despite concerns about the chicken-and-egg problem of encoding bias into synthetic data, Gartner predicts that by 2024, 60% of the data used to develop AI and analytics products will be generated synthetically. Gartner also expects much of that new data to focus on fixing predictive models when the historical data they were built on loses relevance, or when assumptions based on past experience break down.
But there will always be a need to collect some real-world data, so we are still a long way from real data gathering being made obsolete by unbiased synthetic stand-ins for ourselves.