How to inject simple dummy data at a large scale in MySQL
Introduction
Ever found yourself in a situation where you needed a large amount of dummy data for testing, but didn't want to spend hours writing scripts or manually inserting records? Or perhaps you've been curious about how to leverage new features in MySQL 8.0 to streamline your database tasks? Well, you're in for a treat! In this post, we'll explore how to use Common Table Expressions (CTEs) to effortlessly generate and insert vast amounts of dummy data into your MySQL database.
Imagine needing to populate a table with a million hash values for load testing or performance benchmarking. Sounds like a nightmare, right? Not anymore! With the advent of CTEs in MySQL 8.0, you can achieve this in a matter of seconds. Let's dive into how this works and how you can use this powerful feature to simplify your data generation needs.
TL;DR
Common Table Expressions (CTEs), introduced in MySQL 8.0, make it easy to generate large amounts of simple dummy data. For example, to insert 1 million dummy rows into a table called hashes that stores hash values, follow these steps:
Table Definitions
First, create the table:
CREATE TABLE hashes (
    id INT PRIMARY KEY AUTO_INCREMENT,
    hash CHAR(64)
);
Query Execution
Set the session variable to allow a higher recursion depth:
SET SESSION cte_max_recursion_depth = 1000000;
Then, execute the CTE to insert 1 million rows:
INSERT INTO hashes(hash)
WITH RECURSIVE cte (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM cte WHERE n < 1000000
)
SELECT SHA2(n, 256) FROM cte;
This method leverages a recursive common table expression to generate the dummy data.
Understanding CTEs
A Common Table Expression (CTE) is a named temporary result set that can be referenced multiple times within a single statement. CTEs are particularly useful for simplifying complex queries and improving readability.
Syntax Breakdown
Setting Recursion Depth
SET SESSION cte_max_recursion_depth = 1000000;
The cte_max_recursion_depth system variable sets the maximum number of recursion levels allowed for a CTE. The default is 1,000, so to recurse deeper you need to raise it. Here, we set it to 1 million for the current session.
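You can confirm the limit before and after changing it. A quick sanity check might look like this (the default value shown assumes an unmodified server):

```sql
-- Check the current session limit (1000 on a default install)
SELECT @@SESSION.cte_max_recursion_depth;

-- Raise it for this session only; other connections keep their own value
SET SESSION cte_max_recursion_depth = 1000000;
```

Because this is a SET SESSION statement, the change disappears when the connection closes, which is usually what you want for a one-off data load.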
The CTE Query
INSERT INTO hashes(hash)
WITH RECURSIVE cte (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM cte WHERE n < 1000000
)
SELECT SHA2(n, 256) FROM cte;
Let's break down this query:
WITH RECURSIVE cte (n): This starts the CTE definition. cte is the name of the temporary result set, and n is the column.
SELECT 1: This is the non-recursive part of the CTE, serving as the starting point (initial value).
UNION ALL SELECT n + 1 FROM cte WHERE n < 1000000: This is the recursive part, which increments the value of n by 1 until it reaches 1,000,000.
SELECT SHA2(n, 256) FROM cte: This final part of the query selects the SHA-256 hash of each n value (MySQL converts n to its string form before hashing), generating the dummy data for insertion.
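As a sanity check on what each generated row will contain: MySQL casts n to its decimal string form before hashing, so SHA2(1, 256) is the SHA-256 digest of the character '1'. A small Python sketch using the standard hashlib module reproduces the same values the CTE would insert:

```python
import hashlib

def mysql_sha2_row(n: int) -> str:
    """Reproduce SHA2(n, 256): MySQL hashes the decimal string form of n."""
    return hashlib.sha256(str(n).encode("utf-8")).hexdigest()

# The first few values the CTE would insert into hashes.hash
for n in range(1, 4):
    print(n, mysql_sha2_row(n))
```

Each digest is 64 hexadecimal characters, which is why the hash column is declared as CHAR(64).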
How It Works
The CTE recursively generates numbers from 1 to 1,000,000. For each number, it computes the SHA-256 hash and inserts it into the hashes table. This approach is efficient and leverages MySQL's recursive capabilities to handle large data volumes seamlessly.
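Before committing to a million rows, it can be worth dry-running the same pattern at a small scale as a plain SELECT. A sketch like the following previews the first five rows the INSERT would produce, without touching the table:

```sql
WITH RECURSIVE cte (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM cte WHERE n < 5
)
SELECT n, SHA2(n, 256) FROM cte;
```

If the output looks right, swapping the small bound for 1000000 and prepending the INSERT INTO hashes(hash) clause gives the full load.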
Processing Speed
Verification Environment
To understand the impact of this feature, I used a Gitpod Enterprise workspace, leveraging the powerful and ephemeral environment to avoid the hassle of setups and installations. Here’s a glance at the setup:
- Machine: Gitpod Enterprise XXLarge workspace
- OS: Ubuntu 22.04.4 LTS (Jammy Jellyfish)
- Containerization: Docker version 26.0.1
- MySQL Version: Official MySQL 8.0 Docker image
Results
For 1 million rows, the query completes in approximately 4.43 seconds:
mysql> INSERT INTO hashes(hash)
    -> WITH RECURSIVE cte (n) AS
    -> (
    ->     SELECT 1
    ->     UNION ALL
    ->     SELECT n + 1 FROM cte WHERE n < 1000000
    -> )
    -> SELECT SHA2(n, 256) FROM cte;
Query OK, 1000000 rows affected (4.43 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Performance Metrics
| Number of Rows | Execution Time |
| --- | --- |
| 1,000 | 0.03 sec |
| 10,000 | 0.07 sec |
| 100,000 | 0.42 sec |
| 1,000,000 | 4.43 sec |
| 10,000,000 | 48.53 sec |
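The timings scale close to linearly. A quick back-of-the-envelope calculation, using only the figures measured above, gives the approximate insert throughput at each size:

```python
# (rows, seconds) pairs taken from the benchmark table above
timings = [
    (1_000, 0.03),
    (10_000, 0.07),
    (100_000, 0.42),
    (1_000_000, 4.43),
    (10_000_000, 48.53),
]

for rows, secs in timings:
    print(f"{rows:>10,} rows: {rows / secs:>12,.0f} rows/sec")
```

Throughput settles in the low hundreds of thousands of rows per second once the fixed per-query overhead stops dominating; your numbers will of course depend on hardware and configuration.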
Conclusion
Using CTEs in MySQL 8.0 is a game-changer for generating large amounts of dummy data quickly. It's especially handy for load testing and performance benchmarking. With just a few lines of code, you can effortlessly populate your tables and then get back to the other important parts of your project.
Don't hesitate to play around with different data generation strategies and recursion depths to see what works best for you. For more tips and insights on security and log analysis, follow me on Twitter @Siddhant_K_code and stay updated with the latest & detailed tech content like this. Happy coding!