
How to optimize Mysql tens of millions of fast paging

Dec 02, 2016 pm 03:13 PM

This article analyzes how MySQL can be optimized to page quickly through tens of millions of records. Let's take a look.
The data table collect(id, title, info, vtype) has these four fields: title is fixed length, info is text, id is an auto-increment primary key, and vtype is a tinyint with an index on it. This is a simple model of a basic news system. Now fill it with 100,000 news records.
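The schema described above might be sketched like this (the exact column types, lengths, and storage engine are assumptions inferred from the description, not taken from the original source):

```sql
-- A sketch of the described table; types and lengths are assumed.
CREATE TABLE collect (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  title CHAR(100) NOT NULL,   -- fixed-length title
  info TEXT,                  -- article body
  vtype TINYINT NOT NULL,     -- category type
  PRIMARY KEY (id),
  KEY idx_vtype (vtype)
) ENGINE=MyISAM;
```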
The final collect table holds 100,000 records, and the data file occupies 1.6 GB on disk. OK, look at the following SQL statements:
select id,title from collect limit 1000,10; — very fast, basically done in 0.01 seconds. Now look at:
select id,title from collect limit 90000,10; — paging from record 90,000. The result?
It takes 8-9 seconds to complete. My god, what went wrong? In fact, if you search online for how to optimize this, you will find answers like the following statement:
select id from collect order by id limit 90000,10; — very fast, done in 0.04 seconds. Why? Because the query pages over the id primary key index. The fix recommended online is:
select id,title from collect where id>=(select id from collect order by id limit 90000,1) limit 10;
This uses the id index to locate the starting row, then fetches the 10 records. But make the problem just slightly more complicated and this approach falls apart. Look at the following statement:
select id from collect where vtype=1 order by id limit 90000,10; — very slow again, 8-9 seconds!
At this point many people will feel, as I did, like something is broken. Isn't vtype indexed? Why is it so slow? vtype is indeed indexed, and select id from collect where vtype=1 limit 1000,10; is fast, basically 0.05 seconds. But scale the offset up 90 times to start from 90,000, and you would expect roughly 0.05 * 90 = 4.5 seconds; the measured 8-9 seconds is in the same order of magnitude. At this point some people proposed splitting the table, the same idea used by the Discuz forum software. The idea is as follows:
Build an index table t(id, title, vtype) with fixed-length rows, run the paging against it, and then go to collect by id to fetch info. Is this feasible? Experiment and find out.
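The narrow "index table" described above might look like this (again a sketch; types and lengths are assumptions):

```sql
-- Hypothetical narrow table holding only fixed-length paging columns.
CREATE TABLE t (
  id INT UNSIGNED NOT NULL,
  title CHAR(100) NOT NULL,
  vtype TINYINT NOT NULL,
  PRIMARY KEY (id),
  KEY idx_vtype (vtype)
) ENGINE=MyISAM;
```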
With 100,000 records stored in t(id, title, vtype), the table is only about 20 MB. Run:
select id from t where vtype=1 order by id limit 90000,10; — fast, basically 0.1-0.2 seconds. Why? I suspect it is because the collect rows are so large that paging over them takes a long time; the cost of limit is tied directly to the size of the table. In fact this is still a full table scan; it is only faster because the data volume is small, a mere 100,000 rows. OK, let's run a crazy experiment: grow it to 1 million rows and test performance again.
With 10 times the data, table t immediately reached more than 200 MB, still fixed length. The same query still completes in 0.1-0.2 seconds! So is there no problem with the split-table approach? Wrong! Our limit offset is still 90,000, which is why it is fast. Use a big offset and start at 900,000:
select id from t where vtype=1 order by id limit 900000,10; — look at the result: 1-2 seconds!
Why? Even after splitting the table, the time is still this long, which is very frustrating. Some people say fixed-length rows improve limit performance; at first I also assumed that, since every record has the same length, MySQL could compute the position of row 900,000 directly. But that overestimates MySQL's cleverness; it is not a commercial database, and the facts show that fixed versus variable length makes little difference to limit. No wonder people say Discuz becomes very slow once it reaches 1 million records. I now believe it, and it comes down to database design!
Can't MySQL break past 1 million rows? Does paging really hit a wall at 1 million?
The answer is: no!!! Failing to get past 1 million rows just means you do not know how to design for MySQL. So let's drop the split-table approach and run a crazy test: one table with 1 million records, a 10 GB database, and fast paging!
OK, our test returns to the collect table. The conclusion so far: with 300,000 rows, the split-table approach is workable; beyond 300,000, the speed becomes unbearable! Of course, combining table splitting with the method below would be even better, but the method below solves the problem perfectly on its own, with no table splitting at all!
The answer is: a composite index! Once, while designing a MySQL index, I noticed by accident that the index name can be chosen freely and that several columns can be included in one index. What is that good for? The initial select id from collect order by id limit 90000,10; is fast because it pages over an index, but add a where clause and that index is no longer used. Just to try it, I added an index search(vtype,id). Then I tested:
select id from collect where vtype=1 limit 90000,10; — very fast, completed in 0.04 seconds!
Test again: select id,title from collect where vtype=1 limit 90000,10; — very sorry, 8-9 seconds; it does not use the search index!
Test again with the columns reversed, search(id,vtype): even selecting id only, a regrettable 0.5 seconds.
To sum up: when there is a where condition and you want limit to use an index, you must design a composite index that puts the where column first and the primary key used by limit second, and you can select only the primary key!
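The winning index layout above could be created like this (the index name search is the author's; the rest is a sketch):

```sql
-- Composite index: the where column first, the limit/order column second.
ALTER TABLE collect ADD INDEX search (vtype, id);

-- This query can now be answered entirely from the index:
SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;
```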
This solves the paging problem perfectly: if you can return the ids quickly, you can optimize limit. By this logic, a million-row limit should finish in 0.0x seconds. It seems that MySQL statement optimization and indexing really matter!
OK, back to the original question: how do we apply this research in development quickly and successfully? If I have to use a compound query, my lightweight framework is useless; I would have to write the paging code by hand. How troublesome. So here is one more example, and the idea emerges:
select * from collect where id in (9000,12,50,7000); — returns in effectively 0 seconds!
My god, MySQL's index works for in lists too! It seems the claim online that in cannot use an index is wrong.
With this conclusion, it is easy to apply in a lightweight framework:
The code is as follows:
$db = dblink();
$db->pagesize = 20;
$sql = "select id from collect where vtype=$vtype";
$db->execute($sql);
$strpage = $db->strpage(); // save the paging string in a variable for later output
$strid = '';
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
$strid = substr($strid, 0, strlen($strid) - 1); // build the comma-separated id string
$db->pagesize = 0; // important: reset paging without destroying the object, so the one database connection is reused
$db->execute("select id,title,url,sTime,gTime,vtype,tag from collect where id in ($strid)");
while ($rs = $db->fetch_array()) {
    // output one table row per record (the original HTML markup was lost in extraction; this line is illustrative)
    echo '<tr><td>' . $rs['title'] . '</td></tr>';
}
echo $strpage;
After this simple transformation, the idea is actually very simple: 1) through the optimized index, find the ids and join them into a string like "123,90000,12000"; 2) run a second query to fetch the full rows by id.
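The same two-step idea can also be expressed in a single statement as a "deferred join", which avoids building the id string in application code (this variant is a sketch, not from the original article):

```sql
-- Page over the narrow index first, then join back for the wide columns.
SELECT c.id, c.title
FROM collect AS c
JOIN (
  SELECT id
  FROM collect
  WHERE vtype = 1
  ORDER BY id
  LIMIT 90000, 10
) AS page USING (id);
```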
A small index plus a small code change lets MySQL support efficient paging over millions or even tens of millions of rows!
Through this example I reflected on something: for large systems, PHP must not use frameworks, especially frameworks where you cannot even see the SQL statements! My lightweight framework almost collapsed at first; it is only suitable for the rapid development of small applications. For ERP, OA, or large websites, the data layer, and even the logic layer, should not rely on a framework. If programmers lose control of the SQL statements, the risk of the project grows exponentially. With MySQL in particular, you need a professional DBA to get its best performance; a single index can make a performance difference of a thousand times!
PS: In further real testing, at 1.6 million rows (a 15 GB table with a 190 MB index), even using the index, the limit query took 0.49 seconds. So it is best not to let anyone page past the first 100,000 rows, or paging becomes slow even with an index. After this optimization, MySQL has reached its millions-of-pages limit, but that result is already very good; with SQL Server you would definitely be stuck. Fetching from 1.6 million rows with id in (str) is still basically instantaneous, so MySQL should easily handle tens of millions of rows this way.
