How to Efficiently Select Random Rows from Large PostgreSQL Tables?
PostgreSQL random row selection method
Traditional random row selection methods are inefficient and slow when dealing with large tables containing millions or even billions of records. Two common methods are:
- Use random() to filter:
  select * from table where random() < 0.001;
- Use order by random() with limit:
  select * from table order by random() limit 1000;
However, because both approaches require a full table scan or a sort, they are a poor choice for tables with many rows and quickly become a performance bottleneck.
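To see where the time goes, the plans can be inspected with EXPLAIN. A minimal check, assuming a large table named big:

```sql
-- The first variant evaluates random() for every row (full scan);
-- the second scans every row and then sorts the whole result.
EXPLAIN SELECT * FROM big WHERE random() < 0.001;
EXPLAIN SELECT * FROM big ORDER BY random() LIMIT 1000;
```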
Optimization methods for large tables
This optimization is significantly faster for tables that meet the following conditions:
- A numeric ID column with only few or moderately many gaps (indexed for fast lookups)
- No or only minimal write activity while the selection runs
Query:
WITH params AS (
   SELECT 1       AS min_id,   -- optional: lowest ID to start from
          5100000 AS id_span   -- approximate ID range (max ID - min ID + buffer)
   )
SELECT *
FROM  (
   SELECT DISTINCT p.min_id + trunc(random() * p.id_span)::integer AS id   -- distinct candidate IDs
   FROM   params p, generate_series(1, 1100) g                             -- surplus: 1100 candidates for 1000 rows
   GROUP  BY 1
   ) r
INNER  JOIN big ON r.id = big.id
LIMIT  1000;
How it works:
- ID range estimate: If the exact values are not known, query the table for the minimum, maximum, and total span (max - min) of the ID column (see the query sketch after this list).
- Random ID generation: Generate a set of random numbers spread over the estimated ID range; the query above produces a small surplus (1100 candidates for 1000 requested rows).
- Redundancy and duplicate elimination: Group the generated numbers to remove duplicates; together with the surplus, this reduces the chance of coming up short because a number falls into a gap or repeats an ID that was already picked.
- Table join and limit: Join the random numbers to the actual table on the ID column (which must be indexed); this efficient join fetches the data for the matched rows. Finally, apply a limit to return the required number of rows.
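For the first step, the span can be estimated with a single aggregate query. A sketch, assuming the table big with ID column id; in practice the result is padded with a buffer and hard-coded into params:

```sql
-- Estimate the ID range once; add a buffer before using it as id_span.
SELECT min(id)           AS min_id,
       max(id)           AS max_id,
       max(id) - min(id) AS id_span
FROM   big;
```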
Why it’s fast:
- Minimal index usage: The query performs only an index scan on the ID column, which is much faster than a full table scan or a sort (the required index is shown after this list).
- Optimized random number generation: The generated numbers are spread across the estimated ID range, minimizing the chance of missed (non-existent) or overlapping IDs.
- Redundancy and duplicate elimination: Grouping the generated numbers ensures that only distinct IDs are joined, so no extra filtering or join is needed to eliminate duplicate rows.
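The join relies on an index on the ID column. If big.id is the table's primary key, that index already exists; otherwise it can be created explicitly (a minimal sketch, the index name big_id_idx is illustrative):

```sql
-- Without this (or a primary key on id), the join degrades to sequential scans.
CREATE INDEX big_id_idx ON big (id);
```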
Other options:
- Recursive CTE to handle gaps: For tables with larger gaps in the ID sequence, add a recursive CTE that keeps generating candidate IDs until enough rows have been found (see the sketch after this list).
- Function wrapper for reuse: Define a function that takes the limit and a gap percentage as parameters, allowing easy configuration and reuse with different tables (a sketch follows the list).
- Universal function for any table: Create a generic function that accepts any table with an integer ID column as a parameter.
- Materialized view for speed: Consider creating a materialized view based on the optimized query for faster retrieval of (quasi-)randomly selected rows (see the sketch after this list).
- TABLESAMPLE in PostgreSQL 9.5+: Leverage PostgreSQL's TABLESAMPLE SYSTEM clause for a faster but less random sampling method. The number of rows returned is only approximate (the optional tsm_system_rows extension can return an exact count), and because whole blocks are picked, clustering effects mean the sample may not be completely random (see the example after this list).