
PostgreSQL Performance Tuning: The Power of work_mem

Dec 05, 2024, 03:36 AM

Briefing

Years ago, I was tasked with solving a performance issue in a critical system for the company I worked at. It was a tough challenge: sleepless nights and even more hair loss. The backend used PostgreSQL, and after a lot of effort and digging, the solution turned out to be as simple as one line:

ALTER USER foo SET work_mem='32MB';

Now, to be honest, this might or might not solve your performance issue right away. It depends heavily on your query patterns and the workload of your system. However, if you're a backend developer, I hope this post adds another tool to your arsenal for tackling problems, especially with PostgreSQL.

In this post, we’ll create a scenario to simulate performance degradation and explore some tools to investigate the problem, like EXPLAIN, k6 for load testing, and even a dive into PostgreSQL’s source code. I’ll also share some articles to help you solve related issues.

  • ➡️ github repository with the complete implementation

Case Study

Let’s create a simple system to analyze soccer player performance. For now, the only business rule is to answer this question:

  • Who are the top N players involved in scoring the most?

The following SQL creates our data model and populates it:

CREATE TABLE players (
    player_id SERIAL PRIMARY KEY,
    nationality TEXT,
    age INT,
    position TEXT
);

CREATE TABLE matches (
    match_id SERIAL PRIMARY KEY,
    match_date DATE,
    home_team TEXT,
    away_team TEXT
);

CREATE TABLE player_stats (
    player_stat_id SERIAL PRIMARY KEY,
    player_id INT REFERENCES players(player_id),
    match_id INT REFERENCES matches(match_id),
    goals INT,
    assists INT,
    minutes_played INT
);

-- Populate players with a range of nationalities, ages, and positions
INSERT INTO players (nationality, age, position)
SELECT
    ('Country' || (1 + random()*100)::int),  -- 100 different nationalities
    (18 + random()*20)::int,                 -- Ages between 18 and 38
    (ARRAY['Forward', 'Midfielder', 'Defender', 'Goalkeeper'])[ceil(random()*4)::int]
FROM generate_series(1, 10000);

The script to initialize and populate the database is available in the github repository.
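The excerpt above only fills the players table. For illustration, here is a rough sketch of how matches and player_stats could be populated in the same style; the row counts (1,000 matches, 100,000 stat rows) are assumptions chosen to match the plans shown later, and the repository script remains the source of truth:

-- Illustrative sketch only; see the repository for the real script.
INSERT INTO matches (match_date, home_team, away_team)
SELECT
    CURRENT_DATE - (random()*365)::int,        -- matches spread over the last year
    'Team' || (1 + random()*50)::int,
    'Team' || (1 + random()*50)::int
FROM generate_series(1, 1000);

INSERT INTO player_stats (player_id, match_id, goals, assists, minutes_played)
SELECT
    (1 + random()*9999)::int,                  -- references one of the 10,000 players
    (1 + random()*999)::int,                   -- references one of the 1,000 matches
    (random()*3)::int,
    (random()*2)::int,
    (45 + random()*45)::int
FROM generate_series(1, 100000);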

Yes, we could design our database differently to improve system performance, but the main goal here is to explore unoptimized scenarios. Trust me, you'll likely encounter systems like this, where either poor initial design choices or unexpected growth requires significant effort to improve performance.

Debugging the problem

To simulate the issue related to the work_mem configuration, let’s create a query to answer this question: Who are the top 2000 players contributing the most to goals?

SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

Alright, but how can we identify bottlenecks in this query? Like other DBMSs, PostgreSQL supports the EXPLAIN command, which helps us understand each step executed by the query planner (optimized or not).

We can analyze details such as:

  • What kind of scan was used? Index scan, Index Only scan, Seq Scan, etc.
  • Which index was used, and under what conditions?
  • If sorting is involved, what type of algorithm was used? Does it rely entirely on memory, or does it require disk usage?
  • The usage of shared buffers.
  • Execution time estimation.
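For reference, the basic invocation looks like the sketch below (using the players table from our case study). Plain EXPLAIN only shows the planner's estimates; adding ANALYZE actually executes the query and reports real timings and row counts, and BUFFERS adds shared-buffer and temporary-file usage, which is the combination used throughout this post:

-- Planner estimates only: the query is not executed.
EXPLAIN
SELECT * FROM players WHERE position = 'Forward';

-- Actual execution: real timings, row counts, and buffer/temp-file usage.
-- Be careful with ANALYZE on INSERT/UPDATE/DELETE; wrap it in a transaction and ROLLBACK.
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT * FROM players WHERE position = 'Forward';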

You can learn more about the PostgreSQL planner/optimizer here:

  • official documentation
  • pganalyze - basics of postgres query planning
  • cybertec - how to interpret postgresql explain

Talk is cheap

Talk is cheap, so let’s dive into a practical example. First, we’ll reduce work_mem to its smallest possible value, 64kB, which is the minimum enforced in the PostgreSQL source code (tuplesort.c):

/*
 * workMem is forced to be at least 64KB, the current minimum valid value
 * for the work_mem GUC.  This is a defense against parallel sort callers
 * that divide out memory among many workers in a way that leaves each
 * with very little memory.
 */
state->allowedMem = Max(workMem, 64) * (int64) 1024;

We can apply it at the user level:

ALTER USER foo SET work_mem='64kB';

Next, let’s analyze the output of the EXPLAIN command. The full plan for the 64kB configuration is reproduced later in this post, under the /low-work-mem load-test endpoint; the highlights are below.


We can see that the execution time was 82.718 ms and that the sort method used was external merge. This algorithm relies on disk instead of memory because the data exceeded the 64kB work_mem limit.

For your information, the tuplesort.c module flags that the Sort algorithm will use disk by setting the state to SORTEDONTAPE at this line. Disk interaction is handled by the logtape.c module.

If you're a visual person (like me), there are tools that can help you understand the EXPLAIN output, such as https://explain.dalibo.com/. Below is an example showing a node with the Sort step, including details like Sort Method: external merge and Sort Space Used: 2.2MB:

[Figure: explain.dalibo.com view of the Sort node, showing Sort Method: external merge and Sort Space Used: 2.2MB]

The "Stats" section is especially useful for analyzing more complex queries, as it provides execution time details for each query node. In our example, it highlights a suspiciously high execution time—nearly 42ms—in one of the Sort nodes:

[Figure: the "Stats" section on explain.dalibo.com, highlighting nearly 42ms spent in one of the Sort nodes]

  • You can visualize and analyze this query plan here: https://explain.dalibo.com/plan/2gd0a8c8fab6a532#stats

As the EXPLAIN output shows, one of the main reasons for the performance problem is the Sort node using disk. A side effect of this issue, especially in systems with high workloads, is spikes in Write I/O metrics (I hope you’re monitoring these; if not, good luck when you need them!). And yes, even read-only queries can cause write spikes, as the Sort algorithm writes data to temporary files.
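If you want that visibility, PostgreSQL already tracks temporary-file activity per database. Below is a minimal sketch of two starting points; adapt them to your own monitoring setup:

-- Cumulative temporary files and bytes written by queries in the current database.
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- Log every temporary file created by queries (the value is a size threshold in kB; 0 logs them all).
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();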

Solution

When we execute the same query with work_mem=4MB (the default in PostgreSQL), the execution time decreases by over 50%.

  • For a visual analysis, check this link: https://explain.dalibo.com/plan/b094ec2f1cfg44f6#

In the resulting EXPLAIN output (reproduced in full under the /optimized-work-mem endpoint below), one of the Sort nodes now uses an in-memory algorithm, top-N heapsort. For context, the planner opts for heapsort only when it’s cheaper to execute than quicksort. You can dive deeper into the decision-making process in the PostgreSQL source code.

Additionally, the second Sort node, which previously accounted for almost 40ms of execution time, disappears entirely from the execution plan. This change occurs because the planner now selects a HashJoin instead of a MergeJoin, as the hash operation fits in memory, consuming approximately 480kB.
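If you want to see this planner decision for yourself, one option is to temporarily discourage hash joins in your session and compare the resulting plans. This is a debugging trick for experimentation, not something to rely on in production:

-- Discourage hash joins for this session only, then compare the plan with the default one.
SET enable_hashjoin = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

RESET enable_hashjoin;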

For more details about join algorithms, check out these articles:

  • HashJoin Algorithm
  • MergeJoin Algorithm

Impact on the API

The default work_mem isn’t always sufficient to handle your system’s workload. You can adjust this value at the user level using:

ALTER USER foo SET work_mem='32MB';

Note: If you’re using a connection pool or a connection pooler, it’s important to recycle old sessions, because ALTER USER settings only take effect on new connections.
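A quick way to confirm what a given connection is actually using, and where the value comes from, is sketched below; this is handy after an ALTER USER change behind a pooler:

-- The value in effect for the current session.
SHOW work_mem;

-- The same information plus the source of the setting (default, user, database, session, ...).
SELECT name, setting, unit, source
FROM pg_settings
WHERE name = 'work_mem';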

You can also control this configuration at the database transaction level. Let’s run a simple API to understand and measure the impact of work_mem changes using load testing with k6:

  • k6-test.js


The API was implemented in Go and exposes two endpoints that execute the query with different work_mem configurations:

  • main.go


The docker-compose file containing all the dependencies needed to run the load test is also in the repository:

  • docker-compose.yaml

We can set the ENDPOINT environment variable to define the scenario to test: /low-work-mem or /optimized-work-mem. Run the test using: docker compose up --abort-on-container-exit. For this example, I used Docker version 20.10.22.

  • Test ENDPOINT: /low-work-mem - work_mem=64kB

    BEGIN; -- 1. Initialize a transaction.
    
    SET LOCAL work_mem = '64kB'; -- 2. Change work_mem at the transaction level; other transactions in the same session keep the default value (4MB).
    
    SHOW work_mem; -- 3. Check the modified work_mem value.
    
    EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) -- 4. Run EXPLAIN with options that help us analyze and identify bottlenecks.
    SELECT 
        p.player_id, 
        SUM(ps.goals + ps.assists) AS total_score 
    FROM 
        player_stats ps
    INNER JOIN 
        players p ON p.player_id = ps.player_id
    GROUP BY 
        p.player_id
    ORDER BY 
        total_score DESC
    LIMIT 2000;
    --
    
    QUERY PLAN                                                                                                                                                          |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    Limit  (cost=18978.96..18983.96 rows=2000 width=12) (actual time=81.589..81.840 rows=2000 loops=1)                                                                  |
      Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                                               |
      Buffers: shared hit=667, temp read=860 written=977                                                                                                                |
      ->  Sort  (cost=18978.96..19003.96 rows=10000 width=12) (actual time=81.587..81.724 rows=2000 loops=1)                                                            |
            Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                                         |
            Sort Key: (sum((ps.goals + ps.assists))) DESC                                                                                                               |
            Sort Method: external merge  Disk: 280kB                                                                                                                    |
            Buffers: shared hit=667, temp read=860 written=977                                                                                                          |
            ->  GroupAggregate  (cost=15076.66..17971.58 rows=10000 width=12) (actual time=40.293..79.264 rows=9998 loops=1)                                            |
                  Output: p.player_id, sum((ps.goals + ps.assists))                                                                                                     |
                  Group Key: p.player_id                                                                                                                                |
                  Buffers: shared hit=667, temp read=816 written=900                                                                                                    |
                  ->  Merge Join  (cost=15076.66..17121.58 rows=100000 width=12) (actual time=40.281..71.313 rows=100000 loops=1)                                       |
                        Output: p.player_id, ps.goals, ps.assists                                                                                                       |
                        Merge Cond: (p.player_id = ps.player_id)                                                                                                        |
                        Buffers: shared hit=667, temp read=816 written=900                                                                                              |
                        ->  Index Only Scan using players_pkey on public.players p  (cost=0.29..270.29 rows=10000 width=4) (actual time=0.025..1.014 rows=10000 loops=1)|
                              Output: p.player_id                                                                                                                       |
                              Heap Fetches: 0                                                                                                                           |
                              Buffers: shared hit=30                                                                                                                    |
                        ->  Materialize  (cost=15076.32..15576.32 rows=100000 width=12) (actual time=40.250..57.942 rows=100000 loops=1)                                |
                              Output: ps.goals, ps.assists, ps.player_id                                                                                                |
                              Buffers: shared hit=637, temp read=816 written=900                                                                                        |
                              ->  Sort  (cost=15076.32..15326.32 rows=100000 width=12) (actual time=40.247..49.339 rows=100000 loops=1)                                 |
                                    Output: ps.goals, ps.assists, ps.player_id                                                                                          |
                                    Sort Key: ps.player_id                                                                                                              |
                                    Sort Method: external merge  Disk: 2208kB                                                                                           |
                                    Buffers: shared hit=637, temp read=816 written=900                                                                                  |
                                    ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.011..8.378 rows=100000 loops=1)    |
                                          Output: ps.goals, ps.assists, ps.player_id                                                                                    |
                                          Buffers: shared hit=637                                                                                                       |
    Planning:                                                                                                                                                           |
      Buffers: shared hit=6                                                                                                                                             |
    Planning Time: 0.309 ms                                                                                                                                             |
    Execution Time: 82.718 ms                                                                                                                                    |
    
    COMMIT; -- 5. You can also execute a ROLLBACK, in case you want to analyze queries like INSERT, UPDATE and DELETE.
    
  • Test ENDPOINT: /optimized-work-mem - work_mem=4MB

    EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) 
    SELECT 
        p.player_id, 
        SUM(ps.goals + ps.assists) AS total_score 
    FROM 
        player_stats ps
    INNER JOIN 
        players p ON p.player_id = ps.player_id
    GROUP BY 
        p.player_id
    ORDER BY 
        total_score DESC
    LIMIT 2000;
    --
    QUERY PLAN                                                                                                                                          |
    ----------------------------------------------------------------------------------------------------------------------------------------------------+
    Limit  (cost=3646.90..3651.90 rows=2000 width=12) (actual time=41.672..41.871 rows=2000 loops=1)                                                    |
      Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                               |
      Buffers: shared hit=711                                                                                                                           |
      ->  Sort  (cost=3646.90..3671.90 rows=10000 width=12) (actual time=41.670..41.758 rows=2000 loops=1)                                              |
            Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                         |
            Sort Key: (sum((ps.goals + ps.assists))) DESC                                                                                               |
            Sort Method: top-N heapsort  Memory: 227kB                                                                                                  |
            Buffers: shared hit=711                                                                                                                     |
            ->  HashAggregate  (cost=2948.61..3048.61 rows=10000 width=12) (actual time=38.760..40.073 rows=9998 loops=1)                               |
                  Output: p.player_id, sum((ps.goals + ps.assists))                                                                                     |
                  Group Key: p.player_id                                                                                                                |
                  Batches: 1  Memory Usage: 1169kB                                                                                                      |
                  Buffers: shared hit=711                                                                                                               |
                  ->  Hash Join  (cost=299.00..2198.61 rows=100000 width=12) (actual time=2.322..24.273 rows=100000 loops=1)                            |
                        Output: p.player_id, ps.goals, ps.assists                                                                                       |
                        Inner Unique: true                                                                                                              |
                        Hash Cond: (ps.player_id = p.player_id)                                                                                         |
                        Buffers: shared hit=711                                                                                                         |
                        ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.008..4.831 rows=100000 loops=1)|
                              Output: ps.player_stat_id, ps.player_id, ps.match_id, ps.goals, ps.assists, ps.minutes_played                             |
                              Buffers: shared hit=637                                                                                                   |
                        ->  Hash  (cost=174.00..174.00 rows=10000 width=4) (actual time=2.298..2.299 rows=10000 loops=1)                                |
                              Output: p.player_id                                                                                                       |
                              Buckets: 16384  Batches: 1  Memory Usage: 480kB                                                                           |
                              Buffers: shared hit=74                                                                                                    |
                              ->  Seq Scan on public.players p  (cost=0.00..174.00 rows=10000 width=4) (actual time=0.004..0.944 rows=10000 loops=1)    |
                                    Output: p.player_id                                                                                                 |
                                    Buffers: shared hit=74                                                                                              |
    Planning:                                                                                                                                           |
      Buffers: shared hit=6                                                                                                                             |
    Planning Time: 0.236 ms                                                                                                                             |
    Execution Time: 41.998 ms                                                                                                                           |
    

The results demonstrate that the endpoint with a higher work_mem outperformed the one with a lower configuration. The p90 latency dropped by over 43ms, and throughput improved significantly under the test workload.

If percentile metrics are new to you, I recommend studying and understanding them. These metrics are incredibly helpful for guiding performance analyses. Here are some resources to get you started:

  • k6 response time
  • p90 vs p99

Conclusion

After dreaming about the problem, waking up multiple times to try new solutions, and finally discovering that work_mem could help, the next challenge is figuring out the right value for this configuration.

The default value of 4MB for work_mem, like many other PostgreSQL settings, is conservative. This allows PostgreSQL to run on smaller machines with limited computational power. However, we must be cautious not to crash the PostgreSQL instance with out-of-memory errors. A single query, if complex enough, can consume multiple times the memory specified by work_mem, depending on the number of operations like Sorts, Merge Joins, Hash Joins (influenced by hash_mem_multiplier), and more. As noted in the official documentation:

it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, memoize nodes and hash-based processing of IN subqueries.
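A rough way to reason about the worst case: every sort or hash node of every concurrently running query can claim up to work_mem (hash nodes up to work_mem multiplied by hash_mem_multiplier), so the total grows with the number of connections and the number of such nodes per query. The snippet below just inspects the knobs involved; the back-of-the-envelope formula in the comment is an illustration, not a rule:

-- Inspect the knobs involved in the memory budget.
SHOW work_mem;
SHOW hash_mem_multiplier;   -- available since PostgreSQL 13
SHOW max_connections;

-- Rough worst case (illustrative only):
--   total memory for query operations ~= active connections x sort/hash nodes per query x work_mem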

Unfortunately, there’s no magic formula for setting work_mem. It depends on your system’s available memory, workload, and query patterns. The TimescaleDB team has a tool for autotuning, and the topic is widely discussed. Here are some excellent resources to guide you:

  • Everything you know about work_mem is wrong
  • How should I tune work_mem for a given system

But at the end of the day, IMHO, the answer is: TEST. TEST TODAY. TEST TOMORROW. TEST FOREVER. Keep testing until you find an acceptable value for your use case, one that enhances query performance without blowing up your database.
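When you do test, remember that work_mem can be applied at several scopes, so you can trial a value narrowly before committing to it globally. A hedged cheat-sheet follows; the role, database name, and sizes are placeholders:

-- Per transaction: the safest way to experiment with a single heavy query.
BEGIN;
SET LOCAL work_mem = '16MB';
-- ... run the query or EXPLAIN ANALYZE here ...
COMMIT;

-- Per session:
SET work_mem = '16MB';

-- Per role or per database (affects new sessions only):
ALTER ROLE foo SET work_mem = '32MB';
ALTER DATABASE mydb SET work_mem = '32MB';

-- Instance-wide default (requires a configuration reload):
ALTER SYSTEM SET work_mem = '8MB';
SELECT pg_reload_conf();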
