
What will happen if Redis memory is too large?

May 26, 2023, 11:19 PM
redis

1 Master database downtime

First, let's look at the disaster-recovery process when the master database goes down.

When the master goes down, the most common disaster-recovery strategy is master failover: one of the cluster's remaining slaves is selected and promoted to master. After the promotion, the other slaves are re-attached to the new master as its slaves, and the cluster's master-slave structure is restored.

That is the complete disaster-recovery process, and the most expensive part is not the master switch itself but the remounting of the slaves.

This is because, unlike MySQL and MongoDB, Redis cannot keep synchronizing from the new master based on a synchronization position after the master changes. Once a slave in a Redis cluster changes master, Redis clears the slave's data, synchronizes a complete copy of the data from the new master, and only then resumes incremental replication.

The entire slave-redo process looks like this:

  1. The master bgsaves its dataset to disk as an RDB file

  2. The master sends the RDB file to the slave

  3. The slave loads the RDB file

  4. Once loading completes, incremental replication resumes and the slave starts serving again
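The four steps above can be sketched as a toy model, with the master's dataset as a dict and pickle standing in for the RDB format (all names here are illustrative, not Redis internals):

```python
import pickle  # stands in for the RDB format in this toy model


def full_resync(master_data, repl_buffer):
    """Toy model of a slave redo: snapshot, transfer, load, then replay."""
    # Step 1: the master "bgsaves" its dataset to an RDB-like snapshot.
    rdb = pickle.dumps(master_data)
    # Step 2: the snapshot is "sent" to the slave (here, just handed over).
    received = rdb
    # Step 3: the slave loads the snapshot, replacing its old dataset.
    slave_data = pickle.loads(received)
    # Step 4: writes buffered during steps 1-3 are replayed, then serving resumes.
    for key, value in repl_buffer:
        slave_data[key] = value
    return slave_data


master = {"user:1": "alice", "user:2": "bob"}
buffered_writes = [("user:3", "carol")]  # writes that arrived mid-resync
print(full_resync(master, buffered_writes))
```

In the real system, each of the first three steps gets slower as the dataset grows, which is exactly the problem the article describes.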

Obviously, the larger the Redis memory, the longer each of the steps above takes. Our actual test data is as follows (and our machines perform quite well):

(Table: measured slave recovery time at various data sizes; image unavailable)

As you can see, once the data reaches 20 GB, recovering a single slave already takes nearly 20 minutes. With 10 slaves recovered one after another, that is 200 minutes in total. And if those slaves are serving a heavy read load in the meantime, can you tolerate such a long recovery time?

At this point you will surely ask: why not redo all the slaves at the same time? Because if every slave requests an RDB file from the master simultaneously, the master's network card is instantly saturated and it can no longer serve traffic. The master effectively goes down again, which only adds insult to injury.

Of course, we can restore the slaves in batches, say two at a time, but that only cuts the total recovery time from 200 minutes to 100 minutes, hardly a fundamental improvement.
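The arithmetic behind that objection, as a quick sketch (the 20 minutes per slave comes from the table above; the batch size is a free parameter):

```python
import math


def total_recovery_minutes(num_slaves, minutes_per_slave, batch_size):
    """Slaves within a batch recover in parallel; batches run one after another."""
    batches = math.ceil(num_slaves / batch_size)
    return batches * minutes_per_slave


print(total_recovery_minutes(10, 20, 1))  # one at a time -> 200 minutes
print(total_recovery_minutes(10, 20, 2))  # two at a time -> 100 minutes
```

Batching divides the total time, but as long as a single slave redo takes tens of minutes, the overall recovery window remains painful.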

Another important issue lies in step 4. The incremental resumption can be understood as a simplified version of MongoDB's oplog: a fixed-size memory area, which we call the "synchronization buffer" (Redis's replication backlog).

Write operations on the Redis master are stored in this area and then forwarded to the slaves. If steps 1, 2, and 3 above take too long, the synchronization buffer is likely to be overwritten. What does a slave do when it cannot find its resumption position? The answer: it redoes steps 1, 2, and 3!

But since we cannot shorten steps 1, 2, and 3, the slave falls into a vicious cycle: it keeps requesting a full copy of the data from the master, which puts serious pressure on the master's network card.
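The failure mode can be illustrated with a toy fixed-size synchronization buffer (a plain ring buffer here; Redis's real replication backlog differs in detail):

```python
from collections import deque


class SyncBuffer:
    """Toy fixed-size sync buffer: the oldest entries are overwritten when full."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # maxlen silently drops old items
        self.next_offset = 0

    def append_write(self, cmd):
        self.buf.append((self.next_offset, cmd))
        self.next_offset += 1

    def can_resume_from(self, slave_offset):
        """A slave can resume incrementally only if its offset is still buffered."""
        return any(off == slave_offset for off, _ in self.buf)


buf = SyncBuffer(capacity=3)
slave_offset = 0            # the slave fell behind right after offset 0
for i in range(5):          # the master keeps taking writes meanwhile
    buf.append_write(f"SET k{i} v{i}")

# Offset 0 has been overwritten, so a partial resync is impossible and the
# slave must redo the full steps 1-3 all over again.
print(buf.can_resume_from(slave_offset))
```

The longer steps 1 to 3 take, the more writes land in the buffer in the meantime, so a larger dataset makes this overwrite, and hence the cycle, more likely.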

2 Capacity expansion problem

Traffic often spikes without warning, and the usual emergency response, before the root cause is found, is to scale out.

According to the table in scenario 1, bringing up a 20 GB Redis slave takes nearly 20 minutes. Can the business tolerate a 20-minute wait at such a critical moment? It may already be dead before the expansion completes.

3 A poor network forces slave redos and eventually triggers an avalanche

The core problem in this scenario is that master-slave synchronization is interrupted while the master is most likely still accepting write requests, so if the interruption lasts too long the synchronization buffer gets overwritten. At that point the slave's last synchronization position is lost. Although the master itself has not changed, once the network recovers the slave must still be redone, i.e. steps 1 through 4 from problem 1. If the master's memory is large, the slave redo is very slow, and read requests routed to that slave suffer badly. Meanwhile, because the RDB file being transferred is so large, the master's network card is severely affected for a long time.

4 The larger the memory, the longer persistence-triggering operations block the main thread

Redis is a single-threaded in-memory database. Time-consuming operations in Redis, such as bgsave and bgrewriteaof, are performed by forking a child process. Although the fork does not copy the shareable data pages themselves, it does copy the page tables of the parent process's address space. That copy is done by the main thread, blocks all reads and writes, and takes longer as memory usage grows. For example, on a Redis instance using 20 GB of memory, bgsave spends roughly 750 ms copying the page tables, and the Redis main thread is blocked for those 750 ms.
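A back-of-the-envelope sketch of why the fork cost scales with memory (4 KB pages and 8-byte page-table entries are typical x86-64 values; the 750 ms figure above is the author's measurement, not derived here):

```python
def page_table_bytes(memory_bytes, page_size=4096, pte_size=8):
    """Approximate size of the page-table entries that fork must copy."""
    num_pages = memory_bytes // page_size
    return num_pages * pte_size


gib = 1024 ** 3
for mem_gib in (2, 20):
    pt = page_table_bytes(mem_gib * gib)
    print(f"{mem_gib} GiB of memory -> ~{pt / 1024**2:.0f} MiB of page-table entries")
```

The amount of metadata to copy grows linearly with resident memory (roughly 40 MiB of entries for 20 GiB of data in this estimate), which is why the blocking time grows with instance size.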

Solution

The solution, of course, is to reduce memory usage as much as possible. Normally we do the following:

1 Set the expiration time

Set expiration times on time-sensitive keys and let Redis's own expired-key cleanup strategies reclaim the memory of expired keys. This also saves the business the trouble of cleaning them up periodically.
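A toy model of the lazy-expiry idea, just to show how expired keys get reclaimed without any scheduled cleanup (in real Redis you would simply run `SET key value EX seconds` or `EXPIRE key seconds`; `TtlStore` below is a hypothetical illustration, not a Redis API):

```python
import time


class TtlStore:
    """Toy key-value store with per-key TTLs and lazy expiry on read."""

    def __init__(self):
        self._data = {}  # key -> (value, expire_at or None)

    def set(self, key, value, ex=None):
        expire_at = time.monotonic() + ex if ex is not None else None
        self._data[key] = (value, expire_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expire_at = item
        if expire_at is not None and time.monotonic() >= expire_at:
            del self._data[key]  # lazy expiry: reclaim the key on access
            return None
        return value


store = TtlStore()
store.set("session:42", "alice", ex=0.05)  # expires after 50 ms
print(store.get("session:42"))             # still fresh here
time.sleep(0.06)
print(store.get("session:42"))             # gone once expired
```

Redis additionally runs an active expiry cycle in the background, so memory is reclaimed even for keys that are never read again.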

2 Do not store garbage in redis

This sounds like stating the obvious, but has anyone else run into the same problem we did?

3 Clean up useless data in a timely manner

For example, if one Redis instance carries data for three businesses and two of them go offline after a while, you should promptly clean up the data belonging to those two businesses.

4 Try to compress the data as much as possible

For example, compressing long text data can significantly reduce memory usage.
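A quick illustration with Python's standard zlib (compression ratios depend entirely on the data; long, repetitive text such as logs or templated JSON compresses especially well):

```python
import zlib

# Repetitive long text, the kind of value that compresses well in practice.
long_text = ('{"user": "alice", "action": "view", "page": "/home"}\n' * 1000).encode()
compressed = zlib.compress(long_text, level=6)

print(f"original:   {len(long_text)} bytes")
print(f"compressed: {len(compressed)} bytes")

# The application stores `compressed` in Redis and decompresses on read.
assert zlib.decompress(compressed) == long_text
```

The trade-off is extra CPU on every read and write, so this is most attractive for large, infrequently accessed values.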

5 Pay attention to memory growth and locate large-capacity keys

Whether you are a DBA or a developer, if you use Redis you must keep an eye on memory; otherwise you are not doing your job. Analyze which keys in the instance are unusually large to help the business quickly locate abnormal keys (unexpected key growth is often the source of the problem).

6 pika

If you really don't want all this trouble, migrate the business to the newly open-sourced pika. Then you no longer need to pay so much attention to memory, and the problems caused by oversized Redis memory cease to be problems.
