A Summary of 20+ Must-Know Redis Interview Questions
This article shares a set of common Redis interview questions to help you check for gaps and round out your knowledge. It has some reference value; readers who need it can use it as a study aid, and I hope it helps everyone.
Application scenarios
Cache
Shared Session
Message queue system
Distributed lock
Why is single-threaded Redis fast
Pure memory operation
Single-threaded execution avoids the overhead of frequent context switching
Reasonable and efficient data structure
Adopts a non-blocking I/O multiplexing mechanism (a single event loop monitors many file descriptors at the same time and reacts when data arrives)
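The I/O multiplexing idea above can be sketched with Python's standard `selectors` module (this is an illustration of the concept, not Redis's actual event loop, which is written in C on top of epoll/kqueue): one selector watches several connections at once, and only the ones with data waiting are handled.

```python
import selectors
import socket

# One selector watches several sockets at once, the way Redis's event
# loop watches many client connections with a single thread.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]
for i, (r, w) in enumerate(pairs):
    r.setblocking(False)
    sel.register(r, selectors.EVENT_READ, data=i)

# Write to two of the three sockets; only those become "ready".
pairs[0][1].send(b"ping")
pairs[2][1].send(b"ping")

ready = sorted(key.data for key, _ in sel.select(timeout=0.1))
print(ready)  # indices of the sockets that have data waiting
```

A single call to `select()` tells the loop exactly which descriptors need attention, so one thread can serve many clients without blocking on any of them.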
Data structure and usage scenarios of Redis
String: the most basic Redis data structure. All keys are strings, and the other data structures are built on top of strings. The familiar `set key value` command stores a string. Commonly used for caching, counting, shared sessions, rate limiting, etc.
Hash: in Redis, the hash type means the value itself is a field-value map. Hashes can store object data such as user information, for example to implement a shopping cart.
List (doubly linked list): the list type stores multiple ordered strings and can serve as a simple message queue.
Set: also stores multiple string elements, but unlike lists, duplicates are not allowed, the elements are unordered, and they cannot be accessed by index. Using set operations such as intersection, union, and difference, you can compute common preferences, combined preferences, preferences unique to one user, and so on.
Sorted Set (skip list implementation): a sorted set adds a weight parameter, the score, and its elements are ordered by score. Useful for leaderboard applications and TOP N queries.
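The leaderboard use of sorted sets can be sketched with a hypothetical in-memory model (this mimics ZADD / ZREVRANGE semantics without a Redis server; the function names are illustrative, not a client API):

```python
# member -> score, ranked highest-first like a Redis sorted set.
leaderboard = {}

def zadd(member, score):
    """Add or update a member's score (mimics ZADD)."""
    leaderboard[member] = score

def zrevrange(n):
    """Return the top-n members, highest score first (mimics ZREVRANGE)."""
    return sorted(leaderboard, key=leaderboard.get, reverse=True)[:n]

zadd("alice", 90)
zadd("bob", 75)
zadd("carol", 88)
print(zrevrange(2))  # ['alice', 'carol']
```

In real Redis the skip list keeps members ordered on insertion, so a TOP N query is O(log N + N) rather than a full sort.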
Data expiration strategy of Redis
Redis combines two expiration strategies: periodic deletion and lazy deletion.
Periodic deletion: Redis runs a timer that regularly scans keys, checks whether each has expired, and deletes those that have. This guarantees expired keys are eventually removed, but it has serious drawbacks: scanning all the data in memory each time consumes a lot of CPU, and while an expired key is waiting for the timer to wake up, it can still be read.
Lazy deletion: when a key is accessed, Redis first checks whether it has expired and deletes it if so. The drawback: a key that is never accessed again stays in memory even though it has expired, wasting a lot of space.
The two strategies are naturally complementary. When combined, periodic deletion changes: instead of scanning every key each run, Redis randomly samples a portion of the keys to check, which reduces CPU consumption; lazy deletion then covers the keys the sampler missed. But what if, by coincidence, a key is never sampled by the timer and never accessed again? How does it ever leave memory? That is where the memory eviction mechanism comes in: when memory runs out, an eviction policy takes effect. The policies are:
noeviction: when memory cannot hold newly written data, new writes return an error. (Redis default)
allkeys-lru: when memory cannot hold newly written data, remove the least recently used key from the whole key space. (commonly recommended)
allkeys-random: when memory cannot hold newly written data, remove a random key from the whole key space.
volatile-lru: when memory cannot hold newly written data, remove the least recently used key among keys with an expiration set. Typically used when Redis serves as both a cache and persistent storage.
volatile-random: when memory cannot hold newly written data, remove a random key among keys with an expiration set.
volatile-ttl: when memory cannot hold newly written data, among keys with an expiration set, remove the key closest to expiring first.
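The combined expiration model above can be sketched as a toy in-memory store (an illustration of the idea, not Redis's C implementation): a periodic sweep samples a few keys per run, and reads apply lazy deletion.

```python
import random
import time

# key -> (value, expire_at); expire_at=None means no TTL.
store = {
    "a": ("1", 0.0),   # already expired (expire_at in the past)
    "b": ("2", None),  # no TTL
    "c": ("3", 0.0),   # already expired
}

def periodic_sweep(sample_size=2):
    """Active expiry: randomly sample keys and delete the expired ones."""
    now = time.time()
    for key in random.sample(list(store), min(sample_size, len(store))):
        _value, expire_at = store[key]
        if expire_at is not None and expire_at <= now:
            del store[key]

def get(key):
    """Lazy expiry: check the TTL on access, delete if expired."""
    if key not in store:
        return None
    value, expire_at = store[key]
    if expire_at is not None and expire_at <= time.time():
        del store[key]
        return None
    return value

periodic_sweep()           # may or may not catch "a" and "c"
print(get("a"), get("b"))  # "a" reads as gone either way; "b" survives
```

Whichever path deletes an expired key first, a client can never observe it after its TTL has passed; the sweep only bounds how long dead keys linger in memory.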
Redis's SET and SETNX
SETNX in Redis does not support setting an expiration time. When implementing a distributed lock, you must guard against a client crashing while holding the lock and causing a deadlock, so the lock needs an expiration time. Under high concurrency, SETNX followed by EXPIRE is not atomic; to use that pair you would have to handle locking explicitly in application code. Using SET with the NX and EX options instead is equivalent to an atomic SETNX + EXPIRE, so there is no need to worry about SETNX succeeding while EXPIRE fails.
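The atomic `SET key value NX EX ttl` semantics can be sketched with an in-memory store (no Redis server; the function name and store layout are illustrative):

```python
import time

# key -> (value, expire_at)
store = {}

def set_nx_ex(key, value, ttl_seconds):
    """Succeed only if the key is absent or expired, and attach a TTL
    in the same step, mimicking SET key value NX EX ttl."""
    now = time.time()
    current = store.get(key)
    if current is not None and current[1] > now:
        return False  # lock already held by someone else
    store[key] = (value, now + ttl_seconds)
    return True

assert set_nx_ex("lock:order", "client-1", 10) is True   # acquired
assert set_nx_ex("lock:order", "client-2", 10) is False  # rejected
```

Because the existence check, the write, and the TTL are one step, a crash can never leave a lock that has no expiration, which is exactly the failure mode the SETNX + EXPIRE pair suffers from.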
The specific implementation of Redis's LRU:
Traditional LRU uses a list, moving the most recently used entry to the head each time; but with that approach a large one-off scan (a "select *"-style bulk read) floods the head with non-hot data, so Redis improves on it. Every time Redis reads a value by key, it updates the lru field in the value object to the current timestamp. Redis's initial algorithm was very simple: randomly sample five keys from the dict and evict the one with the smallest lru field. Redis 3.0 improved the algorithm: the first batch of randomly sampled keys is placed into a pool (of size 16), kept ordered by lru. After that, a newly sampled key is added to the pool only if its lru is smaller than one already in the pool; once the pool is full, adding a new candidate pushes out the key with the largest lru. At eviction time, the key with the smallest lru in the pool is taken and evicted.
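The 3.0-style approximate LRU can be sketched like this (a simplified illustration; pool maintenance in real Redis is in C and works on idle times, and the data here is synthetic):

```python
import random

POOL_SIZE = 16
SAMPLES = 5

data = {f"k{i}": i for i in range(100)}  # key -> lru stamp (bigger = more recent)
pool = []                                # eviction candidates, sorted by lru ascending

def refill_pool():
    """Sample a few keys and keep the best (oldest) candidates in the pool."""
    for key in random.sample(list(data), SAMPLES):
        lru = data[key]
        if len(pool) < POOL_SIZE or lru < pool[-1][0]:
            pool.append((lru, key))
            pool.sort()
            if len(pool) > POOL_SIZE:
                pool.pop()  # drop the most recently used candidate

def evict_one():
    """Evict the candidate with the smallest lru stamp (the oldest)."""
    refill_pool()
    _lru, key = pool.pop(0)
    del data[key]
    return key

victim = evict_one()
print(victim)  # some relatively old key among the sampled candidates
```

Sampling a handful of keys per eviction gives LRU-like behavior without maintaining a full linked list over every key, which is the whole point of the approximation.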
How Redis discovers hot keys
Predict based on experience: for example, if you know in advance that a promotion is starting, you can treat its key as a hotspot key.
Collect on the server side: add a line of statistics-gathering code before each Redis operation.
Capture packets for analysis: Redis communicates with clients over TCP using the RESP protocol, so you can also write your own program to monitor the port and analyze the captured packets.
At the proxy layer, each redis request is collected and reported.
Redis's built-in command query: Redis 4.0.4 provides `redis-cli --hotkeys` to find hot keys. (If you want to use Redis's own command for this, note that you must first set the memory eviction policy to allkeys-lfu or volatile-lfu, otherwise an error is returned. In Redis, run `config set maxmemory-policy allkeys-lfu`.)
Redis's hotspot key solutions
Server-side caching: cache hotspot data in the application server's own memory. (Use Redis's built-in message notification mechanism to keep the server's copy consistent with Redis: the client establishes a monitor on the hotspot key, and when the key is updated, the server updates its local copy accordingly.)
Back up the hotspot key: append a random suffix to the hotspot key and distribute the copies across other Redis nodes, so that accesses to the hotspot key no longer all hit the same machine.
How to solve the Redis cache avalanche problem
Use a Redis high-availability architecture: rely on a Redis cluster to keep the Redis service from going down.
Stagger cache expiration: add a random value to each cache entry's expiration time to avoid mass simultaneous failure.
Rate limiting and degradation: have fallback plans ready; for example, when the personalized recommendation service is unavailable, replace it with a hot-data recommendation service.
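The TTL-staggering idea can be sketched in a few lines (the base TTL and jitter window are illustrative values):

```python
import random

BASE_TTL = 3600  # 1 hour base expiration
JITTER = 300     # plus up to 5 random extra minutes

def ttl_with_jitter():
    """A TTL that varies per key, so keys written together
    do not all expire in the same instant."""
    return BASE_TTL + random.randint(0, JITTER)

ttls = [ttl_with_jitter() for _ in range(5)]
print(all(BASE_TTL <= t <= BASE_TTL + JITTER for t in ttls))  # True
```

Spreading expirations across the jitter window turns one avalanche of simultaneous cache misses into a trickle the database can absorb.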
How to solve the Redis cache penetration problem
Validate parameters at the interface layer
Cache null values (for the related cache-breakdown problem, rebuild the entry under a mutex lock, or set the hot key to never expire)
Bloom filter interception: map all keys that could legitimately be queried into a Bloom filter first. On each query, check whether the key exists in the Bloom filter; continue only if it does, and return immediately if it does not. A Bloom filter records each value across multiple hash-addressed bits, so if it says an element is present, that may be a false positive; but if it says an element is absent, the element is definitely absent.
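A minimal Bloom filter sketch (the bit-array size, hash count, and hashing scheme here are illustrative choices, not Redis's or any particular library's implementation):

```python
import hashlib

M = 1024  # number of bits in the filter
K = 3     # number of hash functions per key
bits = bytearray(M // 8)

def _positions(key):
    """Derive K bit positions for a key from salted SHA-256 digests."""
    for i in range(K):
        digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M

def add(key):
    for pos in _positions(key):
        bits[pos // 8] |= 1 << (pos % 8)

def might_contain(key):
    """True may be a false positive; False is always definitive."""
    return all(bits[pos // 8] & (1 << (pos % 8)) for pos in _positions(key))

add("user:42")
print(might_contain("user:42"))    # True: inserted keys always hit
print(might_contain("user:9999"))  # almost certainly False
```

In the cache-penetration flow, a `False` here lets you reject the request before it ever reaches Redis or the database.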
Redis's persistence mechanism
To stay fast, Redis keeps data in memory, but it periodically writes the data to disk, or appends modification operations to a log file, to guarantee persistence. Redis has two persistence strategies:
RDB: snapshotting saves the in-memory data directly to a dump file at intervals set by the save policy. When Redis needs to persist, it forks a child process; the child writes the data to a temporary RDB file on disk and, once writing completes, replaces the previous RDB file.
AOF: save every command that modifies the Redis server into a file, as a collection of commands.
With AOF persistence, every write command is appended to appendonly.aof via the write function. AOF's default policy is to fsync once per second; under this configuration, at most one second of data is lost on a failure. The drawbacks: for the same data set, an AOF file is usually larger than an RDB file, and depending on the fsync policy used, AOF can be slower than RDB. Redis defaults to RDB snapshotting. For master-slave synchronization, a full synchronization (RDB) is performed when the slave first connects; after the full sync completes, incremental synchronization (AOF-style command propagation) takes over.
Redis transactions
The essence of a Redis transaction is a set of commands. Transactions support executing multiple commands at one time, and all commands in a transaction will be serialized. During the transaction execution process, the commands in the queue will be executed serially in order, and command requests submitted by other clients will not be inserted into the transaction execution command sequence. To summarize: a redis transaction is a one-time, sequential, and exclusive execution of a series of commands in a queue.
Redis transactions have no concept of isolation levels. Batched operations are placed into a queue before the EXEC command is sent and are not actually executed until then, so there is no such thing as a query inside the transaction observing its own updates, and updates within the transaction cannot be seen by queries outside it.
In Redis, a single command is executed atomically, but transactions are not guaranteed to be atomic and there is no rollback. If any command in the transaction fails to execute, the remaining commands will still be executed.
Redis transaction related commands
watch key1 key2 ...: monitor one or more keys; if any monitored key is modified by another command before the transaction executes, the transaction is aborted (similar to optimistic locking)
multi: mark the beginning of a transaction block (subsequent commands are queued)
exec: execute all commands in the transaction block (once exec runs, any previously added monitors are cancelled)
discard: cancel the transaction, abandoning all commands in the transaction block
unwatch: Cancel watch’s monitoring of all keys
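The WATCH / MULTI / EXEC flow can be sketched with an in-memory model (no Redis server; the class and version-counter scheme are illustrative, they just mimic the optimistic-locking semantics):

```python
store = {"balance": 100}
version = {"balance": 0}  # bumped on every write, to detect changes

class Transaction:
    def __init__(self):
        self.watched = {}
        self.queue = []

    def watch(self, key):
        self.watched[key] = version.get(key, 0)

    def multi(self):
        self.queue = []

    def set(self, key, value):
        self.queue.append((key, value))  # queued, not executed yet

    def exec(self):
        # Abort if any watched key was modified since WATCH.
        if any(version.get(k, 0) != v for k, v in self.watched.items()):
            return None
        for key, value in self.queue:
            store[key] = value
            version[key] = version.get(key, 0) + 1
        return len(self.queue)

t = Transaction()
t.watch("balance")
t.multi()
t.set("balance", store["balance"] - 30)

# Another client writes the key before EXEC fires:
store["balance"] = 50
version["balance"] += 1

print(t.exec())          # None: the transaction aborted
print(store["balance"])  # 50, untouched by the aborted transaction
```

This is why WATCH is called optimistic locking: nothing is blocked up front; the conflict is detected only at EXEC time, and the caller retries.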
The difference between Redis and memcached
Storage: memcached keeps all data in memory; after a power failure the data is gone, and the data set cannot exceed memory size. Redis can store part of its data on disk, ensuring durability.
Supported data types: memcached's type support is simple, only plain key-value, while Redis supports five data structures.
Underlying model: the underlying implementations, and the application protocols used to communicate with clients, are different. Redis built its own VM mechanism directly, because going through general system calls wastes a certain amount of time on copying and requests.
Value size: a Redis value can be up to 512MB, while memcached is limited to 1MB.
Several cluster modes of Redis
Master-slave replication
Sentinel mode
cluster mode
Redis's sentinel mode
Sentinel is a distributed system built on master-slave replication; you can run multiple sentinel processes in one architecture. These processes use a gossip protocol to exchange information about whether the master is offline, and use a voting protocol to decide whether to perform automatic failover and which slave to promote as the new master.
Each sentinel periodically sends messages to the other sentinels, the master, and the slaves to confirm that the other party is alive. If a party does not respond within the configured time, it is provisionally considered down (so-called "subjectively down").
If a majority of sentinels in the group report that a certain master is unresponsive, the system considers the master "objectively down" (really dead). Through a voting algorithm, one of the remaining slave nodes is selected and promoted to master, and the relevant configuration is then updated automatically.
Redis's rehash
Redis's rehash is not completed in a single, centralized pass but progressively, in multiple steps. Redis maintains an index counter variable, rehashidx, to record the progress of the rehash.
Progressive rehash avoids the huge burst of computation and memory traffic that a centralized rehash would cause, but note that while Redis is rehashing, a normal access request may need to consult both hash tables (ht[0] and ht[1]): for example, if a key has already been rehashed into the new ht[1], a lookup first checks ht[0], and only on a miss there does it check ht[1].
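A simplified sketch of the progressive migration (the bucket layout and step granularity are illustrative; real Redis migrates buckets opportunistically during normal operations):

```python
# Old table: bucket index -> list of (key, value) entries.
ht0 = {0: [("a", 1)], 1: [("b", 2)], 2: [("c", 3)]}
ht1 = {}        # new (larger) table being filled in
rehashidx = 0   # next ht0 bucket to migrate

def rehash_step():
    """Move one bucket's entries from ht0 to ht1, then advance rehashidx."""
    global rehashidx
    if rehashidx in ht0:
        for key, value in ht0.pop(rehashidx):
            ht1.setdefault(hash(key) % 8, []).append((key, value))
    rehashidx += 1

def get(key):
    """During rehash, a lookup may need to consult both tables."""
    for table in (ht0, ht1):
        for bucket in table.values():
            for k, v in bucket:
                if k == key:
                    return v
    return None

rehash_step()              # bucket 0 migrated to ht1
print(get("a"), get("b"))  # both still found: one in ht1, one in ht0
```

Spreading the migration over many small steps keeps each individual request fast, at the cost of double lookups while the rehash is in flight.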
Conditions for the expansion of the Redis hash table
The number of keys saved in the hash table exceeds the size of the hash table.
The Redis server is not currently executing the BGSAVE command (rdb) or the BGREWRITEAOF command, and the load factor of the hash table is greater than or equal to 1.
The Redis server is currently executing the BGSAVE command (rdb) or the BGREWRITEAOF command, and the load factor of the hash table is greater than or equal to 5. (Load factor = number of nodes stored in the hash table / hash table size; when the load factor drops below 0.1, the hash table is shrunk.)
Solution for Redis concurrent competition key
Distributed lock plus timestamp (to order competing operations)
Using message queue
Redis pipeline
For single-threaded, request-response Redis, pipelining enables batch operations: the client sends multiple commands to the Redis server in succession and then parses the responses one by one. Pipelining improves batch performance mainly because it reduces the number of "interaction round trips" over the TCP connection. Under the hood, the pipeline encapsulates all operations into a stream; the Redis client defines its own input and output streams, performs the flush in a sync()-style method, places each request in a queue, and parses the response packets in order.
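The round-trip saving can be demonstrated with a toy model (a fake in-process "server" that counts flushes; no real networking or Redis client involved):

```python
round_trips = 0

def server_process(commands):
    """Each call stands in for one network round trip carrying a batch."""
    global round_trips
    round_trips += 1
    return [f"OK:{c}" for c in commands]

# Without pipelining: one round trip per command.
for cmd in ["SET a 1", "SET b 2", "SET c 3"]:
    server_process([cmd])
unpipelined = round_trips

# With pipelining: all three commands share a single round trip.
round_trips = 0
replies = server_process(["SET a 1", "SET b 2", "SET c 3"])

print(unpipelined, round_trips, len(replies))  # 3 1 3
```

For N commands the latency drops from roughly N round-trip times to one, which is why pipelining helps most when the network, not the server, is the bottleneck.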
Double-write consistency between Redis and MySQL
Update the database first, then delete the cache. Because database reads are much faster than writes, dirty data rarely appears. You can additionally implement an asynchronous delayed-delete strategy, so that the cache deletion runs again after in-flight read requests have completed.
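The write path above can be sketched as a cache-aside flow (an in-memory illustration; the delayed second delete would be asynchronous in practice, and the key names are made up):

```python
import time

db = {"user:1": "old"}
cache = {"user:1": "old"}

def update(key, value, delayed=False):
    db[key] = value           # 1. update the database first
    cache.pop(key, None)      # 2. then delete the cache entry
    if delayed:
        time.sleep(0.01)      # 3. (async in practice) let racing reads finish...
        cache.pop(key, None)  #    ...then delete again to evict any stale refill

def read(key):
    if key not in cache:      # cache miss: reload from the database
        cache[key] = db[key]
    return cache[key]

update("user:1", "new")
print(read("user:1"))  # 'new' - the stale cache entry was removed
```

The delayed second delete covers the race where a read loads the old value from the database just before the write lands and re-caches it after the first delete.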
