How to use caffeine_redis to customize a second-level cache
Questions
Based on the requirements, there are two main questions:
1. With a local cache on each node, how do we ensure data consistency? When data changes on one node, how are the other nodes' local caches invalidated?
2. If cached data is wrong and needs to be resynchronized, how do we invalidate the cache on demand?
Flow chart
The next step was to work with the product owner and the other developers to draw a flow chart, as follows:
A configuration table records whether caching is needed and whether it is currently enabled, so that the cache can be invalidated when a notification arrives.
Because the requirements here are tolerant (losing an occasional message would not cause much harm), I finally chose the publish/subscribe feature of Redis to notify the other nodes to invalidate their local caches.
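For reference, wiring that subscription in Spring might look like the sketch below. This is an assumed configuration class, not code from the original article; only `CacheConfig.TOPIC` (the channel the aspect publishes to later in this article) and `CacheMessageListener` come from the source.

```java
// A minimal wiring sketch (assumed bean/class names): register the listener on the
// topic that the aspect publishes to, so every node receives eviction messages.
@Configuration
public class SubscribeConfig {

    @Bean
    public RedisMessageListenerContainer listenerContainer(RedisConnectionFactory factory,
                                                           CacheMessageListener listener) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(factory);
        // CacheConfig.TOPIC is the channel used by redisTemplate.convertAndSend(...)
        container.addMessageListener(listener, new ChannelTopic(CacheConfig.TOPIC));
        return container;
    }
}
```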
Development
With the questions answered and the flow chart settled, it is time to start writing code (and, inevitably, bugs). The overall idea is a custom annotation handled by an aspect, to keep the coupling with business code as low as possible.
CacheConfig
The key point, explained in the code below, is defining a CacheManager that the business code can use. Pay special attention to the maximum number of cached entries: too large and the local cache can exhaust the program's memory; too small and the hit rate suffers. The final size must therefore be tuned to the actual business.
```java
@Bean(name = JKDDCX)
@Primary
public CacheManager cacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder()
            // expire a fixed time after the last access
            .expireAfterAccess(EXPIRE, TIME_UNIT)
            // expire a fixed time after the entry is written
            .expireAfterWrite(EXPIRE, TIME_UNIT)
            // initial capacity of the local cache
            .initialCapacity(500)
            // maximum number of entries: number of users * 5
            // (roughly 5 distinct parameter combinations per user)
            .maximumSize(1000));
    return cacheManager;
}
```
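Since the manager is registered as a named `@Bean`, it can also back Spring's standard caching annotations directly. A hedged illustration, assuming the `JKDDCX` constant is a `String` bean name; the service class and cache name below are made up:

```java
@Service
public class UserService {

    // Uses the Caffeine-backed manager defined above via its bean name (JKDDCX).
    // "userCache" and this service are illustrative only, not from the article.
    @Cacheable(cacheNames = "userCache", cacheManager = CacheConfig.JKDDCX)
    public User findUser(Long id) {
        return userRepository.findById(id).orElse(null);
    }
}
```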
@CaffeineCache
Define a custom annotation containing all the parameters we need.
```java
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface CaffeineCache {

    // module id, used to look up cache settings in the database configuration table
    String moudleId() default "";

    String methodId() default "";

    String cachaName() default "";

    // used to switch dynamically between actual CacheManager beans
    String cacheManager() default "";
}
```
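A hedged usage sketch: the aspect intercepts calls to methods (or classes) carrying the annotation. The service below and the `moudleId`/`methodId` values are illustrative assumptions, not from the original.

```java
@Service
public class OrderQueryService {

    // moudleId/methodId are assumed to match rows in the configuration table
    // that decide whether caching is enabled for this method (values made up)
    @CaffeineCache(moudleId = "order", methodId = "listByUser", cachaName = "orderCache")
    public List<Order> listByUser(String userId) {
        return orderMapper.selectByUser(userId);
    }
}
```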
CacheMessageListener
The cache listener is mainly responsible for keeping data consistent across nodes: when the cache is updated on one node, the other nodes are notified so they can react. The core technique is Redis publish/subscribe, used by implementing the MessageListener interface.
One more detail below: the Redis KEYS command is usually disabled in production environments, so the matching keys have to be found another way, with SCAN.
```java
public class CacheMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message, byte[] pattern) {
        CacheMessage cacheMessage = (CacheMessage) redisTemplate.getValueSerializer()
                .deserialize(message.getBody());
        logger.info("Received cache-eviction message from Redis, clearing local cache, cacheName is {}, key is {}",
                cacheMessage.getCacheName(), cacheMessage.getKey());

        /*
         * If @CaffeineCache is placed on a class, every one of its methods is cached.
         * The logic below keeps only the Redis entry for the key of the current call
         * and deletes every other Redis entry under this cacheName (e.g. the module's
         * table was updated, but this call has just re-cached its own key, so the
         * stale sibling entries must go).
         */
        String prefixKey = RedisConstant.WXYMG_DATA_CACHE + cacheMessage.getCacheName();
        Set<String> keys = redisTemplate.execute((RedisCallback<Set<String>>) connection -> {
            Set<String> keysTmp = new HashSet<>();
            // use SCAN instead of KEYS, which is usually disabled in production
            try (Cursor<byte[]> cursor = connection.scan(
                    ScanOptions.scanOptions().match(prefixKey + "*").count(50).build())) {
                while (cursor.hasNext()) {
                    keysTmp.add(new String(cursor.next()));
                }
            }
            return keysTmp;
        });

        // keep the key that triggered this message; delete all the others
        keys.removeIf(key -> key.equals(cacheMessage.getKey()));
        redisTemplate.delete(keys);

        // clear everything under this cacheName in the local Caffeine cache
        cacheConfig.cacheManager().getCache(cacheMessage.getCacheName()).clear();
    }
}
```
CaffeineCacheAspect
Next comes the aspect's logic. It implements exactly what the flow chart describes, just expressed in code.
In particular, the following line is where the aspect publishes the invalidation message via Redis:
```java
redisTemplate.convertAndSend(CacheConfig.TOPIC, new CacheMessage(caffeineCache.cachaName(), redisKey));
```
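The `redisKey` above is built by the aspect, but the article does not show how. A minimal, self-contained sketch of one plausible scheme, where the class name, prefix layout, and `:` separator are all assumptions:

```java
// Hypothetical helper: namespaces the Redis key by cache name and appends the
// method arguments, so different inputs are cached under different keys.
public class CacheKeyBuilder {

    public static String build(String prefix, String cacheName, Object... args) {
        StringBuilder sb = new StringBuilder(prefix).append(cacheName);
        for (Object arg : args) {
            sb.append(':').append(arg == null ? "null" : arg.toString());
        }
        return sb.toString();
    }
}
```

In the article's code, `prefix` would correspond to `RedisConstant.WXYMG_DATA_CACHE`, so that the listener's `SCAN` pattern `prefixKey + "*"` matches all keys produced for a given cache name.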
CacheMessage
This is the message body published through Redis. It is also a custom class; more attributes can be added as needed.
```java
public class CacheMessage implements Serializable {

    private static final long serialVersionUID = -1L;

    private String cacheName;
    private Object key;

    public CacheMessage(String cacheName, Object key) {
        super();
        this.cacheName = cacheName;
        this.key = key;
    }
}
```
The above is the detailed content of How to use caffeine_redis to customize the second level cache. For more information, please follow other related articles on the PHP Chinese website!
