
Use ngx_lua to build high-concurrency applications

Jul 30, 2016 01:31 PM
memcached pipeline redis

This article mainly focuses on how to communicate non-blockingly with the backend memcached and redis through ngx_lua.

1. Memcached

Accessing Memcached from Nginx requires module support. Here, HttpMemcModule is chosen; this module can communicate with the backend Memcached non-blockingly. Note that Nginx's officially provided memcached module only supports the get operation, while Memc supports most Memcached commands.

The Memc module uses entry variables to pass parameters: all variables prefixed with $memc_ are Memc entry variables, and memc_pass points to the backend Memcached server.

Configuration:

#Use HttpMemcModule
location = /memc {
    set $memc_cmd $arg_cmd;
    set $memc_key $arg_key;
    set $memc_value $arg_val;
    set $memc_exptime $arg_exptime;
    memc_pass '127.0.0.1:11211';
}

Output:

$ curl 'http://localhost/memc?cmd=set&key=foo&val=Hello'
STORED
$ curl 'http://localhost/memc?cmd=get&key=foo'
Hello

This achieves access to memcached directly through Nginx. Next, let's look at how to access memcached from Lua.

Configuration:

#Access Memcached in Lua
location = /memc {
    internal;   #Only internal access
    set $memc_cmd get;
    set $memc_key $arg_key;
    memc_pass '127.0.0.1:11211';
}

location = /lua_memc {
    content_by_lua '
        local res = ngx.location.capture("/memc", {
            args = { key = ngx.var.arg_key }
        })
        if res.status == 200 then
            ngx.say(res.body)
        end
    ';
}

Output:

$ curl 'http://localhost/lua_memc?key=foo'
Hello
Accessing memcached from Lua is implemented mainly through subrequests, in a way similar to a function call. First, a /memc location is defined for communicating with the backend memcached; it acts as the memcached storage endpoint. Since the Memc module is non-blocking and ngx.location.capture is also non-blocking, the entire operation is non-blocking.
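
As a further illustration (not from the original article), here is a minimal sketch of how the same subrequest pattern could also cover writes, assuming a hypothetical /memc_full location that forwards the command, key and value from the subrequest arguments, and assuming the Memc module reports storage success with a 2xx status:

#Hypothetical variant of the Memc location that forwards the command as well
location = /memc_full {
    internal;
    set $memc_cmd $arg_cmd;
    set $memc_key $arg_key;
    set $memc_value $arg_val;
    memc_pass '127.0.0.1:11211';
}

location = /lua_memc_set {
    content_by_lua '
        -- store the value, then read it back; both calls are non-blocking subrequests
        local set_res = ngx.location.capture("/memc_full", {
            args = { cmd = "set", key = ngx.var.arg_key, val = ngx.var.arg_val }
        })
        if set_res.status >= 300 then
            -- assumption: any non-2xx status means the storage command failed
            ngx.status = 500
            ngx.say("set failed")
            return
        end
        local get_res = ngx.location.capture("/memc_full", {
            args = { cmd = "get", key = ngx.var.arg_key }
        })
        if get_res.status == 200 then
            ngx.say(get_res.body)
        end
    ';
}

Under these assumptions, a request such as curl 'http://localhost/lua_memc_set?key=foo&val=bar' would store the value and echo it back.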

2. Redis

Accessing redis requires the support of HttpRedis2Module, which can also communicate with redis non-blockingly. However, redis2's response is redis's native response, so it needs to be parsed when used in Lua. The LuaRedisParser library can be used to construct redis's native requests and to parse its native responses.

Configuration:

#Access Redis in Lua
location = /redis {
    internal;   #Only internal access
    redis2_query get $arg_key;
    redis2_pass '127.0.0.1:6379';
}

location = /lua_redis {   #Requires LuaRedisParser
    content_by_lua '
        local parser = require("redis.parser")
        local res = ngx.location.capture("/redis", {
            args = { key = ngx.var.arg_key }
        })
        if res.status == 200 then
            local reply = parser.parse_reply(res.body)
            ngx.say(reply)
        end
    ';
}

Output:

$ curl 'http://localhost/lua_redis?key=foo'
Hello

Similar to accessing memcached, a redis storage location is defined specifically for querying redis, and redis is then accessed through subrequests.

3. Redis Pipeline

When actually accessing redis, it is often necessary to query multiple keys at the same time. We can use ngx.location.capture_multi to send multiple subrequests to the redis storage and then parse the responses, as in the sketch below. However, there is a limit: the Nginx core stipulates that no more than 50 subrequests can be issued at a time, so this approach no longer applies once the number of keys exceeds 50.
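
For a small number of keys, a hedged sketch of this approach might look as follows, reusing the /redis location and redis.parser from the previous example (the key list and the /lua_mget location name are made up for illustration):

location = /lua_mget {
    content_by_lua '
        local parser = require("redis.parser")
        local keys = { "one", "two", "three" }   -- hypothetical keys
        -- one subrequest per key; remember the ~50-subrequest limit
        local reqs = {}
        for i, k in ipairs(keys) do
            reqs[i] = { "/redis", { args = { key = k } } }
        end
        local resps = { ngx.location.capture_multi(reqs) }
        for i, resp in ipairs(resps) do
            if resp.status == 200 then
                local reply = parser.parse_reply(resp.body)
                ngx.say(keys[i], ": ", tostring(reply))
            end
        end
    ';
}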

Fortunately, redis provides a pipeline mechanism that can execute multiple commands over one connection, reducing the round-trip delay of executing the commands one by one. After the client sends multiple commands through the pipeline, redis receives and executes these commands sequentially and then outputs the results in the same order. To use pipeline in Lua, you need the redis2_raw_queries directive of the redis2 module to issue redis's native requests.

Configuration:

#Access Redis in Lua
location = /redis {
    internal;   #Only internal access
    redis2_raw_queries $args $echo_request_body;
    redis2_pass '127.0.0.1:6379';
}

location = /pipeline {
    content_by_lua_file 'conf/pipeline.lua';
}

pipeline.lua:

-- conf/pipeline.lua file
local parser = require('redis.parser')
local reqs = {
    {'get', 'one'}, {'get', 'two'}
}
-- Construct the native redis queries: get one\r\nget two\r\n
local raw_reqs = {}
for i, req in ipairs(reqs) do
    table.insert(raw_reqs, parser.build_query(req))
end
local res = ngx.location.capture('/redis?'..#reqs, { body = table.concat(raw_reqs, '') })
if res.status and res.body then
    -- Parse the native responses of redis
    local replies = parser.parse_replies(res.body, #reqs)
    for i, reply in ipairs(replies) do
        ngx.say(reply[1])
    end
end

Output:

$ curl 'http://localhost/pipeline'
first
second
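
Building on pipeline.lua above, a hedged sketch of generalizing this to an arbitrary key list and mapping the replies back by position might look like this (the mget_pipeline helper name is hypothetical; it assumes the same /redis location and redis.parser setup):

-- Hypothetical helper, same redis.parser and /redis setup as above
local parser = require('redis.parser')

local function mget_pipeline(keys)
    local raw_reqs = {}
    for i, k in ipairs(keys) do
        raw_reqs[i] = parser.build_query({'get', k})
    end
    -- one subrequest carries the whole pipeline; #keys is passed as $args
    local res = ngx.location.capture('/redis?' .. #keys,
                                     { body = table.concat(raw_reqs, '') })
    local values = {}
    if res.status == 200 and res.body then
        local replies = parser.parse_replies(res.body, #keys)
        for i, reply in ipairs(replies) do
            values[keys[i]] = reply[1]   -- reply[1] is the value, reply[2] its type
        end
    end
    return values
end

-- usage: local v = mget_pipeline({'one', 'two'}); ngx.say(v['one'])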

4. Connection Pool

In the previous examples of accessing redis and memcached, every time a request is processed a connection is established with the backend server and then released after the request finishes. This process incurs overhead such as the TCP three-way handshake and TIME_WAIT, which is intolerable for high-concurrency applications. A connection pool is introduced here to eliminate this overhead.

The connection pool requires the support of the HttpUpstreamKeepaliveModule module.

Configuration:

http {
    #Requires HttpUpstreamKeepaliveModule
    upstream redis_pool {
        server 127.0.0.1:6379;
        #The pool can hold up to 1024 connections
        keepalive 1024 single;
    }

    server {
        location = /redis {
            ...
            redis2_pass redis_pool;
        }
    }
}

This module provides the keepalive directive, and its context is upstream. We know that upstream is used when Nginx acts as a reverse proxy; in fact, upstream means "upstream", and this upstream can be servers such as redis, memcached or mysql. An upstream can define a virtual server cluster, and these backend servers can enjoy load balancing. keepalive 1024 defines the size of the connection pool; when the number of connections exceeds this size, subsequent connections automatically degrade into short connections. Using the connection pool is very simple: just replace the original IP address and port number with the name of the upstream.

It has been measured that, when accessing memcached (using the Memc module above), the rps was 20,000 without a connection pool; with the connection pool, rps rose all the way to 140,000. In actual situations such a large improvement may not be achieved, but an improvement of 100-200% is still basically possible.
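
For the memcached case just mentioned, a minimal sketch (assuming the same HttpUpstreamKeepaliveModule syntax as the redis example above; the memc_pool name is made up) would simply point memc_pass at a keepalive upstream:

http {
    upstream memc_pool {
        server 127.0.0.1:11211;
        #Same pool-size directive as in the redis example
        keepalive 1024 single;
    }

    server {
        location = /memc {
            set $memc_cmd $arg_cmd;
            set $memc_key $arg_key;
            set $memc_value $arg_val;
            #Replace the IP and port with the upstream name to enable the pool
            memc_pass memc_pool;
        }
    }
}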

5. Summary

Here is a summary of accessing memcached and redis:

1. Nginx provides a powerful programming model: a location is equivalent to a function, a subrequest is equivalent to a function call, and a location can even send subrequests to itself, forming a recursive model. This model can be used to implement complex business logic.

2. Nginx's IO operations must be non-blocking; if Nginx blocks, its performance drops dramatically. Therefore, in Lua, subrequests must be issued through ngx.location.capture so that these IO operations are delegated to Nginx's event model.

3. When TCP connections are needed, use the connection pool whenever possible. This eliminates much of the overhead of establishing and releasing connections.

The above introduces the use of ngx_lua to build high-concurrency applications. I hope it will be helpful to friends who are interested in PHP tutorials.