Usage of the curl_multi function set in PHP
1. Introduction
I have been quite busy lately and have not written a blog post for a long time. Today I want to share my experience with the curl_multi_* function set and with making HTTP requests.
When we make an HTTP request from PHP, what is the first thing that comes to mind? Right: create a curl handle and fire the request. What if we need to make several HTTP requests in one run? That seems simple too: issue one curl request per URL, wait for the first one to finish, then request the second one, and so on. Is that all there is to it? Not quite.
Official documentation: http://php.net/manual/zh/book.curl.php
2. Disadvantages of multiple simple curl requests
Let's look at an example. Suppose there are three HTTP requests and each one takes 2s. With simple sequential curl requests (Figure 1-(1)) the total comes to 6 seconds, which is hard to tolerate: the more requests there are, the more time it takes.
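For reference, here is a minimal sketch of that naive sequential approach (my own illustration, using the same three URLs as the later example); the elapsed time is roughly the sum of the individual response times:

<?php
// Naive approach: one blocking curl request after another.
// If each request takes about 2s, the loop takes about 6s in total.
$urls = array('https://www.baidu.com', 'http://www.jd.com', 'http://www.jianshu.com/');
$start = microtime(true);
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $output = curl_exec($ch); // blocks until this single request finishes
    curl_close($ch);
    print $url . " done\n";
}
printf("Sequential total: %.2fs\n", microtime(true) - $start);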
Is there a way to cut this down? Can the three HTTP requests run at the same time so that the total drops to about 2 seconds? There are many ways to achieve this: multiple processes, threads, an event loop, curl_multi_*, and so on. The simplest is the curl_multi_* function set, which internally is itself driven by an event loop.
3. Simple curl_multi_* application
<?php
/**
 * A simple curl_multi_* example
 *
 * @author: rudy
 * @date: 2016/07/12
 */

/**
 * Build a curl handle from a url, postData and extra options; this part is
 * straightforward, see the official documentation for details
 */
function getCurlObject($url, $postData = array(), $header = array()){
    $options = array();
    $url = trim($url);
    $options[CURLOPT_URL] = $url;
    $options[CURLOPT_TIMEOUT] = 10;
    $options[CURLOPT_USERAGENT] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.89 Safari/537.36';
    $options[CURLOPT_RETURNTRANSFER] = true;
    // $options[CURLOPT_PROXY] = '127.0.0.1:8888';
    foreach($header as $key => $value){
        $options[$key] = $value;
    }
    if(!empty($postData) && is_array($postData)){
        $options[CURLOPT_POST] = true;
        $options[CURLOPT_POSTFIELDS] = http_build_query($postData);
    }
    if(stripos($url, 'https') === 0){
        $options[CURLOPT_SSL_VERIFYPEER] = false;
    }
    $ch = curl_init();
    curl_setopt_array($ch, $options);
    return $ch;
}

// Create three curl handles to be requested
$chList = array();
$chList[] = getCurlObject('https://www.baidu.com');
$chList[] = getCurlObject('http://www.jd.com');
$chList[] = getCurlObject('http://www.jianshu.com/');

// Create the multi-request executor (the downloader)
$downloader = curl_multi_init();

// Add the three handles to the downloader
foreach ($chList as $ch){
    curl_multi_add_handle($downloader, $ch);
}

// Poll
do {
    while (($execrun = curl_multi_exec($downloader, $running)) == CURLM_CALL_MULTI_PERFORM);
    if ($execrun != CURLM_OK) {
        break;
    }
    // As soon as a request finishes, pick it out and process it; curl uses
    // select() underneath, so it is limited to 1024 descriptors at most
    while ($done = curl_multi_info_read($downloader)) {
        // Get the info, content and error from the finished request
        $info = curl_getinfo($done['handle']);
        $output = curl_multi_getcontent($done['handle']);
        $error = curl_error($done['handle']);
        // Save the result; here it is simply printed
        print $output;
        // print "One request finished downloading!\n";
        // Remove the curl handle whose request has completed
        curl_multi_remove_handle($downloader, $done['handle']);
    }
    // Block when there is no data, yielding the CPU so the do-loop above
    // does not spin on an empty loop and drive CPU usage to 100%
    if ($running) {
        $rel = curl_multi_select($downloader, 1);
        if($rel == -1){
            usleep(1000);
        }
    }
    if($running == false){
        break;
    }
} while (true);

// All downloads finished, close the downloader
curl_multi_close($downloader);
echo "All requests finished downloading!";
In this example, we first create the handles for the three URLs to be requested, create a downloader with the curl_multi_* functions, add the requests to the downloader, and then poll until all three requests have completed, processing each one as it finishes.
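As a follow-up sketch of my own (not part of the original post), the add/poll/collect steps can be wrapped into a reusable function that returns the response bodies keyed by URL and skips failed transfers:

<?php
// A small wrapper around the pattern above: takes prepared curl handles,
// runs them concurrently and returns body text keyed by effective URL.
function multiDownload(array $chList) {
    $results = array();
    $downloader = curl_multi_init();
    foreach ($chList as $ch) {
        curl_multi_add_handle($downloader, $ch);
    }
    do {
        while (($execrun = curl_multi_exec($downloader, $running)) == CURLM_CALL_MULTI_PERFORM);
        if ($execrun != CURLM_OK) {
            break;
        }
        while ($done = curl_multi_info_read($downloader)) {
            $info = curl_getinfo($done['handle']);
            // $done['result'] is the CURLcode; CURLE_OK means the transfer succeeded
            if ($done['result'] == CURLE_OK) {
                $results[$info['url']] = curl_multi_getcontent($done['handle']);
            }
            curl_multi_remove_handle($downloader, $done['handle']);
            curl_close($done['handle']);
        }
        if ($running && curl_multi_select($downloader, 1) == -1) {
            usleep(1000);
        }
    } while ($running);
    curl_multi_close($downloader);
    return $results;
}

// Usage with the getCurlObject() helper from the example above:
// $pages = multiDownload(array(getCurlObject('https://www.baidu.com'),
//                              getCurlObject('http://www.jd.com')));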
4. Complex curl_multi_* usage
Is that all there is to curl_multi_*? Too young, too simple! In the example above, every request in the downloader $downloader was added at the start. Can we add requests to the downloader dynamically, and pull completed requests out of it dynamically, while it is running? Think about what that is: it is the core part of a crawler, a dynamic downloader. How do we add requests dynamically? We can use multiple processes and feed them in via IPC, or we can use coroutines and feed them in through a queue, as in the sketch below.
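Here is a rough sketch of that idea (it is not the Tspider code; a plain PHP generator stands in for a full coroutine, and the seed URL list is just a placeholder). New handles are added to the multi handle while it is already running, as slots become free:

<?php
// Dynamic downloader sketch: a generator keeps producing URLs, and handles
// are added to the running multi handle whenever there is spare capacity.
function urlProducer() {
    // In a real crawler these would come from parsed pages or a queue.
    $seeds = array('https://www.baidu.com', 'http://www.jd.com', 'http://www.jianshu.com/');
    foreach ($seeds as $url) {
        yield $url;
    }
}

$downloader = curl_multi_init();
$producer = urlProducer();
$maxConcurrency = 2; // how many transfers may run at the same time
$active = 0;

do {
    // Top up the downloader with new requests while there is capacity.
    while ($active < $maxConcurrency && $producer->valid()) {
        $ch = curl_init($producer->current());
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_multi_add_handle($downloader, $ch);
        $producer->next();
        $active++;
    }

    while (curl_multi_exec($downloader, $running) == CURLM_CALL_MULTI_PERFORM);

    // Harvest finished transfers and free their slots.
    while ($done = curl_multi_info_read($downloader)) {
        $info = curl_getinfo($done['handle']);
        print "done: " . $info['url'] . "\n";
        curl_multi_remove_handle($downloader, $done['handle']);
        curl_close($done['handle']);
        $active--;
    }

    // Yield the CPU while transfers are still in flight.
    if ($running) {
        if (curl_multi_select($downloader, 1) == -1) {
            usleep(1000);
        }
    }
} while ($running || $producer->valid() || $active > 0);

curl_multi_close($downloader);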
I have implemented a crawler framework based on coroutines + curl_multi_*.
Tspider: https://github.com/hirudy/Tspider.
A single process can handle about 2000-5000 requests per minute.
The above introduces the usage of the curl_multi function set in PHP. I hope it is helpful to anyone interested in PHP.
