


Detailed explanation of how Python crawlers use proxy to crawl web pages
Proxy types: transparent proxy, anonymous proxy, distorting proxy, and high-anonymity proxy. This article covers how Python crawlers use proxies and includes a proxy pool class, which should make it easier for everyone to handle the trickier crawling problems that come up in real work.
Using a proxy with the urllib module
Using a proxy with urllib (urllib2 in Python 2) is somewhat cumbersome: you first build a ProxyHandler, then use it to build an opener, and finally install that opener so that subsequent requests go through the proxy.
The proxy format is "http://127.0.0.1:80". If the proxy requires an account and password, use "http://user:password@127.0.0.1:80".
import urllib.request

proxy = "http://127.0.0.1:80"
# Create a ProxyHandler object
proxy_support = urllib.request.ProxyHandler({'http': proxy})
# Create an opener object
opener = urllib.request.build_opener(proxy_support)
# Install the opener so later requests use it
urllib.request.install_opener(opener)
# Open a URL through the proxy
r = urllib.request.urlopen('http://youtube.com', timeout=500)
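If the proxy requires credentials, the "user:password" form mentioned above can be passed straight into ProxyHandler. A minimal sketch, assuming a placeholder address and placeholder credentials (no request is actually sent here):

```python
import urllib.request

# Hypothetical proxy with HTTP Basic Auth credentials embedded in the URL
proxy = "http://user:password@127.0.0.1:80"

# ProxyHandler accepts the credential-bearing URL directly
proxy_support = urllib.request.ProxyHandler({'http': proxy, 'https': proxy})
opener = urllib.request.build_opener(proxy_support)

# Inspect what the handler registered, without hitting the network
print(proxy_support.proxies['http'])
```

After `urllib.request.install_opener(opener)`, every `urlopen` call would route through this authenticated proxy.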
Using a proxy with the requests module
Using a proxy with requests is much simpler than with urllib. A single proxied request is shown below; if you make many requests, you can use a session so the proxy settings are reused.
If you need to use a proxy, you can configure a single request by providing the proxies parameter to any request method:
import requests

proxies = {
    "http": "http://127.0.0.1:3128",
    "https": "http://127.0.0.1:2080",
}
r = requests.get("http://youtube.com", proxies=proxies)
print(r.text)
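For repeated requests, as mentioned above, a Session can carry the proxy configuration so you don't have to pass `proxies` on every call. A minimal sketch (the addresses are placeholders, and the real request is left commented out to avoid a network dependency):

```python
import requests

session = requests.Session()
# Proxies set on the session apply to every request made through it
session.proxies = {
    "http": "http://127.0.0.1:3128",
    "https": "http://127.0.0.1:2080",
}

# Every call through this session would now use the proxies, e.g.:
# r = session.get("http://youtube.com")

print(session.proxies["http"])
```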
You can also configure the proxy through the environment variables HTTP_PROXY and HTTPS_PROXY.
$ export HTTP_PROXY="http://127.0.0.1:3128"
$ export HTTPS_PROXY="http://127.0.0.1:2080"
$ python
>>> import requests
>>> r = requests.get("http://youtube.com")
>>> print(r.text)
If your proxy requires HTTP Basic Auth, you can use the http://user:password@host/ syntax:
proxies = {
    "http": "http://user:pass@127.0.0.1:3128/",
}
Using a proxy from Python is very simple; the most important thing is to find a stable and reliable proxy. If you have any questions, please leave a message.
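The proxy pool class mentioned at the start can be sketched as follows. This is a minimal, hypothetical implementation (the `ProxyPool` name and the proxy addresses are illustrative, not from any library): it hands out a random proxy in the dict format that requests expects, and lets you drop proxies that turn out to be dead.

```python
import random

class ProxyPool:
    """A minimal proxy pool: hand out a random proxy, drop dead ones."""

    def __init__(self, proxies):
        # proxies: list of proxy URLs, e.g. "http://127.0.0.1:3128"
        self.proxies = list(proxies)

    def get(self):
        # Return a random proxy as a dict usable as requests' proxies= argument
        if not self.proxies:
            raise RuntimeError("proxy pool is empty")
        proxy = random.choice(self.proxies)
        return {"http": proxy, "https": proxy}

    def remove(self, proxy):
        # Drop a proxy that failed, so it is not handed out again
        if proxy in self.proxies:
            self.proxies.remove(proxy)

pool = ProxyPool(["http://127.0.0.1:3128", "http://127.0.0.1:2080"])
print(pool.get())
pool.remove("http://127.0.0.1:3128")
print(len(pool.proxies))  # 1
```

In a real crawler you would call `pool.get()` before each request, and `pool.remove()` inside the exception handler when a request through that proxy times out or is refused.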
The above is the detailed content of Detailed explanation of how Python crawlers use proxy to crawl web pages. For more information, please follow other related articles on the PHP Chinese website!
