
Summary of Python crawler skills

Feb 24, 2017, 3:22 PM
python crawler

Python crawler: Summary of some commonly used crawler techniques

Crawler development involves many reusable routines. Here is a summary of them, which may save some work in the future.

1. Basic crawling of web pages

get method

import urllib2

url = "http://www.baidu.com"
response = urllib2.urlopen(url)
print response.read()

post method

import urllib
import urllib2

url = "http://abcde.com"
form = {'name':'abc','password':'1234'}
form_data = urllib.urlencode(form)
request = urllib2.Request(url,form_data)
response = urllib2.urlopen(request)
print response.read()
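For reference, urllib2 no longer exists in Python 3; its functionality was split between urllib.request and urllib.parse. A minimal Python 3 sketch of the same GET and POST requests (the URLs are the article's placeholders; the actual network calls are left commented out so the request-building steps stand on their own):

```python
import urllib.parse
import urllib.request

# GET: urlopen(url) would fetch the page directly
url = "http://www.baidu.com"
# response = urllib.request.urlopen(url)
# html = response.read()

# POST: in Python 3 the form data must be URL-encoded and passed as bytes
form = {'name': 'abc', 'password': '1234'}
form_data = urllib.parse.urlencode(form).encode('utf-8')
request = urllib.request.Request("http://abcde.com", data=form_data)
# response = urllib.request.urlopen(request)
```

Passing `data=` is what makes the request a POST; with `data=None` the same Request object would be sent as a GET.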

2. Use proxy IP

In the process of developing crawlers, your IP often gets blocked; in that case you need to use a proxy IP.

There is a ProxyHandler class in the urllib2 package, through which you can set up a proxy to access the web page, as shown in the following code snippet:

import urllib2

proxy = urllib2.ProxyHandler({'http': '127.0.0.1:8087'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.baidu.com')
print response.read()
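The same proxy setup in Python 3, as a sketch (the proxy address is the article's placeholder, and the final fetch is commented out since it needs a live proxy):

```python
import urllib.request

# Route HTTP traffic through a local proxy; ProxyHandler maps
# scheme -> proxy address
proxy = urllib.request.ProxyHandler({'http': '127.0.0.1:8087'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
# response = urllib.request.urlopen('http://www.baidu.com')
```

After install_opener, every later urllib.request.urlopen call in the process goes through this proxy.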

3. Cookies processing

Cookies are data (usually encrypted) that some websites store on the user's local machine in order to identify the user and track the session. Python provides the cookielib module for handling cookies; its main job is to provide objects that can store cookies, so that it can be used together with the urllib2 module to access Internet resources.

Code snippet:

import urllib2, cookielib

cookie_support= urllib2.HTTPCookieProcessor(cookielib.CookieJar())
opener = urllib2.build_opener(cookie_support)
urllib2.install_opener(opener)
content = urllib2.urlopen('http://XXXX').read()

The key is CookieJar(), which manages HTTP cookie values: it stores the cookies generated by HTTP requests and adds cookie objects to outgoing HTTP requests. The entire jar is kept in memory, and the cookies are lost once the CookieJar instance is garbage-collected; none of these steps needs to be handled manually.
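In Python 3, cookielib was renamed http.cookiejar and the processor moved to urllib.request; a sketch of the equivalent setup (the final fetch is commented out since the URL is a placeholder):

```python
import http.cookiejar
import urllib.request

# CookieJar stores cookies in memory; HTTPCookieProcessor wires it
# into the opener so cookies are captured and resent automatically
cookie_jar = http.cookiejar.CookieJar()
cookie_support = urllib.request.HTTPCookieProcessor(cookie_jar)
opener = urllib.request.build_opener(cookie_support)
urllib.request.install_opener(opener)
# content = urllib.request.urlopen('http://XXXX').read()
```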

Add cookie manually



cookie = "PHPSESSID=91rurfqm2329bopnosfu4fvmu7; kmsign=55d2c12c9b1e3; KMUID=b6Ejc1XSwPq9o756AxnBAg="
request.add_header("Cookie", cookie)

4. Disguise as a browser

Some websites dislike crawler visits and reject all requests from crawlers. As a result, HTTP Error 403: Forbidden often occurs when accessing a website directly with urllib2.

Pay special attention to some headers. The server will check these headers.

1) User-Agent: some servers or proxies check this value to determine whether the request was initiated by a browser.
2) Content-Type: when calling a REST interface, the server checks this value to decide how the content in the HTTP body should be parsed.

This can be achieved by modifying the headers of the HTTP request. The code snippet is as follows:

import urllib2

headers = {
 'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
request = urllib2.Request(
 url = 'http://my.oschina.net/jhao104/blog?catalog=3463517',
 headers = headers
)
print urllib2.urlopen(request).read()
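The Python 3 version is almost identical, with only the module path changed; a sketch (the fetch itself is commented out since it needs the network):

```python
import urllib.request

# A browser-like User-Agent string, as in the snippet above
ua = ('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) '
      'Gecko/20091201 Firefox/3.5.6')
headers = {'User-Agent': ua}
request = urllib.request.Request(
    url='http://my.oschina.net/jhao104/blog?catalog=3463517',
    headers=headers,
)
# html = urllib.request.urlopen(request).read()
```

Note that urllib.request stores header names with only the first letter capitalized, so the stored key is 'User-agent'.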

5. Page parsing

The most powerful tool for page parsing is of course the regular expression. Regexes differ from site to site and user to user, so they need little explanation here; a good URL for testing them:

Regular Expressions Online test: http://tool.oschina.net/regex/
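As a small illustration of regex-based extraction (the HTML snippet here is made up), pulling every link target out of a page might look like:

```python
import re

html = '<a href="/page1">one</a> <b>x</b> <a href="/page2">two</a>'
# Non-greedy match on the href attribute value; findall returns
# the captured group for each match
links = re.findall(r'<a href="(.*?)">', html)
print(links)
```

This pattern is fragile against real-world HTML (attribute order, quoting, whitespace), which is exactly why the parsing libraries below exist.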

The second option is a parsing library. Two commonly used ones are lxml and BeautifulSoup; here are two good introductions to their use:

lxml:http://my.oschina.net/jhao104/blog/639448

BeautifulSoup:http://cuiqingcai.com/1319.html

My evaluation of these two libraries: both are HTML/XML processing libraries. BeautifulSoup is implemented purely in Python and is inefficient, but its functions are practical; for example, you can obtain the source code of an HTML node from a search result. lxml is written in C, is highly efficient, and supports XPath.
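When neither library is installed, the standard library's html.parser can handle simple extraction tasks; a minimal sketch (the class name and the HTML snippet are made up for illustration) that collects the text of every &lt;a&gt; tag:

```python
from html.parser import HTMLParser

class LinkTextParser(HTMLParser):
    """Collect the text content of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.in_a = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.in_a = True

    def handle_endtag(self, tag):
        if tag == 'a':
            self.in_a = False

    def handle_data(self, data):
        if self.in_a:
            self.texts.append(data)

parser = LinkTextParser()
parser.feed('<p><a href="/a">first</a> and <a href="/b">second</a></p>')
print(parser.texts)
```

For anything beyond this, BeautifulSoup's search API or lxml's XPath is far more convenient.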

6. Verification code processing

Some simple verification codes can be recognized programmatically; I have only done some simple verification code recognition. However, some anti-human verification codes, such as 12306's, can be solved manually through a captcha-solving platform, which of course costs a fee.

7. Gzip compression

Have you ever encountered some web pages that are garbled no matter how they are transcoded? Haha, that means you don’t know that many web services have the ability to send compressed data, which can reduce the large amount of data transmitted on the network line by more than 60%. This is especially true for XML web services because XML data can be compressed to a very high degree.

But generally the server will not send compressed data for you unless you tell the server that you can handle compressed data.

So you need to modify the code like this:

import urllib2, httplib
request = urllib2.Request('http://xxxx.com')
request.add_header('Accept-encoding', 'gzip')
opener = urllib2.build_opener()
f = opener.open(request)

This is the key: create a Request object and add an Accept-encoding header to tell the server that you can accept gzip-compressed data.

Then decompress the data:

import StringIO
import gzip

compresseddata = f.read() 
compressedstream = StringIO.StringIO(compresseddata)
gzipper = gzip.GzipFile(fileobj=compressedstream) 
print gzipper.read()
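In Python 3, StringIO.StringIO becomes io.BytesIO, since the compressed payload is bytes; the same decompression flow, with the HTTP body simulated by gzip.compress so the round trip is self-contained:

```python
import gzip
import io

# Simulate a gzip-compressed HTTP response body
compresseddata = gzip.compress(b'<html>hello</html>')

# Wrap the bytes in a file-like object and decompress, exactly
# as in the urllib2 snippet above
compressedstream = io.BytesIO(compresseddata)
gzipper = gzip.GzipFile(fileobj=compressedstream)
body = gzipper.read()
print(body)
```

In practice gzip.decompress(compresseddata) does the same thing in one call; the file-object form mirrors the article's snippet.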

8. Multi-threaded concurrent capture

If a single thread is too slow, multi-threading is needed. Here is a simple thread pool template. This program simply prints the numbers 0-9, but you can see that it runs them concurrently.

Although Python's multi-threading is rather hamstrung (by the GIL), for network-heavy crawlers it can still improve efficiency to a certain extent.

from threading import Thread
from Queue import Queue
from time import sleep

# q is the task queue
# NUM is the total number of concurrent threads
# JOBS is the number of tasks
q = Queue()
NUM = 2
JOBS = 10

# the handler function, responsible for processing a single task
def do_somthing_using(arguments):
    print arguments

# the worker thread, which keeps taking tasks from the queue and processing them
def working():
    while True:
        arguments = q.get()
        do_somthing_using(arguments)
        sleep(1)
        q.task_done()

# fork NUM threads to wait for tasks
for i in range(NUM):
    t = Thread(target=working)
    t.setDaemon(True)
    t.start()

# queue up the JOBS tasks
for i in range(JOBS):
    q.put(i)

# wait for all JOBS to finish
q.join()
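In Python 3, Queue became queue, print is a function, and daemon can be passed to the Thread constructor. The same pattern, with the one-second sleep dropped and results collected into a list so the effect can be checked:

```python
from queue import Queue
from threading import Thread, Lock

q = Queue()
NUM = 2      # number of worker threads
JOBS = 10    # number of tasks
results = []
lock = Lock()

# the handler function, responsible for processing a single task
def do_something_using(arguments):
    with lock:
        results.append(arguments)

# the worker thread: keep taking tasks from the queue and processing them
def working():
    while True:
        arguments = q.get()
        do_something_using(arguments)
        q.task_done()

# start NUM daemon threads waiting for tasks
for _ in range(NUM):
    Thread(target=working, daemon=True).start()

# queue up the JOBS tasks
for i in range(JOBS):
    q.put(i)

# block until every queued task has been marked done
q.join()
print(sorted(results))
```

The standard library's concurrent.futures.ThreadPoolExecutor now wraps this whole pattern (pool, queue, and join) in a few lines, and is usually the better choice in new code.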

The above is the entire content of this article. I hope it will be helpful to everyone's learning. I also hope that everyone will support the PHP Chinese website.

For more articles related to the summary of Python crawler skills, please pay attention to the PHP Chinese website!
