
A brief discussion on the encoding processing of Python crawling web pages

Feb 22, 2017 am 11:13 AM

Background

During the Mid-Autumn Festival, a friend sent me an email saying that when he crawled Lianjia, the pages his crawler returned were all garbled, and he asked me for advice (working overtime during the Mid-Autumn Festival, so dedicated!). I had actually run into this problem long ago, back when I was scraping novels, but I never took it seriously. The root cause is simply a poor understanding of character encoding.

Question

A very common crawler script looks like this:

# -*- coding: utf-8 -*-
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'
# fetch the listing page and print the returned text
res = requests.get(url)
print res.text

The goal is simple: fetch the Lianjia page content. After running this, however, everything in the returned result that involves Chinese comes out garbled, like this:


<script type="text/template" id="newAddHouseTpl">
 <p class="newAddHouse">
  自从您上次浏览(<%=time%>ï¼‰ä¹‹åŽï¼Œè¯¥æœç´¢æ¡ä»¶ä¸‹æ–°å¢žåŠ äº†<%=count%>套房源
  <a href="<%=url%>" class="LOGNEWERSHOUFANGSHOW" <%=logText%>><%=linkText%></a>
  <span class="newHouseRightClose">x</span>
 </p>
</script>

Such data can be said to be useless.

Problem Analysis

The problem here is obvious, that is, the encoding of the text is incorrect, resulting in garbled characters.

View the encoding of the web page

The meta tag in the head of the crawled page shows that the page is encoded in utf-8:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

So the final text handling must use utf-8; in other words, the content must be decoded with utf-8: decode('utf-8').
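As a side note, if you want to read that declared charset programmatically rather than by eye, a small helper along these lines works. This is just an illustrative sketch; declared_charset is not part of the article's code.

# minimal sketch for pulling the declared charset out of the HTML (illustrative only)
import re

def declared_charset(html):
    # look for charset=... inside the meta tag
    match = re.search(r'charset=["\']?([\w-]+)', html, re.IGNORECASE)
    return match.group(1) if match else None

print declared_charset('<meta http-equiv="Content-Type" content="text/html; charset=utf-8">')
# -> utf-8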

Text encoding and decoding

In Python, text handling follows this flow: source bytes ===> decode (with the source encoding) ===> unicode ===> encode (with the target encoding). For the most part, it is not recommended to use

import sys
reload(sys)
sys.setdefaultencoding('utf8')

to force a default encoding onto all text like this. That said, being lazy here is not a big problem as long as it doesn't bite you; still, the recommended approach is to decode and encode the text explicitly after fetching the source.
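For a concrete picture of that flow, here is a minimal sketch (Python 2, matching the article's code): a raw UTF-8 byte string is decoded into unicode with the source encoding, and can then be encoded into whatever target encoding you need. The gbk target here is just an example.

# minimal sketch of the decode/encode flow (Python 2 style, as in the article)
raw = '\xe6\x88\xbf\xe6\xba\x90'      # raw UTF-8 bytes for the characters "房源"
text = raw.decode('utf-8')            # bytes -> unicode, using the source encoding
gbk_bytes = text.encode('gbk')        # unicode -> bytes, using a target encoding (gbk here as an example)

print repr(text)                      # u'\u623f\u6e90'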

Back to the question

The biggest question now is the encoding of the source content. When we use requests normally, it automatically guesses the encoding of the response and transcodes it into Unicode. But it is only a guess, and a program can guess wrong; when it does, we need to specify the encoding manually. The official documentation describes it as follows:

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property.

So let's check which encoding requests has settled on:

# -*- coding: utf-8 -*-
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# print the encoding requests has guessed for this response
print res.encoding

The printed results are as follows:

ISO-8859-1

In other words, requests is treating the source as ISO-8859-1. A quick search for ISO-8859-1 turns up the following:

ISO8859-1, usually called Latin-1. Latin-1 includes additional characters indispensable for writing all Western European languages.
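Incidentally, that guess usually comes straight from the HTTP response: when the Content-Type header says text/* but carries no charset, requests falls back to ISO-8859-1. A quick way to confirm this, and to see what requests would guess from the page body instead, is something like the sketch below (the exact header value printed is an assumption about this particular site):

# -*- coding: utf-8 -*-
import requests

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'
res = requests.get(url)

print res.headers.get('Content-Type')   # likely 'text/html' with no charset parameter
print res.encoding                      # what requests guessed from the headers: ISO-8859-1
print res.apparent_encoding             # what requests guesses from the body itself, likely utf-8

If the header-based guess is wrong, setting res.encoding = res.apparent_encoding before reading res.text is another option, at the cost of an extra detection pass over the body.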

Problem Solving

Once you know this, the problem is easy to solve: just specify the correct encoding and the Chinese is printed properly. The code is as follows:

# -*- coding: utf-8 -*-
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# tell requests the real encoding before reading res.text
res.encoding = 'utf-8'

print res.text

The printed result speaks for itself: the Chinese characters are now displayed correctly.


Another approach is to re-encode the text back into its original bytes and then decode those bytes with the correct encoding. The code is as follows:

# -*- coding: utf-8 -*-
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# res.encoding = 'utf-8'

# res.text was decoded as ISO-8859-1, so encode it back to the original bytes
# and decode those bytes with the correct encoding
print res.text.encode('ISO-8859-1').decode('utf-8')

Note: ISO-8859-1 is also called latin1; using 'latin1' instead gives exactly the same result.
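As a quick sanity check (purely illustrative), Python's codecs module shows that the two names resolve to the same codec:

import codecs

print codecs.lookup('ISO-8859-1').name   # iso8859-1
print codecs.lookup('latin1').name       # iso8859-1 -- same codec, different alias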

There is a lot more that could be said about character encoding; readers who want to dig deeper can refer to the following material.

•《The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)》

The above is a brief discussion of encoding handling when crawling web pages with Python. I hope it gives you a useful reference, and I hope everyone will continue to support the PHP Chinese website.

For more articles on handling encodings when crawling web pages with Python, please follow the PHP Chinese website!
