
Detailed explanation of Python's method of crawling Sogou images from web pages

Mar 24, 2017 pm 04:26 PM

I hadn't realized Python was so powerful and convenient. I used to save pictures by copying and pasting them one by one; with Python, a short program can save them automatically. This article introduces how to crawl images from Sogou Images web pages using Python 3.6. Readers who need this can refer to it.

Preface

Over the past few days I have been studying web crawling, something I had long been curious about. Here I write down some of my experiences. On to the main text:

We will use Sogou Images as the crawling target here.

First, open Sogou Images and enter the wallpaper category (just as an example, of course). Before crawling a site, it helps to get a preliminary feel for its structure.

[Screenshot: the Sogou Images wallpaper category page]

Once on the page, press F12 to open the developer tools (the author uses Chrome).

Right-click an image >> Inspect

[Screenshot: the Elements panel showing an image's img tag]

We find that the src of each image we need sits in an img tag. So the first attempt: use Python's requests library to fetch the page, parse out the img elements, read each src, and then download the pictures one by one with urllib.request.urlretrieve, thereby fetching the data in batches. The idea sounds good. Next we tell the program which URL to crawl: http://pic.sogou.com/pics/recommend?category=%B1%DA%D6%BD, taken from the address bar after entering the category. Now that we understand the URL, happy coding time begins:

When writing a crawler, it is best to debug it step by step, making sure each operation is correct — a habit every programmer should have (whether the author counts as a programmer, he isn't sure). Let's analyze the web page this URL points to.

import requests
from bs4 import BeautifulSoup

# Fetch the category page and parse its HTML
res = requests.get('http://pic.sogou.com/pics/recommend?category=%B1%DA%D6%BD')
soup = BeautifulSoup(res.text, 'html.parser')
print(soup.select('img'))  # list every <img> tag found in the returned HTML
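As a side note, once img tags are in hand, pulling out their src attributes is straightforward. The sketch below uses only the standard library's html.parser instead of BeautifulSoup, so it runs with no third-party packages; the sample HTML is made up purely for illustration:

```python
from html.parser import HTMLParser

class ImgSrcCollector(HTMLParser):
    """Collect the src attribute of every <img> tag seen."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            for name, value in attrs:
                if name == 'src' and value:
                    self.srcs.append(value)

# Hypothetical HTML standing in for a fetched page
html = '<p><img src="a.jpg"><img alt="logo" src="b.png"></p>'
parser = ImgSrcCollector()
parser.feed(html)
print(parser.srcs)  # -> ['a.jpg', 'b.png']
```

The collected URLs could then be fed to a downloader one by one, which is exactly the plan described above.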

output:

[Screenshot: the output — only the site logo's img tag was parsed]

The output does not contain the image elements we want — only the logo's img tag was parsed, which is obviously not what we need. In other words, the picture data is not in the HTML returned by http://pic.sogou.com/pics/recommend?category=%B1%DA%D6%BD. So the elements are probably loaded dynamically.

Observant readers may have noticed that scrolling the mouse wheel down refreshes the images dynamically: the page does not load all its resources at once, but fetches them on demand. This also keeps the page from becoming bloated and slow to load.

Now the painful exploration begins: we want the real URLs of all the pictures. The author is new to this and not very experienced at finding them. The place finally located is F12 >> Network >> XHR >> (click a file under XHR) >> Preview.

[Screenshot: the XHR response preview showing the all_items array]

This looks close to what we need. Expanding all_items reveals entries 0, 1, 2, 3 ... that appear to be picture elements. Opening one of their URLs confirms it really is a picture address. Target found. Click Headers under the XHR entry and read the second line, the Request URL:

http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp?category=%E5%A3%81%E7%BA%B8&tag=%E5%85%A8%E9%83%A8&start=0&len=15&width=1536&height=864

Next, try trimming the unnecessary parts. The trick is to delete a candidate segment and check that the request still works. After the author's screening, the final URL is:

http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp?category=%E5%A3%81%E7%BA%B8&tag=%E5%85%A8%E9%83%A8&start=0&len=15

From the parameter names we can guess that category is the classification, start is the starting index, and len is the length, i.e. the number of pictures. Okay, happy coding time again:
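Rather than hand-concatenating percent-encoded strings, the query can be assembled from plain text with the standard library. This is a sketch of how the same Request URL could be built; the parameter names are taken from the URL above, and build_url is just an illustrative helper name:

```python
from urllib.parse import urlencode

BASE = 'http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp'

def build_url(category, start=0, length=15, tag='全部'):
    # urlencode percent-encodes the Chinese words as UTF-8, matching the
    # %E5%A3%81%E7%BA%B8-style URL observed in the Network panel
    query = urlencode({'category': category, 'tag': tag,
                       'start': start, 'len': length})
    return BASE + '?' + query

print(build_url('壁纸'))
# -> http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp?category=%E5%A3%81%E7%BA%B8&tag=%E5%85%A8%E9%83%A8&start=0&len=15
```

This makes it easy to vary start and len later without touching the encoded strings.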

The development environment is Win7 with Python 3.6. The requests library must be installed before running the code.

To install requests for Python 3.6, type in CMD:

pip install requests

The author wrote and debugged as he went; the final code is posted here:

import requests
import json
import urllib.request  # in Python 3, urlretrieve lives in urllib.request

def getSogouImag(category, length, path):
    n = length
    cate = category
    # Query the JSON interface discovered above
    imgs = requests.get('http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp?category='
                        + cate + '&tag=%E5%85%A8%E9%83%A8&start=0&len=' + str(n))
    jd = json.loads(imgs.text)
    jd = jd['all_items']
    imgs_url = []
    for j in jd:
        imgs_url.append(j['bthumbUrl'])  # thumbnail URL of each picture
    m = 0
    for img_url in imgs_url:
        print('***** ' + str(m) + '.jpg *****' + ' Downloading...')
        urllib.request.urlretrieve(img_url, path + str(m) + '.jpg')
        m = m + 1
    print('Download complete!')

getSogouImag('壁纸', 2000, 'd:/download/壁纸/')
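One caveat: the code above asks the interface for 2000 pictures in a single request, while the site itself fetches them 15 at a time via start and len. Whether the server honours a very large len is uncertain, so a safer pattern is to page through the results with start offsets. A minimal sketch of the offset arithmetic (page_params is a hypothetical helper; the page size of 15 mirrors the requests observed in the Network panel):

```python
def page_params(total, page_size=15):
    """Return (start, len) pairs covering `total` items page by page."""
    return [(start, min(page_size, total - start))
            for start in range(0, total, page_size)]

# e.g. the first three pages of a 40-picture crawl
print(page_params(40)[:3])  # -> [(0, 15), (15, 15), (30, 10)]
```

Each (start, len) pair would then be substituted into the getAllRecomPicByTag.jsp URL inside a loop, accumulating all_items across requests.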

When the program started running, the author was, admittedly, a little excited. Have a look:

[Screenshots: the console output during download, and the saved pictures in the target folder]

This concludes the walkthrough of writing this crawler. Overall, finding the URL that actually serves the elements to be crawled is the key step in many crawling tasks.

The above is the detailed content of Detailed explanation of Python's method of crawling Sogou images from web pages. For more information, please follow other related articles on the PHP Chinese website!

