Crawling WeChat Articles with Python
This article shares a small Python program that crawls WeChat articles through the Sogou search entry point. It is simple and practical; readers who need it are welcome to use it as a reference.
I wanted to build a website that collects WeChat articles, but unfortunately I could not find a direct entry link into WeChat itself. After digging through a lot of material online, I found that everyone's approach is broadly the same: use Sogou's WeChat search as the entry point. Below is the Python code the author put together to crawl WeChat articles; if you are interested, read on.
#!/usr/bin/env python
# coding: utf-8
# author: haoning

import json
import time

import requests

# OPENID = 'oIWsFtyel13ZMva1qltQ3pfejlwU'
OPENID = 'oIWsFtw_-W2DaHwRz1oGWzL-wF9M&ext'
XML_LIST = []

# current time in milliseconds, used as the cache-busting `t` parameter
current_milli_time = lambda: int(round(time.time() * 1000))


def get_json(page_index):
    """Fetch one page of article metadata from the Sogou WeChat endpoint."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/39.0.2171.95 Safari/537.36',
        'Referer': 'http://weixin.sogou.com/gzh?openid={0}'.format(OPENID),
        'Host': 'weixin.sogou.com',
    }
    url = ('http://weixin.sogou.com/gzhjs?cb=sogou.weixin.gzhcb'
           '&openid={0}&page={1}&t={2}').format(OPENID, page_index,
                                                current_milli_time())
    print(url)
    response = requests.get(url, headers=headers)
    response_text = response.text
    # The response is JSONP: sogou.weixin.gzhcb({...})
    # Strip the callback wrapper to get the raw JSON string.
    json_start = response_text.index('sogou.weixin.gzhcb(') + len('sogou.weixin.gzhcb(')
    json_end = response_text.rindex(')')
    json_str = response_text[json_start:json_end]
    return json.loads(json_str)


def add_xml(json_obj):
    """Extend the global list with the items of one result page."""
    XML_LIST.extend(json_obj['items'])


# ------------ Main ----------------
print('play it :)')

# get the first page, which also tells us the total page count
default_json_obj = get_json(1)
if default_json_obj:
    add_xml(default_json_obj)
    total_pages = default_json_obj['totalPages']
    total_items = default_json_obj['totalItems']
    print(total_pages)
    # iterate over the remaining pages
    for page_index in range(2, total_pages + 1):
        add_xml(get_json(page_index))
        print('load page ' + str(page_index))
    print(len(XML_LIST))
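The trickiest step in the crawler is peeling the JSON payload out of the JSONP wrapper that the Sogou endpoint returns. Here is a minimal, standalone sketch of just that step; the callback name `sogou.weixin.gzhcb` comes from the URL in the article's code, while the sample payload below is made up for illustration (the real endpoint returns `items` as a list of XML strings):

```python
import json


def strip_jsonp(text, callback='sogou.weixin.gzhcb'):
    """Extract the JSON object from a JSONP response like callback({...})."""
    start = text.index(callback + '(') + len(callback) + 1
    end = text.rindex(')')  # last ')' closes the callback call
    return json.loads(text[start:end])


# hypothetical sample response, shaped like the endpoint's answer
sample = 'sogou.weixin.gzhcb({"totalPages": 3, "totalItems": 25, "items": []})'
obj = strip_jsonp(sample)
print(obj['totalPages'])  # → 3
```

Using `rindex(')')` rather than `index(')')` matters here: the JSON payload itself may contain parentheses inside string values, so only the final closing parenthesis can safely be treated as the end of the wrapper.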