How to ping Baidu and Google using Python scripts
Ping service
Ping is an update-notification service based on the standard XML-RPC protocol. Blogs use it to notify search engines of new or updated content quickly, so the engines can crawl and index it promptly.
Your machine acts as the RPC client: it sends a request to the search engine's RPC server and receives the result of the method call.
Python implementation method
Python ships with the xmlrpclib module (xmlrpc.client in Python 3), which handles the XML-RPC protocol for you, so there is no need to build or parse the request packets yourself.
Usage is simple. First, import the library:
import xmlrpclib
Generate xmlrpc server object:
server = xmlrpclib.ServerProxy(ping_url)
where ping_url is the search engine's XML-RPC endpoint address (for Baidu, http://ping.baidu.com/ping/RPC2).
Then you can execute the RPC server method. Take Baidu as an example:
result = server.weblogUpdates.extendedPing(blog_name, index_addr, new_post_addr, rss_addr)
weblogUpdates.extendedPing is the method to call; the four parameters in brackets are the ones required on Baidu's ping service page. result holds the value returned by the call.
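Putting these steps together, here is a minimal sketch against Baidu's endpoint (Python 2; the blog name and URLs are placeholder values, not from the original article):

import xmlrpclib

server = xmlrpclib.ServerProxy('http://ping.baidu.com/ping/RPC2')
result = server.weblogUpdates.extendedPing(
    'my_blog',                      # blog name
    'http://example.com',           # site (index) address
    'http://example.com/new-post',  # new post address
    'http://example.com/rss',       # RSS feed address
)
print result  # prints the status code returned by the ping service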
Encapsulation code
Pass the links that need to be pinged to the ping_all function, supplying the parameters as required.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import json
import xmlrpclib

from db import redis


def ping(ping_url, *args, **kwds):
    """args: site_name, site_host, post_url, rss_url."""
    rpc_server = xmlrpclib.ServerProxy(ping_url)
    result = rpc_server.weblogUpdates.extendedPing(*args)
    print result


def ping_all(*args, **kwds):
    # Ping every configured search engine with the same arguments.
    ping_url_list = [
        'http://ping.baidu.com/ping/RPC2',
        'http://rpc.pingomatic.com/',
        'http://blogsearch.google.com/ping/RPC2',
    ]
    for url in ping_url_list:
        ping(url, *args, **kwds)


def main():
    # Subscribe to the 'ping' channel and ping all engines for each new post.
    client = redis.pubsub()
    client.subscribe(['ping'])
    while True:
        for item in client.listen():
            if item['type'] == 'message':
                msg = item['data']
                if msg:
                    post = json.loads(msg)
                    print post
                    ping_all(post.get('site_name'),
                             post.get('site_host'),
                             post.get('post_url'),
                             post.get('rss_url'))


def test():
    site_name = "tech2ipo"
    site_host = "http://alpha.tech2ipo.com"
    post_url = 'http://alpha.tech2ipo.com/100855'
    rss_url = "http://alpha.tech2ipo.com/rss/alpha.tech2ipo.com"
    ping_all(site_name, site_host, post_url, rss_url)


if __name__ == '__main__':
    main()
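For context, the main() function above listens on a redis 'ping' channel and pings every post whose details arrive as a JSON message. Assuming db.redis wraps an ordinary redis-py client, the publishing side could look roughly like this (the connection settings are illustrative assumptions; the payload mirrors the values in test()):

import json
import redis  # redis-py package; db.redis above is a project-specific wrapper

r = redis.StrictRedis(host='localhost', port=6379)  # assumed connection settings
payload = {
    'site_name': 'tech2ipo',
    'site_host': 'http://alpha.tech2ipo.com',
    'post_url': 'http://alpha.tech2ipo.com/100855',
    'rss_url': 'http://alpha.tech2ipo.com/rss/alpha.tech2ipo.com',
}
r.publish('ping', json.dumps(payload))  # main() picks this up and calls ping_all()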
Summary
That is all for this article. I hope it helps anyone learning or using Python. If you have any questions, feel free to leave a comment.
For more articles on pinging Baidu and Google with Python scripts, please follow the PHP Chinese website!