Using Python to crawl articles from a prose website

The environment is Python 2.7 with the bs4 and requests packages, installed with pip:

```
sudo pip install bs4
sudo pip install requests
```
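A quick way to check that both packages installed correctly (my own sanity check, not from the original article):

```python
# verify that both packages import and report their versions
import bs4
import requests
print bs4.__version__, requests.__version__
```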
A brief note on bs4: since this is about crawling web pages, I will only introduce find and find_all. The difference between them is in what they return: find returns the first matching tag together with its contents, while find_all returns a list of all matching tags.
For example, we can write a test.html to show the difference between find and find_all. Its content is:
```html
<html>
<head></head>
<body>
<div id="one"><a></a></div>
<div id="two"><a href="#">abc</a></div>
<div id="three">
    <a href="#">three a</a>
    <a href="#">three a</a>
    <a href="#">three a</a>
</div>
<div id="four">
    <a href="#">four<p>four p</p><p>four p</p><p>four p</p> a</a>
</div>
</body>
</html>
```
Then the code of test.py is:
```python
from bs4 import BeautifulSoup  # the 'lxml' parser below requires the lxml package

if __name__ == '__main__':
    s = BeautifulSoup(open('test.html'), 'lxml')
    print s.prettify()
    print "------------------------------"
    print s.find('div')
    print s.find_all('div')
    print "------------------------------"
    print s.find('div', id='one')
    print s.find_all('div', id='one')
    print "------------------------------"
    print s.find('div', id='two')
    print s.find_all('div', id='two')
    print "------------------------------"
    print s.find('div', id='three')
    print s.find_all('div', id='three')
    print "------------------------------"
    print s.find('div', id='four')
    print s.find_all('div', id='four')
    print "------------------------------"
```
Running it shows the result: when fetching a single specified tag there is little difference between the two, but when fetching a group of tags the difference becomes clear.

So pay attention to which one you actually need; otherwise you will get errors.
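To make the difference concrete, here is what the two calls return for the div with id="three" (a small sketch assuming the test.html above; expected output shown in comments):

```python
from bs4 import BeautifulSoup

s = BeautifulSoup(open('test.html'), 'lxml')

# find returns the first matching Tag object (or None when nothing matches)
print s.find('div', id='three')
# <div id="three"><a href="#">three a</a><a href="#">three a</a><a href="#">three a</a></div>

# find_all always returns a list, even when only one tag matches
print s.find_all('div', id='three')
# [<div id="three"><a href="#">three a</a><a href="#">three a</a><a href="#">three a</a></div>]
```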
The next step is to fetch the pages with requests. I don't quite understand why other people bother with headers and the like; I simply request the pages directly: first get the site's several category sub-pages with the get method, then crawl all of the listing pages in a loop.
```python
import requests

def get_html():
    # base URL (blank in the original post; sanwen.net per the article text)
    url = 'https://www.sanwen.net/'
    two_html = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
    for doc in two_html:
        i = 1
        if doc == 'sanwen':
            print "running sanwen -----------------------------"
        if doc == 'shige':
            print "running shige ------------------------------"
        if doc == 'zawen':
            print 'running zawen -------------------------------'
        if doc == 'suibi':
            print 'running suibi -------------------------------'
        if doc == 'rizhi':
            print 'running rizhi -------------------------------'
        if doc == 'novel':
            print 'running xiaoxiaoshuo -------------------------'
        while i < 10:
            par = {'p': i}  # page number, passed as a query parameter
            res = requests.get(url + doc + '/', params=par)
            if res.status_code == 200:
                soup(res.text)
            i += 1
```
In this code I do not handle responses where res.status_code is not 200. The consequence is that errors are never shown and the corresponding content is silently lost. Analyzing the pages of sanwen.net, I found that the listing URLs look like www.sanwen.net/rizhi/&p=1, and the maximum value of p is 10. I don't understand why; last time I crawled the site there were 100 pages, so I'll look into it later. Each listing page is then fetched with the get method.
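Since silently dropping non-200 responses loses content, one way to handle it is a small retry wrapper around requests.get. This is my own suggestion, not part of the original script; the function name fetch and the retry count are made up:

```python
import time
import requests

def fetch(url, params=None, retries=3):
    """Return the page text, retrying a few times on errors; None on failure."""
    for attempt in range(retries):
        try:
            res = requests.get(url, params=params, timeout=10)
            if res.status_code == 200:
                return res.text
            print 'status %d for %s (attempt %d)' % (res.status_code, res.url, attempt + 1)
        except requests.RequestException as e:
            print 'request failed: %s' % e
        time.sleep(1)  # brief pause before retrying
    return None
```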
After fetching each listing page, the author and title need to be parsed out. The code looks like this:
```python
import re
from bs4 import BeautifulSoup

def soup(html_text):
    s = BeautifulSoup(html_text, 'lxml')
    # each article sits in an <li> inside the div with class 'categorylist'
    link = s.find('div', class_='categorylist').find_all('li')
    for i in link:
        if i != s.find('li', class_='page'):  # skip the pagination <li>
            title = i.find_all('a')[1]
            author = i.find_all('a')[2].text
            url = title.attrs['href']
            # some titles contain '/' or '//', which breaks file names later,
            # so replace them before using the title as a file name
            sign = re.compile(r'(//)|/')
            match = sign.search(title.text)
            file_name = title.text
            if match:
                file_name = sign.sub('a', str(title.text))
```
There is one annoying thing about the titles: why do people put slashes in their prose titles, and not just one, sometimes two? This directly caused wrong file names when I wrote the files later, so I wrote a regular expression to replace them.
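For reference, this is what that regular expression does to a title containing slashes (a standalone demo of the same pattern used in soup() above):

```python
import re

sign = re.compile(r'(//)|/')       # the pattern used in soup()
print sign.sub('a', 'my/title')    # myatitle
print sign.sub('a', 'my//title')   # myatitle  ('//' is matched as a whole first)
```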
The last step is to get the prose content itself. From each listing page we get the article address, and then fetch the content directly. (I had originally wanted to get the articles one by one just by mutating the page address, to save effort.)
```python
def get_content(url):
    # the prefix was blank in the original; the domain is sanwen.net per the article
    res = requests.get('https://www.sanwen.net' + url)
    if res.status_code == 200:
        soup = BeautifulSoup(res.text, 'lxml')
        # the article body is the <p> tags inside the div with class 'content'
        contents = soup.find('div', class_='content').find_all('p')
        content = ''
        for i in contents:
            content += i.text + '\n'
        return content
```
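Concatenating a hard-coded prefix with the href is fragile; urlparse.urljoin (a suggestion of mine, not the author's code) handles both relative and absolute hrefs. The path below is a made-up example:

```python
from urlparse import urljoin  # standard library in Python 2

base = 'https://www.sanwen.net'            # domain taken from the article text
print urljoin(base, '/subject/123.html')   # hypothetical href from a listing
# https://www.sanwen.net/subject/123.html
```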
The last thing is to write the content to a file and save it:
```python
f = open(file_name + '.txt', 'w')
print 'running w txt ' + file_name + '.txt'
f.write(title.text + '\n')
f.write(author + '\n')
content = get_content(url)
f.write(content)
f.close()
```
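One caveat worth noting (my addition, reusing the variables from the block above): in Python 2, the text bs4 returns is unicode, and writing it with plain open() can raise UnicodeEncodeError for non-ASCII titles or content. codecs.open sidesteps this:

```python
import codecs

# open the file with an explicit encoding so unicode text writes cleanly
f = codecs.open(file_name + '.txt', 'w', encoding='utf-8')
f.write(title.text + u'\n')
f.write(author + u'\n')
f.write(get_content(url))
f.close()
```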
These three functions fetch the prose from sanwen.net, but there is a problem: I don't know why some articles are lost. I can only get about 400 of them, far fewer than the site actually has, even though the crawl really does go page by page. I hope someone can help me with this; perhaps some pages were temporarily unreachable. Of course, I also suspect the flaky network in my dormitory has something to do with it.
I almost forgot the screenshot of the results.

