Developing an information crawler with Node.js: process and code sharing
This article introduces the process of developing an information crawler with Node.js. The crawler workflow boils down to downloading the target website's HTML to the local machine and then extracting the data from it. The details are below; I hope it helps you.
A recent project needed some news data. Since the project is written in Node.js, it was natural to write the crawler in Node.js as well.
Project address: github.com/mrtanweijie… The project crawls the information content of Readhub, Open Source China, Developer Toutiao, and 36Kr. It does not handle pagination for now, because the crawler runs once a day and fetching only the latest items is enough for the current needs; this will be improved later.
The crawler workflow can be summarized as: download the target website's HTML to the local machine, then extract the data from it.
1. Download page
Node.js has many HTTP request libraries; request is used here. The main code is as follows:
requestDownloadHTML () {
  const options = {
    url: this.url,
    headers: { 'User-Agent': this.randomUserAgent() }
  }
  return new Promise((resolve, reject) => {
    request(options, (err, response, body) => {
      if (!err && response.statusCode === 200) {
        return resolve(body)
      } else {
        return reject(err)
      }
    })
  })
}
The request is wrapped in a Promise so that it is convenient to use with async/await later. Because many websites are rendered on the client, the downloaded page may not contain the desired HTML content. In that case we can use Google's puppeteer to download pages from client-rendered websites. As is widely known, when installing it with npm i, puppeteer may fail because it needs to download the Chromium browser; just try a few more times :)
puppeteerDownloadHTML () {
  return new Promise(async (resolve, reject) => {
    try {
      const browser = await puppeteer.launch({ headless: true })
      const page = await browser.newPage()
      await page.goto(this.url)
      const bodyHandle = await page.$('body')
      const bodyHTML = await page.evaluate(body => body.innerHTML, bodyHandle)
      // close the browser so headless Chromium processes are not leaked
      await browser.close()
      return resolve(bodyHTML)
    } catch (err) {
      console.log(err)
      return reject(err)
    }
  })
}
Of course, for pages that are rendered on the client it is best to request the site's API directly, so that the subsequent HTML parsing is not needed at all. Here it is simply encapsulated and then used like this :)
await new Downloader('http://36kr.com/newsflashes', DOWNLOADER.puppeteer).downloadHTML()
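The wrapper itself is not shown above, so here is a minimal sketch of what such a Downloader class could look like, combining the two download methods from earlier. The DOWNLOADER values and the User-Agent string are assumptions for illustration, not the project's actual code:

import request from 'request'
import puppeteer from 'puppeteer'

// Assumed enum distinguishing the two download strategies (illustrative only)
const DOWNLOADER = { request: 'request', puppeteer: 'puppeteer' }

class Downloader {
  constructor (url, type = DOWNLOADER.request) {
    this.url = url
    this.type = type
  }

  randomUserAgent () {
    // The real project presumably rotates several real User-Agent strings
    return 'Mozilla/5.0 (compatible; NewsSpider/1.0)'
  }

  requestDownloadHTML () {
    const options = { url: this.url, headers: { 'User-Agent': this.randomUserAgent() } }
    return new Promise((resolve, reject) => {
      request(options, (err, response, body) => {
        if (!err && response.statusCode === 200) return resolve(body)
        return reject(err)
      })
    })
  }

  async puppeteerDownloadHTML () {
    const browser = await puppeteer.launch({ headless: true })
    try {
      const page = await browser.newPage()
      await page.goto(this.url)
      return await page.evaluate(() => document.body.innerHTML)
    } finally {
      await browser.close()
    }
  }

  downloadHTML () {
    // Dispatch to the configured download strategy
    return this.type === DOWNLOADER.puppeteer
      ? this.puppeteerDownloadHTML()
      : this.requestDownloadHTML()
  }
}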
2. HTML content extraction
HTML content extraction is, of course, done with the trusty cheerio. Cheerio exposes the same interface as jQuery and is very simple to use. Open the page in the browser, press F12 to inspect the element nodes to be extracted, and then pull out the content as needed:
readHubExtract () {
  let nodeList = this.$('#itemList').find('.enableVisited')
  nodeList.each((i, e) => {
    let a = this.$(e).find('a')
    this.extractData.push(
      this.extractDataFactory(
        a.attr('href'),
        a.text(),
        '',
        SOURCECODE.Readhub
      )
    )
  })
  return this.extractData
}
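For context, this.$ in the snippet above would come from loading the downloaded HTML with cheerio. Below is a minimal sketch of that setup; the constructor shape and the SOURCECODE constant are illustrative assumptions that may differ from the project's actual code:

import cheerio from 'cheerio'

// Assumed source identifiers (illustrative only)
const SOURCECODE = { Readhub: 0 }

class Extractor {
  constructor (html) {
    // cheerio.load gives a jQuery-like $ over the downloaded HTML
    this.$ = cheerio.load(html)
    this.extractData = []
  }

  extractDataFactory (url, title, summary, source) {
    // Normalize one extracted item into a plain object
    return { url, title, summary, source }
  }
}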
3. Scheduled tasks
Use cron to run the job once a day:

function job () {
  let cronJob = new cron.CronJob({
    cronTime: cronConfig.cronTime,
    onTick: () => {
      spider()
    },
    start: false
  })
  cronJob.start()
}
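For reference, cronConfig.cronTime is just a cron expression; the npm cron package accepts six fields (second, minute, hour, day of month, month, day of week). A hypothetical config that fires at 08:00 every day, not the project's actual value, might be:

// Illustrative cron configuration (assumed value)
const cronConfig = {
  cronTime: '0 0 8 * * *' // second minute hour day-of-month month day-of-week
}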
4. Data persistence
Strictly speaking, data persistence should not be the crawler's concern. Here, mongoose is used to create the model:
import mongoose from 'mongoose'
const Schema = mongoose.Schema
const NewsSchema = new Schema(
  {
    title: { type: 'String', required: true },
    url: { type: 'String', required: true },
    summary: String,
    recommend: { type: Boolean, default: false },
    source: { type: Number, required: true, default: 0 },
    status: { type: Number, required: true, default: 0 },
    createdTime: { type: Date, default: Date.now }
  },
  { collection: 'news' }
)
export default mongoose.model('news', NewsSchema)
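Before the model can be used, a MongoDB connection has to be established somewhere at application startup. A minimal sketch, where the connection URI is a placeholder rather than the project's real configuration:

import mongoose from 'mongoose'

// Placeholder URI; a real deployment would read this from configuration
mongoose.connect('mongodb://localhost:27017/news-spider')
mongoose.connection.on('error', err => console.error('MongoDB connection error:', err))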
Basic operations
import { OBJ_STATUS } from '../../Constants'
class BaseService {
  constructor (ObjModel) {
    this.ObjModel = ObjModel
  }

  saveObject (objData) {
    return new Promise((resolve, reject) => {
      this.ObjModel(objData).save((err, result) => {
        if (err) {
          return reject(err)
        }
        return resolve(result)
      })
    })
  }
}
export default BaseService
The news service:
import BaseService from './BaseService'
import News from '../models/News'
class NewsService extends BaseService {}
export default new NewsService(News)
Happily save the data:
await newsService.batchSave(newsListTem)
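batchSave is not part of the BaseService snippet above. A hedged sketch of such a method, sitting inside BaseService and using Mongoose's insertMany (the real project may implement it differently):

batchSave (objDataList) {
  // Insert the whole list in one round trip; insertMany returns a Promise when no callback is given
  return this.ObjModel.insertMany(objDataList)
}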
For more details, just clone the project from GitHub and take a look.