


How to implement information crawler using Node.js (detailed tutorial)
This article walks through building an information crawler with Node.js, from downloading the target pages to extracting and persisting the data. The details are covered below.
A recent project needed some news feeds, and since the project itself is written in Node.js, it was natural to write the crawler in Node.js as well.
Project address: github.com/mrtanweijie… The project crawls news content from Readhub, Open Source China, Developer Headlines, and 36Kr. Pagination is not handled for the time being: the crawler runs once a day, and fetching only the latest items on each run meets current needs; this will be improved later.
The crawler process can be summed up in two steps: download the target site's HTML to the local machine, then extract the data from it.
1. Download page
Node.js has many HTTP request libraries; request is used here. The main code is as follows:
requestDownloadHTML () {
  const options = {
    url: this.url,
    headers: { 'User-Agent': this.randomUserAgent() }
  }
  return new Promise((resolve, reject) => {
    request(options, (err, response, body) => {
      if (!err && response.statusCode === 200) {
        return resolve(body)
      } else {
        return reject(err)
      }
    })
  })
}
The request is wrapped in a Promise so that async/await can be used later. Because many sites are rendered on the client, the downloaded page may not contain the HTML content we want; for those pages we can use Google's puppeteer to download the client-rendered result. Note that installing puppeteer with npm i may fail because it has to download a Chromium build; just try a few more times :)
puppeteerDownloadHTML () {
  return new Promise(async (resolve, reject) => {
    try {
      const browser = await puppeteer.launch({ headless: true })
      const page = await browser.newPage()
      await page.goto(this.url)
      const bodyHandle = await page.$('body')
      const bodyHTML = await page.evaluate(body => body.innerHTML, bodyHandle)
      // Close the browser so headless Chromium does not keep running between crawls
      await browser.close()
      return resolve(bodyHTML)
    } catch (err) {
      console.log(err)
      return reject(err)
    }
  })
}
Of course, for client-rendered pages it is usually better to call the site's data interface directly, which makes the later HTML parsing unnecessary. Here the two download methods are simply wrapped behind one interface, and then it can be used like this :)
await new Downloader('http://36kr.com/newsflashes', DOWNLOADER.puppeteer).downloadHTML()
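The Downloader wrapper itself is not reproduced in the article. A minimal sketch of what such a wrapper might look like, assuming DOWNLOADER is a simple string enum and that the two methods shown above live on this class (the names here are illustrative; the real project may differ):

import request from 'request'
import puppeteer from 'puppeteer'

// Illustrative download-type "enum"; the real project may define this elsewhere.
const DOWNLOADER = { request: 'request', puppeteer: 'puppeteer' }

class Downloader {
  constructor (url, type = DOWNLOADER.request) {
    this.url = url
    this.type = type
  }

  // Dispatch to the strategy chosen at construction time
  downloadHTML () {
    return this.type === DOWNLOADER.puppeteer
      ? this.puppeteerDownloadHTML()
      : this.requestDownloadHTML()
  }

  requestDownloadHTML () {
    return new Promise((resolve, reject) => {
      request({ url: this.url }, (err, response, body) => {
        if (!err && response.statusCode === 200) return resolve(body)
        return reject(err || new Error('unexpected status code'))
      })
    })
  }

  async puppeteerDownloadHTML () {
    const browser = await puppeteer.launch({ headless: true })
    try {
      const page = await browser.newPage()
      await page.goto(this.url)
      return await page.evaluate(() => document.body.innerHTML)
    } finally {
      await browser.close()
    }
  }
}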
2. HTML content extraction
For HTML content extraction, cheerio is of course the tool of choice. Cheerio exposes the same interface as jQuery and is very simple to use. Open the page in the browser, press F12 to inspect the element nodes you need, and then extract the content accordingly:
readHubExtract () {
  let nodeList = this.$('#itemList').find('.enableVisited')
  nodeList.each((i, e) => {
    let a = this.$(e).find('a')
    this.extractData.push(
      this.extractDataFactory(
        a.attr('href'),
        a.text(),
        '',
        SOURCECODE.Readhub
      )
    )
  })
  return this.extractData
}
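The extractor above assumes this.$ and this.extractData have already been set up from the downloaded HTML. A minimal sketch of that wiring, assuming an extractor class built around cheerio.load (the class name and the shape of extractDataFactory are illustrative, not necessarily the project's):

import cheerio from 'cheerio'

class Extractor {
  constructor (html) {
    // cheerio.load returns a jQuery-like $ bound to the downloaded HTML
    this.$ = cheerio.load(html)
    this.extractData = []
  }

  // Illustrative helper matching the call in readHubExtract()
  extractDataFactory (url, title, summary, source) {
    return { url, title, summary, source }
  }
}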
3. Scheduled tasks
cron runs the spider once a day:

function job () {
  let cronJob = new cron.CronJob({
    cronTime: cronConfig.cronTime,
    onTick: () => {
      spider()
    },
    start: false
  })
  cronJob.start()
}
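cronConfig is not shown in the article. A plausible shape for it, assuming the six-field (seconds-first) cron syntax supported by the cron package; the expression below, firing once a day at 08:00, is only an example:

// Illustrative config; the real cronTime lives in the project's own configuration.
export const cronConfig = {
  // second minute hour day-of-month month day-of-week
  cronTime: '0 0 8 * * *'
}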
4. Data persistence
Strictly speaking, data persistence should not be the crawler's concern, but the project needs the data stored, so mongoose is used to create the Model:
import mongoose from 'mongoose'

const Schema = mongoose.Schema

const NewsSchema = new Schema(
  {
    title: { type: 'String', required: true },
    url: { type: 'String', required: true },
    summary: String,
    recommend: { type: Boolean, default: false },
    source: { type: Number, required: true, default: 0 },
    status: { type: Number, required: true, default: 0 },
    createdTime: { type: Date, default: Date.now }
  },
  { collection: 'news' }
)

export default mongoose.model('news', NewsSchema)
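The Model assumes mongoose has already connected to MongoDB somewhere at application startup. A minimal sketch of that step, assuming a local MongoDB instance (the connection string is illustrative):

import mongoose from 'mongoose'

// Illustrative connection string; the real project reads it from its own config.
mongoose.connect('mongodb://localhost:27017/news-spider')

mongoose.connection.on('error', err => {
  console.error('mongodb connection error', err)
})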
Basic database operations are wrapped in a BaseService:
import { OBJ_STATUS } from '../../Constants'

class BaseService {
  constructor (ObjModel) {
    this.ObjModel = ObjModel
  }

  saveObject (objData) {
    return new Promise((resolve, reject) => {
      this.ObjModel(objData).save((err, result) => {
        if (err) {
          return reject(err)
        }
        return resolve(result)
      })
    })
  }
}

export default BaseService
The news service simply extends it:
import BaseService from './BaseService'
import News from '../models/News'

class NewsService extends BaseService {}

export default new NewsService(News)
Then happily save the data:
await newsService.batchSave(newsListTem)
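batchSave is not part of the BaseService shown above. A minimal sketch of what such a method might look like, assuming it is added to BaseService and backed by mongoose's Model.insertMany (the real implementation is in the repo):

// Hypothetical method on BaseService, in the same style as saveObject above
batchSave (objDataList) {
  return new Promise((resolve, reject) => {
    this.ObjModel.insertMany(objDataList, (err, result) => {
      if (err) {
        return reject(err)
      }
      return resolve(result)
    })
  })
}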
For more details, just clone the project from GitHub and have a look.
Summary
That is everything I have put together here; I hope it is helpful to you.
