
Analysis of link extractor and deduplication tools in Scrapy

Jun 22, 2023 am 09:17 AM

Scrapy is an excellent Python crawler framework. It supports advanced features such as concurrency, distributed crawling, and asynchronous processing, helping developers crawl data on the Internet faster and more reliably. In Scrapy, link extractors and deduplication tools are very important components that help crawlers complete automated data capture and processing. This article analyzes the link extractor and deduplication tools in Scrapy, explores how they are implemented, and looks at their application in the Scrapy crawling process.

1. The function and implementation of the link extractor

Link Extractor is a tool in the Scrapy crawler framework for automatically extracting URL links. In a complete crawl, it is often necessary to extract URL links from a web page and then access and process further pages based on those links. The link extractor implements this step: it automatically extracts links from web pages according to configurable rules and hands them over to Scrapy's request queue for subsequent processing.

In Scrapy, the link extractor matches links through regular expressions or XPath expressions. Scrapy provides two classes for this, LinkExtractor and LxmlLinkExtractor (in current Scrapy versions LinkExtractor is simply an alias for LxmlLinkExtractor): the allow parameter filters URLs with regular expressions, while the restrict_xpaths parameter limits extraction to elements matched by XPath expressions.

  1. Regular expression-based LinkExtractor

A regular-expression-based LinkExtractor extracts the links whose URLs match a given pattern. For example, to extract all links starting with http://example.com/ from a web page, we can use the following code:

from scrapy.linkextractors import LinkExtractor

link_extractor = LinkExtractor(allow=r'^http://example.com/')
links = link_extractor.extract_links(response)

The allow parameter specifies a regular expression that matches all links starting with http://example.com/. The extract_links() method extracts every matched link and returns them as a list of Link objects.

The Link object is the data structure Scrapy uses to represent an extracted link; it contains information such as the link's URL and its anchor text. Through these objects, we can easily obtain the required links and further process and access them in the Scrapy crawler.
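As a minimal sketch (not code from the original article; the spider and callback names ExampleSpider and parse_item are illustrative), extracted Link objects can be turned into follow-up requests inside a spider like this:

# example_spider.py (illustrative sketch)
import scrapy
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com/']
    link_extractor = LinkExtractor(allow=r'^http://example.com/')

    def parse(self, response):
        for link in self.link_extractor.extract_links(response):
            # link.url is the absolute URL, link.text is the anchor text
            yield scrapy.Request(link.url, callback=self.parse_item)

    def parse_item(self, response):
        pass  # page-specific parsing goes here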

  2. LxmlLinkExtractor based on XPath expressions

An XPath-based LxmlLinkExtractor extracts links by matching XPath expressions against HTML tags in the page. For example, to extract all <a> links whose class attribute equals "storylink" from a web page, we can use the following code:

from scrapy.linkextractors import LxmlLinkExtractor

link_extractor = LxmlLinkExtractor(restrict_xpaths='//a[@class="storylink"]')
links = link_extractor.extract_links(response)

The restrict_xpaths parameter specifies an XPath expression that matches all <a> tags whose class attribute equals "storylink". LxmlLinkExtractor is used in the same way as LinkExtractor and also returns the extracted links as a list of Link objects. Note that LxmlLinkExtractor parses HTML with the lxml library, which is already installed as one of Scrapy's dependencies, so no extra project configuration is needed for it.

2. The role and implementation of deduplication tools

When crawling the web, link deduplication is very important, because in most cases the same URL is linked from many different pages and would otherwise appear repeatedly. Without deduplication, the crawler would fetch the same pages over and over, wasting bandwidth and time. Scrapy therefore introduced the Duplicate Filter, which marks the links that have already been crawled so that they are not visited again.

The principle of a deduplication tool is to save the visited URL links (usually as request fingerprints) in a data structure and then check whether each new URL has already been visited: if it has, the request is discarded; otherwise it is added to the crawler's request queue. Scrapy's built-in RFPDupeFilter keeps fingerprints in an in-memory set and can persist them to disk between runs, while extensions such as scrapy-redis provide Redis-based deduplication. Different deduplicators suit different scenarios; below we take the Redis deduplicator as an example.
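To make this concrete, here is a minimal sketch of the idea (an illustration, not Scrapy's actual RFPDupeFilter implementation): a custom dupe filter that remembers request fingerprints in an in-memory set and can be enabled through the DUPEFILTER_CLASS setting.

# dupefilter_sketch.py (illustrative)
from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

class SimpleDupeFilter(BaseDupeFilter):
    def __init__(self):
        self.fingerprints = set()

    def request_seen(self, request):
        # The fingerprint is a hash of the request method, URL, body, etc.
        fp = request_fingerprint(request)
        if fp in self.fingerprints:
            return True   # already seen: the scheduler discards the request
        self.fingerprints.add(fp)
        return False      # first time seen: the request is enqueued

The Redis deduplicator described below plugs into exactly the same DUPEFILTER_CLASS hook, but stores the fingerprints in Redis instead of local memory.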

  1. Redis-based deduplicator

Redis is a high-performance NoSQL in-memory database that supports distribution, persistence, and rich data structures, which makes it very suitable for implementing Scrapy's deduplication tool. The Redis deduplicator (provided by the scrapy-redis extension) marks URL links that have already been visited so that they are not visited again.
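At its core this is just set membership in Redis. A minimal sketch with the redis-py client (the key name 'myspider:dupefilter' and the fingerprint value are illustrative assumptions):

# redis_membership_sketch.py (illustrative)
import redis

r = redis.Redis(host='localhost', port=6379)

fingerprint = 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3'  # example request fingerprint

# SADD returns 1 if the member is new, 0 if it was already in the set
if r.sadd('myspider:dupefilter', fingerprint) == 0:
    print('already crawled, discard the request')
else:
    print('new URL, enqueue the request')

Because the set lives in Redis rather than in process memory, multiple crawler processes can share it, which is what makes this approach suitable for distributed crawling.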

Scrapy uses the in-memory, set-based dupe filter by default. To use the Redis deduplicator instead, install the scrapy-redis package and add the following settings to the project configuration file:

# settings.py
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_HOST = "localhost"
REDIS_PORT = 6379

The DUPEFILTER_CLASS setting specifies which deduplication class to use. Here we use scrapy_redis.dupefilter.RFPDupeFilter, which is implemented on top of Redis's set data structure.

The SCHEDULER setting specifies the scheduler class. Here we use scrapy_redis.scheduler.Scheduler, which keeps the request queue in Redis (its default priority queue is built on Redis's sorted set data structure).

The SCHEDULER_PERSIST setting specifies whether the scheduler state is kept in Redis after a crawl finishes, that is, whether the state of the last crawl is saved so that URLs that have already been crawled are not crawled again.

The REDIS_HOST and REDIS_PORT parameters specify the IP address and port number of the Redis database respectively. If the Redis database is not local, you need to set the corresponding IP address.

After enabling the Redis deduplicator, add a redis_key attribute to the spider to specify the name of the Redis key under which the URL links are stored. For example:

# spider.py
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['http://example.com']

    custom_settings = {
        'REDIS_HOST': 'localhost',
        'REDIS_PORT': 6379,
        'DUPEFILTER_CLASS': 'scrapy_redis.dupefilter.RFPDupeFilter',
        'SCHEDULER': 'scrapy_redis.scheduler.Scheduler',
        'SCHEDULER_PERSIST': True,
        'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderPriorityQueue',
        'REDIS_URL': 'redis://user:pass@localhost:6379',
        'ITEM_PIPELINES': {
            'scrapy_redis.pipelines.RedisPipeline': 400,
        },
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
            'scrapy_useragents.downloadermiddlewares.useragents.UserAgentsMiddleware': 500,
        },
        'FEED_URI': 'result.json',
        'FEED_FORMAT': 'json',
        'LOG_LEVEL': 'INFO',
        'SPIDER_MIDDLEWARES': {
            'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 300,
        }
    }

    def __init__(self, *args, **kwargs):
        # allowed_domains can be passed on the command line, e.g. -a domain=a.com,b.com
        domain = kwargs.pop('domain', '')
        self.allowed_domains = list(filter(None, domain.split(',')))
        # Redis key under which this spider's start URLs are stored
        self.redis_key = '%s:start_urls' % self.name
        super(MySpider, self).__init__(*args, **kwargs)

    def parse(self, response):
        pass  # page parsing logic goes here

The above is a simple spider example. The redis_key attribute specifies myspider:start_urls as the Redis key under which the URL links are stored. In the parse() method, write your own page-parsing code to extract the information you need.
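As a usage note that goes beyond the article's example (it assumes the spider inherits from scrapy_redis.spiders.RedisSpider, which reads its start URLs from the redis_key list), the key can be seeded from another process with the redis-py client:

# seed_start_urls.py (illustrative)
import redis

r = redis.Redis(host='localhost', port=6379)
# Push a start URL onto the list that the RedisSpider consumes
r.lpush('myspider:start_urls', 'http://example.com')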

3. Summary

Link extractors and deduplication tools are very important components of the Scrapy crawler framework. They greatly simplify the work of writing crawlers and improve crawling efficiency. When using Scrapy, we can choose different link extractors and deduplication tools according to our needs, achieving more efficient and flexible crawling.
