
Detailed tutorial on how to use scrapy shell to verify the results of xpath selection

Jul 19, 2017

1. scrapy shell

The scrapy shell is a handy interactive tool bundled with the Scrapy package. I currently use it mainly to verify the results of XPath selection. After installing Scrapy, you can run the scrapy shell directly from cmd.

Scrapy Shell

The Scrapy shell is an interactive terminal. It lets us try out and debug code without starting a spider, and it can also be used to test XPath or CSS expressions and see how they behave, which makes it easier to extract data from the pages we crawl.

If IPython is installed, the Scrapy shell will use IPython instead of the standard Python console. The IPython console is more powerful than the standard one, providing intelligent auto-completion, highlighted output, and other features. (Installing IPython is recommended.)

Start Scrapy Shell

Enter the root directory of the project and execute the following command to start the shell:

scrapy shell "http://www.itcast.cn/channel/teacher.shtml"

The Scrapy shell automatically creates some convenient objects from the downloaded page, such as the Response object and Selector objects (for both HTML and XML content).

When the shell loads, you get a local response variable containing the response data. Entering response.body prints the response body, and response.headers shows the response headers.

Entering response.selector gives you a Selector object initialized from the response. You can then query the response with response.selector.xpath() or response.selector.css().

Scrapy also provides shortcuts: response.xpath() and response.css() work the same way as the calls above.
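Outside the shell, you can get a feel for what such an XPath query does using only the standard library. Note this is only a rough sketch: Scrapy's selectors (built on the parsel package) support full XPath on real-world HTML, while ElementTree below handles only an XPath subset and requires well-formed markup. The HTML snippet and class name are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed stand-in for a crawled page
# (real pages generally need a tolerant HTML parser).
html = """<html><body>
  <div class="teacher"><h3>Alice</h3></div>
  <div class="teacher"><h3>Bob</h3></div>
</body></html>"""

root = ET.fromstring(html)

# Same idea as response.xpath('//div[@class="teacher"]/h3/text()')
# in the scrapy shell: select matching nodes, then pull out their text.
names = [h3.text for h3 in root.findall('.//div[@class="teacher"]/h3')]
print(names)  # ['Alice', 'Bob']
```

In the shell itself, the equivalent query returns a list of selectors rather than plain strings; you call extract() on it to get the text.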

Selectors

Scrapy Selectors have built-in support for XPath and CSS selector expressions.

Selector has four basic methods, of which xpath is the most commonly used:

xpath(): takes an XPath expression and returns a list of selectors for all nodes matching the expression

extract(): serializes the selected nodes into Unicode strings and returns them as a list

css(): takes a CSS expression and returns a list of selectors for all matching nodes; the syntax is the same CSS selector syntax supported by BeautifulSoup4

re(): extracts data using the given regular expression and returns a list of Unicode strings
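The re() method can be approximated outside the shell with the standard re module. The snippets and pattern below are invented sample data, standing in for strings a selector would return:

```python
import re

# Strings like those returned by response.xpath('//p/text()').extract();
# invented sample data for illustration.
snippets = ["Price: 24.99 USD", "Price: 7.50 USD"]

# Same idea as calling .re(r'Price:\s*([\d.]+)') on a selector list:
# apply the regex to each selected text and collect the captured groups.
pattern = re.compile(r"Price:\s*([\d.]+)")
prices = [pattern.search(s).group(1) for s in snippets]
print(prices)  # ['24.99', '7.50']
```

Unlike xpath() and css(), re() returns plain strings directly, so it cannot be chained with further selector calls.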


2. ipython

The official website recommends running the scrapy shell under IPython, so I tried installing it. Because my Python environment was set up through conda (see the previous article), installing IPython through conda is very convenient:

conda install -c conda-forge ipython

The entire IPython package is then downloaded as prebuilt binaries, so there is no annoying compilation-failure step.

3. Run ipython, and run scrapy shell under ipython

In the cmd window, since the system environment variables are already configured, Python packages can be run directly: typing ipython in cmd opens the IPython console. It is similar to the standard cmd console, but with richer features, better colors, and a nicer layout.

But when I typed the scrapy shell command directly inside IPython, it kept reporting that no such command exists. I was stuck here.

Later, after carefully reading the scrapy shell documentation:

If you have IPython installed, the Scrapy shell will use it (instead of the standard Python console).

This means the scrapy shell will find and use the IPython console by itself.

So just enter scrapy shell in the standard cmd window, and the shell that opens will automatically be an IPython console.
