How to scrape

Aug 16, 2024, 06:01 PM

Scraping, or web scraping, is a technique used to extract data from websites in an automated way. It consists of using programs or scripts to navigate a web page, extract specific information (such as text, images, or product prices), and save it.

In this post, I will walk through the process I use for scraping and the important points to keep in mind along the way.

In my case, I will scrape PcComponentes to collect information about laptops. This data will be used to create a dataset that will serve as the basis for a Machine Learning model, designed to predict the price of a laptop from the components it is specified with.

First, it is necessary to identify which URL the script should access to do the scraping:

[Screenshot: the PcComponentes search URL in the browser address bar]

In this case, looking at the PcComponentes URL, we can see that the search term is passed as a URL parameter, which we can use to specify what we want to search for.
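As a quick sketch, this is how that search URL can be built in Python. The endpoint and the parameter name here are assumptions based on what the address bar shows, so adjust them to whatever your browser displays:

from urllib.parse import urlencode

# Endpoint and parameter name are assumptions; check your address bar
base = 'https://www.pccomponentes.com/buscar/'
params = {'query': 'portatil'}

url = base + '?' + urlencode(params)
print(url)  # e.g. https://www.pccomponentes.com/buscar/?query=portatil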

Once this is done, we will see the search result:

[Screenshot: laptop search results on PcComponentes]

After this, we will use the developer tools that almost all browsers have built in:

[Screenshot: opening the browser developer tools]

By right-clicking and selecting the "Inspect" option, the developer tools will open, and we will see the following:

[Screenshot: the Inspect panel showing the page's HTML]

An anchor tag (<a>) that contains a lot of information about the product we see in the search results.

If we look at the following area, we will see practically all the product data:

[Screenshot: the anchor element's attributes containing the product data]
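The attribute names used below (data-product-name, data-product-brand and data-product-price) are the ones the extraction code later relies on; the sample markup itself is a simplified stand-in for what the Inspect panel shows:

from bs4 import BeautifulSoup

# Simplified stand-in for the anchor seen in the Inspect panel
sample = '''
<a href="/example-laptop"
   data-product-name="lenovo ideapad 3 ryzen 5 5500u/16gb/512gb ssd/15.6"
   data-product-brand="lenovo"
   data-product-price="549.99">...</a>
'''

anchor = BeautifulSoup(sample, 'html.parser').find('a')
print(anchor.get('data-product-name'))   # full spec string
print(anchor.get('data-product-brand'))  # lenovo
print(anchor.get('data-product-price'))  # 549.99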

Done! We have the area from which to extract the data. Now it's time to create the script to extract it.

But we run into a problem: when accessing PcComponentes directly, it always asks us to accept the cookie policy. So we can't simply make a GET request and scrape the result, since we wouldn't get anything useful.
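To illustrate, this is a sketch of the naive approach that does not work; a plain GET like this will typically return the consent or anti-bot page (or an error status) rather than the listings:

import requests

# Naive request: the response is usually the cookie/anti-bot page,
# not the search results we are after
response = requests.get('https://www.pccomponentes.com/buscar/?query=portatil')
print(response.status_code)                   # often 403 or a challenge page
print('data-product-name' in response.text)   # likely False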

Therefore, we will have to use Selenium to simulate a browser and be able to interact with the page.

We start by doing the following:

import time

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Base search URL; the page number is appended below (exact format is an assumption)
url = 'https://www.pccomponentes.com/buscar/?query=portatil&page='
i = 0

options = Options()
options.add_argument('-headless')  # headless mode (options.headless is deprecated in Selenium 4)
# Open the browser
driver = webdriver.Firefox(options=options)
time.sleep(5)
# Go to the indicated page, pccomponentes.com/laptops
driver.get(url + str(i))
# Wait up to 30 seconds for the cookie button to appear, then click it
accept_cookies = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, 'cookiesAcceptAll'))
)
accept_cookies.click()
# Download the HTML
html = driver.page_source

Once this is done, the html variable will contain the HTML code of the page we want to scrape.

However, we run into another problem. After opening the browser with Selenium and making 2 or 3 requests, Cloudflare rate-limits us and does not allow any more. That means we could only scrape about 3 pages, around 20 different laptops, which is not enough to build a dataset.

One solution I came up with was to download each page and work with the HTML locally. After scraping one page, we can open another browser (after waiting a reasonable amount of time) and download the next one.

So I moved the above code into a function and wrapped it in a for loop, as follows:

# Function that connects to pccomponentes and saves the HTML locally
def guarda_datos_html(i=0):
    try:
        options = Options()
        options.add_argument('-headless')
        # Open the browser
        driver = webdriver.Firefox(options=options)

        time.sleep(5)
        # Go to the indicated page, pccomponentes.com/laptops
        driver.get(url + str(i))
        # Wait up to 30 seconds for the cookie button to appear, then click it
        accept_cookies = WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.ID, 'cookiesAcceptAll'))
        )

        accept_cookies.click()
        # Download the HTML
        html = driver.page_source
        # Save it locally (the html/ folder must exist beforehand)
        with open(f'html/laptops_{i}.html', 'w', encoding="utf-8") as document:
            document.write(html)

        driver.quit()  # quit() also stops the geckodriver process
    except Exception:
        print(f'Error on page: {i}')

for i in range(0, 58):
    guarda_datos_html(i)
    time.sleep(30)

Now we can read the saved HTML files back and work with them. To do this, I installed BeautifulSoup, a package that is very commonly used for scraping.
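If you don't have it yet, it is published on PyPI as beautifulsoup4 and imported as bs4:

# Install from a terminal with: pip install beautifulsoup4
from bs4 import BeautifulSoup  # note: the import name is bs4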

We are going to develop the function to collect the information from the HTML that we have downloaded thanks to the previous function.

The function looked like this:

import csv
import re

from bs4 import BeautifulSoup

# Function that opens the previously saved HTML, filters the data,
# and saves it to a CSV in an ordered way
def get_datos_html(i=0):
    try:
        with open('laptop_data_actual.csv', 'a') as ldata:

            field = ['Company','Inches','Cpu','Ram','Gpu','OpSys','SSD','Price']
            writer = csv.DictWriter(ldata, fieldnames=field)


            with open(f'html/laptops_{i}.html', 'r', encoding="utf-8") as document:

                html = BeautifulSoup(document.read(), 'html.parser')
                products = html.find_all('a')

                for element in products:
                    pc = element.get('data-product-name')
                    if pc:
                        pc = pc.lower()
                        marca = element.get('data-product-brand')
                        price = element.get('data-product-price')
                        # The product name is a '/'-separated spec string
                        pc_data = pc.split('/')
                        cpu = pc_data[0].split(' ')

                        # Helper functions (defined elsewhere) that normalize each field
                        cpu = buscar_cpu(cpu)
                        gpu = buscar_gpu(pc_data)
                        # Join the numbers in the last segment to get the screen size (e.g. 15.6)
                        inches = '.'.join([s for s in re.findall(r'\b\d+\b', pc_data[-1])])
                        OpSys = bucar_opsys(pc_data, marca)

                        row = {
                            'Company': marca,
                            'Inches': inches,
                            'Cpu': cpu,
                            'Ram': pc_data[1],
                            'Gpu': gpu,
                            'OpSys': OpSys,
                            'SSD': pc_data[2],
                            'Price': price
                        }

                        writer.writerow(row)
    except Exception:
        print(f'Error on page: {i}')
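One detail to keep in mind: the function opens the CSV in append mode and never writes a header row, so we should write the header once before processing all the saved pages. A minimal driver for the same 58 pages could look like this:

import csv

field = ['Company','Inches','Cpu','Ram','Gpu','OpSys','SSD','Price']

# Write the header once, then append rows page by page
with open('laptop_data_actual.csv', 'w', newline='') as ldata:
    csv.DictWriter(ldata, fieldnames=field).writeheader()

for i in range(0, 58):
    get_datos_html(i)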

Basically, we open the CSV file where we will save the information, tell the writer which fields it should have, and then read and process the HTML. As you can see, I had to write some extra helper functions to extract the necessary information for each field we want to save in the CSV.
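I won't reproduce those helpers (buscar_cpu, buscar_gpu and bucar_opsys) in full here, but as a rough idea of their shape, a simplified sketch of buscar_cpu could look like this (the keyword list is illustrative, not exhaustive):

# Simplified illustrative sketch; the keyword list is not exhaustive
CPU_KEYWORDS = ('i3', 'i5', 'i7', 'i9', 'ryzen', 'celeron', 'pentium')

def buscar_cpu(tokens):
    for idx, token in enumerate(tokens):
        if any(token.startswith(k) for k in CPU_KEYWORDS):
            # Return the matched token plus the next one (e.g. 'ryzen 5')
            return ' '.join(tokens[idx:idx + 2]).strip()
    return 'unknown'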

I leave you the complete script here in case you want to try it!

PccomponentsScrapper
