How to scrape
Scraping, or web scraping, is a technique for extracting data from websites in an automated way. It consists of using programs or scripts to navigate a web page, extract specific information (such as text, images, or product prices), and save it.
In this post, I will walk through the process I use for scraping and the important points to keep in mind along the way.
In my case, I will scrape PcComponentes to collect information about laptops. This data will be used to build a dataset that will serve as the basis for a Machine Learning model designed to predict a laptop's price from its specified components.
First, we need to identify the URL the script should access to do the scraping:
In this case, if we look at the PcComponentes URL, we can see that the search term is passed as a parameter through the URL, which we can use to specify what we want to search for.
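As a quick sketch of the idea, a search URL like this can be built with the standard library (the base URL and parameter name here are assumptions for illustration, not necessarily PcComponentes' actual ones):

```python
from urllib.parse import urlencode

# Hypothetical base URL and parameter name, for illustration only
base_url = 'https://www.pccomponentes.com/buscar/'
params = {'query': 'portatil'}  # the search term passed through the URL

search_url = f'{base_url}?{urlencode(params)}'
print(search_url)  # https://www.pccomponentes.com/buscar/?query=portatil
```

Changing the value of the parameter changes what the results page shows, which is what the script will rely on.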
Once this is done, we will see the search result:
After this, we will use the developer tool that almost all browsers have integrated:
By right-clicking and then selecting the "Inspect" option, the developer tool will open, and we will see the following:
An anchor tag (<a>) that contains a lot of information about the product we see in the search results.
If we look at the following area, we will see practically all the product data:
Done! We have the area from which to extract the data. Now it's time to create the script to extract them.
But we run into a problem: if you access PcComponentes directly, it always asks you to accept the cookie policy. So we can't simply make a GET request and scrape the result, since we wouldn't get anything useful.
Therefore, we will have to use Selenium to simulate a browser and interact with the page.
We start by doing the following:
```python
import time

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

options = Options()
options.headless = True

# Open the browser
driver = webdriver.Firefox(options=options)
time.sleep(5)

# Go to the target page, pccomponentes.com/laptops
# (url is the base search URL and i the page number, defined earlier)
driver.get(url + str(i))

# Wait up to 30 seconds for the cookies button to appear, then click it
accept_cookies = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, 'cookiesAcceptAll'))
)
accept_cookies.click()

# Download the HTML
html = driver.page_source
```
Once this is done, the html variable will contain the HTML code of the page to scrape.
However, we run into another problem. After opening the browser with Selenium and making two or three requests, Cloudflare rate-limits us and does not allow any more. As a result, we could only scrape about 3 pages, which is roughly 20 different computers. Not enough to build a dataset.
One solution I came up with was to download each page and work with the HTML locally. After scraping one page, we can open another browser (waiting a reasonable amount of time) and download the next one.
So I moved the above code into a function and wrapped it in a for loop as follows:
```python
# Function that connects to pccomponentes and saves the HTML locally
def guarda_datos_html(i=0):
    try:
        options = Options()
        options.headless = True

        # Open the browser
        driver = webdriver.Firefox(options=options)
        time.sleep(5)

        # Go to the target page, pccomponentes.com/laptops
        driver.get(url + str(i))

        # Wait up to 30 seconds for the cookies button to appear, then click it
        accept_cookies = WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.ID, 'cookiesAcceptAll'))
        )
        accept_cookies.click()

        # Download the HTML
        html = driver.page_source

        # Save it locally
        with open(f'html/laptops_{i}.html', 'w', encoding="utf-8") as document:
            document.write(html)

        driver.close()
    except Exception:
        print(f'Error on page: {i}')

for i in range(0, 58):
    guarda_datos_html(i)
    time.sleep(30)
```
Now we can load the saved HTML files and work with them. To do this, I installed BeautifulSoup, a package very commonly used for scraping.
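As a quick sanity check of what BeautifulSoup gives us, here is a minimal sketch that parses a sample anchor carrying the same kind of data-* attributes we saw in the inspector (the markup and attribute values below are made up for illustration):

```python
from bs4 import BeautifulSoup

# Sample markup mimicking a product anchor from the search results page
sample = '''
<a href="#" data-product-name="lenovo thinkpad i5/16gb/512gb ssd/15.6 pulgadas"
   data-product-brand="Lenovo" data-product-price="899">Laptop</a>
'''

soup = BeautifulSoup(sample, 'html.parser')
anchor = soup.find('a')
print(anchor.get('data-product-name'))   # lenovo thinkpad i5/16gb/512gb ssd/15.6 pulgadas
print(anchor.get('data-product-brand'))  # Lenovo
print(anchor.get('data-product-price'))  # 899
```

Each product's fields come back as plain strings, which is exactly what the CSV-writing function below works with.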
Next, we will develop the function that collects the information from the HTML we downloaded with the previous function.
The function looked like this:
```python
# Function that opens the previously saved HTML, filters the data
# and writes it to a CSV in order
def get_datos_html(i=0):
    try:
        with open('laptop_data_actual.csv', 'a') as ldata:
            field = ['Company', 'Inches', 'Cpu', 'Ram', 'Gpu', 'OpSys', 'SSD', 'Price']
            writer = csv.DictWriter(ldata, fieldnames=field)

            with open(f'html/laptops_{i}.html', 'r', encoding="utf-8") as document:
                html = BeautifulSoup(document.read(), 'html.parser')
                products = html.find_all('a')

                for element in products:
                    pc = element.get('data-product-name')
                    if pc:
                        pc = pc.lower()
                        marca = element.get('data-product-brand')
                        price = element.get('data-product-price')

                        pc_data = pc.split('/')
                        cpu = pc_data[0].split(' ')
                        cpu = buscar_cpu(cpu)
                        gpu = buscar_gpu(pc_data)
                        inches = '.'.join([s for s in re.findall(r'\b\d+\b', pc_data[-1])])
                        OpSys = bucar_opsys(pc_data, marca)

                        row = {
                            'Company': marca,
                            'Inches': inches,
                            'Cpu': cpu,
                            'Ram': pc_data[1],
                            'Gpu': gpu,
                            'OpSys': OpSys,
                            'SSD': pc_data[2],
                            'Price': price
                        }
                        writer.writerow(row)
    except Exception:
        print(f'Error on page: {i}')
```
Basically, we open the CSV file where we will save the information, tell the CSV writer which fields we want, and then read and process the HTML. As you can see, I had to write some extra helper functions to extract the necessary information for each field we save in the CSV.
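Those helper functions (buscar_cpu, buscar_gpu, bucar_opsys) are not shown above; as a rough idea of the kind of matching involved, here is a hypothetical sketch of buscar_cpu that looks for a known CPU family among the words of the first segment of the product name (the list of families is an assumption, not the author's actual code):

```python
# Hypothetical sketch: pick out a known CPU family from the words of the
# first '/'-separated segment of the product name (already lowercased)
def buscar_cpu(words):
    known = ('i3', 'i5', 'i7', 'i9', 'ryzen', 'celeron', 'pentium')
    for word in words:
        if any(family in word for family in known):
            return word
    return 'other'

print(buscar_cpu('portatil lenovo i5-1135g7'.split(' ')))  # i5-1135g7
```

The real helpers will be messier, since product names on the site are not perfectly uniform, but this is the general shape of the matching.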
I leave you the complete script here in case you want to try it!
PccomponentsScrapper
The above is the detailed content of How to scrape. For more information, please follow other related articles on the PHP Chinese website!
