Web scraping a <span> inside a div class
Question:
I’m trying to access a <span> inside a div class while web scraping, but without success. I need to return the number ‘597’, as in the image:
The error return is:
Code
url_base = 'https://shopping.smiles.com.br/telefones-e-celulares/magazine-luiza/magazineluiza?initialMap=seller&initialQuery=magazineluiza&map=category-1,sellername,seller'
headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"}
executable_path = r'C:\Users\vinig\Downloads\chromedriver_win32\chromedriver.exe'
browser = webdriver.Chrome(executable_path=executable_path)
browser.get(url_base)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
qtd_itens = soup.find('class', attrs={'class':"shoppingsmiles-search-result-0-x-totalProducts--layout pv5 ph9 bn-ns bt-s b--muted-5 tc-s tl t-action--small"}).text()
Answers:
You can use pure Selenium for this. Use .find_element and WebDriverWait to wait until the element appears.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
url_base = 'https://shopping.smiles.com.br/telefones-e-celulares/magazine-luiza/magazineluiza?initialMap=seller&initialQuery=magazineluiza&map=category-1,sellername,seller'
browser = webdriver.Chrome()
browser.get(url_base)
element = WebDriverWait(browser, 20).until(EC.visibility_of_element_located((By.CLASS_NAME, "shoppingsmiles-search-result-0-x-totalProducts--layout")))
print(element.text)
browser.quit()
Outputs:
590 Produtos
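Since the question asks for just the number rather than the full "590 Produtos" string, the element text can be parsed with a regular expression. A minimal sketch, assuming `element_text` holds the value of `element.text` from the code above:

```python
import re

# Example of the string returned by element.text on this page
element_text = "590 Produtos"

# Extract the leading run of digits and convert it to an integer
match = re.search(r"\d+", element_text)
qtd_itens = int(match.group()) if match else None
print(qtd_itens)  # → 590
```

This keeps the Selenium wait logic unchanged and only post-processes the text, so it also works if the count changes (e.g. to 597).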