Selenium (in Python) doesn't finish loading the page

Question:

I'm trying to scrape this site: https://www.politicos.org.br/Ranking
but the cell in my Jupyter notebook never finishes loading the page.
The page has a cookies button to accept, but I can't figure out how to click it, and I don't know whether that is the problem.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from time import sleep

options = Options()
options.add_argument('window-size=1000,800')

navegador = webdriver.Chrome(options=options)
navegador.get('https://www.politicos.org.br/Ranking')
sleep(3)

# Button on the ranking page that I want to click
click_dep = navegador.find_element(By.XPATH, '//*[@id="__next"]/div[2]/div[1]/div[4]/button')
click_dep.click()
sleep(1)

I'm using Python 3 in a Jupyter notebook.
Thanks for your attention.

Asked By: Andre Luiz Moura


Answers:

I found the HTML for the cookies button: <button class="mb-3">Aceitar</button>

So you can easily click it with:

navegador.find_element(By.XPATH, '//*[@class="mb-3"]').click()
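
If a fixed sleep is not enough, a more robust variant is to wait explicitly until that button is clickable. This is only a sketch that reuses the navegador driver and the By import from the question; the class name "mb-3" is taken from the snippet above and may change if the site is updated.

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the cookie button to become clickable, then click it.
cookie_button = WebDriverWait(navegador, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//*[@class="mb-3"]'))
)
cookie_button.click()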
Answered By: zerg468

I solved this by adding this to my Options:

options.page_load_strategy = 'eager' 

From another post on Stack Overflow: the "eager" page load strategy makes WebDriver wait until the initial HTML document has been completely loaded and parsed, and skips loading stylesheets, images and subframes (it returns when the DOMContentLoaded event fires): stackoverflow.com/questions/66358904/… I also used zerg468's suggestion to click that button.
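
Putting the two pieces together, a minimal end-to-end sketch could look like the following (the XPaths are simply the ones from the question and from zerg468's answer, so they may break if the page layout changes):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument('window-size=1000,800')
# Return from get() as soon as the DOM is parsed, instead of waiting for every resource.
options.page_load_strategy = 'eager'

navegador = webdriver.Chrome(options=options)
navegador.get('https://www.politicos.org.br/Ranking')

wait = WebDriverWait(navegador, 10)

# Accept the cookie banner ("Aceitar") before interacting with the page.
wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@class="mb-3"]'))).click()

# Click the button from the original question once it is clickable.
wait.until(EC.element_to_be_clickable(
    (By.XPATH, '//*[@id="__next"]/div[2]/div[1]/div[4]/button')
)).click()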

Answered By: Andre Luiz Moura