StaleElementReferenceException: Message: The element reference is stale

Question:

I’m scraping links using Selenium. I can print the links with my loop, but I can’t navigate to them to collect all of the information.
I get the following error:
Message: The element reference is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed


    from selenium import webdriver
    author=[]
    MAX_PAGE_NUM = 2
    gecko_path = r"C:\Users\PERSONL\Downloads\geckodriver-v0.26.0-win64\geckodriver.exe"

    driver = webdriver.Firefox(executable_path=gecko_path)
    with open('results.csv', 'w') as f:
        f.write("Name")

    for i in range(1, MAX_PAGE_NUM + 1):
        url = "https://www.oddsportal.com/soccer/england/premier-league-2017-2018/results/" + "#/page/" + str(i)
        driver.get(url)
        names = driver.find_elements_by_xpath('//td[@class="name table-participant"]')
        num_page_items = len(names)
        with open('results.csv', 'a') as f:
            for i in range(num_page_items):
                author.append(names[i].text)
                f.write(names[i].text)

    driver.close()

Also, could you please add WebDriverWait to this code as well:


    from selenium import webdriver as wb

    ff=['https://www.oddsportal.com/soccer/england/premier-league-2017-2018/tottenham-manchester-city-ddkDE7Ld/#over-under;2','https://www.oddsportal.com/soccer/england/premier-league-2017-2018/burnley-bournemouth-xSUUEVHO/#over-under;2']
    webD=wb.Chrome(r'C:\Users\PERSONL\Downloads\chromedriver_win32 (1)\chromedriver.exe')
    kk=[]
    num=[]
    for link in ff:
        webD.get(link)
        # find_element (singular) returns one element; calling find_elements_by_*
        # on the list returned by find_elements would raise AttributeError
        c03 = webD.find_element_by_class_name('bt-2')
        c04 = c03.find_elements_by_tag_name('strong')
        kk.append(c04)

    fla = kk[0]

    print(fla)
    for i in fla:
        num.append(i.text)


Asked By: HaAbs


Answers:

Made a few tweaks to your script.

The key to avoiding StaleElementReferenceException is to allow the table to load before collecting the names. Use WebDriverWait with an expected condition on element visibility for that.

You can also iterate through names directly, without the need for an index (see the for name in names: line). I have also added a .rstrip(), which removes any trailing whitespace from the collected text. You can remove it and see what your .csv looks like to understand why it's needed.

author=[]
MAX_PAGE_NUM = 2

with open('resultss.csv', 'w') as f:
    f.write("Name\n")

for i in range(1, MAX_PAGE_NUM + 1):
    url = "https://www.oddsportal.com/soccer/england/premier-league-2017-2018/results/" + "#/page/" + str(i)
    driver.get(url)
    WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'table#tournamentTable')))
    names = driver.find_elements_by_xpath('.//td[@class="name table-participant"]')
    print(len(names))
    print(names[0].text)
    with open('resultss.csv', 'a') as f:
        for name in names:
            author.append(name.text.rstrip())
        f.write(name.text.rstrip()+"\n")

driver.close()

These imports are required for WebDriverWait:

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

Answered By: 0buz