How can I ignore (Bypass) an "Unable to locate element" exception?

Question:

I use Python Selenium for scraping a website, but my crawler stopped because of an exception:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="priceblock_ourprice"]"}

How can I continue to crawl even if the element is not attached?

My code:

from selenium import webdriver
import pandas as pd

browser = webdriver.Chrome()

# Product 1
browser.get('https://www.amazon.com.tr/Behringer-High-Precision-Crossover-Limiters-Adjustable/dp/B07GSGYRK1/ref=sr_1_1?dchild=1&keywords=behringer+cx3400+v2&qid=1630311885&sr=8-1')
price = browser.find_element_by_id('priceblock_ourprice')

df = pd.DataFrame([["info", "info", price.text]], columns=["Product", "Firm", "Price"])
df.to_csv('info.csv', encoding="utf-8", index=False, header=False)
df_final = pd.read_csv('info.csv')
df_final.head()
browser.quit()
Asked By: Alex Kurkcu


Answers:

If you want to continue scraping even if the element is not found, you can use a try-except block.

from selenium.common.exceptions import NoSuchElementException

id_ = 'priceblock_ourprice'
try:
    price = browser.find_element_by_id(id_).text
except NoSuchElementException:
    print("Price is not found.")
    price = "-"     # placeholder value for the dataframe

Alternatively, you can create a function that checks whether the element exists and act accordingly. One way to do it:

from selenium import webdriver
import pandas as pd

browser = webdriver.Chrome()

def check_if_exists(browser, id_):
    # find_elements returns an empty list instead of raising, so it is safe to test for presence
    return len(browser.find_elements_by_css_selector("#{}".format(id_))) > 0

browser.get('https://www.amazon.com.tr/Behringer-High-Precision-Crossover-Limiters-Adjustable/dp/B07GSGYRK1/ref=sr_1_1?dchild=1&keywords=behringer+cx3400+v2&qid=1630311885&sr=8-1')

id_ = 'priceblock_ourprice'
price = browser.find_element_by_id(id_).text if check_if_exists(browser, id_) else "-"

df = pd.DataFrame([["info", "info", price]], columns=["Product", "Firm", "Price"])
df.to_csv('info.csv', encoding="utf-8", index=False, header=False)
df_final = pd.read_csv('info.csv')
df_final.head()
browser.quit()
Answered By: Muhteva

After the Selenium update, I had to change my code. With this new code, how do I apply the bypass for the "Unable to locate element" exception that was discussed above? I want the process to continue even if Selenium can't find the XPath element on the web page. Thanks.

from selenium import webdriver
from selenium.webdriver.common.by import By
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

browser = webdriver.Chrome()

# Product 1
browser.get('https://www.infomusicshop.com/behringer-amp800-4-kanal-stereo-kulaklik-amfisi')
xpath_ = '//*[@id="productR"]/div/div/div/div[2]/div[2]/div/span[1]'
price = browser.find_element(By.XPATH, xpath_).text
xpath_ = '//*[@id="productR"]/div/div/div/div[4]/div/div[1]/a111'
stock = browser.find_element(By.XPATH, xpath_).text
df = pd.DataFrame([["info", "info", "https://www.infomusicshop.com/behringer-amp800-4-kanal-stereo-kulaklik-amfisi", price, stock]], columns=["Product", "Firm", "Link", "Price", "stock"])
df.to_excel('amp800.xlsx', index=False, header=False)
browser.quit()
Answered By: Alex Kurkcu
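
The same try-except bypass from the accepted answer carries over to the Selenium 4 style find_element(By.XPATH, ...) API. Below is a minimal sketch under that assumption; the helper name safe_text and the "-" placeholder are illustrative, not part of the original code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
import pandas as pd

browser = webdriver.Chrome()

def safe_text(browser, xpath_, default="-"):
    # Return the element's text, or a placeholder if the XPath matches nothing
    try:
        return browser.find_element(By.XPATH, xpath_).text
    except NoSuchElementException:
        return default

link = 'https://www.infomusicshop.com/behringer-amp800-4-kanal-stereo-kulaklik-amfisi'
browser.get(link)
price = safe_text(browser, '//*[@id="productR"]/div/div/div/div[2]/div[2]/div/span[1]')
stock = safe_text(browser, '//*[@id="productR"]/div/div/div/div[4]/div/div[1]/a111')

df = pd.DataFrame([["info", "info", link, price, stock]],
                  columns=["Product", "Firm", "Link", "Price", "stock"])
df.to_excel('amp800.xlsx', index=False, header=False)
browser.quit()

With this helper the script keeps running and simply records "-" for any value it could not find. Note that writing .xlsx files with pandas requires openpyxl to be installed.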