python requests bot detection?

Question:

I have been using the requests library to mine this website. I haven’t made too many requests to it within 10 minutes. Say 25. All of a sudden, the website gives me a 404 error.

My question is: I read somewhere that getting a URL with a browser is different from getting a URL with something like requests, because a requests fetch does not send cookies and other things a browser would. Is there an option in requests to emulate a browser so the server doesn’t think I’m a bot? Or is this not an issue?

Asked By: jason


Answers:

Basically, at least one thing you can do is to send a User-Agent header:

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0'}

response = requests.get(url, headers=headers)  # url is the page you are fetching

Besides requests, you can simulate a real user by using selenium, which drives a real browser; in that case there is no easy way to distinguish your automated user from other users. Selenium can also make use of a “headless” browser.
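For illustration, here is a minimal headless sketch with selenium, assuming a recent Selenium 4 / Chrome setup and a matching ChromeDriver; the URL is a placeholder, not part of the original answer:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")   # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")        # placeholder URL
html = driver.page_source                # HTML after the page's JavaScript has run
driver.quit()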

Also, check whether the web site you are scraping provides an API. If there is no API, or you are not using it, make sure you know whether the site actually allows automated web crawling like this, and study the Terms of Use. There is probably a reason why they block you after too many requests in a given period of time.
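As a rough programmatic starting point (not a substitute for reading the Terms of Use), the standard library’s urllib.robotparser can at least tell you what the site’s robots.txt allows; the URLs below are placeholders:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()

# True if the robots.txt rules allow a generic user agent to fetch this path
print(rp.can_fetch("*", "https://example.com/some/page"))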


edit1: selenium uses a webdriver rather than an unmodified browser; for example, it exposes navigator.webdriver = true to the page’s JavaScript, which can make it easier to detect than plain requests.
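You can see that flag yourself from a Selenium session; a small sketch, assuming a stock ChromeDriver and using a placeholder URL:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")   # placeholder URL

# With a stock ChromeDriver this typically returns True, which is
# exactly the kind of value fingerprinting scripts look for
print(driver.execute_script("return navigator.webdriver"))
driver.quit()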

Answered By: alecxe

Things that can help in general:

  • Headers should be similar to those of a common browser, including:
    • a realistic User-Agent string
    • the Accept, Accept-Language and Accept-Encoding values a real browser would send
  • Navigation:
    • If you make multiple requests, put a random delay between them (see the sketch after this list)
    • If you open links found in a page, set the Referer header accordingly
    • Or, better, simulate mouse activity to move, click and follow links
  • Images should be enabled
  • JavaScript should be enabled
    • Check that “navigator.plugins” and “navigator.language” are set in the client-side JavaScript page context
  • Use proxies
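As a rough sketch of the delay and Referer points above (the site and paths are placeholders, not part of the original answer):

import random
import time

import requests

session = requests.Session()
start_url = "https://example.com/listing"   # placeholder start page
link = "https://example.com/item/1"         # a link "found" on that page

first = session.get(start_url)

# Wait a random amount of time between requests, as a human reader would
time.sleep(random.uniform(2, 8))

# Tell the server which page the link was followed from
second = session.get(link, headers={"Referer": start_url})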
Answered By: Grubshka

The first answer is a bit off: selenium is still detectable, because it is a webdriver and not a normal browser, and it has hardcoded values that can be detected using JavaScript. Most websites use fingerprinting libraries that can find these values. Luckily, there is a patched ChromeDriver called undetected_chromedriver that bypasses such checks.
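A minimal sketch with that package (installed as undetected-chromedriver from PyPI; the URL is a placeholder):

import undetected_chromedriver as uc

driver = uc.Chrome()                  # patched ChromeDriver that hides common webdriver markers
driver.get("https://example.com")     # placeholder URL
html = driver.page_source
driver.quit()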

Answered By: ahmed mani

As @Grubshka mentioned, it’s always good practice to make your request look like it’s coming from a web browser by setting “common” headers.

In your case, though, it looks like the website you are targeting is rate-limiting based on IP address, and 25 requests coming from the same IP in 10 minutes exceed their limit.

The easiest way to get around this would be to use a rotating residential proxy, which gives you a different IP address for each request and helps you avoid getting rate-limited.

Here’s example code using both anti-detection headers and a proxy server:

import requests

s = requests.Session()

url = "htttps://thesiteyouraretargeting.com"

# Add common browser headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "en-US,en;q=0.5",
}

# Configure a proxy server (placeholder address)
http_proxy = "http://0.0.0.0:0000"
proxies = {
    "http": http_proxy,
    "https": http_proxy,  # the target URL is https, so route https traffic through the proxy too
}

r = s.get(url, headers=headers, proxies=proxies)

Answered By: Gidoneli