Measure website load time with Python requests

Question:

I’m trying to build a tool for testing the delay of my internet connection, more specifically website load times. I thought of using the Python requests module for the loading part.

Problem is, it’s got no built-in functionality to measure the time it took to get the full response. For this I thought I would use the timeit module.

What I’m not sure about is this: if I run timeit like so:

t = timeit.Timer("requests.get('http://www.google.com')", "import requests")

Am I really measuring the time it takes for the response to arrive, or the time it takes for the request to be built, sent, received, etc.? I’m guessing I could maybe disregard that execution time since I’m testing networks with very long delays (~700ms)?

Is there a better way to do this programmatically?

Asked By: cookM


Answers:

As for your question, it should be the total time for:

  1. Creating the request object
  2. Sending the request
  3. Receiving the response
  4. Parsing the response (see the comment from Thomas Orozco)
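
A minimal sketch of timing a single call this way (assuming number=1, so only one request is measured and the import in the setup statement stays outside the timing):

import timeit

# The setup statement runs once and is not part of the measurement;
# only the stmt string is timed.
t = timeit.Timer("requests.get('http://www.google.com')",
                 "import requests")
print(t.timeit(number=1))  # seconds for one full request/response cycle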

Another way to measure a single request's load time is to use urllib:

import time
import urllib.request  # urllib.urlopen was Python 2; Python 3 uses urllib.request

url = 'http://www.google.com'
start = time.time()  # start the clock before opening the connection
nf = urllib.request.urlopen(url)
page = nf.read()
end = time.time()
nf.close()
# end - start gives you the page load time
Answered By: pyfunc

There is such functionality in the latest versions of requests:

https://requests.readthedocs.io/en/latest/api/?highlight=elapsed#requests.Response.elapsed

For example:

requests.get("http://127.0.0.1").elapsed.total_seconds()
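
For the asker's use case of sampling connection delay, a hedged sketch that averages a few requests (the URL and the sample count of 5 are placeholders):

import requests

url = 'http://www.google.com'  # placeholder target
samples = [requests.get(url).elapsed.total_seconds() for _ in range(5)]
print(sum(samples) / len(samples))  # average load time in seconds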
Answered By: TJL

response.elapsed returns a timedelta object with the time elapsed from sending the request to the arrival of the response. Note that it specifically measures the time until the response headers are parsed, not until the full body has been downloaded. (Cutting off a request after a certain amount of time is the job of the timeout parameter, not of elapsed.)

# import the requests module
import requests

# make a GET request
response = requests.get('http://stackoverflow.com/')

# print the response
print(response)

# print the elapsed time
print(response.elapsed)

output:

<Response [200]>
0:00:00.343720
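
Since elapsed stops once the headers are parsed, a sketch for also timing the full body download (assumption: stream=True defers reading the body until .content is accessed, and time.perf_counter provides the wall-clock measurement):

import time
import requests

start = time.perf_counter()
# stream=True returns as soon as the headers have arrived; the body is unread
response = requests.get('http://stackoverflow.com/', stream=True)
_ = response.content  # force the full body download
total = time.perf_counter() - start

print(response.elapsed.total_seconds())  # time to headers, as measured by requests
print(total)                             # total load time including the body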
Answered By: Milovan Tomašević