Python Requests: Don't wait for request to finish

Question:

In Bash, it is possible to execute a command in the background by appending &. How can I do it in Python?

while True:
    data = raw_input('Enter something: ') 
    requests.post(url, data=data) # Don't wait for it to finish.
    print('Sending POST request...') # This should appear immediately.
Asked By: octosquidopus


Answers:

According to the docs, you should move to another library:

Blocking Or Non-Blocking?

With the default Transport Adapter in place, Requests does not provide
any kind of non-blocking IO. The Response.content property will block
until the entire response has been downloaded. If you require more
granularity, the streaming features of the library (see Streaming
Requests) allow you to retrieve smaller quantities of the response at
a time. However, these calls will still block.

If you are concerned about the use of blocking IO, there are lots of
projects out there that combine Requests with one of Python’s
asynchronicity frameworks.

Two excellent examples are
grequests and
requests-futures.
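
A minimal sketch with requests-futures (not part of the original answer; it assumes the package is installed via pip install requests-futures). FuturesSession runs each request on a background thread and returns a future immediately:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('http://example.com/')  # returns immediately, does not block
# ... do other work here ...
response = future.result()  # block only if/when you actually need the response
print(response.status_code)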

Answered By: Romain Jouin

If you can write the code to be executed in a separate Python program, here is a possible solution based on the subprocess module; a minimal sketch follows.
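
A fire-and-forget sketch using the standard subprocess module (not from the original answer; send_request.py is a hypothetical helper script that performs the POST on its own):

import subprocess
import sys

data = 'something'
# Launch the helper without waiting for it; the parent continues immediately.
subprocess.Popen([sys.executable, 'send_request.py', data])
print('Sending POST request...')  # prints right away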

Otherwise, you may find this question and its related answer useful: the trick is to use the threading library to start a separate thread that will execute the task.

A caveat with both approaches is the number of items (that is to say, the number of threads) you have to manage. If the parent has too many items, you may consider pausing after every batch until at least some threads have finished, but this kind of management is non-trivial; one way to bound the thread count is sketched below.
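
One way to bound the number of in-flight threads is the standard library's concurrent.futures module (a sketch, not from the original answer; the URL and payloads are placeholders):

from concurrent.futures import ThreadPoolExecutor

import requests

def send(data):
    return requests.post('http://example.com/', data=data)  # placeholder URL

with ThreadPoolExecutor(max_workers=8) as executor:  # at most 8 threads alive at once
    futures = [executor.submit(send, d) for d in ('a', 'b', 'c')]
# Leaving the "with" block waits for every submitted task to finish.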

For a more sophisticated solution you can use an actor-based approach; I have not used this library myself, but I think it could help in that case.

Answered By: Chosmos

I use multiprocessing.dummy.Pool. I create a singleton thread pool at the module level, and then use pool.apply_async(requests.get, [params]) to launch the task.

This call gives me a future, which I can add to a list with other futures indefinitely until I’d like to collect all or some of the results.

multiprocessing.dummy.Pool is, against all logic and reason, a THREAD pool and not a process pool.

Example (works in both Python 2 and 3, as long as requests is installed):

from multiprocessing.dummy import Pool

import requests

pool = Pool(10) # Creates a pool with ten threads; more threads = more concurrency.
                # "pool" is a module attribute; you can be sure there will only
                # be one of them in your application
                # as modules are cached after initialization.

if __name__ == '__main__':
    futures = []
    for x in range(10):
        futures.append(pool.apply_async(requests.get, ['http://example.com/']))
    # futures is now a list of 10 futures.
    for future in futures:
        print(future.get()) # For each future, wait until the request is
                            # finished and then print the response object.

The requests will be executed concurrently, so running all ten of these requests should take no longer than the longest one. This strategy will only use one CPU core, but that shouldn’t be an issue because almost all of the time will be spent waiting for I/O.

Answered By: Andrew Gorcester

Here’s a hacky way to do it:

try:
    # The absurdly small timeout fires the request, then bails out immediately.
    requests.get("http://127.0.0.1:8000/test/", timeout=0.0000000001)
except requests.exceptions.Timeout:  # parent of both ConnectTimeout and ReadTimeout
    pass

Edit: for those of you who observed that this will not await a response: that is my understanding of the question, "fire and forget… do not wait for it to finish". There are much more thorough and complete ways to do it with threads or async if you need response context, error handling, etc.

Answered By: keithhackbarth

Elegant solution from Andrew Gorcester. In addition, without using futures, it is possible to use the callback and error_callback parameters of apply_async (see the
doc) in order to perform asynchronous processing:

from multiprocessing.dummy import Pool

import requests
from requests import Response

pool = Pool(10)

def on_success(r: Response):
    if r.status_code == 200:
        print(f'Post succeeded: {r}')
    else:
        print(f'Post failed: {r}')

def on_error(ex: Exception):
    print(f'Post request failed: {ex}')

pool.apply_async(requests.post, args=['http://server.host'], kwargs={'json': {'key': 'value'}},
                 callback=on_success, error_callback=on_error)
Answered By: Nemolovich

from multiprocessing.dummy import Pool
import requests

pool = Pool()

def on_success(r):
    print('Post succeed')

def on_error(ex):
    print('Post requests failed')

def call_api(url, data, headers):
    requests.post(url=url, data=data, headers=headers)

def pool_processing_create(url, data, headers):
    pool.apply_async(call_api, args=[url, data, headers],
                     callback=on_success, error_callback=on_error)

Simplest and Most Pythonic Solution using threading

A simple way to send a POST/GET request, or to execute any other function, without waiting for it to finish is to use the built-in Python module threading.

import threading
import requests

def send_req():
    requests.get("http://127.0.0.1:8000/test/")


for x in range(100):
    threading.Thread(target=send_req).start() # starts a new thread and continues.

Other Important Features of threading

  • You can turn these threads into daemons using thread_obj.daemon = True

  • You can wait for one to finish before continuing, using thread_obj.join()

  • You can check whether a thread is still running using thread_obj.is_alive(), which returns True or False

  • You can check the active thread count with threading.active_count(); these features are demonstrated in the sketch below
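
A short sketch (not from the original answer) exercising the features listed above; time.sleep stands in for a slow request:

import threading
import time

def work():
    time.sleep(1)  # stand-in for a slow call such as requests.get(...)

t = threading.Thread(target=work)
t.daemon = True                  # daemon threads are killed when the main program exits
t.start()
print(t.is_alive())              # True while work() is still running
print(threading.active_count())  # counts this thread plus the main thread
t.join()                         # block until the thread finishes
print(t.is_alive())              # False once it has completed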

Official Documentation
