Terminate multiple threads when any thread completes a task

Question:

I am new to both Python and to threads. I have written Python code which acts as a web crawler and searches sites for a specific keyword. My question is: how can I use threads to run three different instances of my class at the same time? When one of the instances finds the keyword, all three must close and stop crawling the web. Here is some code:

class Crawler:
    def __init__(self):
        # the actual code for finding the keyword
        pass

def main():
    Crawl = Crawler()

if __name__ == "__main__":
    main()

How can I use threads to have Crawler do three different crawls at the same time?

Asked By: user446836


Answers:

First off, if you’re new to Python, I wouldn’t recommend tackling threads yet. Get used to the language first, then take on multi-threading.

With that said, if your goal is to parallelize (you said “run at the same time”), you should know that in Python (or at least in the default implementation, CPython) multiple threads WILL NOT truly run in parallel, even if multiple processor cores are available. Read up on the GIL (Global Interpreter Lock) for more information.

Finally, if you still want to go on, check the Python documentation for the threading module. I’d say Python’s docs are as good as references get, with plenty of examples and explanations.

Answered By: slezica

There doesn’t seem to be a (simple) way to terminate a thread in Python.

Here is a simple example of running multiple HTTP requests in parallel:

import threading

def crawl():
    import urllib2
    data = urllib2.urlopen("http://www.google.com/").read()

    print "Read google.com"

threads = []

for n in range(10):
    thread = threading.Thread(target=crawl)
    thread.start()

    threads.append(thread)

# wait until all of the threads have finished

print "Waiting..."

for thread in threads:
    thread.join()

print "Complete."

With additional overhead, you can use a multi-process approach that’s more powerful and allows you to terminate thread-like processes.

I’ve extended the example to use that. I hope this will be helpful to you:

import multiprocessing

def crawl(result_queue):
    import urllib2
    data = urllib2.urlopen("http://news.ycombinator.com/").read()

    print "Requested..."

    if "result found (for example)":  # placeholder for the real keyword check
        result_queue.put("result!")

    print "Read site."

processes = []
result_queue = multiprocessing.Queue()

for n in range(4): # start 4 processes crawling for the result
    process = multiprocessing.Process(target=crawl, args=[result_queue])
    process.start()
    processes.append(process)

print "Waiting for result..."

result = result_queue.get() # waits until any of the processes has `.put()` a result

for process in processes: # then kill them all off
    process.terminate()

print "Got result:", result

Answered By: Jeremy

For this problem, you can use either the threading module (which, as others have said, will not do true threading because of the GIL) or the multiprocessing module (depending on which version of Python you’re using). They have very similar APIs, but I recommend multiprocessing, as it is more Pythonic, and I find communicating between processes with Pipes pretty easy.

You’ll want to have your main loop, which will create your processes, and each of these processes should run your crawler and have a pipe back to the main thread. Your process should listen for a message on the pipe, do some crawling, and send a message back over the pipe if it finds something (before terminating). Your main loop should loop over each of the pipes back to it, listening for this “found something” message. Once it hears that message, it should resend it over the pipes to the remaining processes, then wait for them to complete. A rough sketch of this design is shown below.
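A minimal sketch of that pipe-based design, assuming placeholder example.com URLs, a hypothetical keyword, and simple string messages over the pipes (Python 2 style, to match the other answers):

import multiprocessing

def crawl(pipe, urls, keyword):
    import urllib2
    for url in urls:
        if pipe.poll():              # the main process told us another worker found it
            return
        data = urllib2.urlopen(url).read()
        if keyword in data:
            pipe.send("found")       # report success to the main process
            return
    pipe.send("done")                # finished our URLs without finding the keyword

def main():
    url_batches = [["http://example.com/a"],
                   ["http://example.com/b"],
                   ["http://example.com/c"]]
    workers = []
    for urls in url_batches:
        parent_end, child_end = multiprocessing.Pipe()
        process = multiprocessing.Process(target=crawl, args=(child_end, urls, "keyword"))
        process.start()
        workers.append((process, parent_end))

    # Listen on every pipe; as soon as one worker says "found",
    # tell the others to stop, then wait for everything to finish.
    remaining = len(workers)
    while remaining:
        for process, pipe in workers:
            if pipe.poll(0.1):
                message = pipe.recv()
                remaining -= 1
                if message == "found":
                    for _, other_pipe in workers:
                        other_pipe.send("stop")
                    remaining = 0
                    break

    for process, _ in workers:
        process.join()

if __name__ == "__main__":
    main()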

More information can be found here: http://docs.python.org/library/multiprocessing.html

Answered By: Robotica

Starting a thread is easy:

thread = threading.Thread(target=function_to_call_inside_thread)
thread.start()

Create an event object to notify when you are done:

event = threading.Event()
event.wait() # call this in the main thread to wait for the event
event.set() # call this in a thread when you are ready to stop

You’ll need to add a stop() method to your crawlers, and call it on each one once the event has fired:

for crawler in crawlers:
    crawler.stop()

And then call join on the threads

thread.join() # waits for the thread to finish
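Putting those pieces together, a rough sketch of the whole pattern might look like the following. The Crawler class and its simulated found_keyword() check are placeholder assumptions, not your actual crawling logic:

import random
import threading
import time

class Crawler:
    def __init__(self, stop_event):
        self.stop_event = stop_event

    def run(self):
        # Keep crawling until we find the keyword or another crawler signals us to stop.
        while not self.stop_event.is_set():
            if self.found_keyword():
                self.stop_event.set()    # wake up the main thread and the other crawlers

    def found_keyword(self):
        # Placeholder for the real search; this just simulates an occasional hit.
        time.sleep(0.1)
        return random.random() < 0.05

    def stop(self):
        self.stop_event.set()

event = threading.Event()
crawlers = [Crawler(event) for _ in range(3)]
threads = [threading.Thread(target=crawler.run) for crawler in crawlers]

for thread in threads:
    thread.start()

event.wait()                             # block until any crawler sets the event

for crawler in crawlers:
    crawler.stop()                       # redundant here, but mirrors the snippet above

for thread in threads:
    thread.join()                        # wait for every thread to exit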

If you do any amount of this kind of programming, you’ll want to look at the eventlet module. It allows you to write “threaded” code without many of the disadvantages of threading.

Answered By: Winston Ewert

First of all, threading is not a solution in Python. Due to the GIL, threads do not run in parallel. You can handle this with multiprocessing instead, though you’ll be limited by the number of processor cores.

What’s the goal of your work? Do you want to build a crawler, or do you have academic goals (learning about threading and Python, etc.)?

Another point: crawlers waste more resources than other programs, so what is the scale of your crawl?
