How to run multiple functions at the same time?

Question:

I’m trying to run 2 functions at the same time.

def func1():
    print('Working')

def func2():
    print('Working')

func1()
func2()

Does anyone know how to do this?

Asked By: John


Answers:

Do this:

from threading import Thread

def func1():
    print('Working')

def func2():
    print("Working")

if __name__ == '__main__':
    Thread(target = func1).start()
    Thread(target = func2).start()
Answered By: chrisg

The answer about threading is good, but you need to be a bit more specific about what you want to do.

If you have two functions that both use a lot of CPU, threading (in CPython) will probably get you nowhere. In that case you might want to look at the multiprocessing module, or possibly at Jython/IronPython.

If CPU-bound performance is the reason, you could even implement things in (non-threaded) C and get a much bigger speedup than doing two parallel things in python.

Without more information, it isn’t easy to come up with a good answer.
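For the CPU-bound case, a multiprocessing sketch along these lines sidesteps the GIL (the `count` function here is just an illustrative busy loop, not from the question):

```python
from multiprocessing import Process

def count(n):
    # CPU-bound loop; with threads this would serialize on the GIL,
    # but each Process gets its own interpreter and a full core
    total = 0
    for i in range(n):
        total += i
    print(total)

if __name__ == '__main__':
    p1 = Process(target=count, args=(5_000_000,))
    p2 = Process(target=count, args=(5_000_000,))
    p1.start()
    p2.start()
    p1.join()  # wait for both workers to finish
    p2.join()
```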

Answered By: Mattias Nilsson

One option that looks like it makes two functions run at the same time is the threading module (example in this answer).

However, it has a limitation, as an official Python documentation page describes. A better module to try is multiprocessing.

Also, there are other Python modules that can be used for asynchronous execution (two pieces of code working at the same time). For some information about them, and help choosing one, you can read this Stack Overflow question.

Comment from another user about the threading module

He might want to know that because of the Global Interpreter Lock
they will not execute at the exact same time even if the machine in
question has multiple CPUs. wiki.python.org/moin/GlobalInterpreterLock

– Jonas Elfström Jun 2 ’10 at 11:39

Quote from the documentation about the threading module's limitation

CPython implementation detail: In CPython, due to the Global Interpreter
Lock, only one thread can execute Python code at once (even though
certain performance-oriented libraries might overcome this limitation).

If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
However, threading is still an appropriate model if you
want to run multiple I/O-bound tasks simultaneously.
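A minimal sketch of the `concurrent.futures.ProcessPoolExecutor` approach mentioned in the quote (the return strings are illustrative additions):

```python
from concurrent.futures import ProcessPoolExecutor

def func1():
    return 'Working 1'

def func2():
    return 'Working 2'

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        f1 = pool.submit(func1)  # both run in separate worker processes
        f2 = pool.submit(func2)
        # result() blocks until each function has finished
        print(f1.result(), f2.result())
```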

Answered By: Edward

Try this

from threading import Thread

def fun1():
    print("Working1")
def fun2():
    print("Working2")

t1 = Thread(target=fun1)
t2 = Thread(target=fun2)

t1.start()
t2.start()
Answered By: Shanan Ilen

The threading module does run the two functions concurrently, but the timing is a bit off. The code below prints a "1" and a "2", each called from a different function. I did notice that, when printed to the console, they would have slightly different timings.

from threading import Thread
import time

num = 1  # both loops run while num == 1

def one():
    while(1 == num):
        print("1")
        time.sleep(2)
    
def two():
    while(1 == num):
        print("2")
        time.sleep(2)


p1 = Thread(target = one)
p2 = Thread(target = two)

p1.start()
p2.start()

Output: (note: the blank lines represent the wait in between printing)

1
2

2
1

12
   
21

12
   
1
2

Not sure if there is a way to correct this, or if it matters at all. Just something I noticed.
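If the offset matters, one way to line the threads up is `threading.Barrier`, which releases both threads at (almost) the same instant. A sketch reusing the `one`/`two` names, with the loop dropped for brevity:

```python
from threading import Thread, Barrier

barrier = Barrier(2)  # both threads must arrive before either proceeds

def one():
    barrier.wait()  # block here until the other thread is ready
    print("1")

def two():
    barrier.wait()
    print("2")

t1 = Thread(target=one)
t2 = Thread(target=two)
t1.start()
t2.start()
t1.join()
t2.join()
```

This synchronizes the start of each thread, though the actual print calls can still interleave in either order.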

Answered By: I506dk

What you are trying to do can also be achieved through multiprocessing. However, if you want to do it with threads, this might help:

from threading import Thread
import time

def func1():
    print('Working')
    time.sleep(2)

def func2():
    print('Working')
    time.sleep(2)

th = Thread(target=func1)
th.start()
th1=Thread(target=func2)
th1.start()
Answered By: Soham Kapoor

This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.

To parallelize your example, you’d need to define your functions with the @ray.remote decorator, and then invoke them with .remote.

import ray

ray.init()

# Define functions you want to execute in parallel using 
# the ray.remote decorator.
@ray.remote
def func1():
    print("Working")

@ray.remote
def func2():
    print("Working")

# Execute func1 and func2 in parallel.
ray.get([func1.remote(), func2.remote()])

If func1() and func2() return results, you need to rewrite the above code a bit, by replacing ray.get([func1.remote(), func2.remote()]) with:

ret_id1 = func1.remote()
ret_id2 = func2.remote()
ret1, ret2 = ray.get([ret_id1, ret_id2])

There are a number of advantages of using Ray over the multiprocessing module or using multithreading. In particular, the same code will run on a single machine as well as on a cluster of machines.

For more advantages of Ray see this related post.

Answered By: Ion Stoica

test using APscheduler:

from apscheduler.schedulers.background import BackgroundScheduler
import datetime
import time

sched = BackgroundScheduler()
sched.start()

dt = datetime.datetime
Future = dt.now() + datetime.timedelta(milliseconds=2550)  # 2.55 seconds from now testing start accuracy

def myjob1():
    print('started job 1: ' + str(dt.now())[:-3])  # timed to millisecond because thats where it varies
    time.sleep(5)
    print('job 1 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 1 done at: ' + str(dt.now())[:-3])
def myjob2():
    print('started job 2: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 done at: ' + str(dt.now())[:-3])

print(' current time: ' + str(dt.now())[:-3])
print('  do job 1 at: ' + str(Future)[:-3] + ''' 
  do job 2 at: ''' + str(Future)[:-3])
sched.add_job(myjob1, 'date', run_date=Future)
sched.add_job(myjob2, 'date', run_date=Future)

time.sleep(15)  # keep the main thread alive while the background jobs run

I got these results, which show they are running at the same time.

 current time: 2020-12-15 01:54:26.526
  do job 1 at: 2020-12-15 01:54:29.072  # i figure these both say .072 because its 1 line of print code
  do job 2 at: 2020-12-15 01:54:29.072
started job 2: 2020-12-15 01:54:29.075  # notice job 2 started before job 1, but code calls job 1 first.
started job 1: 2020-12-15 01:54:29.076  
job 2 half at: 2020-12-15 01:54:34.077  # halfway point on each job completed same time accurate to the millisecond
job 1 half at: 2020-12-15 01:54:34.077
job 1 done at: 2020-12-15 01:54:39.078  # job 1 finished first. making it .004 seconds faster.
job 2 done at: 2020-12-15 01:54:39.091  # job 2 was .002 seconds faster the second test
Answered By: monty314thon

In case you also want to wait until both functions have completed:

from threading import Thread

def func1():
    print('Working')

def func2():
    print('Working')

# Define the threads and put them in a list
threads = [
    Thread(target=func1),
    Thread(target=func2)
]

# Func1 and Func2 run in separate threads
for thread in threads:
    thread.start()

# Wait until both Func1 and Func2 have finished
for thread in threads:
    thread.join()
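The same wait-for-both pattern can also be written with `concurrent.futures.ThreadPoolExecutor`, which additionally hands back return values. A sketch (the return strings are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def func1():
    return 'Working 1'

def func2():
    return 'Working 2'

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(func1), pool.submit(func2)]
    # result() blocks until each function is done; leaving the
    # with-block also waits for all pending work
    results = [f.result() for f in futures]

print(results)  # ['Working 1', 'Working 2']
```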
Answered By: thanos.a

Another approach to running multiple functions concurrently in Python is asyncio, which I couldn't see among the other answers.

import asyncio

async def func1():
    for _ in range(5):
        print(func1.__name__)
        await asyncio.sleep(0)  # switches tasks every iteration.

async def func2():
    for _ in range(5):
        print(func2.__name__)
        await asyncio.sleep(0)

async def main():
    tasks = [func1(), func2()]
    await asyncio.gather(*tasks)

asyncio.run(main())  # a bare top-level `await` only works in a REPL/notebook

Out:

func1
func2
func1
func2
func1
func2
func1
func2
func1
func2

[NOTE]:

The code below can run 2 functions in parallel:

from multiprocessing import Process

def test1():
    print("Test1")

def test2():
    print("Test2")

if __name__ == "__main__":
    process1 = Process(target=test1)
    process2 = Process(target=test2)
    process1.start()
    process2.start()
    process1.join()
    process2.join()

Result:

Test1
Test2

And, these 2 sets of code below can run 2 functions concurrently:

from threading import Thread

def test1():
    print("Test1")

def test2():
    print("Test2")

thread1 = Thread(target=test1)
thread2 = Thread(target=test2)
thread1.start()
thread2.start()
thread1.join()
thread2.join()

from operator import methodcaller
from multiprocessing.pool import ThreadPool

def test1():
    print("Test1")

def test2():
    print("Test2")

caller = methodcaller("__call__")
ThreadPool().map(caller, [test1, test2])

Result:

Test1
Test2

And, this code below can run 2 async functions concurrently and asynchronously:

import asyncio

async def test1():
    print("Test1")
        
async def test2():
    print("Test2")
        
async def call_tests():
    await asyncio.gather(test1(), test2())

asyncio.run(call_tests())

Result:

Test1
Test2
Answered By: Kai – Kazuya Ito

I might be wrong, but with this piece of code:

import time
from multiprocessing import Process

def function_sleep():
    time.sleep(5)

start_time = time.time()
p1 = Process(target=function_sleep)
p2 = Process(target=function_sleep)
p1.start()
p2.start()
end_time = time.time()

I took the time and I would expect to get 5/6 seconds, while it always takes double the argument passed to the sleep function (10 seconds in this case).
What's the matter?

Sorry guys, as mentioned in a previous comment, join() needs to be called.
That's very important!
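For reference, a sketch of the measurement with the joins in place — both processes sleep in parallel, so the elapsed time comes out near a single sleep:

```python
import time
from multiprocessing import Process

def function_sleep():
    time.sleep(5)

if __name__ == '__main__':
    start_time = time.time()
    p1 = Process(target=function_sleep)
    p2 = Process(target=function_sleep)
    p1.start()
    p2.start()
    p1.join()  # wait for both processes to finish
    p2.join()  # before reading the clock
    end_time = time.time()
    print(end_time - start_time)  # roughly 5 seconds, not 10
```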

Answered By: Enrico Damini