Python threading: second thread waits until the first one finishes

Question:

I’m quite new to Python threading and still can’t get it to work properly. I don’t understand why, but the threads are executed consecutively, not in parallel.

Could anyone please advise what is incorrect in the code? (I simplified it as much as I could to bring it closer to the examples, but it still doesn’t work as expected.)

import threading, time

def func1():
    for j in range (0, 10):
        print(str(time.ctime(time.time())) + " 1")
        time.sleep(0.5)


def func2():
    for j in range (0, 10):
        print(str(time.ctime(time.time())) + " 2")
        time.sleep(0.5)

print(str(time.ctime(time.time())) + " script started")

t1 = threading.Thread(target = func1(), name = " 1")
t2 = threading.Thread(target = func2(), name = " 2")

t1.start()
t2.start()

t1.join()
t2.join()

print (str(time.ctime(time.time())) + " over")

In the console output I see that the second thread only starts once the first one has finished. I’ve tried making the threads daemonic and removing the .join() lines, but still no luck.

Asked By: dead_PyKto


Answers:

You’re calling your target functions instead of passing them: target=func1() runs func1 to completion in the main thread and then passes its return value (None) as the target. Pass the function objects themselves:

t1 = threading.Thread(target=func1, name = "1")
t2 = threading.Thread(target=func2, name = "2")
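
For completeness, here is a minimal corrected version of the full script from the question (only the two Thread(...) lines change; everything else is untouched), which prints interleaved " 1" and " 2" lines:

import threading, time

def func1():
    for j in range(0, 10):
        print(str(time.ctime(time.time())) + " 1")
        time.sleep(0.5)

def func2():
    for j in range(0, 10):
        print(str(time.ctime(time.time())) + " 2")
        time.sleep(0.5)

print(str(time.ctime(time.time())) + " script started")

# Pass the function objects; do not call them here.
t1 = threading.Thread(target=func1, name="1")
t2 = threading.Thread(target=func2, name="2")

t1.start()
t2.start()

t1.join()
t2.join()

print(str(time.ctime(time.time())) + " over")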

EDIT: This is how you can lock your prints:

import threading, time

def func1(lock):
    for j in range(10):
        with lock:
            print(str(time.ctime(time.time())) + " 1")
        time.sleep(0.5)


def func2(lock):
    for j in range(10):
        with lock:
            print(str(time.ctime(time.time())) + " 2")
        time.sleep(0.5)

lock = threading.Lock()
t1 = threading.Thread(target=func1, name="1", args=(lock,))
t2 = threading.Thread(target=func2, name="2", args=(lock,))

# The lock only serializes the print calls so their output lines don't interleave;
# both loops still run concurrently.
t1.start()
t2.start()

t1.join()
t2.join()
Answered By: Vincent

I want to point out that a threading.Lock object, and the condition synchronization objects built on top of it, are commonly used with the “with” statement, because they support the context management protocol:

lock = threading.Lock() # After: import threading
with lock:
    # critical section of code
    ...access shared resources...

Here, the context management machinery guarantees that the lock is automatically acquired before the block is executed and released once the block is complete, regardless of exception outcomes.
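
For comparison, here is a minimal sketch of what the with block expands to when written out with explicit calls (the list shared is just an illustrative stand-in for a real shared resource):

import threading

lock = threading.Lock()
shared = []  # hypothetical shared resource, for illustration only

# Equivalent to: with lock: shared.append(1)
lock.acquire()
try:
    shared.append(1)   # critical section
finally:
    lock.release()     # released even if the critical section raises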
Therefore, Vincent’s suggested solution above addresses a more general problem: placing a lock around shared resources, so that any other thread trying to access the resource (in fact, any thread that attempts to acquire the same lock) blocks until the lock is released.

Note: a threading.Lock has two states, locked and unlocked, and it is created in the unlocked state. In the following example, only one thread at a time can update the global variable count:

import threading, time

count = 0

def adder(addlock):   # shared lock object passed in
    global count
    with addlock:
        count = count + 1   # auto acquire/release around the statement
    time.sleep(0.5)
    with addlock:
        count = count + 1   # only one thread updating at any time

addlock = threading.Lock()
threads = []
for i in range(100):
    thread = threading.Thread(target=adder, args=(addlock,))
    thread.start()
    threads.append(thread)

for thread in threads:
    thread.join()

print(count)
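
With the lock in place the result is deterministic: each of the 100 threads increments count twice, so once all threads have been joined the script prints 200.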

I’d suggest another solution using multiprocessing, since your two parallel functions are essentially two independent tasks that don’t need to access any shared resources:

from multiprocessing import Process
import time

def func1():
    for j in range (0, 10):
        print(str(time.ctime(time.time())) + " 1")
        time.sleep(0.5)

def func2():
    for j in range (0, 10):
        print(str(time.ctime(time.time())) + " 2")
        time.sleep(0.5)

if __name__ == '__main__':
    print(str(time.ctime(time.time())) + " script started")
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
    p1.join()
    p2.join()
    print (str(time.ctime(time.time())) + " over")
Answered By: s_you