Python threading issue: not working, and I need to make sure threads don’t read the same line

Question:

So this is the first time I am playing around with threading, so please bear with me here. In my main application (which I will implement this into), I need to add multithreading to my script. The script will read account info from a text file, then log in and do some tasks with that account. I need to make sure that threads aren’t reading the same line from the accounts text file, since that would screw everything up, and I’m not quite sure how to do that.

from multiprocessing import Queue, Process
from threading import Thread
from time import sleep
urls_queue = Queue()
max_process = 10
def dostuff():
    with open ('acc.txt', 'r') as accounts:
        for account in accounts:
            account.strip()
            split = account.split(":")
            a = {
                'user': split[0],
                'pass': split[1],
                'name': split[2].replace('\n', ''),
            }
            sleep(1)
            print(a)
    for i in range(max_process):
        urls_queue.put("DONE")
def doshit_processor():
    while True:
        url = urls_queue.get()
        if url == "DONE":
            break
def main():
    file_reader_thread = Thread(target=dostuff)
    file_reader_thread.start()

    procs = []
    for i in range(max_process):
        p = Process(target=doshit_processor)
        procs.append(p)
        p.start()

    for p in procs:
        p.join()

    print('all done')
    # wait for all tasks in the queue
    file_reader_thread.join()


if __name__ == '__main__':
    main()

So at the moment I don’t think the threading is even working, because it’s printing one account per second, even with 10 threads. It should be printing 10 accounts per second, which it isn’t, and that has me confused. Also, I am not sure how to make sure that threads won’t pick the same account line. Help from a big brain is much appreciated.

Asked By: john lewis


Answers:

The problem is that you create a single thread to generate the data for your processes, but then never post that data to the queue. You sleep in that thread, so you see one item generated per second and then… nothing, because the items are never queued. Since all you are really doing is creating a process pool, the built-in multiprocessing.Pool should work for you.
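For reference, here is a minimal sketch of that fix within your original thread-plus-processes layout (the names read_accounts and worker are mine, and I’m assuming the same user:pass:name file format). The essential change is that the reader thread actually puts each parsed account on the queue, and the queue is passed to the workers explicitly so it also works on platforms that spawn rather than fork:

from multiprocessing import Process, Queue
from threading import Thread

max_process = 10

def read_accounts(q):
    # Reader thread: parse each line and post it to the shared queue.
    with open('acc.txt') as accounts:
        for account in accounts:
            split = account.strip().split(':')
            q.put({'user': split[0], 'pass': split[1], 'name': split[2]})
    for _ in range(max_process):
        q.put('DONE')  # one sentinel per worker so every process stops

def worker(q):
    while True:
        account = q.get()  # get() removes an item atomically
        if account == 'DONE':
            break
        print(account)  # log in / do the per-account tasks here

def main():
    q = Queue()
    reader = Thread(target=read_accounts, args=(q,))
    reader.start()
    procs = [Process(target=worker, args=(q,)) for _ in range(max_process)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    reader.join()

if __name__ == '__main__':
    main()

Because each get() removes exactly one item from the queue, no two workers ever receive the same account line. That said, this is all bookkeeping that multiprocessing.Pool does for you.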

I’ve set the pool "chunksize" low so that workers are given only one work item at a time. This is good for workflows where processing time can vary per work item; by default, the pool optimizes for the case where processing times are roughly equal and instead tries to reduce interprocess communication.

Your data looks like a colon-separated file, and you can use the csv module to cut down on the parsing code too. This smaller script should do what you want.

import multiprocessing as mp
from time import sleep
import csv

max_process = 10

def doshit_processor(row):
    sleep(1)  # if you want to simulate work
    print(row)

def main():
    with open('acc.txt', newline='') as accounts:
        # Parse the colon-separated user:pass:name lines into dicts.
        table = list(csv.DictReader(accounts, fieldnames=('user', 'pass', 'name'),
                                    delimiter=':'))
    with mp.Pool(max_process) as pool:
        # chunksize=1 hands workers one account at a time.
        pool.map(doshit_processor, table, chunksize=1)
    print('all done')

if __name__ == '__main__':
    main()
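
For example, if acc.txt contains lines like user1:pass1:Alice (hypothetical sample data), each worker receives one row as a dict such as {'user': 'user1', 'pass': 'pass1', 'name': 'Alice'}. pool.map hands every row to exactly one worker, so no two processes ever handle the same account line.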
Answered By: tdelaney