Is using an 'anonymous' threading.Lock() always an error?

Question:

I’m trying to make sense of some code, and I came across the function below:

def get_batch(
    self,
) -> Union[Tuple[List[int], torch.Tensor], Tuple[None, None]]:
    """
    Return an inference batch
    """
    with threading.Lock():
        indices: List[int] = []
        for _ in range(self.batch_size):
            try:
                index = self.full_queue.get(timeout=0.05)
                indices.append(index)
            except:
                break

        if indices:
            # tqdm.write(str(len(jobs)))
            batch = {
                key: torch.stack([self.input_buffers[key][index] for index in indices])
                .to(torch.device('cpu'), non_blocking=True)
                .unsqueeze(0)
                for key in self.input_buffers
            }
            return indices, batch
        else:
            return None, None

That with threading.Lock() line must be an error, right? Generally speaking, a lock only protects anything if it’s shared between threads, and this one isn’t shared with anything.

Asked By: Ryan Keathley


Answers:

Yes, @Homer512’s comment nailed it. Each call to the function creates a brand-new Lock object, and there is no way for that object to be shared between threads. Acquiring a lock that no other thread can ever contend for accomplishes nothing: it’s effectively a no-op.
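The conventional fix is to create the lock once, store it where every thread can reach it (typically as an instance attribute), and acquire that shared lock inside the method. Here is a minimal sketch of that pattern; the class name BatchProvider and its constructor arguments are my own assumptions, since the question only shows the one method:

import queue
import threading
from typing import Dict, List, Optional, Tuple

import torch


class BatchProvider:
    def __init__(self, batch_size: int, input_buffers: Dict[str, List[torch.Tensor]]) -> None:
        self.batch_size = batch_size
        self.input_buffers = input_buffers
        self.full_queue: "queue.Queue[int]" = queue.Queue()
        # Create the lock ONCE and store it on the instance, so every
        # thread that calls get_batch() contends for the same lock object.
        self._lock = threading.Lock()

    def get_batch(self) -> Tuple[Optional[List[int]], Optional[Dict[str, torch.Tensor]]]:
        """Return an inference batch, or (None, None) if the queue is empty."""
        with self._lock:  # shared instance lock, not a fresh per-call Lock
            indices: List[int] = []
            for _ in range(self.batch_size):
                try:
                    indices.append(self.full_queue.get(timeout=0.05))
                except queue.Empty:  # catch only the expected timeout, not everything
                    break

            if not indices:
                return None, None
            batch = {
                key: torch.stack([self.input_buffers[key][i] for i in indices])
                .to(torch.device('cpu'), non_blocking=True)
                .unsqueeze(0)
                for key in self.input_buffers
            }
            return indices, batch

Note that queue.Queue is already thread-safe on its own, so whether the explicit lock is needed at all depends on whether draining up to batch_size items and assembling the batch must happen atomically with respect to other consumer threads.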

Answered By: Solomon Slow