Is extending a Python list (e.g. l += [1]) guaranteed to be thread-safe?

Question:

If I have an integer i, it is not safe to do i += 1 on multiple threads:

>>> i = 0
>>> def increment_i():
...     global i
...     for j in range(1000): i += 1
...
>>> threads = [threading.Thread(target=increment_i) for j in range(10)]
>>> for thread in threads: thread.start()
...
>>> for thread in threads: thread.join()
...
>>> i
4858  # Not 10000

However, if I have a list l, it does seem safe to do l += [1] on multiple threads:

>>> l = []
>>> def extend_l():
...     global l
...     for j in range(1000): l += [1]
...
>>> threads = [threading.Thread(target=extend_l) for j in range(10)]
>>> for thread in threads: thread.start()
...
>>> for thread in threads: thread.join()
...
>>> len(l)
10000

Is l += [1] guaranteed to be thread-safe? If so, does this apply to all Python implementations or just CPython?

Edit: It seems that l += [1] is thread-safe but l = l + [1] is not…

>>> l = []
>>> def extend_l():
...     global l
...     for j in range(1000): l = l + [1]
...
>>> threads = [threading.Thread(target=extend_l) for j in range(10)]
>>> for thread in threads: thread.start()
...
>>> for thread in threads: thread.join()
...
>>> len(l)
3305  # Not 10000
Asked By: user200783


Answers:

From https://docs.python.org/3/faq/library.html#what-kinds-of-global-value-mutation-are-thread-safe:

A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction and therefore all the C implementation code reached from each instruction is therefore atomic from the point of view of a Python program.

The following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):

L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()

These arenโ€™t:

i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1

The above is purely CPython-specific and may vary across other Python implementations such as PyPy.

By the way, there is an open issue for documenting atomic Python operations: https://bugs.python.org/issue15339
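
Since L.append(x) is on the FAQ's atomic list, the question's experiment can be rewritten with append instead of += and should give the same reliable result under CPython's GIL (a sketch, not a guarantee for other implementations):

```python
import threading

l = []

def append_many():
    # list.append is on the FAQ's atomic list, so under CPython's GIL
    # no lock is needed for this particular operation
    for _ in range(1000):
        l.append(1)

threads = [threading.Thread(target=append_many) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(l))  # 10000 under CPython
```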

Answered By: matino

There isn’t a happy 😉 answer to this. There’s nothing guaranteed about any of it, which you can confirm simply by noting that the Python reference manual makes no guarantees about atomicity.

In CPython it’s a matter of pragmatics. As a snipped part of effbot’s article says,

In theory, this means an exact accounting requires an exact understanding of the PVM [Python Virtual Machine] bytecode implementation.

And that’s the truth. A CPython expert knows L += [x] is atomic because they know all of the following:

  • += compiles to an INPLACE_ADD bytecode.
  • The implementation of INPLACE_ADD for list objects is written entirely in C (no Python code is on the execution path, so the GIL can’t be released between bytecodes).
  • In listobject.c, the implementation of INPLACE_ADD is function list_inplace_concat(), and nothing during its execution needs to execute any user Python code either (if it did, the GIL may again be released).
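The first bullet can be checked directly with the dis module. Note that the exact opcode names vary by version: older CPythons show INPLACE_ADD, while 3.11+ folds it into BINARY_OP 13 (+=); either way, the list mutation happens inside that single instruction, and the STORE that follows merely rebinds the name to the same object:

```python
import dis

def inplace(l):
    l += [1]    # the mutation happens inside INPLACE_ADD / BINARY_OP (+=);
                # the following STORE_FAST just rebinds the same list object

def rebind(l):
    l = l + [1]  # builds a NEW list, then stores it; a thread switch
                 # between the add and the store loses concurrent updates

dis.dis(inplace)
dis.dis(rebind)
```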

That may all sound incredibly difficult to keep straight, but for someone with effbot’s knowledge of CPython’s internals (at the time he wrote that article), it really isn’t. In fact, given that depth of knowledge, it’s all kind of obvious 😉

So as a matter of pragmatics, CPython experts have always freely relied on that “operations that ‘look atomic’ should really be atomic”, and that also guided some language decisions. For example, an operation missing from effbot’s list (added to the language after he wrote that article):

x = D.pop(y) # or ...
x = D.pop(y, default)

One argument (at the time) in favor of adding dict.pop() was precisely that the obvious C implementation would be atomic, whereas the in-use (at the time) alternative:

x = D[y]
del D[y]

was not atomic (the retrieval and the deletion are done via distinct bytecodes, so threads can switch between them).
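Functionally the two spellings are equivalent; the difference is only in how many bytecode-level steps they take (a small single-threaded sketch of both forms):

```python
d = {"y": 1}

# dict.pop does the retrieval and the deletion in a single C-level call
x = d.pop("y")
print(x, d)           # 1 {}

# pop with a default also avoids a KeyError on a missing key
missing = d.pop("y", None)

# the two-step form executes as separate bytecodes, so in threaded
# code another thread could delete the key between these two lines
d2 = {"y": 1}
x2 = d2["y"]
del d2["y"]
```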

But the docs never said .pop() was atomic, and never will. This is a “consenting adults” kind of thing: if you’re expert enough to exploit this knowingly, you don’t need hand-holding. If you’re not expert enough, then the last sentence of effbot’s article applies:

When in doubt, use a mutex!

As a matter of pragmatic necessity, core developers will never break the atomicity of effbot’s examples (or of D.pop() or D.setdefault()) in CPython. Other implementations are under no obligation at all to mimic these pragmatic choices, though. Indeed, since atomicity in these cases relies on CPython’s specific form of bytecode combined with CPython’s use of a global interpreter lock that can only be released between bytecodes, it could be a real pain for other implementations to mimic them.

And you never know: some future version of CPython may remove the GIL too! I doubt it, but it’s theoretically possible. But if that happens, I bet a parallel version retaining the GIL will be maintained too, because a whole lot of code (especially extension modules written in C) relies on the GIL for thread safety too.

Worth repeating:

When in doubt, use a mutex!
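
Applied to the question's first experiment, that advice looks like this: guarding the non-atomic i += 1 with a threading.Lock makes the count come out right every time, on any implementation:

```python
import threading

i = 0
lock = threading.Lock()

def increment_i():
    global i
    for _ in range(1000):
        with lock:   # serialize the non-atomic read-modify-write
            i += 1

threads = [threading.Thread(target=increment_i) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(i)  # always 10000 with the lock
```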

Answered By: Tim Peters