Why is ''.join() faster than += in Python?

Question:

I’m able to find a bevy of information online (on Stack Overflow and otherwise) about how it’s a very inefficient and bad practice to use + or += for concatenation in Python.

I can’t seem to find WHY += is so inefficient. Outside of a mention here that “it’s been optimized for 20% improvement in certain cases” (still not clear what those cases are), I can’t find any additional information.

What is happening on a more technical level that makes ''.join() superior to other Python concatenation methods?

Asked By: Rodney Wells


Answers:

Let’s say you have this code to build up a string from three strings:

x = 'foo'
x += 'bar'  # 'foobar'
x += 'baz'  # 'foobarbaz'

In this case, Python first needs to allocate and create 'foobar' before it can allocate and create 'foobarbaz'.

So for each += that gets called, the entire contents of the string and whatever is getting added to it need to be copied into an entirely new memory buffer. In other words, if you have N strings to be joined, you need to allocate approximately N temporary strings and the first substring gets copied ~N times. The last substring only gets copied once, but on average, each substring gets copied ~N/2 times.
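A rough sketch of that copy count, with the number and size of the pieces invented purely for illustration:

N, L = 1_000, 20                     # hypothetical: 1,000 pieces of 20 characters each
# Appending the i-th piece builds a brand-new string of length i * L,
# so roughly that many characters are copied at each step.
copied_by_plus_equals = sum(i * L for i in range(1, N + 1))   # ~ L * N**2 / 2
copied_by_join = N * L                                        # each piece copied once
print(copied_by_plus_equals, copied_by_join)                  # 10010000 vs 20000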

With .join(), Python can play a number of tricks, since the intermediate strings never need to be created: CPython figures out how much memory it needs up front, allocates a correctly sized buffer, and then copies each piece into that buffer exactly once.
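A rough way to see the difference is to time both approaches. The workload below is invented for illustration, and on some CPython versions a special-cased in-place += narrows the gap, so the exact numbers vary by version and implementation:

import timeit

pieces = ['x' * 20] * 2_000       # hypothetical input: 2,000 twenty-character strings

def concat_plus_equals():
    s = ''
    for p in pieces:
        s += p                    # may allocate and copy a new string on every pass
    return s

def concat_join():
    return ''.join(pieces)        # sizes the result once and copies each piece once

print(timeit.timeit(concat_plus_equals, number=100))
print(timeit.timeit(concat_join, number=100))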


There are other viable approaches that could lead to better performance for += in some cases, e.g. if the internal string representation is actually a rope, or if the runtime is smart enough to figure out that the temporary strings are of no use to the program and to optimize them away.

However, CPython certainly does not do these optimizations reliably (though it may for a few corner cases), and since it is the most common implementation in use, many best practices are based on what works well for CPython. Having a standardized set of norms also makes it easier for other implementations to focus their optimization efforts.
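For what it's worth, here is one hedged way to poke at such a corner case on CPython; whether the fast path fires is purely an implementation detail, so either outcome is possible:

def appended_in_place():
    s = 'a' * 100
    old_id = id(s)
    s += 'b'                  # with a single reference, CPython may grow the buffer in place
    return id(s) == old_id

# Some CPython versions take an in-place fast path for += on a sole reference,
# which can leave the object's id unchanged; other versions and other
# implementations will simply report False here.
print(any(appended_in_place() for _ in range(100)))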

Answered By: mgilson

I think this behaviour is best explained in Lua’s string buffer chapter.

To rewrite that explanation in the context of Python, let’s start with an innocent code snippet (a derivative of the one in Lua’s docs):

s = ""
for l in some_list:
  s += l

Assume that each l is 20 bytes and that s has already grown to a size of 50 KB. When Python concatenates s + l, it creates a new string of 50,020 bytes and copies the 50 KB of s into this new string. That is, for each new element, the program moves roughly 50 KB of memory, and that amount keeps growing. After appending 100 new elements (only 2 KB of new data), the snippet has already moved more than 5 MB of memory. To make things worse, after the assignment

s += l

the old string is now garbage. After two loop cycles, there are two old strings, making a total of more than 100 KB of garbage. So the runtime decides to run its garbage collector and frees those 100 KB. The problem is that this happens every two cycles, and the program will run its garbage collector some two thousand times before it finishes the whole list. Even with all this work, its memory usage will be a large multiple of the list’s size.
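A quick sanity check of the figures above (all sizes approximate, in bytes):

start = 50 * 1024             # s already holds roughly 50 KB
piece = 20                    # each new element adds about 20 bytes
# Appending the i-th element copies the whole intermediate string of start + i * piece bytes.
moved = sum(start + i * piece for i in range(1, 101))
print(moved)                  # a bit over 5,000,000 bytes copied for only ~2 KB of new data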

And, at the end:

This problem is not peculiar to Lua: other languages with true garbage collection, and where strings are immutable objects, present a similar behavior, Java being the most famous example. (Java offers the structure StringBuffer to ameliorate the problem.)

Python strings are also immutable objects.
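The usual Python analogues of that buffer pattern are to collect the pieces in a list and join them once at the end, or to write into an io.StringIO; some_list below is stand-in data, mirroring the earlier loop:

import io

some_list = ['spam', 'eggs', 'ham']   # stand-in data for illustration

# Collect, then join: each piece is copied into the final string exactly once.
parts = []
for l in some_list:
    parts.append(l)
s = ''.join(parts)

# io.StringIO plays a role similar to Java's StringBuffer: an in-memory,
# appendable text buffer that is turned into a string at the end.
buf = io.StringIO()
for l in some_list:
    buf.write(l)
s2 = buf.getvalue()

assert s == s2 == 'spameggsham'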

Answered By: hjpotter92