How to set initial size for a dictionary in Python?
Question:
I’m putting around 4 million different keys into a Python dictionary.
Creating this dictionary takes about 15 minutes and consumes about 4GB of memory on my machine. After the dictionary is fully created, querying the dictionary is fast.
I suspect that dictionary creation is so resource consuming because the dictionary is very often rehashed (as it grows enormously).
Is it possible to create a dictionary in Python with some initial size or bucket count?
My dictionary points from a number to an object.
class MyObject:
    def __init__(self):
        # some fields...
        pass

d = {}
d[i] = MyObject()  # 4M times with different keys...
Answers:
You can try to separate key hashing from content filling with the dict.fromkeys classmethod. It creates a dict of a known size with all values defaulting to None (or a value of your choice). After that you can iterate over it to fill in the values. It will also let you time the actual hashing of all the keys, though I'm not sure you'd be able to significantly increase the speed.
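A minimal sketch of that two-pass approach (the key range and the placeholder values here are just illustrations, not the asker's actual data):

```python
# First pass: pre-allocate all 4M slots at once, so the hash table
# is sized a single time instead of being rehashed as it grows.
keys = range(4000000)
d = dict.fromkeys(keys)  # every value is None for now

# Second pass: fill in the real values. All keys already exist in
# the table, so these assignments trigger no resizing.
for k in keys:
    d[k] = k * 2  # stand-in for the real object construction
```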
If your data needs to (or can) be stored on disk, perhaps you can store it in a BSDDB database, or use cPickle to load/store your dictionary.
If you know C, you can take a look at dictobject.c and the Notes on Optimizing Dictionaries. There you’ll notice the parameter PyDict_MINSIZE:
PyDict_MINSIZE. Currently set to 8.
This parameter is defined in dictobject.h. So you could change it when compiling Python but this probably is a bad idea.
I tried:
a = dict.fromkeys(range(4000000))
It creates a dictionary with 4,000,000 entries in about 3 seconds. After that, setting values is really fast, so I guess dict.fromkeys is definitely the way to go.
With performance issues it’s always best to measure. Here are some timings:
d = {}
for i in xrange(4000000):
    d[i] = None
# 722ms
d = dict(itertools.izip(xrange(4000000), itertools.repeat(None)))
# 634ms
dict.fromkeys(xrange(4000000))
# 558ms
s = set(xrange(4000000))
dict.fromkeys(s)
# Not including set construction 353ms
The last option doesn't do any resizing; it just copies the hashes from the set and increments references. As you can see, the resizing isn't taking a lot of time. It's probably your object creation that is slow.
Do you initialize all keys with new “empty” instances of the same type? Is it not possible to write a defaultdict or something that will create the object when it is accessed?
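As a sketch of that lazy-creation idea (the `count` field is a made-up placeholder for whatever fields the real class has): a collections.defaultdict calls its factory only when a missing key is first accessed, so you never pay an up-front pass over all 4M keys.

```python
from collections import defaultdict

class MyObject:
    def __init__(self):
        self.count = 0  # placeholder field; the real class has its own

# The MyObject factory runs only on first access to a missing key,
# so instances are created on demand rather than all at once.
d = defaultdict(MyObject)

d[42].count += 1  # MyObject created here, lazily
d[42].count += 1  # existing instance reused
```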