What is the difference between native int type and the numpy.int types?

Question:

Can you please help me understand the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types?

Asked By: Aguy


Answers:

I think that the biggest difference is that the numpy types are compatible with their C counterparts. For one thing, this means that numpy ints can overflow…

>>> import numpy as np
>>> np.int32(2**32)   # wraps around silently on older NumPy
0
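
(Recent NumPy versions raise OverflowError instead of wrapping when you construct a scalar from an out-of-range Python int, but overflow in arithmetic still wraps around. A small sketch; the exact repr and warning behavior vary by version:)

>>> np.int32(2**31 - 1) + np.int32(1)   # arithmetic wraps; often emits a RuntimeWarning
-2147483648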

This is why you can create an array of integers and specify the datatype as np.int32, for example. Numpy will then allocate a buffer that is exactly large enough to hold the specified number of 32-bit integers, and when you ask for the values, it converts the C integers to np.int32 (which is very quick). The benefits of being able to convert back and forth between np.int32 and a C int also include huge memory savings. Python objects are generally pretty big:

>>> import sys
>>> sys.getsizeof(1)   # 24 here; 28 on a typical 64-bit Python 3
24

An np.int32 isn’t any smaller:

>>> sys.getsizeof(np.int32(1))
28

but remember, most of the time when we’re working with numpy arrays, we’re working only with the C integers, which take just 4 bytes each (instead of 24 or more). We only need to work with np.int32 when dealing with scalar values pulled out of an array.
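
To put rough numbers on the savings, here is a quick comparison (a sketch; exact byte counts vary with the Python version and platform):

>>> import sys
>>> import numpy as np
>>> a = np.arange(1000, dtype=np.int32)
>>> a.nbytes                    # 1000 packed C ints at 4 bytes each
4000
>>> lst = list(range(1000))
>>> sys.getsizeof(lst) + sum(sys.getsizeof(i) for i in lst)   # list plus 1000 int objects
36056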

Answered By: mgilson

There are several major differences. The first is that Python integers are flexible-sized (at least in Python 3.x). This means they can grow to accommodate a number of any size (within memory constraints, of course). The numpy integers, on the other hand, are fixed-size. This means there is a maximum value they can hold, determined by the number of bits in the integer (int32 vs. int64), with more bits holding larger numbers, and by whether the number is signed or unsigned (int32 vs. uint32), with unsigned types able to hold larger positive numbers but unable to hold negative ones.
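
The exact limits for each fixed-size type are easy to inspect with np.iinfo:

>>> import numpy as np
>>> np.iinfo(np.int32)
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> np.iinfo(np.uint32)
iinfo(min=0, max=4294967295, dtype=uint32)
>>> np.iinfo(np.int64).max
9223372036854775807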

So, you might ask, why use fixed-size integers at all? The reason is that modern processors have built-in instructions for doing math on fixed-size integers, so calculations on them are much, much faster. (Python 2 actually used fixed-size integers behind the scenes when the number was small enough, switching to the slower, flexible-sized representation only when the number got too large; Python 3 uses the flexible representation for all integers.)

Another advantage of fixed-size values is that they can be placed into consistently-sized, adjacent memory blocks of the same type. This is the format that numpy arrays use to store data. The libraries that numpy relies on can do extremely fast computations on data in this format; in fact, modern CPUs have built-in features (SIMD instructions) for accelerating exactly this sort of computation. With variable-sized Python integers this is impossible, because there is no way to say how big each block should be and no consistency in the data format.
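
A rough timing sketch makes the gap concrete (absolute numbers depend on the machine and versions, so only the comparison is shown):

>>> import timeit
>>> import numpy as np
>>> xs = list(range(1_000_000))
>>> a = np.arange(1_000_000, dtype=np.int64)
>>> slow = timeit.timeit(lambda: [x + 1 for x in xs], number=10)   # one Python int object at a time
>>> fast = timeit.timeit(lambda: a + 1, number=10)                 # one vectorized pass over the packed buffer
>>> slow > 10 * fast    # the vectorized version wins by well over 10x on typical hardware
True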

That being said, numpy can actually make arrays of Python integers. But rather than containing the values themselves, such arrays contain references to other pieces of memory that hold the actual Python integer objects. This cannot be accelerated in the same way, so even if every Python integer would fit within the fixed integer size, the computation still won’t be accelerated.
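
You can see the two storage strategies side by side with an explicit object array (a sketch):

>>> import numpy as np
>>> a = np.array([1, 2, 3], dtype=np.int64)   # packed C integers in one buffer
>>> o = np.array([1, 2, 3], dtype=object)     # pointers to Python int objects
>>> a.itemsize, o.itemsize                    # 8 bytes per value vs. 8 bytes per pointer (64-bit build)
(8, 8)
>>> type(a[0]), type(o[0])
(<class 'numpy.int64'>, <class 'int'>)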

None of this was the case with Python 2. In Python 2, plain integers were fixed-size and thus could be translated directly into numpy integers; variable-length integers had their own type, long. But this split was confusing, and it was decided that the confusion wasn’t worth the performance gain, especially since people who need performance will be using numpy or something like it anyway.

Answered By: TheBlackCat

Another way to look at the differences is to ask what methods the two kinds of objects have.

In IPython I can use tab completion to look at methods:

In [1276]: import numpy as np

In [1277]: x=123; y=np.int32(123)

int methods and attributes:

In [1278]: x.<tab>
x.bit_length   x.denominator  x.imag         x.numerator    x.to_bytes
x.conjugate    x.from_bytes   x.real         

int ‘operators’

In [1278]: x.__<tab>
x.__abs__           x.__init__          x.__rlshift__
x.__add__           x.__int__           x.__rmod__
x.__and__           x.__invert__        x.__rmul__
x.__bool__          x.__le__            x.__ror__
...
x.__gt__            x.__reduce_ex__     x.__xor__
x.__hash__          x.__repr__          
x.__index__         x.__rfloordiv__     

np.int32 methods and attributes (or properties). Some are the same, but there are a lot more, basically all the ndarray ones:

In [1278]: y.<tab>
y.T             y.denominator   y.ndim          y.size
y.all           y.diagonal      y.newbyteorder  y.sort
y.any           y.dtype         y.nonzero       y.squeeze   
...
y.cumsum        y.min           y.setflags      
y.data          y.nbytes        y.shape   

The y.__ methods look a lot like the int ones; they can do the same math.

In [1278]: y.__<tab>
y.__abs__              y.__getitem__          y.__reduce_ex__
y.__add__              y.__gt__               y.__repr__
...
y.__format__           y.__rand__             y.__subclasshook__
y.__ge__               y.__rdivmod__          y.__truediv__
y.__getattribute__     y.__reduce__           y.__xor__

y is in many ways the same as a 0d array. Not identical, but close.

In [1281]: z=np.array(123,dtype=np.int32)
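
Both report a zero-dimensional shape, even though only z is a true ndarray (continuing the session; prompt numbers are illustrative):

In [1282]: (y.ndim, y.shape), (z.ndim, z.shape)
Out[1282]: ((0, ()), (0, ()))

In [1283]: type(y), type(z)
Out[1283]: (numpy.int32, numpy.ndarray)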

np.int32 is what I get when I index an array of that type (here the default integer dtype is int32; on many platforms it is int64, so you might see numpy.int64 instead):

In [1300]: A=np.array([0,123,3])

In [1301]: A[1]
Out[1301]: 123

In [1302]: type(A[1])
Out[1302]: numpy.int32

I have to use item() to remove all of the numpy wrapping.

In [1303]: type(A[1].item())
Out[1303]: int

To a numpy user, an np.int32 is an int with a numpy wrapper, or conversely a single element of an ndarray. Usually I don’t pay attention to whether A[0] is giving me the ‘native’ int or the numpy equivalent. In contrast to some new users, I rarely write np.int32(123); I would use np.array(123) instead.

A = np.array([1,123,0], np.int32)

does not contain 3 np.int32 objects. Rather, its data buffer is 3*4 = 12 bytes long; it’s the array overhead that interprets it as 3 ints in a 1d array. And view shows me the same data buffer under different interpretations:

In [1307]: A.view(np.int16)
Out[1307]: array([  1,   0, 123,   0,   0,   0], dtype=int16)

In [1310]: A.view('S4')
Out[1310]: array([b'\x01', b'{', b''], dtype='|S4')
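
The buffer arithmetic is visible directly from standard array attributes (continuing the session; prompt numbers are illustrative):

In [1311]: A.nbytes, A.itemsize
Out[1311]: (12, 4)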

It’s only when I index a single element that I get a np.int32 object.

The list L=[1, 123, 0] is different: it is a list of pointers, pointing to int objects stored elsewhere in memory. The same goes for a dtype=object array.
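
To make that pointer storage explicit, here is a dtype=object version (a sketch; prompt numbers are illustrative):

In [1312]: O = np.array([1, 123, 0], dtype=object)

In [1313]: type(O[1])       # elements come back as plain Python ints, no numpy wrapper
Out[1313]: int

In [1314]: O.itemsize       # each slot is just a pointer; 8 bytes on a 64-bit build
Out[1314]: 8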

Answered By: hpaulj