How do I convert a PIL Image into a NumPy array?


How do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL’s PixelAccess allows? I can convert it to a NumPy array via:

pic ="foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)

But how do I load it back into the PIL Image after I’ve modified the array? pic.putdata() isn’t working well.

Asked By: akdom



You’re not saying how exactly putdata() is not behaving. I’m assuming you’re doing

>>> pic.putdata(a)
Traceback (most recent call last):
  File "...blablabla.../PIL/", line 1185, in putdata, scale, offset)
SystemError: new style getargs format but argument is not a tuple

This is because putdata expects a sequence of tuples and you’re giving it a numpy array. This

>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)

will work but it is very slow.

As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply

>>> pix = numpy.array(pic)

although the resulting array is in a different format than yours (a 3-d array of rows/columns/rgb in this case).

Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix).

Answered By: dF.

Open I as an array:

>>> I = numpy.asarray('test.jpg'))

Do some stuff to I, then convert it back to an image:

>>> im = PIL.Image.fromarray(numpy.uint8(I))

Source: Filter numpy images with FFT, Python

If you want to do it explicitly for some reason, there are pil2array() and array2pil() functions using getdata() on this page.

Answered By: endolith

You need to convert your image to a numpy array this way:

import numpy
import PIL

img ="foo.jpg").convert("L")  # "L" = 8-bit grayscale
imgarr = numpy.array(img)
Answered By: Billal Begueradj

An example I used recently:

import PIL
import numpy
from PIL import Image

def resize_image(numpy_array_image, new_height):
    # convert numpy array image to PIL.Image
    image = Image.fromarray(numpy.uint8(numpy_array_image))
    old_width = float(image.size[0])
    old_height = float(image.size[1])
    ratio = float(new_height) / old_height
    new_width = int(old_width * ratio)
    image = image.resize((new_width, new_height), PIL.Image.ANTIALIAS)
    # convert PIL.Image back into a numpy array
    return numpy.array(image)
Answered By: Uki D. Lucas

I am using Pillow 4.1.1 (the successor of PIL) in Python 3.5. The conversion between Pillow and numpy is straightforward.

from PIL import Image
import numpy as np
im ='1.jpg')
im2arr = np.array(im) # im2arr.shape: height x width x channel
arr2im = Image.fromarray(im2arr)

One thing to note is that Pillow-style im is column-major while numpy-style im2arr is row-major. However, Image.fromarray already takes this into account, so arr2im.size == im.size and arr2im.mode == im.mode in the above example.

Take care with the HxWxC data format when processing the converted numpy arrays: e.g. use im2arr = np.rollaxis(im2arr, 2, 0) or im2arr = np.transpose(im2arr, (2, 0, 1)) to get CxHxW format.

Answered By: Daniel

If your image is stored in a blob format (e.g. in a database), you can use the same technique explained by Billal Begueradj to convert your image from blobs to a byte array.

In my case, the images were stored in a blob column in a db table:

def select_all_X_values(conn):
    cur = conn.cursor()
    cur.execute("SELECT ImageData from PiecesTable")    
    rows = cur.fetchall()    
    return rows

I then created a helper function to change my dataset into np.array:

X_dataset = select_all_X_values(conn)
imagesList = convertToByteIO(np.array(X_dataset))

import io

def convertToByteIO(imagesArray):
    # Converts an array of image blobs into a list of numpy arrays
    imagesList = []

    for i in range(len(imagesArray)):
        img =[i])).convert("RGB")
        imagesList.insert(i, np.array(img))

    return imagesList

After this, I was able to use the byteArrays in my Neural Network.

Answered By: Charles Vogt

import numpy as np
import matplotlib.pyplot as plt

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()     # torch.Tensor -> numpy array
    plt.imshow(np.transpose(npimg, (1, 2, 0)))  # CxHxW -> HxWxC

This converts the tensor to a numpy array with .numpy() after undoing the normalization, then reorders the axes so matplotlib can display it.
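The same two steps on a plain NumPy array, so the shapes can be checked without a torch tensor (the (3, 2, 2) input is a made-up example):

```python
import numpy as np

chw = np.full((3, 2, 2), -1.0)         # normalized C x H x W values in [-1, 1]
unnorm = chw / 2 + 0.5                 # undo mean-0.5/std-0.5 normalization
hwc = np.transpose(unnorm, (1, 2, 0))  # (2, 2, 3), the layout plt.imshow wants
print(hwc.shape, hwc.min(), hwc.max()) # (2, 2, 3) 0.0 0.0
```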

Answered By: Thiyagu

Convert Numpy to PIL image and PIL to Numpy

import numpy as np
from PIL import Image

def pilToNumpy(img):
    return np.array(img)

def NumpyToPil(img):
    return Image.fromarray(img)
Answered By: Kamran Gasimov

I can vouch for svgtrace; I found it both super simple and relatively fast. Find it here:

This is how I used it:

from pathlib import Path
from svgtrace import trace

asset_path = 'image.png'
save_path = 'traced_image.svg'

Path(save_path).write_text(trace(asset_path), encoding='utf-8')

It took an average of 3 seconds for a 1080x1080px image on my machine. (MacBook Pro 2017)

Answered By: Carl J. Kurtz