Python serialization – Why pickle?

Question:

I understand that Python pickling is a way to ‘store’ a Python object in a way that respects object-oriented programming – different from an output written to a txt file or a DB.

Do you have more details or references on the following points:

  • where are pickled objects ‘stored’?
  • why does pickling preserve object representation better than, say, storing in a DB?
  • can I retrieve pickled objects from one Python shell session in another?
  • do you have significant examples of when serialization is useful?
  • does serialization with pickle imply data ‘compression’?

In other words, I am looking for documentation on pickling – the Python docs explain how to implement pickle but do not seem to dive into the details of the use and necessity of serialization.

Asked By: kiriloff


Answers:

Pickling is a way to convert a Python object (list, dict, etc.) into a byte stream. The idea is that this byte stream contains all the information necessary to reconstruct the object in another Python script.
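For instance, a minimal in-memory round trip with pickle.dumps and pickle.loads (no file involved) shows the idea:

import pickle

var = {1: 'a', 2: 'b'}
data = pickle.dumps(var)       # serialize the dict to a byte stream
print(data[:10])               # b'\x80\x04\x95...' (exact bytes depend on the protocol version)
restored = pickle.loads(data)  # reconstruct an equivalent object
print(restored == var)         # True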

As for where the pickled information is stored, usually one would do:

import pickle

var = {1: 'a', 2: 'b'}
with open('filename', 'wb') as f:
    pickle.dump(var, f)

That would store the pickled version of our var dict in the file ‘filename’. Then, in another script, you can load from this file into a variable, and the dictionary will be recreated:

import pickle

with open('filename', 'rb') as f:
    var = pickle.load(f)

Another use for pickling is if you need to transmit the dictionary over a network (with sockets, for example). You first convert it into a byte stream, then you can send it over a socket connection.
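A minimal sketch of the sending side, assuming a listener is already running on localhost:9000 (both are placeholders) and glossing over message framing:

import pickle
import socket

var = {1: 'a', 2: 'b'}
payload = pickle.dumps(var)  # the byte stream to put on the wire

with socket.create_connection(('localhost', 9000)) as sock:
    sock.sendall(payload)

# the receiver, after reading the complete payload, reconstructs the dict with:
# var = pickle.loads(received_bytes)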

Also, there is no “compression” to speak of here; pickling is just a way to convert from one representation (an object in RAM) to another (a byte stream).
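You can check this yourself: running a pickle through a general-purpose compressor shrinks it considerably, which would not happen if pickling already compressed the data.

import pickle
import zlib

data = pickle.dumps(list(range(1000)))
print(len(data))                 # size of the raw pickle (varies with protocol version)
print(len(zlib.compress(data)))  # much smaller: the pickle itself was not compressed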

About.com has a nice introduction to pickling.

Answered By: austin1howard

Pickling is absolutely necessary for distributed and parallel computing.

Say you want to do a parallel map-reduce with multiprocessing (or across cluster nodes with pyina); you need to make sure the function you want mapped across the parallel resources will pickle. If it doesn’t pickle, you can’t send it to the other resources on another process, computer, etc. Also see here for a good example.
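For instance, with the standard library’s multiprocessing, anything you hand to a pool gets pickled on its way to the worker processes; a minimal sketch (module-level functions pickle, lambdas do not):

import pickle
from multiprocessing import Pool

def square(x):
    # a module-level function: picklable, so it can be shipped to workers
    return x * x

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(square, range(10)))  # [0, 1, 4, 9, ...]

    # a lambda, by contrast, fails with the standard pickle module:
    # pickle.dumps(lambda x: x * x)  # raises pickle.PicklingError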

To do this, I use dill, which can serialize almost anything in Python. dill also has some good tools to help you understand what is causing your pickling to fail when your code fails.
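A sketch of what dill adds over the standard module (it is a third-party package, installed with pip install dill):

import dill

f = lambda x: x * x   # the standard pickle module refuses this
blob = dill.dumps(f)  # dill handles lambdas, closures, interactively defined code, ...
g = dill.loads(blob)
print(g(3))           # 9

# dill.detect (e.g. dill.detect.badobjects) helps track down which part of an
# object is the one that refuses to pickle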

And, yes, people use pickling to save the state of a calculation, or your IPython session, or whatever. You can also extend pickle’s Pickler and Unpickler to do compression with bz2 or gzip if you’d like.
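A sketch of the gzip route: rather than subclassing Pickler and Unpickler, the same effect falls out of handing pickle a gzip file object.

import gzip
import pickle

var = {1: 'a', 2: 'b'}

with gzip.open('var.pkl.gz', 'wb') as f:  # compressed on the way to disk
    pickle.dump(var, f)

with gzip.open('var.pkl.gz', 'rb') as f:  # decompressed transparently on the way back
    print(pickle.load(f))                 # {1: 'a', 2: 'b'}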

Answered By: Mike McKerns

It is a kind of serialization. On Python 2, use cPickle; it is much faster than pickle. (On Python 3, the pickle module automatically uses the C-accelerated implementation when it is available.)

import pickle

# corpus can be any picklable object; the 'pickles/' directory must already exist

# write the pickle file
with open('pickles/corpus.pickle', 'wb') as handle:
    pickle.dump(corpus, handle)

# read the pickle file back
with open('pickles/corpus.pickle', 'rb') as handle:
    corpus = pickle.load(handle)

Answered By: Paritosh Yadav

I find it particularly useful with large and complex custom classes. In one example I’m thinking of, “gathering” the information (from a database) to create the class was already half the battle, and the information stored in the class might then be altered at runtime by the user.

You could have another group of tables in the database and write one function to go through everything stored in the class and write it to the new tables, and then another function to load the saved object by reading all of that information back in.

Alternatively, you could pickle the whole class instance as is and store the result in a single field in the database. When you load it back, everything is restored at once, exactly as it was before. This can save a lot of time and code when saving and retrieving complicated classes.
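A minimal sketch of that pattern with sqlite3 and a hypothetical Gadget class standing in for the complex custom class:

import pickle
import sqlite3

class Gadget:
    # hypothetical stand-in for a large, complex custom class
    def __init__(self, name, settings):
        self.name = name
        self.settings = settings

gadget = Gadget('widget', {'speed': 3, 'color': 'red'})

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE gadgets (id INTEGER PRIMARY KEY, state BLOB)')
conn.execute('INSERT INTO gadgets (state) VALUES (?)', (pickle.dumps(gadget),))

row = conn.execute('SELECT state FROM gadgets').fetchone()
restored = pickle.loads(row[0])  # the whole object comes back in one step
print(restored.name, restored.settings)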

Answered By: Chicken Max