Pickling w/ low overhead
An issue which has dogged the NumPy project is that there is (to my
knowledge) no way to pickle very large arrays without creating strings
which contain all of the data. This can be a problem given that NumPy
arrays tend to be very large -- often several megabytes, sometimes much
bigger. This slows things down, sometimes a lot, depending on the
platform. It seems that it should be possible to do something more
efficient.

Two alternatives come to mind:

-- define a new pickling protocol which passes a file-like object to the
instance and have the instance write itself to that file, being as
efficient or inefficient as it cares to. This protocol is used only
if the instance/type defines the appropriate slot. Alternatively,
enrich the semantics of the getstate interaction, so that an object
can return partial data and tell the pickling mechanism to come back
for more. (A rough sketch follows after this list.)

-- make pickling of objects which support the buffer interface use that
interface's notion of segments and use that 'chunk' size to do
something more efficient if not necessarily most efficient. (oh, and
make NumPy arrays support the buffer interface =). This is simple
for NumPy arrays since we want to pickle "everything", but may not be
what other buffer-supporting objects want.
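
To make the first idea concrete, here's a rough sketch. The
__write_pickle__ slot is invented purely for illustration -- nothing in
pickle or cPickle supports it today, and the class is a made-up
stand-in:

    import struct

    class BigArray:
        # toy stand-in: 'data' is the raw string of array bytes
        def __init__(self, data, typecode):
            self.data = data
            self.typecode = typecode

        def __write_pickle__(self, file):
            # the pickler would hand us its output file, and we'd
            # stream ourselves into it in chunks instead of first
            # building one giant string
            file.write(struct.pack('!cl', self.typecode, len(self.data)))
            chunk = 1 << 16
            for i in range(0, len(self.data), chunk):
                file.write(self.data[i:i + chunk])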

Thoughts? Alternatives?

--david
Re: Pickling w/ low overhead
David Ascher wrote:
>
> An issue which has dogged the NumPy project is that there is (to my
> knowledge) no way to pickle very large arrays without creating strings
> which contain all of the data. This can be a problem given that NumPy
> arrays tend to be very large -- often several megabytes, sometimes much
> bigger. This slows things down, sometimes a lot, depending on the
> platform. It seems that it should be possible to do something more
> efficient.
>
> Two alternatives come to mind:
>
> -- define a new pickling protocol which passes a file-like object to the
> instance and have the instance write itself to that file, being as
> efficient or inefficient as it cares to. This protocol is used only
> if the instance/type defines the appropriate slot. Alternatively,
> enrich the semantics of the getstate interaction, so that an object
> can return partial data and tell the pickling mechanism to come back
> for more.
>
> -- make pickling of objects which support the buffer interface use that
> interface's notion of segments and use that 'chunk' size to do
> something more efficient if not necessarily most efficient. (oh, and
> make NumPy arrays support the buffer interface =). This is simple
> for NumPy arrays since we want to pickle "everything", but may not be
> what other buffer-supporting objects want.
>
> Thoughts? Alternatives?

Hmm, types can register their own pickling/unpickling functions
via copy_reg, so they can access the self.write method in pickle.py
to implement the write to file interface. Don't know how this
would be done for cPickle.c though.

For instances the situation is different since there is no
dispatching done on a per-class basis. I guess an optional argument
could help here.

Perhaps some lazy pickling wrapper would help fix this in general:
an object which calls back into the to-be-pickled object to
access the data rather than store the data in a huge string.

Yet another idea would be using memory mapped files instead
of strings as temporary storage (but this is probably hard to implement
right and not as portable).

Dunno... just some thoughts.

--
Marc-Andre Lemburg
______________________________________________________________________
Y2000: 150 days left
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
Re: Pickling w/ low overhead
On Tue, 3 Aug 1999, M.-A. Lemburg wrote:

> Hmm, types can register their own pickling/unpickling functions
> via copy_reg, so they can access the self.write method in pickle.py
> to implement the write to file interface.

Are you sure? My understanding of copy_reg is, as stated in the doc:

pickle(type, function[, constructor])
    Declares that function should be used as a "reduction" function
    for objects of type or class type. function should return either
    a string or a tuple. The optional constructor parameter, if
    provided, is a callable object which can be used to reconstruct
    the object when called with the tuple of arguments returned by
    function at pickling time.
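
In other words, all the registered function gets to do is return data.
E.g. the classic registration for complex numbers looks like this:

    import copy_reg

    def pickle_complex(c):
        # the reduction function only returns a constructor and its
        # arguments -- it never gets its hands on the pickler itself,
        # let alone its output file
        return complex, (c.real, c.imag)

    copy_reg.pickle(complex, pickle_complex, complex)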

How does one access the 'self.write method in pickle.py'?

> Perhaps some lazy pickling wrapper would help fix this in general:
> an object which calls back into the to-be-pickled object to
> access the data rather than store the data in a huge string.

Right. That's an idea.

> Yet another idea would be using memory mapped files instead
> of strings as temporary storage (but this is probably hard to implement
> right and not as portable).

That's a very interesting idea! I'll try that -- it might just be the
easiest way to do this. I don't think portability is a huge concern --
the folks who are running into the speed issue are on platforms which
have mmap support.
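
Maybe something along these lines, using the mmap module where it's
available (the fixed-size reservation is the ugly part -- you need an
upper bound on the pickle's length up front):

    import mmap
    import cPickle

    def dump_via_mmap(obj, path, size):
        # reserve 'size' bytes in a real file, map them, and let
        # cPickle write straight into the mapping (mmap objects
        # support the write() protocol)
        f = open(path, 'wb+')
        f.write('\0' * size)
        f.flush()
        m = mmap.mmap(f.fileno(), size)
        cPickle.dump(obj, m, 1)
        m.flush()
        m.close()
        f.close()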

Thanks for the suggestions.

--david
Re: Pickling w/ low overhead
David Ascher wrote:
>
> On Tue, 3 Aug 1999, M.-A. Lemburg wrote:
>
> > Hmm, types can register their own pickling/unpickling functions
> > via copy_reg, so they can access the self.write method in pickle.py
> > to implement the write to file interface.
>
> Are you sure? My understanding of copy_reg is, as stated in the doc:
>
> pickle(type, function[, constructor])
>     Declares that function should be used as a "reduction" function
>     for objects of type or class type. function should return either
>     a string or a tuple. The optional constructor parameter, if
>     provided, is a callable object which can be used to reconstruct
>     the object when called with the tuple of arguments returned by
>     function at pickling time.
>
> How does one access the 'self.write method in pickle.py'?

Oops, sorry, that doesn't work... well, at least not using "normal"
Python ;-) You could of course simply go up one stack frame, grab the
self object, and then... well, you know...
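
(If anyone really wants to know, the abomination would look something
like this -- walk up the frames until something that looks like a
Pickler turns up. Fragile, version-dependent, and not a
recommendation:)

    import sys

    def grab_pickler():
        # raise and catch an exception just to get a traceback, then
        # walk up the frame chain looking for a 'self' with a write
        # attribute (pickle.Pickler keeps its output file's write there)
        try:
            raise RuntimeError
        except RuntimeError:
            frame = sys.exc_info()[2].tb_frame
        while frame is not None:
            obj = frame.f_locals.get('self')
            if obj is not None and hasattr(obj, 'write'):
                return obj
            frame = frame.f_back
        return None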

--
Marc-Andre Lemburg
______________________________________________________________________
Y2000: 150 days left
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
Re: Pickling w/ low overhead
David> An issue which has dogged the NumPy project is that there is (to
David> my knowledge) no way to pickle very large arrays without creating
David> strings which contain all of the data. This can be a problem
David> given that NumPy arrays tend to be very large -- often several
David> megabytes, sometimes much bigger. This slows things down,
David> sometimes a lot, depending on the platform. It seems that it
David> should be possible to do something more efficient.

David,

Using __getstate__/__setstate__, could you create a compressed
representation using zlib or some other scheme? I don't know how well
numeric data compresses in general, but that might help. Also, I trust you
use cPickle when it's available, yes?
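
Something along these lines, say, assuming the raw bytes are reachable
as a string attribute (I'm calling it 'data' here just for
illustration):

    import zlib

    class CompressedState:
        # compress the payload on the way out of __getstate__,
        # decompress it on the way back in through __setstate__
        def __getstate__(self):
            state = self.__dict__.copy()
            state['data'] = zlib.compress(state['data'])
            return state

        def __setstate__(self, state):
            state['data'] = zlib.decompress(state['data'])
            self.__dict__.update(state)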

Skip Montanaro | http://www.mojam.com/
skip@mojam.com | http://www.musi-cal.com/~skip/
847-475-3758
Re: Pickling w/ low overhead
On Tue, 3 Aug 1999, Skip Montanaro wrote:

> Using __getstate__/__setstate__, could you create a compressed
> representation using zlib or some other scheme? I don't know how well
> numeric data compresses in general, but that might help. Also, I trust you
> use cPickle when it's available, yes?

I *really* hate to admit it, but I've found the source of the most
massive problem in my pickling process: I wasn't using binary mode,
which meant that the huge strings were written and read one character
at a time.
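
For anyone else bitten by this, the cure is a binary-mode file plus the
binary pickle format:

    import cPickle
    import Numeric

    arr = Numeric.arange(1000000)    # any large array will do

    # was: open('data.pickle', 'w') -- text mode and the text pickle
    # format, which is what crawled
    f = open('data.pickle', 'wb')    # binary file mode
    cPickle.dump(arr, f, 1)          # 1 selects the binary format
    f.close()

    f = open('data.pickle', 'rb')
    arr = cPickle.load(f)
    f.close()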

I think I'll put a big fat note in the NumPy doc to that effect.

(note that luckily this just affected my usage, not all NumPy users).

<embarrassed sheepish grin>

--da