Saving dictionaries to file (numpy and Python 2/3 friendly)


Question


I want to do hierarchical key-value storage in Python, which basically boils down to storing dictionaries to files. By that I mean any type of dictionary structure that may contain other dictionaries, numpy arrays, serializable Python objects, and so forth. On top of that, I want it to store numpy arrays space-optimized and play nice between Python 2 and 3.
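
For example, something like this hypothetical structure (just an illustration of the kind of nesting I mean, not code from my project):

import numpy as np

data = {
    'name': 'experiment-42',                            # plain Python value
    'params': {'learning_rate': 0.01, 'iters': 1000},   # nested dict
    'weights': np.random.randn(256, 128),               # should be stored space-efficiently
    'history': {'loss': np.linspace(1.0, 0.1, 1000)},   # dict of arrays
}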

Below are methods I know are out there. My question is what is missing from this list and is there an alternative that dodges all my deal-breakers?

  • Python's pickle module (deal-breaker: inflates the size of numpy arrays a lot)
  • Numpy's save/savez/load (deal-breaker: Incompatible format across Python 2/3)
  • PyTables replacement for numpy.savez (deal-breaker: only handles numpy arrays)
  • Using PyTables manually (deal-breaker: I want this for constantly changing research code, so it's really convenient to be able to dump dictionaries to files by calling a single function)

The PyTables replacement of numpy.savez is promising, since I like the idea of using hdf5 and it compresses the numpy arrays really efficiently, which is a big plus. However, it does not take any type of dictionary structure.

Lately, what I've been doing is to use something similar to the PyTables replacement, but enhancing it to be able to store any type of entries. This actually works pretty well, but I find myself storing primitive data types in length-1 CArrays, which is a bit awkward (and indistinguishable from actual length-1 arrays), even though I set chunksize to 1 so it doesn't take up that much space.

Is there something like that already out there?

Thanks!


Answer 1:


I recently found myself with a similar problem, for which I wrote a couple of functions for saving the contents of dicts to a group in a PyTables file, and loading them back into dicts.

They process nested dictionary and group structures recursively, and handle objects with types that are not natively supported by PyTables by pickling them and storing them as string arrays. It's not perfect, but at least things like numpy arrays will be stored efficiently. There's also a check included to avoid inadvertently loading enormous structures into memory when reading the group contents back into a dict.

import tables
import cPickle  # Python 2; on Python 3 use pickle instead
import warnings

def dict2group(f, parent, groupname, dictin, force=False, recursive=True):
    """
    Take a dict, shove it into a PyTables HDF5 file as a group. Each item in
    the dict must have a type and shape compatible with PyTables Array.

    If 'force == True', any existing child group of the parent node with the
    same name as the new group will be overwritten.

    If 'recursive == True' (default), new groups will be created recursively
    for any items in the dict that are also dicts.
    """
    try:
        g = f.create_group(parent, groupname)
    except tables.NodeError as ne:
        if force:
            pathstr = parent._v_pathname + '/' + groupname
            f.remove_node(pathstr, recursive=True)
            g = f.create_group(parent, groupname)
        else:
            raise ne
    for key, item in dictin.iteritems():
        if isinstance(item, dict):
            if recursive:
                dict2group(f, g, key, item, recursive=True)
        else:
            if item is None:
                item = '_None'
            f.create_array(g, key, item)
    return g


def group2dict(f, g, recursive=True, warn=True, warn_if_bigger_than_nbytes=100E6):
    """
    Traverse a group, pull the contents of its children and return them as
    a Python dictionary, with the node names as the dictionary keys.

    If 'recursive == True' (default), we will recursively traverse child
    groups and put their children into sub-dictionaries, otherwise sub-
    groups will be skipped.

    Since this might potentially result in huge arrays being loaded into
    system memory, the 'warn' option will prompt the user to confirm before
    loading any individual array that is bigger than some threshold (default
    is 100MB)
    """

    def memtest(child, threshold=warn_if_bigger_than_nbytes):
        mem = child.size_in_memory
        if mem > threshold:
            print '[!] "%s" is %iMB in size [!]' % (child._v_pathname, mem / 1E6)
            confirm = raw_input('Load it anyway? [y/N] >>')
            if confirm.lower() == 'y':
                return True
            else:
                print "Skipping item \"%s\"..." % g._v_pathname
        else:
            return True
    outdict = {}
    for child in g:
        try:
            if isinstance(child, tables.group.Group):
                if recursive:
                    item = group2dict(f, child)
                else:
                    continue
            else:
                if memtest(child):
                    item = child.read()
                    if isinstance(item, str):
                        if item == '_None':
                            item = None
                else:
                    continue
            outdict.update({child._v_name: item})
        except tables.NoSuchNodeError:
            warnings.warn('No such node: "%s", skipping...' % repr(child))
            pass
    return outdict
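
A minimal usage sketch, assuming the two functions above are available and using hypothetical file and group names:

import numpy as np
import tables

data = {'x': np.arange(10), 'meta': {'label': 'test', 'scale': 2.5}}

f = tables.open_file('store.h5', mode='w')   # hypothetical file name
dict2group(f, f.root, 'data', data)          # write the dict as a group
f.close()

f = tables.open_file('store.h5', mode='r')
restored = group2dict(f, f.root.data)        # read it back into a dict
f.close()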

It's also worth mentioning joblib.dump and joblib.load, which tick all of your boxes apart from Python 2/3 cross-compatibility. Under the hood they use np.save for numpy arrays and cPickle for everything else.
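
For reference, a minimal joblib sketch (the file name and compress level are just examples):

import joblib
import numpy as np

obj = {'weights': np.random.randn(100, 100), 'label': 'run-1'}
joblib.dump(obj, 'obj.joblib', compress=3)   # numpy arrays are stored efficiently on disk
restored = joblib.load('obj.joblib')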




Answer 2:


After asking this two years ago, I started coding my own HDF5-based replacement for pickle/np.save. It has since matured into a stable package, so I thought I would finally answer and accept my own question, because it is by design exactly what I was looking for:

  • https://github.com/uchicago-cs/deepdish
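
A minimal sketch of how deepdish is typically used (check the project's documentation for the exact API; the file name is just an example):

import deepdish as dd
import numpy as np

data = {'params': {'alpha': 0.1}, 'weights': np.random.randn(50, 50)}
dd.io.save('data.h5', data)       # nested dict -> HDF5 file
restored = dd.io.load('data.h5')  # HDF5 file -> nested dict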



Answer 3:


I tried playing with np.memmap for saving an array of dictionaries. Say we have the dictionary:

d = {'a': 1, 'b': 2, 'c': [1, 2, 3, {'d': 4}]}

First I tried to save it directly to a memmap:

f = np.memmap('stack.array', dtype=dict, mode='w+', shape=(100,))
f[0] = d
# CRASHES when reopening, since it loses the memory pointer

f = np.memmap('stack.array', dtype=object, mode='w+', shape=(100,))
f[0] = d
# CRASHES when reopening for the same reason

The way that worked was to convert the dictionary to a string:

f = np.memmap('stack.array', dtype='|S1000', mode='w+', shape=(100,))
f[0] = str(d)

This works, and afterwards you can eval(f[0]) to get the value back.
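
A short sketch of the read-back step (using ast.literal_eval, which is a safer substitute for eval when the string only contains plain literals; the file name matches the example above):

import ast
import numpy as np

f = np.memmap('stack.array', dtype='|S1000', mode='r', shape=(100,))
text = f[0].decode() if isinstance(f[0], bytes) else f[0]  # stored as bytes on Python 3
restored = ast.literal_eval(text)  # safer than eval() for untrusted input
print(restored['c'])               # [1, 2, 3, {'d': 4}]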

I do not know the advantage of this approach over the others, but it deserves a closer look.




Answer 4:


I absolutely recommend a Python object database like ZODB. It seems pretty well suited for your situation, considering you store objects (literally whatever you like) in a dictionary, which means you can store dictionaries inside dictionaries. I've used it in a wide range of problems, and the nice thing is that you can just hand somebody the database file (the one with a .fs extension). With it, they'll be able to read it in, perform any queries they wish, and modify their own local copies. If you want multiple programs to access the same database simultaneously, make sure to look at ZEO.

Just a silly example of how to get started:

from ZODB import DB
from ZODB.FileStorage import FileStorage
from ZODB.PersistentMapping import PersistentMapping
import transaction
from persistent import Persistent
from persistent.dict import PersistentDict
from persistent.list import PersistentList

# Defining database type and creating connection.
storage = FileStorage('/path/to/database/zodbname.fs') 
db = DB(storage)
connection = db.open()
root = connection.root()

# Define and populate the structure.
root['Vehicle'] = PersistentDict() # Upper-most dictionary
root['Vehicle']['Tesla Model S'] = PersistentDict() # Object 1 - also a dictionary
root['Vehicle']['Tesla Model S']['range'] = "208 miles"
root['Vehicle']['Tesla Model S']['acceleration'] = 5.9
root['Vehicle']['Tesla Model S']['base_price'] = "$71,070"
root['Vehicle']['Tesla Model S']['battery_options'] = ["60kWh","85kWh","85kWh Performance"]
# more attributes here

root['Vehicle']['Mercedes-Benz SLS AMG E-Cell'] = PersistentDict() # Object 2 - also a dictionary
# more attributes here

# add as many objects with as many characteristics as you like.

# Committing changes; up until this point things can be rolled back
transaction.get().commit()
# transaction.get().abort()  # abort() instead of commit() would discard the changes
connection.close()
db.close()
storage.close()

Once the database is created, it's very easy to use. Since it's an object database (a dictionary), you can access objects very easily:

#after it's opened (lines from the very beginning, up to and including root = connection.root() )
>> root['Vehicle']['Tesla Model S']['range']
'208 miles'

You can also display all of the keys (and do all other standard dictionary things you might want to do):

>> root['Vehicle']['Tesla Model S'].keys()
['acceleration', 'range', 'battery_options', 'base_price']

The last thing I want to mention is that keys can be changed (see, for example, "Changing the key value in a Python dictionary" on Stack Overflow). Values can also be changed, so if your research results change because you change your method or something, you don't have to rebuild the entire database from scratch (especially if everything else is still okay). Be careful with both of these; I put safety measures in my database code to make sure I'm aware of my attempts to overwrite keys or values.

** ADDED **

# added imports
import numpy as np
from tempfile import TemporaryFile
outfile = TemporaryFile()

# insert into definition/population section
np.save(outfile, np.linspace(-1, 1, 10000))
root['Vehicle']['Tesla Model S']['arraydata'] = outfile

# check to see if it worked
>>> root['Vehicle']['Tesla Model S']['arraydata']
<open file '<fdopen>', mode 'w+b' at 0x2693db0>

outfile.seek(0)  # simulate closing and re-opening
A = np.load(root['Vehicle']['Tesla Model S']['arraydata'])

>>> A
array([-1.        , -0.99979998, -0.99959996, ...,  0.99959996,
        0.99979998,  1.        ])

You could also use numpy.savez() to store multiple numpy arrays in this exact same way (or numpy.savez_compressed() if you want them compressed).
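
A quick sketch of that variant (a hypothetical illustration, independent of the ZODB code above):

import numpy as np
from tempfile import TemporaryFile

multifile = TemporaryFile()
np.savez_compressed(multifile, time=np.linspace(0, 1, 1000), signal=np.random.randn(1000))
multifile.seek(0)                 # rewind before reading back
arrays = np.load(multifile)       # NpzFile; behaves like a dict of arrays
print(arrays['signal'].shape)     # (1000,)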




Answer 5:


This is not a direct answer, but you may also be interested in JSON. Have a look at section 13.10, "Serializing Datatypes Unsupported by JSON". It shows how to extend the format for unsupported types.

The whole chapter from "Dive Into Python 3" by Mark Pilgrim is definitely a good read, if only to know what is possible.
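
As a rough illustration of the idea (a minimal sketch of my own, not the chapter's code), a custom encoder/decoder pair that round-trips numpy arrays through JSON might look like this:

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """Encode numpy arrays as tagged dicts so they survive a JSON round trip."""
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return {'__ndarray__': obj.tolist(), 'dtype': str(obj.dtype)}
        return json.JSONEncoder.default(self, obj)

def numpy_hook(d):
    """Turn the tagged dicts produced above back into numpy arrays."""
    if '__ndarray__' in d:
        return np.array(d['__ndarray__'], dtype=d['dtype'])
    return d

data = {'params': {'alpha': 0.1}, 'weights': np.arange(6).reshape(2, 3)}
text = json.dumps(data, cls=NumpyEncoder)             # human-readable, but not compact
restored = json.loads(text, object_hook=numpy_hook)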

Update: Possibly an unrelated idea, but... I have read somewhere that one of the reasons XML was eventually adopted for data exchange in heterogeneous environments was a study comparing a specialized binary format with zipped XML. The conclusion for you could be to use a solution that is possibly not so space-efficient and compress it with zip or another well-known algorithm. Using a known algorithm helps when you need to debug (unzip and look at the text file by eye).
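
A quick sketch of that compression idea with gzip on top of plain JSON (the file name is just an example):

import gzip
import json

data = {'params': {'alpha': 0.1}, 'labels': ['a', 'b', 'c']}

with gzip.open('data.json.gz', 'wt') as fh:   # write compressed but still text-based JSON
    json.dump(data, fh)

with gzip.open('data.json.gz', 'rt') as fh:   # gunzip + a text editor also works for debugging
    restored = json.load(fh)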



Source: https://stackoverflow.com/questions/18071075/saving-dictionaries-to-file-numpy-and-python-2-3-friendly
