Data structure/algorithm for variable length record storage and lookup on disk with search only on primary keys

感动是毒 2021-02-10 01:05

I am looking for an algorithm / data structure that works well for large block based devices (e.g. a mechanical hard drive) which is optimised for insert, get, update and delete, with search only on the primary key.

5 Answers
  • 2021-02-10 01:48

    Indexing variable-length-record files.

    Indexing variable-length record files may look like a daunting task at first, but it is really pretty straightforward once you identify the moving parts.

    To do this you should read your file in blocks of fixed size (e.g. 128, 256 or 512 bytes). Your records should also have an easily identifiable end-of-record character, meaning this character cannot appear as a regular character inside your records.

    Next, you scan your file looking for the beginning of each record, creating an index file with the following structure:

    key, 0, 0
    ........
    ........
    key, block, offset
    

    Here key is the key (field) you are indexing your file on (it can be a composite one), block is the (0-based) block number the record starts at, and offset is the (0-based) offset of the beginning of the record from the beginning of the block. Depending on the block size you use, your records may span more than one block, so once you locate the beginning of a record you need to fetch as many consecutive blocks as necessary to retrieve the whole record.
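    A minimal sketch of this scanning pass, assuming a newline is the end-of-record character and the key is the record's first comma-separated field (both choices, and the names build_index, BLOCK_SIZE and EOR, are illustrative):

    ```python
    BLOCK_SIZE = 512    # fixed block size used for all reads
    EOR = b"\n"         # end-of-record marker; must never appear inside a record

    def build_index(path):
        """Scan the data file in fixed-size blocks and return a sorted list of
        (key, block, offset) entries, one per record."""
        index = []
        current = bytearray()     # bytes of the record currently being assembled
        start = (0, 0)            # (block, offset) where the current record began
        with open(path, "rb") as f:
            block_no = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                for offset, byte in enumerate(block):
                    if not current:
                        start = (block_no, offset)   # a new record starts here
                    if byte == EOR[0]:
                        key = bytes(current).split(b",", 1)[0]
                        index.append((key, start[0], start[1]))
                        current.clear()
                    else:
                        current.append(byte)
                block_no += 1
        index.sort()              # sort by key so the index can be binary-searched
        return index
    ```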

    It is perfectly possible to create multiple index files at the same time if you need to search for different criteria.

    Once you have created this index file, the next step is to sort it by the key field. Alternatively, you can devise an insertion-sort mechanism that keeps the index file sorted as it is being created.

    To retrieve a record by key, look the key up in the index file, fetch its block-offset pair, and use a file seek-like function to jump to that position in the data file. Binary search performs well in this scenario, but you can use any search method you like.
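    A matching lookup sketch, reusing BLOCK_SIZE, EOR and the sorted index from the scanning sketch above:

    ```python
    import bisect

    def fetch(path, index, key):
        """Binary-search the sorted (key, block, offset) index, seek to the
        record's start and read it, following it across block boundaries."""
        i = bisect.bisect_left(index, (key,))
        if i == len(index) or index[i][0] != key:
            return None
        _, block, offset = index[i]
        with open(path, "rb") as f:
            f.seek(block * BLOCK_SIZE + offset)
            record = bytearray()
            while True:
                chunk = f.read(BLOCK_SIZE)     # keep reading until EOR is found
                if not chunk:
                    break
                end = chunk.find(EOR)
                if end != -1:
                    record += chunk[:end]
                    break
                record += chunk
        return bytes(record)
    ```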

    This structure allows your database to accept record additions, deletions and updates. Additions are made at the end of the file, adding the new key to the index file. To delete a record, overwrite its first character with a unique marker such as 0x0 and delete its entry from the index file. Updates can be achieved by deleting and then adding the updated record at the end of the file.
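    The additions and deletions described above might look roughly like this (again reusing BLOCK_SIZE and EOR from the earlier sketches; an update is delete_record followed by append_record):

    ```python
    import bisect
    import os

    TOMBSTONE = b"\x00"    # marker written over the first byte of a deleted record

    def append_record(path, index, key, record):
        """Append a record to the data file and insert its key into the sorted
        index (the record must not contain the EOR byte)."""
        size = os.path.getsize(path)
        with open(path, "ab") as f:
            f.write(record + EOR)
        bisect.insort(index, (key, size // BLOCK_SIZE, size % BLOCK_SIZE))

    def delete_record(path, index, key):
        """Tombstone the record on disk and drop its index entry."""
        i = bisect.bisect_left(index, (key,))
        if i == len(index) or index[i][0] != key:
            return False
        _, block, offset = index[i]
        with open(path, "r+b") as f:
            f.seek(block * BLOCK_SIZE + offset)
            f.write(TOMBSTONE)
        del index[i]
        return True
    ```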

    If you plan to deal with very large files then using a B-Tree-like structure for your indexes may be of great help, as B-Tree indexes do not need to be loaded completely into memory. The B-Tree algorithm further divides the index file into pages which are then loaded into memory as needed.

  • 2021-02-10 01:51

    Best might be to use a commercial database engine.

    You might get rid of the O(log m) lookup of a B-tree by storing the index, i.e. the {"logical ID" maps to "physical location"} value pairs, in a hash map (hashing on the logical ID) ... or even storing the index in a contiguous vector (with the logical ID used as an index into the vector of offset values), as bdonlan suggested, if the ID values aren't sparse.
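    A rough sketch of the two index shapes, assuming records are addressed by an (offset, length) pair in a single data file (all names are illustrative):

    ```python
    # Hash-map index: logical ID -> (offset, length) of the record in the data file.
    index = {42: (0, 17), 43: (17, 230)}          # example contents

    def read_record(data_file, index, record_id):
        """O(1) lookup of a record by logical ID via the in-memory hash map."""
        offset, length = index[record_id]
        data_file.seek(offset)
        return data_file.read(length)

    # Dense alternative when IDs are small, non-sparse integers: a plain list
    # where position i holds the (offset, length) pair of record i.
    dense_index = [(0, 17), (17, 230)]            # dense_index[i] for record i
    ```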

    An important implementation detail is which API you use to access the index: whether you store it in RAM (which the O/S backs with the system page file) and access it in-process using pointers, and/or store it on disk (which the O/S caches in the file system cache) and access it using file I/O APIs.

  • 2021-02-10 01:55

    The easy way: Use something like Berkeley DB. It provides a key-value store for arbitrary byte strings, and does all the hard work for you. It even provides 'secondary databases' for indexing, if you want them.
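    For instance, assuming the bsddb3 Python bindings for Berkeley DB are available, basic usage looks roughly like this:

    ```python
    from bsddb3 import db     # Berkeley DB bindings; assumes bsddb3 is installed

    d = db.DB()
    d.open("records.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    d.put(b"user:42", b"an arbitrary variable-length payload")
    print(d.get(b"user:42"))
    d.close()
    ```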

    The do-it-yourself way: Use Protocol Buffers (or the binary format of your choice) to define B-Tree node and data item structures. Use an append-only file for your database. To write a new record or modify an existing record, you simply write the record itself to the end of the file, then write any modified B-Tree nodes (e.g. the record's parent node, that node's parent, and so forth up to the root). Then, write the location of the new root of the tree to the header block at the beginning of the file. To read the file, you simply find the most recent root node and read the B-Tree as you would in any other file. This approach has several advantages (a simplified sketch of the write path follows the list below):

    • Since written data is never modified, readers don't need to take locks, and get a 'snapshot' view of the DB based on the root node at the time they started reading.
    • By adding 'previous version' fields to your nodes and records, you get the ability to access previous versions of the DB essentially for free.
    • It's really easy to implement and debug compared to most on-disk file formats that support modification.
    • Compacting the database consists of simply reading out the latest version of the data and B-Tree and writing it to a new file.
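    A deliberately simplified sketch of the write path described above: to keep it short, the B-Tree is collapsed to a single flat JSON node and the 'previous version' fields are omitted, but the append-record / append-node / publish-new-root sequence is the same (all names are illustrative):

    ```python
    import json
    import os
    import struct

    HEADER_SIZE = 8    # fixed header holding the offset of the current root node

    def _write_header(f, root_offset):
        f.seek(0)
        f.write(struct.pack("<Q", root_offset))

    def _read_node(f, offset):
        f.seek(offset)
        (length,) = struct.unpack("<I", f.read(4))
        return json.loads(f.read(length))

    def put(path, key, value):
        """Append the record, append a new root node that includes it, then
        point the header at the new root.  Keys are assumed to be strings."""
        new_file = not os.path.exists(path)
        with open(path, "w+b" if new_file else "r+b") as f:
            if new_file:
                _write_header(f, 0)
                root = {}
            else:
                f.seek(0)
                (root_offset,) = struct.unpack("<Q", f.read(HEADER_SIZE))
                root = _read_node(f, root_offset) if root_offset else {}
            f.seek(0, os.SEEK_END)
            rec = value.encode()
            rec_offset = f.tell()
            f.write(struct.pack("<I", len(rec)) + rec)     # 1. append the record
            root[key] = rec_offset
            node = json.dumps(root).encode()
            node_offset = f.tell()
            f.write(struct.pack("<I", len(node)) + node)   # 2. append the new root
            _write_header(f, node_offset)                  # 3. publish the new root

    def get(path, key):
        """Read the current root node, then the record it points at."""
        with open(path, "rb") as f:
            (root_offset,) = struct.unpack("<Q", f.read(HEADER_SIZE))
            if not root_offset:
                return None
            root = _read_node(f, root_offset)
            if key not in root:
                return None
            f.seek(root[key])
            (length,) = struct.unpack("<I", f.read(4))
            return f.read(length).decode()
    ```

    Readers holding an older root offset keep seeing a consistent snapshot, since nothing already written is ever modified except the header.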
  • 2021-02-10 01:58

    If a database is too heavyweight for you, consider a key-value store.

    If you really want to implement it yourself, use a disk-based hash table or a B-tree. To avoid the problems with variable-length values, store the values in a separate file and use the B-tree as the index into that data file. Space reclamation after deleting values will be tricky, but it is possible (e.g. by tracking freed space in the data file with a bitset).
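    The answer suggests a bitset; an equivalent piece of bookkeeping is a small free-list of holes in the data file, sketched here with first-fit allocation (names are illustrative):

    ```python
    class DataFileSpace:
        """Minimal free-space tracker for the separate data file: regions freed
        by deletions are remembered and reused before the file is extended."""

        def __init__(self, file_size=0):
            self.file_size = file_size
            self.free = []                 # list of (offset, length) holes

        def free_region(self, offset, length):
            self.free.append((offset, length))

        def allocate(self, length):
            for i, (off, ln) in enumerate(self.free):
                if ln >= length:
                    if ln > length:        # keep the remainder of the hole
                        self.free[i] = (off + length, ln - length)
                    else:
                        del self.free[i]
                    return off
            off = self.file_size           # no hole fits: extend the file
            self.file_size += length
            return off
    ```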

  • 2021-02-10 02:03

    If your IDs are numbers and not very sparse, one option would be to use a simple table of (offset, length) in one file, referencing the data in another file. This would get you O(1) lookup, and updates/inserts/deletes bound only by your free-space tracking mechanism.
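    A sketch of that lookup, assuming each index entry is a fixed-size packed (offset, length) pair so that entry i sits at byte i * entry_size:

    ```python
    import struct

    ENTRY = struct.Struct("<QI")    # 8-byte offset + 4-byte length per record ID

    def lookup(index_file, data_file, record_id):
        """O(1) lookup: the index file is a flat array of fixed-size entries."""
        index_file.seek(record_id * ENTRY.size)
        offset, length = ENTRY.unpack(index_file.read(ENTRY.size))
        data_file.seek(offset)
        return data_file.read(length)
    ```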
