Dumping memory to file

囚心锁ツ 2021-01-05 11:57

I've got a part of my memory that I want to dump to a file. One reason is to save the information somewhere, and another is to read it again when my program restarts.

5 Answers
  • 2021-01-05 12:10

    This problem is called "serializing" and can range from trivial to really complicated. If your data structure is self-contained, for instance a bunch of pixels in an array whose dimensions you know, you can just dump the data out and then read it back.

    If you have, for instance, linked lists or pointers of any kind in your data, then those pointers will not point to anything valid once you read them back. This is where a more formal approach to serializing starts to make sense.

    This can range from saving in standard file formats, to using a database, to converting to XML or another hierarchical format, and so on. What counts as an acceptable solution depends entirely on what kind of data you have, what operations you perform on it, and how often you plan to write it out and read it back from disk (or the network, or whatever you are doing).

    If what you have is a trivial blob of data, and you just want to write it out the simplest way possible, use fwrite():

    fwrite(my_pointer, MEMORY_SIZE, 1, fp);
    

    and then fread() to read the data back, as in the sketch below. Also see a related serializing question on Stack Overflow (more or less relevant depending on how advanced your needs are).
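
    For illustration, a minimal sketch of that round trip (the pixel buffer, its size, and the file name "dump.bin" are made-up names, not anything from the question):

    #include <stdio.h>
    #include <stdint.h>

    #define PIXEL_COUNT 1024   /* assumed, known array size */

    /* Dump the raw array to disk in binary mode. */
    int save_pixels(const uint32_t *pixels) {
        FILE *fp = fopen("dump.bin", "wb");
        if (!fp) return -1;
        size_t n = fwrite(pixels, sizeof(uint32_t), PIXEL_COUNT, fp);
        fclose(fp);
        return n == PIXEL_COUNT ? 0 : -1;
    }

    /* Read the same bytes straight back into the array. */
    int load_pixels(uint32_t *pixels) {
        FILE *fp = fopen("dump.bin", "rb");
        if (!fp) return -1;
        size_t n = fread(pixels, sizeof(uint32_t), PIXEL_COUNT, fp);
        fclose(fp);
        return n == PIXEL_COUNT ? 0 : -1;
    }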

    Proper serialization also solves the problems that appear when different kinds of CPUs are supposed to be able to read each other's data. Proper serialization in C is a lot more complicated than in other languages. In Lisp, for instance, all data and code is already serialized. In Java, there are methods to help you serialize your data. The properties of C that make it a suitable language for high-performance and systems programming also make it tougher to use for some other things.

  • 2021-01-05 12:11

    You can use the

    size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );
    

    function.

    ptr - pointer to your memory segment.
    size - size, in bytes, of each element to write.
    count - number of elements to write (for a single blob, pass the total size and a count of 1).
    stream - the file you are writing to.
    

    Will my "data structures" survive as "characters", or do I need to write them in some sort of binary/hex data mode? Or is there a standard way of doing this?

    When you open the file, use the 'b' character in the "mode" parameter (e.g. "wb" for writing, "rb" for reading), so the data is written and read as raw bytes rather than text.
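
    For instance, a hedged sketch of dumping a plain struct with no pointers in binary mode (the struct, its fields, and the file name are invented for illustration):

    #include <stdio.h>

    struct settings {      /* any plain, pointer-free data */
        int width;
        int height;
        double scale;
    };

    /* The bytes go out exactly as they sit in memory, not as text,
       because the file is opened with the 'b' flag. */
    int save_settings(const struct settings *s) {
        FILE *fp = fopen("settings.bin", "wb");
        if (!fp) return -1;
        size_t n = fwrite(s, sizeof *s, 1, fp);
        fclose(fp);
        return n == 1 ? 0 : -1;
    }

    int load_settings(struct settings *s) {
        FILE *fp = fopen("settings.bin", "rb");
        if (!fp) return -1;
        size_t n = fread(s, sizeof *s, 1, fp);
        fclose(fp);
        return n == 1 ? 0 : -1;
    }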

  • 2021-01-05 12:12

    If you're on a Unix-style system, mmap() and memcpy() might give you a neat solution.
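
    A rough sketch of that idea on a POSIX system (the region pointer, its size, and the file name are placeholders, and error handling is minimal):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int dump_region(const void *region, size_t region_size) {
        int fd = open("dump.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (ftruncate(fd, (off_t)region_size) != 0) { close(fd); return -1; }

        /* Map the file, copy the memory region into the mapping,
           flush it to disk, and clean up. */
        void *map = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { close(fd); return -1; }

        memcpy(map, region, region_size);
        msync(map, region_size, MS_SYNC);
        munmap(map, region_size);
        close(fd);
        return 0;
    }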

  • 2021-01-05 12:12

    The proper way of doing that is to use a serialisation library.

    Whether you really need that depends on the complexity of your data. If the data you need to write out does not contain pointers of any kind, then you can just use fwrite to write the data out and fread to read it back in. Just make sure you open the file in binary mode.

    If the data to serialise contains pointers, you are much better off using an external library written for this purpose, as the library will ensure that the pointers are written in such a way that they can be properly reconstructed later.

  • 2021-01-05 12:36

    As long as the data you are dumping out contains no pointers, just dumping it out like that will work. (HINT: Use the calls that can write long sequences of data all in one go to cut time.) The only thing to watch out for is if you're writing out integers or floating point numbers and reading them back in on a machine with a different architecture (e.g., big endian instead of little endian). That might or might not be a concern for you.
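
    If byte order does matter, one hedged way around it for integers is to write each value in a fixed byte order yourself rather than dumping the in-memory representation, e.g.:

    #include <stdint.h>
    #include <stdio.h>

    /* Write/read a 32-bit value in little-endian order regardless of
       the host CPU's native byte order, so a machine with a different
       architecture reads back the same number. */
    int write_u32_le(uint32_t v, FILE *fp) {
        unsigned char b[4] = {
            (unsigned char)(v & 0xFF),
            (unsigned char)((v >> 8) & 0xFF),
            (unsigned char)((v >> 16) & 0xFF),
            (unsigned char)((v >> 24) & 0xFF),
        };
        return fwrite(b, 1, 4, fp) == 4 ? 0 : -1;
    }

    uint32_t read_u32_le(FILE *fp) {
        unsigned char b[4] = {0};
        fread(b, 1, 4, fp);
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8)
             | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }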

    But if you've got pointers inside, you've got a problem. The problem is that you cannot (well, cannot easily) guarantee that you'll get the data loaded back at the same position in the receiving process's virtual memory space. What's more, if you have data that has pointers to things that you're not saving (e.g., a stray FILE*) then you've got to think about what to do to resynthesize a valid replacement at that point. Such serialization is deeply non-trivial, and requires writing code that has knowledge of exactly what you're saving and loading.

    There is a way to simplify serialization a little when you've only got pointers within the contiguous data being saved and are always going to restore on the same architecture. Dump out the memory as before, but put a prefix descriptor on it that says at least the length of the data and the number of pointers within, and then also save (at the end) a table of exactly where (as offsets within the data) the pointers are and where the start of all the data was. You can then restore by reading the data in and performing address arithmetic to correct all the pointers, i.e., you can work out what offset relative to the start of the original data they were pointing to (as a char*, not the original type) and make sure that they point to the same offset relative to the address of the whole data after reloading. This is a somewhat gross hack and is formally not the most portable thing ever, but within the constraints outlined at the beginning of this paragraph I'd expect it to work. However, you'll also have a really non-portable serialization format; do not count on it at all for any sort of persistent archival use!
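
    As a rough illustration of that scheme only (it assumes 8-byte pointers, the same-architecture restriction above, and invented names throughout):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct blob_header {
        uint64_t data_len;   /* length of the raw data in bytes      */
        uint64_t nptrs;      /* number of pointer fields inside it   */
        uint64_t orig_base;  /* original address of the data's start */
    };

    /* ptr_offsets[i] is the byte offset, within the data, of the i-th
       pointer field; every such pointer must point inside the data. */
    int save_blob(FILE *fp, const void *data, uint64_t len,
                  const uint64_t *ptr_offsets, uint64_t nptrs) {
        struct blob_header h = { len, nptrs, (uint64_t)(uintptr_t)data };
        if (fwrite(&h, sizeof h, 1, fp) != 1) return -1;
        if (fwrite(data, 1, len, fp) != len) return -1;
        if (fwrite(ptr_offsets, sizeof *ptr_offsets, nptrs, fp) != nptrs)
            return -1;
        return 0;
    }

    /* Reload the blob and patch every recorded pointer so that it points
       at the same offset within the newly allocated copy. */
    void *load_blob(FILE *fp) {
        struct blob_header h;
        if (fread(&h, sizeof h, 1, fp) != 1) return NULL;

        char *data = malloc(h.data_len);
        if (!data || fread(data, 1, h.data_len, fp) != h.data_len) {
            free(data);
            return NULL;
        }

        for (uint64_t i = 0; i < h.nptrs; i++) {
            uint64_t off, old_ptr;
            if (fread(&off, sizeof off, 1, fp) != 1) { free(data); return NULL; }
            memcpy(&old_ptr, data + off, sizeof old_ptr);   /* saved address   */
            uint64_t delta = old_ptr - h.orig_base;         /* offset in data  */
            uint64_t new_ptr = (uint64_t)(uintptr_t)(data + delta);
            memcpy(data + off, &new_ptr, sizeof new_ptr);   /* patched address */
        }
        return data;
    }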
