How to read a binary file into a vector of unsigned chars

不思量自难忘° 2020-11-27 12:34

Lately I've been asked to write a function that reads a binary file into a std::vector<BYTE>, where BYTE is an unsigned char.

4 answers
  • 2020-11-27 13:05
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <vector>
    
    std::ifstream stream("mona-lisa.raw", std::ios::in | std::ios::binary);
    std::vector<uint8_t> contents((std::istreambuf_iterator<char>(stream)),
                                  std::istreambuf_iterator<char>());
    
    for(auto i: contents) {
        int value = i;
        std::cout << "data: " << value << std::endl;
    }
    
    std::cout << "file size: " << contents.size() << std::endl;
    
  • 2020-11-27 13:08

    Since you are loading the entire file into memory, the optimal approach is to map the file into memory. The kernel loads the file into its page cache anyway, and mapping the file simply exposes those cached pages to your process. This is also known as zero-copy.

    When you use std::vector<>, the data is copied from the kernel page cache into the std::vector<>, which is unnecessary when you just want to read the file.

    Also, when constructing std::vector<> from two input iterators, it repeatedly grows its buffer while reading because it cannot know the file size in advance. And when you resize the std::vector<> to the file size first, it needlessly zeroes out its contents, since they are about to be overwritten with the file data anyway. Both methods are sub-optimal in space and time.
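
    A minimal sketch of that approach on a POSIX system (the file name is illustrative and error handling is omitted):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    
    // Map the file read-only: the returned pointer aliases the kernel page cache,
    // so no copy into a user-space buffer is made.
    int fd = open("mona-lisa.raw", O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    void* mapped = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    const unsigned char* data = static_cast<const unsigned char*>(mapped);
    // ... read data[0] .. data[st.st_size - 1] ...
    munmap(mapped, st.st_size);
    close(fd);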

  • 2020-11-27 13:13

    I would have thought that the first method, using the size and stream::read(), would be the most efficient. The "cost" of casting to char * is most likely zero: a cast of this kind simply tells the compiler "I know you think this is a different type, but I really want this type here", and adds no extra instructions. If you wish to confirm this, try reading the file into a char array and compare the actual assembler code. Aside from a little extra work to figure out the address of the buffer inside the vector, there shouldn't be any difference.

    As always, the only way to tell for sure what is most efficient IN YOUR CASE is to measure it. "Asking on the internet" is not proof.
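
    For reference, a sketch of that resize-then-read approach (the original question's code is not shown here, so the names are illustrative):

    #include <fstream>
    #include <vector>
    
    std::vector<unsigned char> readWholeFile(const char* filename)
    {
        // Open at the end so tellg() reports the file size.
        std::ifstream file(filename, std::ios::binary | std::ios::ate);
        std::streamsize size = file.tellg();
        file.seekg(0, std::ios::beg);
    
        // Sized construction value-initializes (zeroes) the elements,
        // which the single read() below then overwrites.
        std::vector<unsigned char> buffer(size);
        file.read(reinterpret_cast<char*>(buffer.data()), size);
        return buffer;
    }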

  • 2020-11-27 13:14

    When testing for performance, I would include a test case for:

    #include <algorithm>
    #include <fstream>
    #include <iterator>
    #include <vector>
    
    typedef unsigned char BYTE;
    
    std::vector<BYTE> readFile(const char* filename)
    {
        // open the file:
        std::ifstream file(filename, std::ios::binary);
    
        // Stop eating new lines in binary mode!!!
        file.unsetf(std::ios::skipws);
    
        // get its size:
        std::streampos fileSize;
    
        file.seekg(0, std::ios::end);
        fileSize = file.tellg();
        file.seekg(0, std::ios::beg);
    
        // reserve capacity
        std::vector<BYTE> vec;
        vec.reserve(fileSize);
    
        // read the data:
        vec.insert(vec.begin(),
                   std::istream_iterator<BYTE>(file),
                   std::istream_iterator<BYTE>());
    
        return vec;
    }
    

    My thinking is that the constructor of Method 1 touches the elements in the vector, and then the read touches each element again.

    Method 2 and Method 3 look the most promising, but could suffer one or more resizes. Hence the reason to reserve before reading or inserting.

    I would also test with std::copy:

    ...
    std::vector<BYTE> vec;
    vec.reserve(fileSize);
    
    std::copy(std::istream_iterator<BYTE>(file),
              std::istream_iterator<BYTE>(),
              std::back_inserter(vec));
    

    In the end, I think the best solution will avoid operator>> from istream_iterator (and all the overhead and goodness of operator>> trying to interpret binary data). But I don't know what to use that allows you to directly copy the data into the vector.
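
    One candidate worth benchmarking for that direct copy is std::istreambuf_iterator (used in the first answer above): it pulls raw characters straight from the stream buffer and bypasses operator>> formatting entirely. A sketch, reusing fileSize from the function above:

    std::vector<BYTE> vec;
    vec.reserve(fileSize);
    
    std::copy(std::istreambuf_iterator<char>(file),
              std::istreambuf_iterator<char>(),
              std::back_inserter(vec));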

    Finally, my testing with binary data shows that ios::binary alone is not enough: formatted extraction through istream_iterator still skips whitespace. Hence the unsetf(std::ios::skipws) call above (equivalently, the std::noskipws manipulator, declared in <ios>).
