iostream and large file support


Question


I'm trying to find a definitive answer and can't, so I'm hoping someone might know.

I'm developing a C++ app using GCC 4.x on Linux (32-bit OS). This app needs to be able to read files > 2GB in size.

I would really like to use iostreams rather than FILE pointers, but I can't find out whether the large-file #defines (_LARGEFILE_SOURCE, _LARGEFILE64_SOURCE, _FILE_OFFSET_BITS=64) have any effect on the iostream headers.

I'm compiling on a 32-bit system. Any pointers would be helpful.


Answer 1:


This has already been decided for you when libstdc++ was compiled, and normally depends on whether or not _GLIBCXX_USE_LFS was defined in c++config.h.
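If you want to check at compile time, here is a minimal sketch, assuming your libstdc++ exposes the internal _GLIBCXX_USE_LFS macro through <bits/c++config.h>, which every standard header pulls in:

#include <iostream>   // any standard header includes <bits/c++config.h>

int main()
{
#ifdef _GLIBCXX_USE_LFS
  std::cout << "libstdc++ was built with large file support\n";
#else
  std::cout << "libstdc++ was built without large file support\n";
#endif
  return 0;
}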

If in doubt, pass your executable (or libstdc++.so, if you link against it dynamically) through readelf -r (or through strings) and check whether your binary/libstdc++ references the plain fopen/lseek/etc. symbols or their 64-bit counterparts (fopen64/lseek64/etc.).

UPDATE

You don't have to worry about the 2GB limit as long as you don't need (or attempt) to fseek or ftell; you just read from or write to the stream.
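For illustration, a minimal sketch of that purely sequential pattern, with no seekg()/tellg() calls ("input.dat" is a placeholder name, and the initial open must still succeed for the file in question):

#include <fstream>
#include <iostream>

int main()
{
  std::ifstream in("input.dat", std::ios::binary);
  char buf[64 * 1024];
  unsigned long long total = 0;

  // Stream through the file in fixed-size chunks; the stream position
  // is never queried or moved explicitly.
  while (in.read(buf, sizeof buf) || in.gcount() > 0)
    total += static_cast<unsigned long long>(in.gcount());

  std::cout << "read " << total << " bytes\n";
  return 0;
}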




Answer 2:


If you are using GCC, you can take advantage of a GCC extension called __gnu_cxx::stdio_filebuf, which ties an IOStream to a C stdio FILE* (or to a POSIX file descriptor).

You need to define the following two macros (for example with -D on the compiler command line, or before any #include):

_LARGEFILE_SOURCE

_FILE_OFFSET_BITS=64

For example:

#include <cstdio>
#include <fstream>
#include <ext/stdio_filebuf.h>

// Build with -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
// (or define both macros before any #include).

int main()
{
  std::ofstream outstream;
  FILE* outfile;

  outfile = fopen("bigfile", "w");

  // Wrap the C stdio stream in a GCC-specific stream buffer.
  __gnu_cxx::stdio_filebuf<char> fdbuf(outfile, std::ios::out |
                                       std::ios::binary);
  // Explicitly call the std::ios base-class setter so the ofstream
  // uses the stdio-backed buffer instead of its own filebuf.
  outstream.std::ios::rdbuf(&fdbuf);

  // This loop writes far more than 2 GB; reduce the bound for a quick test.
  for (double i = 0; i <= 786432000000.0; i++) {
    outstream << "some data";
  }

  outstream.flush();  // push any buffered data to the FILE* before closing it
  fclose(outfile);
  return 0;
}
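If you start from a POSIX file descriptor rather than a FILE*, stdio_filebuf also has a constructor taking an int descriptor; a minimal sketch (the file name and open flags are illustrative):

#include <fcntl.h>
#include <ostream>
#include <ext/stdio_filebuf.h>

int main()
{
  // "bigfile" is just a placeholder name.
  int fd = ::open("bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0)
    return 1;

  // Per the libstdc++ documentation, this constructor adopts the
  // descriptor and closes it when fdbuf is destroyed.
  __gnu_cxx::stdio_filebuf<char> fdbuf(fd, std::ios::out | std::ios::binary);
  std::ostream outstream(&fdbuf);

  outstream << "some data";
  outstream.flush();
  return 0;
}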



Source: https://stackoverflow.com/questions/660667/iostream-and-large-file-support
