iostream

Check if all values were successfully read from std::istream

若如初见. Submitted on 2019-12-05 06:43:43
Let's say I have a file that contains "100 text". If I try reading 2 numbers using ifstream, it would fail because "text" is not a number. Using fscanf I'll know it failed by checking its return code: if (2 != fscanf(f, "%d %d", &a, &b)) printf("failed"); But when using iostream instead of stdio, how do I know it failed? It's actually as simple (if not more so): ifstream ifs(filename); int a, b; if (!(ifs >> a >> b)) cerr << "failed"; Get used to that form, by the way, as it comes in very handy (even more so for continuing positive progression through loops). If one's using GCC with -std=c++11 or -std=c+
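The check works because operator>> returns a reference to the stream, and a stream converts to false once any extraction in the chain has failed. A minimal self-contained sketch (the file name and its contents are made up for illustration):

```cpp
#include <fstream>
#include <iostream>

int main() {
    std::ifstream ifs("input.txt"); // hypothetical file containing "100 text"
    int a, b;
    if (!(ifs >> a >> b)) {         // false if either extraction failed
        std::cerr << "failed\n";
        return 1;
    }
    std::cout << a << ' ' << b << '\n';
}
```

The same idiom drives read loops: while (ifs >> a) keeps going until an extraction fails or the file ends.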

Change or check the openmode of a std::ofstream

心已入冬 Submitted on 2019-12-05 06:40:14
In some code that does a lot of file I/O using std::ofstream, I'm caching the stream for efficiency. However, sometimes I need to change the openmode of the file (e.g. append vs. truncate). Here is some similar mock code:

class Logger {
public:
    void write(const std::string& str, std::ios_base::openmode mode) {
        if (!myStream.is_open()) myStream.open(path.c_str(), mode);
        /* Want:
        if (myStream.mode != mode) {
            myStream.close();
            myStream.open(path.c_str(), mode);
        } */
        myStream << str;
    }
private:
    std::ofstream myStream;
    std::string path = "/foo/bar/baz";
};

Does anyone know if: There is a way to change
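Standard streams do not expose the openmode they were opened with, so a common workaround is to cache the mode yourself and reopen only when it changes. A minimal sketch of that idea (the myMode member is my addition, not part of the original mock code):

```cpp
#include <fstream>
#include <string>

class Logger {
public:
    void write(const std::string& str, std::ios_base::openmode mode) {
        if (!myStream.is_open()) {
            myStream.open(path.c_str(), mode);
            myMode = mode;
        } else if (myMode != mode) {   // mode changed: reopen with the new one
            myStream.close();
            myStream.open(path.c_str(), mode);
            myMode = mode;
        }
        myStream << str;
    }

private:
    std::ofstream myStream;
    std::ios_base::openmode myMode{};
    std::string path = "/foo/bar/baz";
};
```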

Inserters and Extractors reading/writing binary data vs text

≡放荡痞女 Submitted on 2019-12-05 06:14:51
I've been trying to read up on iostreams and understand them better. Occasionally I find it stressed that inserters ( << ) and extractors ( >> ) are meant to be used for textual serialization. It's stressed in a few places, but this article is a good example: http://spec.winprog.org/streams/ Outside of the <iostream> universe, there are cases where << and >> are used in a stream-like way yet do not obey any textual convention. For instance, they write binary-encoded data when used with Qt's QDataStream : http://doc.qt.nokia.com/latest/qdatastream.html#details At the language level, the << and >>
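The distinction is easy to see with a plain std::ofstream: operator<< formats a value as characters, while ostream::write() copies raw bytes. A small sketch (file names are placeholders):

```cpp
#include <fstream>

int main() {
    int value = 1000;

    std::ofstream text("value.txt");
    text << value;  // textual: writes the four characters "1000"

    std::ofstream bin("value.bin", std::ios::binary);
    bin.write(reinterpret_cast<const char*>(&value), sizeof value);
    // binary: writes sizeof(int) raw bytes, e.g. e8 03 00 00 on a
    // little-endian machine
}
```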

Why is std::endl generating this cryptic error message?

和自甴很熟 Submitted on 2019-12-05 03:30:09
If I try to compile the following code I get the following compiler error (see code). It compiles without error if std::endl is removed.

#include <iostream>
#include <sstream>
#include <utility>

namespace detail {
    template <class T>
    void print(std::ostream& stream, const T& item) {
        stream << item;
    }

    template <class Head, class... Tail>
    void print(std::ostream& stream, const Head& head, Tail&&... tail) {
        detail::print(stream, head);
        detail::print(stream, std::forward<Tail>(tail)...);
    }
}

template <class... Args>
void print(std::ostream& stream, Args&&... args) //note: candidate function not
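Although the error message is cut off in the excerpt, the usual cause is that std::endl is a function template: a deduced parameter pack has no way to pick which specialization is meant, so overload resolution finds no viable candidate. Forcing a concrete type is a common workaround; a sketch (using a C++17 fold instead of the recursive helper, purely for brevity):

```cpp
#include <iostream>
#include <utility>

template <class... Args>
void print(std::ostream& stream, Args&&... args) {
    (stream << ... << std::forward<Args>(args));
}

int main() {
    // Error: std::endl names a whole family of specializations, so Args
    // cannot be deduced from it.
    // print(std::cout, "hello", std::endl);

    // OK: explicitly select the specialization for std::ostream.
    print(std::cout, "hello",
          static_cast<std::ostream& (*)(std::ostream&)>(std::endl));

    // Or simply pass '\n' when no flush is needed.
    print(std::cout, "hello", '\n');
}
```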

How to set maximum read length for a stream in C++?

六眼飞鱼酱① Submitted on 2019-12-05 01:19:37
Question: I'm reading data from a stream into a char array of a given length, and I'd like to cap the maximum width of a read so that it always fits in that char array. The reason I use a char array is that part of my specification is that the length of any individual token cannot exceed a certain value, so I'm saving myself some constructor calls. I thought width() did what I wanted, but I was apparently wrong... EDIT: I'm using the stream extraction operators to perform the extraction, since these
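For extraction into a char array, std::setw (which sets the same stream width that width() does) is the usual tool: operator>> for character arrays reads at most width - 1 characters and null-terminates. A minimal sketch:

```cpp
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    std::istringstream in("supercalifragilistic tok2");
    char buf[8];

    // setw(8) limits each extraction to at most 7 characters plus the
    // terminating '\0', so buf cannot overflow. The width resets to 0
    // after every extraction, so setw must be reapplied each time.
    while (in >> std::setw(sizeof buf) >> buf)
        std::cout << buf << '\n';
}
```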

Fastest way to create large file in C++?

北城余情 Submitted on 2019-12-05 00:48:52
Question: Create a flat text file in C++, around 50-100 MB; the content 'Added first line' should be inserted into the file 4 million times. Answer 1: Using old-style file I/O: fopen the file for write; fseek to the desired file size - 1; fwrite a single byte; fclose the file. Answer 2: The fastest way to create a file of a certain size is to simply create a zero-length file using creat() or open() and then change the size using chsize() . This will simply allocate blocks on the disk for the file, the
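A sketch of answer 1's seek-and-write trick (the size and file name are placeholders). Note that it only reserves the file at the requested size; it does not actually write the 4 million content lines:

```cpp
#include <cstdio>

int main() {
    const long size = 100L * 1024 * 1024;       // ~100 MB, adjust as needed

    std::FILE* f = std::fopen("big.txt", "wb"); // open the file for write
    if (!f) return 1;
    std::fseek(f, size - 1, SEEK_SET);          // seek to desired size - 1
    const char byte = '\0';
    std::fwrite(&byte, 1, 1, f);                // write a single byte
    std::fclose(f);                             // file is now `size` bytes long
}
```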

gcc: Strip unused functions

↘锁芯ラ Submitted on 2019-12-04 22:43:43
I noticed that sometimes, even if I don't use iostream and related I/O libraries, my binaries produced by MinGW were still unreasonably large. For example, I wrote a program that uses vector and cstdio only and compiled it with -O2 -flto ; the binary was as large as 2MB! I ran nm main.exe > e.txt and was shocked to see all the iostream-related functions in it. After some googling, I learnt to use -ffunction-sections -Wl,-gc-sections , which reduces the program size from 2MB to ~300KB (and with -s , to 100+KB). Excellent! To further test the effect of -ffunction-sections -Wl,-gc-sections , here is
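For reference, a typical invocation combining the flags mentioned above (file names are placeholders; -fdata-sections is my addition, commonly paired with -ffunction-sections so unused data can be discarded too):

```
g++ -O2 -flto -ffunction-sections -fdata-sections -Wl,--gc-sections -s main.cpp -o main.exe
```

-ffunction-sections places each function in its own section, -Wl,--gc-sections tells the linker to discard sections nothing references, and -s strips symbol information from the final binary.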

seekg() failing mysteriously

五迷三道 Submitted on 2019-12-04 21:38:44
I have a 2884765579-byte file. This is double-checked with this function, which returns that number:

size_t GetSize() {
    const size_t current_position = mFile.tellg();
    mFile.seekg(0, std::ios::end);
    const size_t ret = mFile.tellg();
    mFile.seekg(current_position);
    return ret;
}

I then do:

mFile.seekg(pos, std::ios::beg); // pos = 2883426827, which is < the file size, 2884765579

This sets the failbit. errno is not changed. What steps can I take to troubleshoot this? I am absolutely sure that: The file size is really 2884765579; pos is really 2883426827; The failbit is not set before .seekg()
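Both numbers are larger than 2^31 - 1, so one thing worth checking is whether the toolchain's std::streamoff (and the size_t used above) can actually represent offsets past 2 GB; on some 32-bit toolchains they cannot, and the seek fails even though the position exists in the file. A quick diagnostic sketch (the file name is a placeholder):

```cpp
#include <fstream>
#include <iostream>

int main() {
    // If streamoff is only 4 bytes, offsets beyond 2^31 - 1 overflow and
    // seekg() sets failbit regardless of the actual file size.
    std::cout << "sizeof(std::streamoff) = " << sizeof(std::streamoff) << '\n';

    std::ifstream file("big.dat", std::ios::binary);
    file.seekg(2883426827LL, std::ios::beg);
    std::cout << "failbit after seekg: " << file.fail() << '\n';
}
```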

read part of a file with iostreams

谁说胖子不能爱 Submitted on 2019-12-04 21:00:30
Question: Can I open an ifstream (or set up an existing one in any way) to read only part of a file? For example, I would like to have my ifstream read a file from byte 10 to 50. Seeking to position 0 would be position 10 in reality, reading past position 40 (50 in reality) would result in an EOF , etc. Is this possible in any way? Answer 1: It definitely can be done by implementing a filtering stream buffer: you would derive from std::streambuf and take the range you want to expose and the underlying stream
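A minimal sketch of that filtering-streambuf idea: the class below exposes only the byte range [begin, end) of an underlying buffer (the names and the single-character get area are illustrative choices, not from the original answer):

```cpp
#include <fstream>
#include <iostream>
#include <streambuf>

class RangeBuf : public std::streambuf {
public:
    RangeBuf(std::streambuf* src, std::streamoff begin, std::streamoff end)
        : src_(src), end_(end) {
        pos_ = src_->pubseekpos(begin, std::ios::in); // jump to range start
    }

protected:
    int_type underflow() override {
        if (pos_ >= end_)                        // clip reads at the range end
            return traits_type::eof();
        const int_type c = src_->sbumpc();       // pull one byte from the source
        if (traits_type::eq_int_type(c, traits_type::eof()))
            return c;
        ++pos_;
        buf_ = traits_type::to_char_type(c);
        setg(&buf_, &buf_, &buf_ + 1);           // expose it as the get area
        return c;
    }

private:
    std::streambuf* src_;
    std::streamoff pos_ = 0;
    std::streamoff end_;
    char buf_ = 0;
};

int main() {
    std::ifstream file("data.txt", std::ios::binary); // hypothetical file
    RangeBuf range(file.rdbuf(), 10, 50);             // bytes 10..49 only
    std::istream in(&range);
    std::cout << in.rdbuf() << '\n';                  // prints just that slice
}
```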

Cross-platform (linux/Win32) nonblocking C++ IO on stdin/stdout/stderr

我的未来我决定 Submitted on 2019-12-04 20:28:57
Question: I'm trying to find the best solution for nonblocking IO via stdin/stdout with the following characteristics: As long as there is enough data, read in n-sized chunks. If there's not enough data, read in a partial chunk. If there is no data available, block until there is some (even though it may be smaller than n). The goal is to allow efficient transfer of large datasets while processing 'control' codes immediately (instead of having them linger in some partially-filled buffer somewhere).
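A minimal POSIX-only sketch of that behaviour, using poll() plus read(): poll() blocks until at least one byte is available, and read() then returns whatever is there, up to n. This covers the Linux side only; a Win32 console or pipe needs a different mechanism (e.g. PeekNamedPipe / WaitForSingleObject):

```cpp
#include <poll.h>
#include <unistd.h>
#include <cstdio>

ssize_t read_chunk(char* buf, size_t n) {
    pollfd pfd = {STDIN_FILENO, POLLIN, 0};
    if (poll(&pfd, 1, -1) <= 0)        // block until some data (or error)
        return -1;
    return read(STDIN_FILENO, buf, n); // returns what is available, <= n
}

int main() {
    char buf[4096];
    ssize_t got;
    while ((got = read_chunk(buf, sizeof buf)) > 0)
        std::fwrite(buf, 1, got, stdout); // process each chunk as it arrives
}
```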