This has some lengthy background before the actual question, however, it bears some explaining to hopefully weed out some red herrings.
Our application, developed in
According to http://msdn.microsoft.com/en-us/library/1570wh78(v=VS.90).aspx, errno can take the values:
- EBADF
- ENOSPC
- EINVAL
There is no EINTR on Windows. On POSIX systems, random system interrupts cause that error, and it is exactly the condition caught by the retry test while (-1 == nbytes && EINTR == errno); — so that loop never fires here.
We had a very similar problem which we managed to reproduce quite easily. We first compiled the following program:
#include <stdlib.h>
#include <stdio.h>
#include <io.h>
#include <sys/stat.h>
#include <fcntl.h>
int main(int argc, char *argv[])
{
    int len = 70000000;
    int handle = creat(argv[1], S_IWRITE | S_IREAD);
    if (handle == -1)
        return 1;
    setmode(handle, _O_BINARY);
    void *buf = malloc(len);   /* contents are irrelevant for the repro */
    if (buf == NULL)
        return 1;              /* avoid passing NULL to write() */
    int byteswritten = write(handle, buf, len);
    if (byteswritten == len)
        printf("Write successful.\n");
    else
        printf("Write failed.\n");
    close(handle);
    return 0;
}
Now, let's say you are working on the computer mycomputer and that C:\inbox maps to a shared folder \\mycomputer\inbox. Then observe the following effect:
C:\>a.exe C:\inbox\x
Write successful.
C:\>a.exe \\mycomputer\inbox\x
Write failed.
Note that if len is changed to 60000000, there is no problem...
Based on this web page, http://support.microsoft.com/kb/899149, we think it is a "limitation of the operating system" (the same effect has been observed with fwrite). Our workaround is to cut the write into 63 MB pieces if it fails. This problem has apparently been corrected in Windows Vista.
I hope this helps! Simon
Two thoughts come to mind: either you are walking past the end of the buffer and trying to write that data out, or the allocation of the buffer failed. These are problems that, in debug mode, will not be as visible as they are in release mode.
It's probably a bad idea to allocate 250 MB of memory anyway. You'd do better to allocate a fixed-size buffer and do your writing in chunks.
Have you looked for things like Virus Scanners that might have a hold on the file in between your write operations, thus making the write fail?
I know of no limit to the amount of data you can pass to write in a single call, unless (like I said), you are writing data (as part of the buffer) that does not belong to you...
Since most of these functions wrap the kernel call WriteFile() (or NtWriteFile()), there COULD be a condition where there isn't enough kernel memory to handle the buffer being written. But of THAT I'm not certain, since I don't know WHEN exactly the code makes the jump from user mode to kernel mode.
Don't know if any of this will help, but hope it does...
If you can provide any more details, please do. Sometimes just telling someone about the problem will trigger your brain to go "Wait a minute!", and you'll figure it out. heh..
Did you look at the implementation of _write() in the CRT (C runtime) source that was installed with Visual Studio (C:\Program Files\Microsoft Visual Studio 8\VC\crt\src\write.c)?

There are at least two conditions that cause _write() to set errno to EINVAL:
- buffer is NULL, as you mentioned.
- The count parameter is odd when the file is opened in text mode in UTF-16 format (or UTF-8? the comments don't match the code). Is this a text or binary file? If it's text, does it have a byte order mark?

Is it possible that one of the other _write() calls also sets errno to EINVAL?

If you can reliably reproduce this problem, you should be able to narrow down the source of the error by putting breakpoints in the parts of the CRT source that set the error code. It also appears that the debug version of the CRT is capable of asserting when the error occurs, but it might require tweaking some options (I haven't tried it).
You could be trashing your own stack by accidentally misusing a pointer somewhere else. If you can find a repro machine, try running your app under Application Verifier with all the memory checks turned on.