I don't know what to think here...
We have a component that runs as a service. It runs perfectly well on my local machine, but on some other machine (on both machines RA…
Another long shot: you don't say in which of the three operations the error occurs (construction, <<, or log), but the problem may be memory fragmentation, rather than memory consumption. Maybe stringstream can't find a contiguous memory block long enough to hold a couple of KB.
If this is the case, and if you exercise that function on the first day (without mishap), then you could make the stringstream a static variable and reuse it (see the sketch below). As far as I know, stringstream does not deallocate its buffer space during its lifetime, so if it establishes a big buffer on the first day it will continue to have it from then on (for added safety you could run a 5 KB dummy string through it when it is first constructed).
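A minimal sketch of that idea; the function name formatLogLine is made up for illustration, and whether str("") actually keeps the capacity is implementation-specific (the reuse rests on the assumption above):

#include <sstream>
#include <string>

std::string formatLogLine(const std::string& payload) {
    // Constructed once; primed with a 5 KB dummy string so the buffer is
    // grown up front, per the suggestion above. Note: a function-local
    // static is not safe for concurrent callers without extra locking.
    static std::stringstream ss = [] {
        std::stringstream s;
        s << std::string(5 * 1024, 'x');
        return s;
    }();
    ss.str("");   // drop the old contents (capacity retention is assumed)
    ss.clear();   // reset any error flags
    ss << payload;
    return ss.str();
}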
I fail to see why a stream would throw. Don't you have a dump of the failed process? Or perhaps attach a debugger to it to see what the allocator is trying to allocate?
But if you did overload operator <<, then perhaps your code does have a bug.
Just my 2 (euro) cts...
The memory could be fragmented.
At one moment, you try to allocate SIZE bytes, but the allocator finds no contiguous chunk of SIZE bytes in memory, and so it throws a bad_alloc.
Note: This answer was written before I read that this possibility had been ruled out.
Another possibility would be the use of a signed value for the size to be allocated:
char* p = new char[i];
If the value of i is negative (e.g. -1), the cast into the unsigned integral size_t will make the request go far beyond what is available to the memory allocator.
As the use of signed integral types is quite common in user code, if only as a sentinel for an invalid value (e.g. -1 for a failed search), this is a real possibility.
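A minimal sketch of that failure mode; the cast is written out explicitly here to show the mechanics (in the expression above the conversion happens implicitly):

#include <cstddef>
#include <iostream>

int main() {
    int i = -1;  // e.g. the result of a failed search
    std::size_t n = static_cast<std::size_t>(i);  // wraps around to SIZE_MAX
    try {
        char* p = new char[n];  // requests ~1.8e19 bytes on a 64-bit platform
        delete[] p;
    } catch (const std::bad_alloc& e) {
        // an implementation may throw bad_array_new_length instead, which
        // derives from bad_alloc and is caught here too
        std::cerr << "bad_alloc: " << e.what() << '\n';
    }
}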
bad_alloc doesn't necessarily mean there is not enough memory. The allocation functions might also fail because the heap is corrupted. You might have a buffer overrun, or code writing into deleted memory, etc.
You could also use Valgrind or one of its Windows replacements to find the leak/overrun.
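To illustrate the kind of bug meant here, a minimal sketch (the function name is made up); the damage typically surfaces much later, at an unrelated allocation:

#include <cstring>

void corruptHeap() {
    char* buf = new char[8];
    std::memset(buf, 'x', 9);  // one byte past the end: undefined behaviour
    delete[] buf;              // allocator bookkeeping may now be trashed;
                               // a later new can throw bad_alloc far from here
}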
Just a hunch, but I have had trouble in the past when allocating arrays like so:
int array1[SIZE]; // SIZE limited by COMPILER to the size of the stack frame
when SIZE is a large number.
The solution was to allocate with the new operator:
int* array2 = new int[SIZE]; // SIZE limited only by OS/Hardware
I found this very confusing; the reason turned out to be the stack frame, as discussed in Martin York's answer here: Is there a max array length limit in C++?
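A self-contained sketch of the contrast (the value of SIZE is an arbitrary illustration); std::vector is the idiomatic way to get the heap behaviour without a manual delete[]:

#include <cstddef>
#include <vector>

int main() {
    const std::size_t SIZE = 10 * 1000 * 1000;  // ~40 MB of ints

    // int array1[SIZE];            // would blow a typical 1-8 MB stack

    int* array2 = new int[SIZE];    // heap allocation: fine, but needs delete[]
    delete[] array2;

    std::vector<int> array3(SIZE);  // heap storage, freed automatically
}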
All the best,
Tom
One likely reason, given your description, is that you try to allocate a block of some unreasonably big size because of an error in your code. Something like this:
size_t numberOfElements; // uninitialized
if ( /* ... */ ) {
    numberOfElements = obtain();
}
elements = new Element[numberOfElements]; // may use a garbage value
Now if numberOfElements is left uninitialized, it can contain some unreasonably big number, and so you effectively try to allocate a block of, say, 3 GB, which the memory manager refuses to do.
So it may be not that your program is short on memory, but that it tries to allocate more memory than it could possibly be granted even under the best conditions.
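A sketch of what that looks like in practice; since reading an actually-uninitialized variable is undefined behaviour, a huge hard-coded size stands in for the garbage value here:

#include <cstddef>
#include <iostream>
#include <new>

int main() {
    std::size_t numberOfElements = static_cast<std::size_t>(-1) / 2;  // stand-in for garbage
    try {
        char* elements = new char[numberOfElements];
        delete[] elements;
    } catch (const std::bad_alloc& e) {
        std::cerr << "bad_alloc: " << e.what() << '\n';  // taken on any realistic machine
    }
}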
By way of example, memory leaks can occur when you use the new operator in C++ and forget to use the delete operator.
Or, in other words, when you allocate a block of memory and you forget to deallocate it.
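A minimal sketch of both the leak and the usual modern fix (std::make_unique is standard C++14; the function names are illustrative):

#include <memory>

void leaky() {
    int* p = new int[1000];
    // no delete[] p: the block is lost when p goes out of scope
}

void tidy() {
    auto p = std::make_unique<int[]>(1000);  // delete[] happens automatically
}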