There are certain conditions that can cause stack overflows on an x86 Linux system: for example, declaring a huge local array such as
struct my_big_object my_array[HUGE_NUMBER];
on the stack. Walking through it is one way to make the overflow happen.
You can determine the stack space the process has available by finding the size of a process' stack space and then subtracting the amount used.
ulimit -s
shows the stack size on a Linux system. For a programmatic approach, check out getrlimit(). Then, to determine the current stack depth, subtract a pointer to the top of the stack from one to the bottom. For example (code untested):
unsigned char *bottom_of_stack_ptr;

void call_function(int argc, char **argv) {
    unsigned char top_of_stack;
    /* The stack grows downward on x86, but take the absolute
       difference so the code works either way. */
    unsigned int depth = (&top_of_stack > bottom_of_stack_ptr) ?
                         &top_of_stack - bottom_of_stack_ptr :
                         bottom_of_stack_ptr - &top_of_stack;
    if (depth + 100 < PROGRAMMATICALLY_DETERMINED_STACK_SIZE) {
        ...
    }
}

int main(int argc, char **argv) {
    unsigned char bottom_of_stack;
    bottom_of_stack_ptr = &bottom_of_stack;
    call_function(argc, argv);
    return 0;
}
Apologies if this is stating the obvious, but you could easily write a function to test for a specific stack allocation size by just trying the alloca (of that size) and catching a stack overflow exception. If you wanted, you could put it into a function with some pre-determined math for the function's own stack overhead. E.g.:
// Note: a catchable stack-overflow exception exists on Windows (SEH,
// compiled with /EHa); on Linux an overflow raises SIGSEGV instead.
// The alloca'd block is released when this function returns, so this
// only tests whether the space is available, it doesn't reserve it.
bool CanFitOnStack( size_t num_bytes )
{
    int stack_offset_for_function = 4; // <- Determine this
    try
    {
        alloca( num_bytes - stack_offset_for_function );
    }
    catch ( ... )
    {
        return false;
    }
    return true;
}
alloca() is going to return NULL on failure; I believe the behavior of alloca(0) is undefined and varies by platform. If you check for that prior to do_something(), you should never be hit with a SIGSEGV.
The question is interesting, but it raises an eyebrow; it moves the needle on my square-peg-round-hole-o-meter.
You can use GNU libsigsegv to handle a page fault, including cases where a stack overflow occurs (from its website):
In some applications, the stack overflow handler performs some cleanup or notifies the user and then immediately terminates the application. In other applications, the stack overflow handler longjmps back to a central point in the application. This library supports both uses. In the second case, the handler must make sure to restore the normal signal mask (because many signals are blocked while the handler is executed), and must also call sigsegv_leave_handler() to transfer control; only then can it longjmp away.
Not sure if this applies on Linux, but on Windows it's possible to run into access violations with large stack allocations even if they succeed!
This is because by default, Windows' VMM only actually marks the top few (not sure how many exactly) 4096-byte pages of stack RAM as pageable (i.e. backed by the pagefile), since it believes that stack accesses will generally march downwards from the top; as accesses get closer and closer to the current "boundary", lower and lower pages are marked as pageable. But this means that an early memory read/write far below the top of the stack will trigger an access violation as that memory is not actually allocated yet!
The deprecated alloca() routine (like malloc(), but uses the stack, automatically frees itself, and also blows up with SIGSEGV if it's too big).
Why is alloca deprecated?
Anyhow, how much faster is alloca than malloc in your case? (Is it worth it?)
And don't you get NULL back from alloca if there is not enough space left (the same way as with malloc)?
And when your code crashes, where does it crash? Is it in alloca or in doStuff()?
/Johan