How much footprint does C++ exception handling add

有刺的猬 2020-11-28 05:50

This issue is important especially for embedded development. Exception handling adds some footprint to the generated binary output. On the other hand, without exceptions the errors have to be handled some other way, which requires additional code, which also increases binary size.

8 answers
  • 2020-11-28 05:55

    Define 'embedded'. On an 8-bit processor I would certainly not work with exceptions (I would certainly not work with C++ on an 8-bit processor at all). If you're working with a PC104-type board that is powerful enough to have been someone's desktop a few years back, then you might get away with it. But I have to ask: why are there exceptions at all? Usually in embedded applications anything like an exception occurring is unthinkable - why didn't that problem get sorted out in testing?

    For instance, is this in a medical device? Sloppy software in medical devices has killed people. It is unacceptable for anything unplanned to occur, period. All failure modes must be accounted for and, as Joel Spolsky said, exceptions are like GOTO statements except you don't know where they're called from. So when you handle your exception, what failed, and what state is your device in? Because of your exception, is your radiation therapy machine stuck at FULL, cooking someone alive (this has happened IRL)? At just what point did the exception happen in your 10,000+ lines of code? Sure, you may be able to cut that down to perhaps 100 lines of code, but do you know what the significance of each of those lines throwing an exception is?

    Without more information I would say do NOT plan for exceptions in your embedded system. If you add them then be prepared to plan the failure modes of EVERY LINE OF CODE that could cause an exception. If you're making a medical device then people die if you don't. If you're making a portable DVD player, well, you've made a bad portable DVD player. Which is it?

  • 2020-11-28 05:57

    It's easy to see the impact on binary size: just turn off RTTI and exceptions in your compiler. You'll get complaints about dynamic_cast<> if you're using it... but we generally avoid code that depends on dynamic_cast<> in our environments.

    We've always found turning off exception handling and RTTI to be a win in terms of binary size. I've seen many different error-handling methods used in the absence of exception handling. The most popular seems to be passing failure codes up the call stack. In our current project we use setjmp/longjmp, but I'd advise against that in a C++ project, since in many implementations they won't run destructors when jumping out of a scope. If I'm honest, I think this was a poor choice by the original architects of the code, especially considering that our project is C++. A minimal sketch of the failure-code approach follows.
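
    The sketch below is purely illustrative - the function and status names are made up, not from our project - but it shows the shape of the pattern: every layer returns a status code and the caller checks it before continuing.

    #include <cstdio>

    // Hypothetical status codes; real projects usually centralise these in one header.
    enum Status { STATUS_OK = 0, STATUS_SENSOR_TIMEOUT, STATUS_BAD_CONFIG };

    // Lowest layer: reports success/failure through the return value,
    // and delivers its actual result through an out-parameter.
    Status read_sensor( int &value ) {
        value = 42;                       // pretend hardware read
        return STATUS_OK;                 // or STATUS_SENSOR_TIMEOUT on a real failure
    }

    // Middle layer: checks the code and passes any failure straight up.
    Status compute_average( int &avg ) {
        int v = 0;
        Status s = read_sensor( v );
        if ( s != STATUS_OK ) {
            return s;                     // propagate instead of throwing
        }
        avg = v;
        return STATUS_OK;
    }

    int main() {
        int avg = 0;
        Status s = compute_average( avg );
        if ( s != STATUS_OK ) {
            printf( "failed with code %d\n", (int)s );
            return 1;
        }
        printf( "average = %d\n", avg );
        return 0;
    }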

  • 2020-11-28 05:59

    One thing to consider: If you're working in an embedded environment, you want to get the application as small as possible. The Microsoft C Runtime adds quite a bit of overhead to programs. By removing the C runtime as a requirement, I was able to get a simple program to be a 2KB exe file instead of a 70-something kilobyte file, and that's with all the optimizations for size turned on.

    C++ exception handling requires compiler support, which is provided by the C runtime. The specifics are shrouded in mystery and are not documented at all. By avoiding C++ exceptions I could cut out the entire C runtime library.

    You might argue to just dynamically link, but in my case that wasn't practical.

    Another concern is that C++ exceptions need limited RTTI (runtime type information), at least on MSVC, which means that the type names of your exceptions are stored in the executable. Space-wise it's not an issue, but it just 'feels' cleaner to me not to have this information in the file.
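
    If you want to see that for yourself, a toy experiment along these lines should do it (the class name is invented, and exactly how you inspect the binary depends on your tools):

    #include <cstdio>

    // A deliberately recognisable (and made-up) exception type.
    struct WidgetCalibrationError {
        int code;
    };

    int main() {
        try {
            WidgetCalibrationError err = { 7 };
            throw err;                                // the runtime needs type information to match this...
        }
        catch ( const WidgetCalibrationError &e ) {   // ...against this handler
            printf( "caught, code %d\n", e.code );
        }
        return 0;
    }

    Searching the resulting executable for the string "WidgetCalibrationError" - with a strings/grep style tool or a hex viewer - should turn up a decorated form of the type name, since the runtime uses it to match the thrown object against catch handlers.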

  • 2020-11-28 06:01

    In my opinion exception handling is not something that's generally acceptable for embedded development.

    Neither GCC nor Microsoft's compiler has "zero-overhead" exception handling. Both insert prologue and epilogue code into each function to track the scope of execution. This leads to a measurable increase in both execution time and memory footprint.

    The performance difference is something like 10% in my experience, which for my area of work (real-time graphics) is a huge amount. The memory overhead was far less but still significant - I can't remember the figure off-hand, but with GCC/MSVC it's easy to compile your program both ways and measure the difference. A rough recipe for doing that is sketched below.
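
    Something like the following is enough to get started - the file name is invented, the flags shown are the usual GCC ones, and MSVC has rough equivalents (/GR- to drop RTTI, and simply not passing /EHsc). On a file this small the difference may be tiny or optimised away; the interesting numbers come from building your real project both ways.

    // size_probe.cpp (made-up name) -- build the same file twice and compare:
    //   g++ -O2 size_probe.cpp -o with_eh
    //   g++ -O2 -fno-exceptions -fno-rtti size_probe.cpp -o without_eh
    // then compare the two binaries with size/nm/dumpbin or similar.
    #include <cstdio>

    struct Resource {
        Resource()  { printf( "acquire\n" ); }
        ~Resource() { printf( "release\n" ); }
    };

    void step( int n ) {
    #ifdef __EXCEPTIONS                 // g++ defines this only when exceptions are enabled
        if ( n < 0 ) throw n;           // never taken at runtime, but keeps a real throw path in the EH build
    #endif
        printf( "step %d\n", n );
    }

    void work( int n ) {
        Resource r;    // in the EH build, unwind info is emitted so ~Resource runs if step() throws
        step( n );
    }

    int main( int argc, char ** ) {
        for ( int i = 0; i < 3; i++ ) {
            work( i + argc );           // argc keeps the values opaque to the optimiser
        }
        return 0;
    }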

    I've seen some people talk about exception handling as an "only if you use it" cost. Based on what I've observed this just isn't true. When you enable exception handling it affects all code, whether a code path can throw exceptions or not (which makes total sense when you consider how a compiler works).

    I would also stay away from RTTI for embedded development, although we do use it in debug builds to sanity check downcasting results.
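
    For what it's worth, that debug-only downcast check can look something like this (a sketch with made-up type names, not our actual code). With NDEBUG defined, the assert - and the dynamic_cast inside it - compiles away, so release builds can still be built without RTTI:

    #include <cassert>
    #include <cstdio>

    struct Node {
        virtual ~Node() {}
    };

    struct TextNode : Node {
        const char *text;
    };

    // Release builds pay only for a static_cast; debug builds additionally verify
    // the downcast with dynamic_cast before trusting it.
    inline TextNode *as_text_node( Node *n ) {
        assert( dynamic_cast<TextNode*>( n ) != 0 );
        return static_cast<TextNode*>( n );
    }

    int main() {
        TextNode t;
        t.text = "hello";
        Node *base = &t;
        printf( "%s\n", as_text_node( base )->text );
        return 0;
    }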

  • 2020-11-28 06:02

    I work in a low-latency environment (sub-300 microseconds for my application in the "chain" of production). Exception handling, in my experience, adds 5-25% to execution time, depending on how much of it you do!

    We don't generally care about binary bloat, but if you get too much bloat then you thrash like crazy, so you need to be careful.

    Just keep the binary reasonable (depends on your setup).

    I do pretty extensive profiling of my systems.
    Other nasty areas:

    Logging

    Persisting (we just don't do this one, or if we do it's in parallel)

  • 2020-11-28 06:06

    Measuring things, part 2. I have now got two programs. The first is in C and is compiled with gcc -O2:

    #include <stdio.h>
    #include <time.h>
    
    #define BIG 1000000
    
    /* Returns the sum 0..999, except when n == BIG - 1, where it signals an
       error by returning -1 instead. */
    int f( int n ) {
        int r = 0, i = 0;
        for ( i = 0; i < 1000; i++ ) {
            r += i;
            if ( n == BIG - 1 ) {
                return -1;
            }
        }
        return r;
    }
    
    int main() { 
        clock_t start = clock();
        int i = 0, z = 0;
        for ( i = 0; i < BIG; i++ ) {
            if ( (z = f(i)) == -1 ) { 
                break;
            }
        }
        double t  = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf( "%f\n", t );
        printf( "%d\n", z );
    }
    

    The second is C++, with exception handling, compiled with g++ -O2:

    #include <stdio.h>
    #include <time.h>
    
    #define BIG 1000000
    
    // Same work as the C version, but signals the error by throwing instead of returning -1.
    int f( int n ) {
        int r = 0, i = 0;
        for ( i = 0; i < 1000; i++ ) {
            r += i;
            if ( n == BIG - 1 ) {
                throw -1;
            }
        }
        return r;
    }
    
    int main() { 
        clock_t start = clock();
        int i = 0, z = 0;
        for ( i = 0; i < BIG; i++ ) {
            try {
                z = f(i);
            }
            catch( ... ) {
                break;
            }
        }
        double t  = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf( "%f\n", t );
        printf( "%d\n", z );
    }
    

    I think these answer all the criticisms made of my last post.

    Result: execution times give the C version a 0.5% edge over the C++ version with exceptions - not the 10% that others have talked about (but not demonstrated).

    I'd be very grateful if others could try compiling and running the code (it should only take a few minutes) in order to check that I have not made a horrible and obvious mistake anywhere. This is known as "the scientific method"!
