Symbol visibility, exceptions, runtime error

谎友^ 2021-02-01 05:57

I am trying to understand symbol visibility better. The GCC Wiki (http://gcc.gnu.org/wiki/Visibility) has a section about "Problems with C++ exceptions". According to the GCC Wiki it is …

1 Answer
  •  离开以前 2021-02-01 06:11

    I'm the author of the original patch to GCC adding class visibility support, and my original howto which GCC cloned is at http://www.nedprod.com/programs/gccvisibility.html. My thanks to VargaD for emailing me personally to tell me about this SO question.

    The behaviour you observe is valid for recent GCCs, but it was not always so. When I originally patched GCC back in 2004, I submitted a request to GCC bugzilla for the GCC exception handling runtime to compare thrown types by string comparison of their mangled symbols instead of comparing the addresses of those strings. This was rejected at the time by the GCC maintainers as an unacceptable runtime cost, despite the fact that this behaviour is what MSVC does, and despite the fact that performance during exception throws is generally not considered important, given that throws are supposed to be rare. Hence I had to add a specific exception to my visibility guide saying that a thrown type must never be hidden, not even once, in a binary: "hiddenness" trumps "default", so a single hidden declaration of a symbol is guaranteed to override every other occurrence of that symbol in the binary.
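
    To make the distinction concrete, here is a minimal sketch (my own illustration, not the actual libstdc++ code) contrasting the two matching strategies, comparing typeinfo addresses versus string-comparing the mangled names:

        #include <cstring>
        #include <typeinfo>

        // Fast but fragile across shared objects: if the same type's
        // typeinfo was instantiated twice (e.g. one copy hidden), the
        // two copies have different addresses and the match fails.
        bool matches_by_address(const std::type_info &thrown,
                                const std::type_info &handler)
        {
            return &thrown == &handler;
        }

        // Robust across shared objects: duplicate typeinfo copies still
        // carry the same mangled name, at the cost of a strcmp per check.
        bool matches_by_name(const std::type_info &thrown,
                             const std::type_info &handler)
        {
            return std::strcmp(thrown.name(), handler.name()) == 0;
        }

        int main()
        {
            // Within one translation unit both strategies agree; they
            // only diverge when duplicate typeinfo copies live in
            // different shared objects.
            const std::type_info &t = typeid(int);
            return matches_by_address(t, t) && matches_by_name(t, t) ? 0 : 1;
        }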

    What happened next I suppose none of us expected - KDE very publicly embraced my contributed feature. That cascaded into almost every large GCC-using project in an amazingly short time. Suddenly symbol hiding was the norm, not the exception.

    Unfortunately, a small number of people didn't apply my guide correctly for thrown exception types, and the constant bug reports about incorrect cross-shared-object exception handling in GCC eventually caused the GCC maintainers to give up and, many years later, patch in string comparison for thrown type matching, as I had originally requested. Hence the situation is somewhat better in newer GCCs. I haven't changed my guide or its instructions, because that approach is still the safest on every GCC since v4.0, and while newer GCCs handle exception throws more reliably now that they use string comparison, following the guide's rules doesn't hurt.
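
    For reference, the guide's rule in practice looks roughly like the following; this is a hedged sketch where MY_LIB_API is a hypothetical export macro and the attribute is GCC's documented visibility attribute:

        #include <stdexcept>
        #include <string>

        // When the library is built with -fvisibility=hidden, throwable
        // types (and with them their typeinfo and vtable) must still be
        // exported with default visibility.
        #if defined(__GNUC__)
        #  define MY_LIB_API __attribute__((visibility("default")))
        #else
        #  define MY_LIB_API
        #endif

        // Exported: safe to throw and catch across shared-object boundaries.
        class MY_LIB_API parse_error : public std::runtime_error
        {
        public:
            explicit parse_error(const std::string &what)
                : std::runtime_error(what) {}
        };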

    This brings us to the typeinfo problem. A big problem is that best-practice C++ requires you to always inherit virtually in throwable types: if you compose two exception types that both inherit from (let's say) std::exception, having two equidistant std::exception base classes will cause a catch (std::exception &) to call terminate(), because the runtime can't resolve which base class to match. You must therefore only ever have a single std::exception base class, and the same rationale applies to any possible composition of throwable types. This best practice is especially required in any C++ library, because you can't know what third-party users will do with your exception types.
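
    Here is a minimal, self-contained sketch of that rule; the io_error/net_error names are invented for illustration:

        #include <exception>

        // Non-virtual inheritance: the composed type has two distinct
        // std::exception base subobjects, so catch (std::exception &)
        // cannot bind to it; an uncaught throw ends in terminate().
        struct io_error_bad : std::exception {};
        struct net_error_bad : std::exception {};
        struct io_net_error_bad : io_error_bad, net_error_bad {};

        // Virtual inheritance: exactly one std::exception base subobject,
        // so the handler resolves unambiguously.
        struct io_error : virtual std::exception {};
        struct net_error : virtual std::exception {};
        struct io_net_error : io_error, net_error {};

        int main()
        {
            try {
                throw io_net_error();
            } catch (std::exception &) {
                // Reached: one unambiguous std::exception base.
            }
            // Throwing io_net_error_bad() instead would not be caught by
            // catch (std::exception &): the ambiguous base makes the
            // handler a non-match.
        }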

    In other words, under best practice all thrown exception types will always come with a chain of successive RTTI for each base class, and exception matching becomes a case of internally doing a successful dynamic_cast<> to the type being matched, an O(number of base classes) operation. And for dynamic_cast<> to work over a chain of virtually inherited types, you guessed it, you need every single type in that chain to have default visibility. If even one is hidden from the code executing the catch(), the whole caboodle goes belly up and you get a terminate(). I'd be very interested if you reworked your example code above to inherit virtually and saw what happens; one of your comments says it refuses to link, which is great. But let's say DLL A defines type A, DLL B subclasses type A into B, DLL C subclasses type B into C, and program D tries to catch an exception of type A when type C was thrown. Program D will have the typeinfo of A available, but should fault when trying to fetch the RTTI for types B and C. Maybe, though, recent GCCs have fixed this too? I don't know; my attention in recent years has been on clang, as that's the future for all C++ compilers.
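
    The DLL A/B/C scenario can be sketched in a single translation unit like this; the LIBA_API/LIBB_API/LIBC_API macros are hypothetical, and in a real build each library would define only its own macro, with the three structs living in three different shared objects:

        #include <exception>

        // In the real multi-library build each macro expands to
        // __attribute__((visibility("default"))) only inside its own
        // library; here all three are defined so the sketch compiles.
        #define LIBA_API __attribute__((visibility("default")))
        #define LIBB_API __attribute__((visibility("default")))
        #define LIBC_API __attribute__((visibility("default")))

        struct LIBA_API A : virtual std::exception {};  // from liba.so
        struct LIBB_API B : virtual A {};               // from libb.so
        struct LIBC_API C : virtual B {};               // from libc.so

        int main()
        {
            try {
                throw C();      // imagine this thrown deep inside libc.so
            } catch (A &) {
                // Matching C against A walks the chain C -> B -> A; if the
                // typeinfo of B or C were hidden in its library, the match
                // could fail in program D and the throw would hit terminate().
            }
        }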

    Obviously, this is a mess, but it's an ELF-specific mess: none of this affects PE or Mach-O, both of which get all of the above right by not using process-global symbol tables in the first place. However, the WG21 SG2 Modules study group working towards C++17 must effectively implement exported templates for Modules to work in order to resolve ODR violations, and C++17 is the first proposed standard I've seen to be written with LLVM in mind. In other words, C++17 compilers will have to dump a complex AST onto disc like clang does. And that implies a huge increase in the guarantees of what RTTI is available; indeed, that's why we have the SG7 Reflection study group, because the AST from C++ Modules enables a huge increase in possible self-reflection opportunities. Expect the above problems to go away soon with C++17 adoption.

    So, in short, keep following my original guide for now. Things will hopefully get vastly better in the next decade. And give thanks to Apple for funding that solution; it's been a very long time coming due to how wicked hard it is.

    Niall
