I am new to C++ style casts and I am worried that using them will ruin the performance of my application, because I have a real-time-critical deadline.
If the C++ style cast can be conceptually replaced by a C-style cast, there will be no overhead. If it can't, as in the case of dynamic_cast, for which there is no C equivalent, you have to pay the cost one way or another.
As an example, the following code:
int x;
float f = 123.456f;
x = (int) f;
x = static_cast<int>(f);
generates identical code for both casts with VC++. The generated code is:
00401041 fld dword ptr [ebp-8]
00401044 call __ftol (0040110c)
00401049 mov dword ptr [ebp-4],eax
The only C++ cast that can throw is dynamic_cast when casting to a reference. To avoid this, cast to a pointer instead, which will return a null pointer if the cast fails.
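As a minimal sketch of the two forms (Base, Derived, and inspect are made-up names for illustration):

#include <typeinfo>  // for std::bad_cast

struct Base    { virtual ~Base() {} };
struct Derived : Base {};

void inspect(Base& b)
{
    // Reference form: throws std::bad_cast if b is not actually a Derived.
    try {
        Derived& d = dynamic_cast<Derived&>(b);
        (void)d;
    } catch (const std::bad_cast&) {
        // handle the failed cast here
    }

    // Pointer form: returns a null pointer instead of throwing.
    if (Derived* p = dynamic_cast<Derived*>(&b)) {
        (void)p;  // the cast succeeded
    }
}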
There are four C++ style casts:
const_cast
static_cast
reinterpret_cast
dynamic_cast
As already mentioned, the first three are compile-time operations. There is no run-time penalty for using them. They are messages to the compiler that data that has been declared one way needs to be accessed a different way: "I said this was an int*, but let me access it as if it were a char* pointing to sizeof(int) chars" or "I said this data was read-only, and now I need to pass it to a function that won't modify it, but doesn't take the parameter as a const reference."
Aside from data corruption caused by casting to the wrong type and trampling over data (always a possibility with C-style casts, too), the most common run-time problem with these casts involves data that actually is defined const: const_cast will happily remove the const, but modifying the object afterwards is undefined. Undefined means you're not even guaranteed to get a crash.
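For illustration, a small sketch of where that line is (increment is a made-up function):

void increment(const int* p)
{
    // Casting away const is legal in itself; what matters is whether the
    // object being modified was originally defined const.
    ++*const_cast<int*>(p);
}

int main()
{
    int x = 1;
    increment(&x);     // OK: x itself is not const

    const int y = 1;
    // increment(&y);  // would compile, but modifying y is undefined behaviour
    (void)y;
}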
dynamic_cast is a run-time construct and has to have a run-time cost.
The value of these casts is that they specifically say what you're trying to cast from/to, stick out visually, and can be searched for with brain-dead tools. I would recommend using them over using C-style casts.
Although I agree with the statement "the only one with any extra cost at runtime is dynamic_cast", keep in mind that there may be compiler-specific differences. I've seen a few bugs filed against my current compiler where the code generation or optimization was slightly different depending on whether you used a C-style cast or a static_cast.
So if you're worried, check the disassembly on hotspots. Otherwise, just avoid dynamic casts when you don't need them. (If you turn off RTTI, you can't use dynamic_cast anyway.)
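For instance, a minimal pair to compare in the assembler output (the function names are arbitrary; g++ -O2 -S or VC++'s /FA switch will both produce a listing):

// Two spellings of the same conversion; diffing their generated assembly
// shows whether your compiler treats them identically.
int with_c_cast(float f)      { return (int)f; }
int with_static_cast(float f) { return static_cast<int>(f); }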
Why would there be a performance hit? They perform exactly the same functionality as C casts. The only difference is that they catch more errors at compile-time, and they're easier to search for in your source code.
static_cast<float>(3) is exactly equivalent to (float)3, and will generate exactly the same code.
Given a float f = 42.0f, reinterpret_cast<int*>(&f) is exactly equivalent to (int*)&f, and will generate exactly the same code.
And so on. The only cast that differs is dynamic_cast, which, yes, can throw an exception. But that is because it does things that the C-style cast cannot do. So don't use dynamic_cast unless you need its functionality.
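Put together as a small compilable sketch of the equivalences above:

int main()
{
    float a = static_cast<float>(3);  // same code as...
    float b = (float)3;               // ...this

    float f = 42.0f;
    int* p1 = reinterpret_cast<int*>(&f);  // same code as...
    int* p2 = (int*)&f;                    // ...this (only the pointer is formed;
                                           //  nothing is dereferenced here)
    (void)a; (void)b; (void)p1; (void)p2;
}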
It is usually safe to assume that compiler writers are intelligent. Given two different expressions that have the same semantics according to the standard, it is usually safe to assume that they will be implemented identically in the compiler.
Ok, just to make it absolutely clear, here is what the C++ standard says:
§5.4.5:
The conversions performed by

- a const_cast (5.2.11),
- a static_cast (5.2.9),
- a static_cast followed by a const_cast,
- a reinterpret_cast (5.2.10), or
- a reinterpret_cast followed by a const_cast,

can be performed using the cast notation of explicit type conversion. The same semantic restrictions and behaviors apply. If a conversion can be interpreted in more than one of the ways listed above, the interpretation that appears first in the list is used, even if a cast resulting from that interpretation is ill-formed.
So if anything, since the C-style cast is defined in terms of the C++ casts, C-style casts should be slower. (Of course they aren't, because the compiler generates the same code in either case, but that is more plausible than the C++-style casts being slower.)
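For example, here is a C-style cast that has to be read as a static_cast followed by a const_cast, since neither of the first two interpretations in the list is well-formed on its own (Base and Derived are hypothetical types):

struct Base {};
struct Derived : Base {};

void downcast(const Base* pb)
{
    // A lone static_cast would have to cast away const, so the C-style cast
    // below is interpreted as a static_cast followed by a const_cast:
    Derived* pd = (Derived*)pb;
    // ...which, spelled out, is:
    Derived* pd2 = const_cast<Derived*>(static_cast<const Derived*>(pb));
    (void)pd; (void)pd2;
}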
The only one with any extra cost at runtime is dynamic_cast, which has capabilities that cannot be reproduced directly with a C-style cast anyway. So you have no problem.
The easiest way to reassure yourself of this is to instruct your compiler to generate assembler output and examine the code it generates. For example, in any sanely implemented compiler, reinterpret_cast will disappear altogether, because it just means "go blindly ahead and pretend the data is of this type".
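For instance, a function like this one (the name is arbitrary) is a handy thing to check in the listing:

// With a typical compiler this compiles down to simply returning its
// argument: the reinterpret_cast itself emits no instructions.
char* as_bytes(int* p)
{
    return reinterpret_cast<char*>(p);
}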
The canonical truth is the assembly, so try both and see if you get different logic.
If you get the exact same assembly, there is no difference; there can't be. The only place you really need to stick with the old C casts is in pure C routines and libraries, where it makes no sense to introduce a C++ dependency just for type casting.
One thing to be aware of is that casts happen all over the place in a decent-sized piece of code. In my entire career I've never searched for "all casts" in a piece of logic; you tend to search for casts to a specific TYPE like 'A', and a search on "(A)" is usually just as efficient as one on "static_cast<A>". Use the newer casts for things like type validation and such, not because they make searches you'll never do anyway easier.