I saw a post on StackOverflow (that I cannot seem to find anymore, so maybe it wasn't posted here) that looked at the relative cost of exceptions vs. error codes. Too often people look at "code with exceptions" vs. "code without error handling", which is not a fair comparison. If you would use exceptions, then by not using them you have to use something else for the same functionality, and that other thing is usually error return codes. They found that even in a simple example with a single level of function calls (so no need to propagate exceptions far up the call stack), exceptions were faster than error codes in cases where the error situation occurred 0.1% - 0.01% of the time or less, while error codes were faster in the opposite situation.
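To make the comparison concrete, here is a minimal sketch of the two styles being measured. The function names and the two-byte length field are invented for illustration; the point is that the error-code version pays for its checking on every call, while the exception version pays only when it throws:

```cpp
#include <cstddef>
#include <stdexcept>

// Error-code style: the real result travels through an out-parameter,
// and every caller along the chain must remember to check the return.
bool parse_length_ec(const unsigned char* data, std::size_t size, int& out) {
    if (size < 2) return false;          // the rare failure, tested on every call
    out = (data[0] << 8) | data[1];      // invented two-byte length field
    return true;
}

// Exception style: the happy path returns the value directly, and the
// rare failure unwinds straight to whoever can actually handle it.
int parse_length_ex(const unsigned char* data, std::size_t size) {
    if (size < 2) throw std::runtime_error("truncated header");
    return (data[0] << 8) | data[1];
}
```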
Similar to the above complaint about measuring exceptions vs. no error handling, people make this sort of reasoning error even more often with regard to virtual functions. And just like you don't use exceptions as a way to return dynamic types from a function (yes, I know, all of your code is exceptional), you don't make functions virtual because you like the way they look in your syntax highlighter. You make functions virtual because you need a particular type of behavior, so you can't say that virtual dispatch is slow unless you compare it with something that has the same behavior, and generally the replacement is either lots of switch statements or lots of code duplication. Those have performance and memory costs as well.
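As an illustration of what "the same behavior" means here, compare a virtual hierarchy with the tag-and-switch code that typically replaces it (toy types, invented for this sketch):

```cpp
// Virtual dispatch: one indirect call per object, and adding a new
// shape touches only the new class.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};
struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

// The usual replacement: a type tag plus a switch that every operation
// must repeat, and that must be edited whenever a type is added.
enum class Kind { Circle, Square };
struct TaggedShape { Kind kind; double dim; };

double area(const TaggedShape& sh) {
    switch (sh.kind) {
        case Kind::Circle: return 3.14159 * sh.dim * sh.dim;
        case Kind::Square: return sh.dim * sh.dim;
    }
    return 0.0; // unreachable while the switch stays in sync with Kind
}
```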
As for the comment that games don't have bugs and other software does, all I can say to that is that I clearly have not played any games made by their software company. I've surfed on the floor of the Elite Four in Pokemon, gotten stuck inside a mountain in Oblivion, been killed by Gloams that accidentally combine their mana damage with their HP damage instead of applying them separately in Diablo II, and pushed myself through a closed gate with a big rock to fight Goblins with a bird and a slingshot in Twilight Princess. Software has bugs. Using exceptions doesn't make bug-free software buggy.
The standard library's exception mechanism has two broad categories of exceptions: `std::runtime_error` and `std::logic_error`. I could see not wanting to use `std::logic_error` (I've used it as a temporary aid while testing, with the goal of removing it eventually, and I've also left it in as a permanent check). `std::runtime_error`, however, is not a bug. I throw an exception derived from `std::runtime_error` if the server I am connected to sends me invalid data (rule #1 of secure programming: trust no one, even a server that you think you wrote), such as claiming that it is sending a message of 12 bytes and then actually sending 15. In such a situation, there are only two possibilities:
1) I am connected to a malicious server, or
2) My connection to the server is corrupted.
In both of these cases, my response is the same: Disconnect (no matter where I am in the code, because my destructors will clean things up for me), wait a couple of seconds, and try connecting to the server again. I cannot do anything else. I could give absolutely everything an error code (which implies passing everything else by reference, which is a performance hit, and severely clutters code), or I could throw an exception that I catch at a point in my code where I determine which servers to connect to (which will probably be very high up in my code).
Is any of what I mentioned a bug in my code? I don't think so; I think it's accepting that all of the other code I have to interface with is imperfect or malicious, and making sure my code behaves well in the face of such ambiguity.
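For concreteness, here is a rough sketch of that throw-low, catch-high pattern. The `protocol_error` type and the commented-out `connect_and_serve` are hypothetical names invented for this sketch:

```cpp
#include <chrono>
#include <cstddef>
#include <stdexcept>
#include <thread>

// Hypothetical exception type for "the server lied about the length".
struct protocol_error : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Deep inside the parser: the declared and actual sizes disagree, so
// whether the server is malicious or the connection is corrupt, the
// only sane response is to abandon this connection.
void check_length(std::size_t declared, std::size_t actual) {
    if (declared != actual)
        throw protocol_error("message length mismatch");
}

// High up, where the servers to connect to are decided: stack unwinding
// runs the destructors (sockets, buffers) on the way out, then we wait
// a couple of seconds and reconnect.
void run_client() {
    for (;;) {
        try {
            // connect_and_serve();  // hypothetical: the whole session
            break;                   // session ended normally
        } catch (const protocol_error&) {
            std::this_thread::sleep_for(std::chrono::seconds(2));
        }
    }
}
```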
For smart pointers, again, what is the functionality you are trying to implement? If you need the functionality of smart pointers, then not using them means rewriting that functionality by hand, and I think it's pretty obvious why that is a bad idea. However, I rarely use smart pointers in my own code. The only time I really do is when I need to store some polymorphic class in a standard container (say, a `std::map<BattleIds, Battles>` where `Battles` is some base class that is derived from based on the type of battle), in which case I use a `std::unique_ptr`. I believe I once used a `std::unique_ptr` in a class to work with some library code. Much of the time that I am using `std::unique_ptr`, it's to make a non-copyable, non-movable type movable. In many cases where you would use a smart pointer, however, it seems like a better idea to just create the object on the stack and take the pointer out of the equation entirely.
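To pin down that container case, here is a minimal sketch; the `BattleIds` members and the derived class are hypothetical fillers around the names from the text:

```cpp
#include <map>
#include <memory>

// Hypothetical stand-ins for the types named in the text.
enum class BattleIds { Wild, Trainer };

struct Battles {                          // the polymorphic base class
    virtual ~Battles() = default;
    virtual void run() = 0;
};
struct WildBattle : Battles {
    void run() override { /* fight! */ }
};

int main() {
    // The container owns each polymorphic element through the pointer,
    // and the correct derived destructor runs when an entry is erased.
    std::map<BattleIds, std::unique_ptr<Battles>> battles;
    battles[BattleIds::Wild] = std::make_unique<WildBattle>();
    battles.at(BattleIds::Wild)->run();
}
```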
In my personal coding, I haven't really found many situations where the "C" version of the code is faster than the "C++" version. In fact, it's generally the opposite. For instance, consider the many examples of `std::sort` vs. `qsort` (a common example used by Bjarne Stroustrup), where `std::sort` clobbers `qsort`, or my recent comparison of `std::copy` vs. `memcpy`, where `std::copy` actually has a slight performance advantage.
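A minimal sketch of the `qsort` comparison, with the commonly cited reason for the result in the comments (toy data; nothing measured here):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort compares through a function pointer that the optimizer
// usually cannot inline...
int cmp_int(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

int main() {
    std::vector<int> a{3, 1, 2};
    std::vector<int> b{3, 1, 2};

    std::qsort(a.data(), a.size(), sizeof(int), cmp_int);

    // ...while std::sort's comparison is part of the instantiated
    // template, so it can be inlined at each call site.
    std::sort(b.begin(), b.end());
}
```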
Too many of the "C++ feature X is too slow" claims seem to be based on comparing it against not having the functionality at all. The most performant (in terms of speed and memory) and most bug-free code is `int main() {}`, but we write programs to do things. If you need particular functionality, it would be silly not to use the features of the language that give you that functionality. However, you should start by thinking of what you want your program to do, and then find the best way to do it. Obviously you don't want to begin with "I want to write a program that uses feature X of C++"; you want to begin with "I want to write a program that does cool thing Z", and maybe you end up at "...and the best way to implement that is feature X".