I just happened upon the statement in the title. The full quote is:
As a rule of thumb, make all your methods virtual (including the destructor, but not the constructor).
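For context, the destructor part is the one uncontroversial piece of the quote: deleting a derived object through a base-class pointer is undefined behavior unless the base destructor is virtual. A minimal sketch (Base and Derived are hypothetical names):

    struct Base {
        virtual ~Base() = default;   // drop `virtual` here and the delete
                                     // below becomes undefined behavior
    };

    struct Derived : Base {
        ~Derived() override {}       // releases Derived-owned resources
    };

    int main() {
        Base* b = new Derived;
        delete b;   // with a virtual destructor this runs ~Derived, then ~Base
    }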
There will be a tiny loss in performance and a few bytes of memory wasted.
The real problem is that it makes the code less maintainable, because you are asserting something about the function that isn't true: that it is designed to be overridden. That can cause a lot of confusion.
It can't hurt. If no subclass redefines the function, nothing is different. I can see this technique being useful when there is a lot of inheritance going on and it is easy to lose track of which classes override what.
Personally, I don't make every class method virtual... but that's just me. Making everything virtual seems to make the code more confusing, imho.
I don't agree with the principle.
In the past, some were concerned about overuse of virtual due to its performance cost. That concern is still somewhat valid, but it is not much of a problem on today's hardware. (Keep in mind that most other languages incur similar penalties these days; for instance, the 400MHz iPhone 2G ran Objective-C, which performs a dynamic method dispatch on every message send.)
I think you should only use virtual on methods where it is useful and reasonable for a subclass to override them. To me, it serves as a hint to other programmers (or your future self): "this is a place where subclasses can sensibly customize behavior." If replacing the method in a subclass would be confusing or weird to implement, don't make it virtual.
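A minimal sketch of that idea (the Report classes and format() are hypothetical names): one deliberately virtual extension point, everything else non-virtual:

    #include <iostream>
    #include <string>

    class Report {
    public:
        // Fixed workflow: callers can rely on print() always doing this.
        void print(const std::string& body) const {
            std::cout << format(body) << '\n';
        }
        virtual ~Report() = default;
    protected:
        // The one intended customization point.
        virtual std::string format(const std::string& body) const {
            return body;  // default: plain text
        }
    };

    class HtmlReport : public Report {
    protected:
        std::string format(const std::string& body) const override {
            return "<p>" + body + "</p>";
        }
    };

Here HtmlReport{}.print("hi") prints <p>hi</p>, while the workflow in print() itself stays fixed.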
Also, for simple setters and getters, it's probably a bad idea, as it can inhibit inlining.
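A sketch of why (Point and its members are hypothetical): the non-virtual getter can be inlined to a direct load, while the virtual one generally goes through the vtable when the dynamic type isn't known:

    #include <iostream>

    struct Point {
        int x() const { return x_; }          // non-virtual: trivially inlined
        virtual int y() const { return y_; }  // virtual: dispatched via vtable
        virtual ~Point() = default;
        int x_ = 1, y_ = 2;
    };

    int sum(const Point& p) {
        // p.x() compiles to a direct member load; p.y() generally cannot be
        // inlined here, because p's dynamic type is unknown at this point.
        return p.x() + p.y();
    }

    int main() { std::cout << sum(Point{}) << '\n'; }  // prints 3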
Is there anything to it?
The advice is BAD, there is no question about it. Reading something like that would be enough to make me stay away from the book and its author.
You see, the virtual keyword indicates "you can or should override this method - it was designed for this".
For any non-trivial task, I cannot imagine a reasonable system of classes that would allow the user (i.e. another programmer) to override every single method in every derived class. It is normal to have an abstract base class with only virtual methods. However, once you start making derived classes, there is no reason to slap virtual onto everything - some methods don't need to be extensible.
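A minimal sketch of that shape (Logger and FileLogger are hypothetical names): the abstract base is all virtual, but the derived class keeps its own machinery non-virtual:

    #include <iostream>
    #include <string>

    // An abstract base made entirely of virtual methods is normal.
    struct Logger {
        virtual void log(const std::string& msg) = 0;
        virtual ~Logger() = default;
    };

    // The derived class, however, need not make everything extensible.
    class FileLogger final : public Logger {  // final: no further overrides
    public:
        void log(const std::string& msg) override {
            std::cout << prefix() << msg << '\n';
        }
    private:
        // Non-virtual helper: callers (and FileLogger itself) can rely on it.
        std::string prefix() const { return "[file] "; }
    };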
Making everything virtual means that at any point in the code, no matter which method is called, you can never be sure the class will do what you want, because somebody could have overridden your method and broken it in the process (according to Murphy's Law, it will happen). This makes your code unreliable and hard to maintain. Another very interesting thing is the way virtual methods behave when called from constructors: the most derived override is not used, because the derived part of the object does not exist yet. Basically, by following this advice you sacrifice code readability and reliability to guard against a fairly uncommon mistake (forgetting to mark a method virtual). In my opinion, it is not worth it.
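A minimal sketch of the constructor pitfall (Base and Derived are hypothetical names): during Base's constructor the object's dynamic type is still Base, so the virtual call does not reach the override:

    #include <iostream>

    struct Base {
        Base() { hello(); }  // calls Base::hello, not a derived override:
                             // the Derived part is not constructed yet
        virtual void hello() { std::cout << "Base\n"; }
        virtual ~Base() = default;
    };

    struct Derived : Base {
        void hello() override { std::cout << "Derived\n"; }
    };

    int main() { Derived d; }  // prints "Base", which often surprises people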
In comparison, a non-virtual method guarantees that, no matter what happens, at this point in the code it will always work as you expect (not counting the bugs you haven't discovered yet). That is, nobody can replace your method with a broken alternative.
The advice reminds me of a common error some newbie programmers tend to make: instead of developing a simple solution that fixes the problem, they get distracted and attempt to make the code universal and extensible. As a result, the project takes longer to finish or never gets completed, because a universal solution for every possible scenario takes more effort and development time than a localized solution limited to the problem at hand.
Instead of following this "virtual" advice, I'd recommend sticking with Murphy's Law and the KISS principle. They have worked well for me. However, they are not guaranteed to work well for everybody else.
IMHO, this is a good rule of thumb for beginners in C++. It's not really harmful except in very specific scenarios (ones faced by programmers who know exactly what the trade-offs of the virtual keyword are), and it's one less thing to think about.
However, like every rule of thumb, it oversimplifies the situation; once one starts thinking about how to communicate to other programmers which methods are or are not good candidates for overriding, one has outgrown the rule of thumb.