Almost every C++ resource I've seen that discusses this kind of thing tells me that I should prefer polymorphic approaches to using RTTI (run-time type identification).
Some compilers don't use it / RTTI is not always enabled
I believe you have misunderstood such arguments.
There are a number of C++ coding environments where RTTI is not to be used, where compiler switches are used to forcibly disable RTTI. If you are coding within such a paradigm... then you have almost certainly already been informed of this restriction.
The problem therefore is with libraries. That is, if you're writing a library that depends on RTTI, then your library cannot be used by users who turn off RTTI. If you want your library to be used by those people, then it cannot use RTTI, even if your library also gets used by people who can use RTTI. Equally importantly, if you can't use RTTI, you have to shop around a little harder for libraries, since RTTI use is a deal-breaker for you.
It costs extra memory / Can be slow
There are many things you don't do in hot loops. You don't allocate memory. You don't go iterating through linked lists. And so forth. RTTI certainly can be another one of those "don't do this here" things.
However, consider all of your RTTI examples. In all cases, you have one or more objects of an indeterminate type, and you want to perform some operation on them which may not be possible for some of them.
That's something you have to work around at a design level. You can write containers that don't allocate memory which fit into the "STL" paradigm. You can avoid linked list data structures, or limit their use. You can reorganize arrays of structs into structs of arrays or whatever. It changes some things, but you can keep it compartmentalized.
Changing a complex RTTI operation into a regular virtual function call? That's a design issue. If you have to change that, then it's something that requires changes to every derived class. It changes how lots of code interacts with various classes. The scope of such a change extends far beyond the performance-critical sections of code.
So... why did you write it the wrong way to begin with?
I don't have to define attributes or methods where I don't need them, the base node class can stay lean and mean.
To what end?
You say that the base class is "lean and mean". But really... it's nonexistent. It doesn't actually do anything.
Just look at your example: node_base. What is it? It seems to be a thing which has adjacent other things. This is a Java interface (pre-generics Java at that): a class that exists solely to be something that users can cast to the real type. Maybe you add some basic feature like adjacency (Java adds ToString), but that's it.
There's a difference between "lean and mean" and "transparent".
As Yakk said, such programming styles limit themselves in interoperability, because if all of the functionality is in a derived class, then users outside of that system, with no access to that derived class, cannot interoperate with the system. They can't override virtual functions and add new behaviors. They can't even call those functions.
But what they also do is make it a major pain to actually do new stuff, even within the system. Consider your poke_adjacent_oranges function. What happens if someone wants a lime_node type which can be poked just like orange_nodes? Well, we can't derive lime_node from orange_node; that makes no sense.
Instead, we have to add a new lime_node derived from node_base. Then change the name of poke_adjacent_oranges to poke_adjacent_pokables. And then, try casting to orange_node and lime_node; whichever cast works is the one we poke.
However, lime_node needs its own poke_adjacent_pokables. And this function needs to do the same casting checks.
And if we add a third type, we have to not only add its own function, but we must change the functions in the other two classes.
Obviously, now you make poke_adjacent_pokables a free function, so that it works for all of them. But what do you suppose happens if someone adds a fourth type and forgets to add it to that function?
Hello, silent breakage. The program appears to work more or less OK, but it isn't. Had poke been an actual virtual function, the compiler would have failed when you didn't override the pure virtual function from node_base.
With your way, you have no such compiler checks. Oh sure, the compiler won't check for non-pure virtuals, but at least you have protection in cases where protection is possible (ie: there is no default operation).
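For contrast, here is a minimal sketch of the virtual-function design, reusing the question's hypothetical node names (the vector-of-adjacent-nodes signature is my own simplification): a new node type that forgets to override poke fails to compile instead of silently falling through.

#include <iostream>
#include <vector>

class node_base
{
public:
    virtual ~node_base() = default;

    // Every node must define what being "poked" means for it; a new
    // node type that forgets to override this will not compile.
    virtual void poke() = 0;
};

class orange_node : public node_base
{
public:
    void poke() override { std::cout << "orange poked\n"; }
};

class lime_node : public node_base
{
public:
    void poke() override { std::cout << "lime poked\n"; }
};

// No casts, and no central list of types to forget to update.
void poke_adjacent_pokables( const std::vector<node_base*>& adjacent )
{
    for( node_base* p : adjacent ) { p->poke(); }
}

int main()
{
    orange_node o;
    lime_node   l;
    poke_adjacent_pokables( { &o, &l } );
}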
The use of transparent base classes with RTTI leads to a maintenance nightmare. Indeed, most uses of RTTI lead to maintenance headaches. That doesn't mean that RTTI isn't useful (it's vital for making boost::any work, for example). But it is a very specialized tool for very specialized needs.
In that way, it is "harmful" in the same way as goto. It's a useful tool that shouldn't be done away with, but its use should be rare within your code.
So, if you can't use transparent base classes and dynamic casting, how do you avoid fat interfaces? How do you keep every function you might want to call on a type from bubbling up to the base class?
The answer depends on what the base class is for.
Transparent base classes like node_base are just using the wrong tool for the problem. Linked lists are best handled by templates. The node type and adjacency would be provided by a template type. If you want to put a polymorphic type in the list, you can. Just use BaseClass* as T in the template argument. Or your preferred smart pointer.
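A rough sketch of that split (list_node and BaseClass are illustrative names, not from the question): the template owns the linking, and any polymorphism lives entirely in the element type.

#include <memory>

// Adjacency and storage come from the template...
template <typename T>
struct list_node
{
    T value;
    list_node* next = nullptr;   // ...not from a polymorphic base class
};

struct BaseClass { virtual ~BaseClass() = default; };

int main()
{
    list_node<int> plain{ 42 };                      // no polymorphism involved
    list_node<std::unique_ptr<BaseClass>> poly{};    // polymorphic elements, by pointer
}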
But there are other scenarios. One is a type that does a lot of things, but has some optional parts. A particular instance might implement certain functions, while another wouldn't. However, the design of such types usually offers a proper answer.
The "entity" class is a perfect example of this. This class has long since plagued game developers. Conceptually, it has a gigantic interface, living at the intersection of nearly a dozen, entirely disparate systems. And different entities have different properties. Some entities don't have any visual representation, so their rendering functions do nothing. And this is all determined at runtime.
The modern solution for this is a component-style system. Entity is merely a container of a set of components, with some glue between them. Some components are optional; an entity that has no visual representation does not have the "graphics" component. An entity with no AI has no "controller" component. And so forth.
Entities in such a system are just pointers to components, with most of their interface being provided by accessing the components directly.
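A minimal sketch of that layout (the component names are illustrative, not any particular engine's API):

#include <memory>

struct graphics_component   { /* mesh, textures, ... */ };
struct controller_component { /* AI state, ... */ };

class Entity
{
    // Optional parts: a null pointer means "this entity doesn't do that".
    std::unique_ptr<graphics_component>   graphics_;
    std::unique_ptr<controller_component> controller_;

public:
    graphics_component*   graphics()   { return graphics_.get(); }
    controller_component* controller() { return controller_.get(); }
};

void render( Entity& e )
{
    if( auto* g = e.graphics() )
    {
        (void)g;  // draw using *g; entities without a graphics component are skipped
    }
}

int main()
{
    Entity invisible;      // has no graphics component
    render( invisible );   // harmlessly does nothing
}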
Developing such a component system requires recognizing, at the design stage, that certain functions are conceptually grouped together, such that all types that implement one will implement them all. This allows you to extract that functionality from the prospective base class and make it a separate component.
This also helps follow the Single Responsibility Principle. Such a componentized class only has the responsibility of being a holder of components.
From Matthew Walton:
I note lots of answers don't note the idea that your example suggests node_base is part of a library and users will make their own node types. Then they can't modify node_base to allow another solution, so maybe RTTI becomes their best option then.
OK, let's explore that.
For this to make sense, what you would have to have is a situation where some library L provides a container or other structured holder of data. The user gets to add data to this container, iterate over its contents, etc. However, the library doesn't really do anything with this data; it simply manages its existence.
But it doesn't even manage its existence so much as its destruction. The reason being that, if you're expected to use RTTI for such purposes, then you are creating classes that L is ignorant of. This means that your code allocates the object and hands it off to L for management.
Now, there are cases where something like this is a legitimate design. Event signaling/message passing, thread-safe work queues, etc. The general pattern here is this: someone is performing a service between two pieces of code that is appropriate for any type, but the service need not be aware of the specific types involved.
In C, this pattern is spelled void*, and its use requires a great deal of care to avoid being broken. In C++, this pattern is spelled std::experimental::any (soon to be spelled std::any).
The way this ought to work is that L provides a node_base class that takes an any that represents your actual data. When you receive the message, thread queue work item, or whatever you're doing, you then cast that any to its appropriate type, which both the sender and the receiver know.
So instead of deriving orange_node from node_data, you simply stick an orange inside of node_data's any member field. The end-user extracts it and uses any_cast to convert it to orange. If the cast fails, then it wasn't an orange.
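A minimal sketch of that flow using C++17's std::any, with node_data and orange as stand-ins for the library's holder and the user's type:

#include <any>
#include <iostream>
#include <string>

struct node_data
{
    std::any payload;   // the library stores this without knowing the concrete type
};

struct orange { std::string variety; };

void receive( node_data& node )
{
    // Sender and receiver agreed on the type; verify and extract it.
    if( orange* o = std::any_cast<orange>( &node.payload ) )
    {
        std::cout << "got an orange: " << o->variety << "\n";
    }
    // A failed cast yields nullptr: the payload wasn't an orange.
}

int main()
{
    node_data n;
    n.payload = orange{ "valencia" };
    receive( n );
}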
Now, if you're at all familiar with the implementation of any, you'll likely say, "hey wait a minute: any internally uses RTTI to make any_cast work." To which I answer, "... yes".
That's the point of an abstraction. Deep down in the details, someone is using RTTI. But at the level you ought to be operating at, direct RTTI is not something you should be doing.
You should be using types that provide you the functionality you want. After all, you don't really want RTTI. What you want is a data structure that can store a value of a given type, hide it from everyone except the desired destination, then be converted back into that type, with verification that the stored value actually is of that type.
That's called any. It uses RTTI, but using any is far superior to using RTTI directly, since it fits the desired semantics more correctly.
C++ is built on the idea of static type checking.
RTTI[1], that is, dynamic_cast and typeid, is dynamic type checking.
So, essentially you're asking why static type checking is preferable to dynamic type checking. And the simple answer is, whether static type checking is preferable to dynamic type checking, depends. On a lot. But C++ is one of the programming languages that are designed around the idea of static type checking. And this means that e.g. the development process, in particular testing, is typically adapted to static type checking, and then fits that best.
Re "I wouldn't know a clean way of doing this with templates or other methods": you can do this process-heterogeneous-nodes-of-a-graph task with static type checking and no casting whatsoever via the visitor pattern, e.g. like this:
#include <initializer_list>
#include <iostream>
#include <set>

namespace graph {
    using std::set;

    class Red_thing;
    class Yellow_thing;
    class Orange_thing;

    // Visitor interface: one overload per concrete node type,
    // each a do-nothing default.
    struct Callback
    {
        virtual void handle( Red_thing& ) {}
        virtual void handle( Yellow_thing& ) {}
        virtual void handle( Orange_thing& ) {}
    };

    class Node
    {
    private:
        set<Node*> connected_;

    public:
        virtual void call( Callback& cb ) = 0;

        void connect_to( Node* p_other )
        {
            connected_.insert( p_other );
        }

        void call_on_connected( Callback& cb )
        {
            for( auto const p : connected_ ) { p->call( cb ); }
        }

        virtual ~Node(){}
    };

    class Red_thing
        : public virtual Node
    {
    public:
        void call( Callback& cb ) override { cb.handle( *this ); }
        auto redness() -> int { return 255; }
    };

    class Yellow_thing
        : public virtual Node
    {
    public:
        void call( Callback& cb ) override { cb.handle( *this ); }
    };

    class Orange_thing
        : public Red_thing
        , public Yellow_thing
    {
    public:
        void call( Callback& cb ) override { cb.handle( *this ); }

        void poke() { std::cout << "Poked!\n"; }

        void poke_connected_orange_things()
        {
            // Only the Orange_thing overload is overridden, so the
            // other node types fall through to the do-nothing defaults.
            struct Poker: Callback
            {
                void handle( Orange_thing& obj ) override
                {
                    obj.poke();
                }
            } poker;
            call_on_connected( poker );
        }
    };
}  // namespace graph

auto main() -> int
{
    using namespace graph;
    Red_thing r;
    Yellow_thing y1, y2;
    Orange_thing o1, o2, o3;

    for( Node* p : std::initializer_list<Node*>{ &y1, &y2, &r, &o2, &o3 } )
    {
        o1.connect_to( p );
    }
    o1.poke_connected_orange_things();
}
This assumes that the set of node types is known.
When it isn't, the visitor pattern (there are many variations of it) can be expressed with a few centralized casts, or just a single one.
For a template-based approach, see the Boost Graph library. Sad to say, I am not familiar with it and haven't used it, so I'm not sure exactly what it does and how, or to what degree it uses static type checking instead of RTTI. But since Boost is generally template-based, with static type checking as the central idea, I think you'll find that its Graph sub-library is also based on static type checking.
[1] Run Time Type Information.
An interface describes what one needs to know in order to interact in a given situation in code. Once you extend the interface with "your entire type hierarchy", your interface "surface area" becomes huge, which makes reasoning about it harder.
As an example, your "poke adjacent oranges" means that I, as a third party, cannot emulate being an orange! You privately declared an orange type, then used RTTI to make your code behave specially when interacting with that type. If I want to "be orange", I must be within your private garden.
Now everyone who couples with "orangeness" couples with your entire orange type, and implicitly with your entire private garden, instead of with a defined interface.
While at first glance this looks like a great way to extend the limited interface without having to change all clients (adding am_I_orange), what tends to happen instead is that it ossifies the code base and prevents further extension. The special orangeness becomes inherent to the functioning of the system, and prevents you from creating a "tangerine" replacement for orange that is implemented differently and maybe removes a dependency or solves some other problem elegantly.
This does mean your interface has to be sufficient to solve your problem. From that perspective, why do you need to poke only oranges, and if so, why was orangeness unavailable in the interface? If you need some fuzzy set of tags that can be added ad hoc, you could add that to your type:
class node_base {
public:
    bool has_tag(tag_name);
    // ...
};
This provides a similar massive broadening of your interface from narrowly specified to broad tag-based. Except instead of doing it through RTTI and implementation details (aka, "how are you implemented? With the orange type? Ok you pass."), it does so with something easily emulated through a completely different implementation.
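A minimal sketch of such a tag set, assuming plain string tags (any lookup structure would do):

#include <set>
#include <string>

class node_base
{
    std::set<std::string> tags_;

public:
    virtual ~node_base() = default;

    void add_tag( std::string tag ) { tags_.insert( std::move( tag ) ); }

    bool has_tag( const std::string& tag ) const
    {
        return tags_.count( tag ) != 0;
    }
};

// Anything that opted into the "orange" tag passes, no matter how
// it is implemented, so a tangerine can play along too.
bool pokable_as_orange( const node_base& n )
{
    return n.has_tag( "orange" );
}

int main()
{
    node_base tangerine;
    tangerine.add_tag( "orange" );
    return pokable_as_orange( tangerine ) ? 0 : 1;
}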
This can even be extended to dynamic methods, if you need that. "Do you support being Foo'd with arguments Baz, Tom and Alice? Ok, Fooing you." In a big sense, this is less intrusive than a dynamic cast to get at the fact the other object is a type you know.
Now tangerine objects can have the orange tag and play along, while being implementation-decoupled.
It can still lead to a huge mess, but it is at least a mess of messages and data, not implementation hierarchies.
Abstraction is a game of decoupling and hiding irrelevancies. It makes code easier to reason about locally. RTTI is boring a hole straight through the abstraction into implementation details. This can make solving a problem easier, but it has the cost of locking you into one specific implementation really easily.
If you call a function, as a rule you don't really care what precise steps it will take, only that some higher-level goal will be achieved within certain constraints (and how the function makes that happen is really its own problem).
When you use RTTI to make a preselection of special objects that can do a certain job, while others in the same set cannot, you are breaking that comfortable view of the world. All of a sudden the caller is supposed to know who can do what, instead of simply telling his minions to get on with it. Some people are bothered by this, and I suspect this is a large part of the reason why RTTI is considered a little dirty.
Is there a performance issue? Maybe, but I've never experienced it, and it might be wisdom from twenty years ago, or from people who honestly believe that using three assembly instructions instead of two is unacceptable bloat.
So how to deal with it... Depending on your situation it might make sense to have any node-specific properties bundled into separate objects (i.e. the entire 'orange' API could be a separate object). The root object could then have a virtual function to return the 'orange' API, returning nullptr by default for non-orange objects.
While this might be overkill depending on your situation, it would allow you to query on root level whether a specific node supports a specific API, and if it does, execute functions specific to that API.
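A sketch of that design, with hypothetical orange_api and as_orange names:

class orange_api
{
public:
    virtual ~orange_api() = default;
    virtual void poke() = 0;
};

class node_base
{
public:
    virtual ~node_base() = default;

    // Non-orange nodes keep the default and report "no orange API here".
    virtual orange_api* as_orange() { return nullptr; }
};

class orange_node : public node_base, public orange_api
{
public:
    orange_api* as_orange() override { return this; }
    void poke() override { /* orange-specific response */ }
};

void poke_if_orange( node_base& n )
{
    if( orange_api* o = n.as_orange() ) { o->poke(); }   // no dynamic_cast needed
}

int main()
{
    orange_node o;
    node_base   plain;
    poke_if_orange( o );       // pokes
    poke_if_orange( plain );   // does nothing
}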
Most of the moral suasion against this or that feature typically originates from the observation that there are a number of misconceived uses of that feature.
Where moralists fail is that they presume ALL the usages are misconceived, while in fact features exist for a reason.
They have what I used to call the "plumber complex": they think all taps are malfunctioning because all the taps they are called to repair are. The reality is that most taps work well: you simply don't call a plumber for them!
A crazy thing that can happen is when, to avoid using a given feature, programmers write a lot of boilerplate code actually privately re-implementing exactly that feature. (Have you ever met classes that don't use RTTI nor virtual calls, but have a value to track which actual derived type are they? That's no more than RTTI reinvention in disguise.)
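A sketch of what that reinvention typically looks like (names are hypothetical): the hand-rolled enum tag plus static_cast is just dynamic_cast without the compiler's help.

// Hand-rolled type tag: RTTI re-invented in disguise.
struct shape
{
    enum class kind { circle, square };
    kind tag;
    explicit shape( kind k ) : tag( k ) {}
};

struct circle : shape
{
    circle() : shape( kind::circle ) {}
    double radius = 1.0;
};

double radius_or_zero( const shape& s )
{
    // Check the tag by hand, then downcast unchecked.
    if( s.tag == shape::kind::circle )
        return static_cast<const circle&>( s ).radius;
    return 0.0;
}

int main()
{
    circle c;
    return radius_or_zero( c ) > 0 ? 0 : 1;
}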
There is a general way to think about polymorphism: IF(selection) CALL(something) WITH(parameters). (Sorry, but programming, when abstraction is disregarded, is all about that.)
The use of design-time (concepts), compile-time (template-deduction based), run-time (inheritance and virtual-function based), or data-driven (RTTI and switching) polymorphism depends on how much of the decision is known at each stage of production, and how variable it is in each context.
The idea is that:
the more you can anticipate, the better the chance of catching errors and avoiding bugs that affect the end user.
If everything is constant (including the data), you can do everything with template metaprogramming. Once compilation has folded in the actualized constants, the entire program boils down to just a return statement that spits out the result.
If there are a number of cases that are all known at compile time, but you don't know about the actual data they have to act on, then compile-time polymorphism (mainly CRTP or similar) can be a solution.
If the selection of the cases depends on the data (not compile-time known values) and the switching is mono-dimensional (what to do can be reduced to one value only) then virtual function based dispatch (or in general "function pointer tables") is needed.
If the switching is multidimensional, since no native multiple runtime dispatch exists in C++, then you have to either implement double dispatch yourself with chained virtual calls (see the sketch after this list) or fall back to RTTI-based switching.
If not just the switching, but even the actions are not known at compile time, then scripting and parsing are required: the data themselves must describe the action to be taken on them.
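As a minimal sketch of the double-dispatch option (rock and paper are illustrative names of my own): each virtual call resolves one dimension of the selection, so two calls suffice for a pair of runtime types.

#include <iostream>

class rock; class paper;

class thing
{
public:
    // First dispatch: resolve our own runtime type...
    virtual void collide_with( thing& other ) = 0;
    // ...second dispatch: the other side now knows both types.
    virtual void collide_with( rock& )  = 0;
    virtual void collide_with( paper& ) = 0;
    virtual ~thing() = default;
};

class rock : public thing
{
public:
    void collide_with( thing& other ) override { other.collide_with( *this ); }
    void collide_with( rock& )  override { std::cout << "rock/rock\n"; }
    void collide_with( paper& ) override { std::cout << "rock/paper\n"; }
};

class paper : public thing
{
public:
    void collide_with( thing& other ) override { other.collide_with( *this ); }
    void collide_with( rock& )  override { std::cout << "paper/rock\n"; }
    void collide_with( paper& ) override { std::cout << "paper/paper\n"; }
};

int main()
{
    rock r; paper p;
    thing& a = r; thing& b = p;
    a.collide_with( b );   // prints "paper/rock": both runtime types resolved
}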
Now, since each of the cases I enumerated can be seen as a particular case of what follows it, you can solve every problem by abusing the bottom-most solution even for problems that could be handled by the top-most.
That's what moralization actually pushes to avoid. But that does not mean that problems living in the bottom-most domains don't exist!
Bashing RTTI just to bash it is like bashing goto just to bash it. Things for parrots, not programmers.
Of course there is a scenario where polymorphism can't help: names. typeid lets you access the name of the type, although the way this name is encoded is implementation-defined. But usually this is not a problem, since you can compare two typeids:
if ( std::string( typeid(5).name() ) == "int" )
// may be false: the spelling of name() is implementation-defined

if ( typeid(5) == typeid(int) )
// always true
The same holds for hashes.
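For example, std::type_info's hash_code (available since C++11) compares the same way:

#include <cassert>
#include <typeinfo>

int main()
{
    // Identical types yield identical hashes; the value itself is unspecified.
    assert( typeid(5).hash_code() == typeid(int).hash_code() );
}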
[...] RTTI is "considered harmful"
"Harmful" is definitely an overstatement: RTTI has some drawbacks, but it does have advantages too.
You don't truly have to use RTTI. RTTI is a tool to solve OOP problems: should you use another paradigm, these would likely disappear. C doesn't have RTTI, but still works. C++ instead fully supports OOP and gives you multiple tools to overcome some issues that may require runtime information: one of them is indeed RTTI, which, though, comes with a price. If you can't afford it (something you'd better determine only after solid performance analysis), there is still the old-school void*: it's free. Costless. But you get no type safety. So it's all about trade-offs.
Some compilers don't use it / RTTI is not always enabled
I really don't buy this argument. It's like saying I shouldn't use C++14 features because there are compilers out there that don't support them. And yet, no one would discourage me from using C++14 features.
If you write (possibly strictly) conforming C++ code, you can expect the same behavior regardless of the implementation. Standard-compliant implementations shall support standard C++ features.
But do consider that in some of the environments C++ defines («freestanding» ones), RTTI need not be provided, and neither do exceptions, virtual functions, and so on. RTTI needs an underlying layer to work correctly, one that deals with low-level details such as the ABI and the actual type information.
I agree with Yakk regarding RTTI in this case. Yes, it could be used; but is it logically correct? The fact that the language allows you to bypass this check does not mean it should be done.