I understand the mechanics of static polymorphism using the Curiously Recurring Template Pattern. I just do not understand what it is good for.
The declared motivation is usually performance: avoiding the overhead of virtual dispatch.
The link you provide mentions Boost iterators as an example of static polymorphism. STL iterators also exhibit this pattern. Let's take a look at an example and consider why the authors of those types decided this pattern was appropriate:
#include <vector>
#include <iostream>
using namespace std;

void print_ints( vector<int> const& some_ints )
{
    for( vector<int>::const_iterator i = some_ints.begin(), end = some_ints.end(); i != end; ++i )
    {
        cout << *i;
    }
}
Now, how would we implement int const& vector<int>::const_iterator::operator*() const?
Can we use runtime polymorphism for this? Well, no. What would the signature of our virtual function be? void const* operator*() const? That's useless! The type has been erased (degraded from int to void*). Instead, the curiously recurring template pattern steps in to help us generate the iterator type. Here is a rough approximation of the iterator class we would need to implement the above:
template<typename T>
class const_iterator_base
{
public:
    const_iterator_base() {}
    // Note: in real code the element type would come from a traits class or an
    // extra template parameter, since T is still incomplete here; this is only
    // a rough sketch of the idea.
    typename T::contained_type const& operator*() const { return *Ptr(); }
    typename T::contained_type const* operator->() const { return Ptr(); }
    // increment, decrement, etc. can be implemented once here and forwarded to T
    // ...
private:
    typename T::contained_type const* Ptr() const
    {
        return static_cast<T const*>(this)->Ptr();
    }
};
Traditional dynamic polymorphism could not provide the above implementation!
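To make the pattern concrete, here is a hedged, self-contained sketch of how a derived iterator might plug into such a base. The names (const_iterator_facade, const_int_iterator, Ptr, Advance) are invented for illustration, and the element type is passed to the base explicitly because the derived class is still incomplete when the base is instantiated (roughly what Boost's iterator_facade does):

#include <iostream>

// CRTP base: the derived class supplies Ptr() and Advance(); the base
// generates the operators in terms of the *exact* derived type.
template<typename Derived, typename Value>
class const_iterator_facade
{
public:
    Value const& operator*() const { return *self().Ptr(); }
    Value const* operator->() const { return self().Ptr(); }
    Derived& operator++() { self().Advance(); return self(); }
private:
    Derived& self() { return static_cast<Derived&>(*this); }
    Derived const& self() const { return static_cast<Derived const&>(*this); }
};

// A concrete iterator over a plain array of int.
class const_int_iterator
    : public const_iterator_facade<const_int_iterator, int>
{
public:
    explicit const_int_iterator(int const* p) : p_(p) {}
    int const* Ptr() const { return p_; }
    void Advance() { ++p_; }
    bool operator!=(const_int_iterator const& other) const { return p_ != other.p_; }
private:
    int const* p_;
};

int main()
{
    int values[] = { 1, 2, 3 };
    for( const_int_iterator it(values), end(values + 3); it != end; ++it )
    {
        std::cout << *it;   // int const&, resolved entirely at compile time
    }
}

The effect is the same as in the sketch above: operator* returns int const&, the exact type is preserved, and there is nothing virtual to dispatch through.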
A related and important term is parametric polymorphism. This allows you to implement similar APIs in, say, Python to those you can build with the curiously recurring template pattern in C++. Hope this is helpful!
I think it's worth taking a stab at the source of all this complexity, and why languages like Java and C# mostly try to avoid it: type erasure! In C++ there is no useful all-containing Object type. Instead we have void*, and once you have void* you truly have nothing! If you have an interface that decays to void*, the only way to recover is by making dangerous assumptions or keeping extra type information around.
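As a small illustration (a hypothetical example, not from the original answer), consider the classic C-style comparator, where the element type has already decayed to void const*:

#include <cstdlib>
#include <iostream>

// The comparator must simply assume what the void pointers really point to.
int compare_ints(void const* a, void const* b)
{
    int lhs = *static_cast<int const*>(a);   // dangerous assumption: is it really an int?
    int rhs = *static_cast<int const*>(b);
    return (lhs > rhs) - (lhs < rhs);
}

int main()
{
    int values[] = { 3, 1, 2 };
    std::qsort(values, 3, sizeof(int), compare_ints);   // nothing checks the cast
    std::cout << values[0] << values[1] << values[2];    // prints 123
}

A template keeps int in the signature instead, so the compiler can check the types for you.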
What am I missing about static polymorphism? Is it all about good C++ style?
Static polymorphism and runtime polymorphism are different things and accomplish different goals. They are both technically polymorphism, in that they decide which piece of code to execute based on the type of something. Runtime polymorphism defers binding the type of something (and thus the code that runs) until runtime, while static polymorphism is completely resolved at compile time.
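To make the distinction concrete, here is a minimal sketch using hypothetical Shape/Square types (not from the original answer):

#include <iostream>

// Runtime polymorphism: the call through Shape const& is bound at runtime
// via the vtable, so one function body handles every derived type.
struct Shape
{
    virtual double area() const = 0;
    virtual ~Shape() {}
};

struct Square : Shape
{
    double side;
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
};

void print_area_dynamic(Shape const& s) { std::cout << s.area() << '\n'; }

// Static polymorphism: a separate function is instantiated per type,
// and the call to area() is resolved entirely at compile time.
template<typename T>
void print_area_static(T const& s) { std::cout << s.area() << '\n'; }

int main()
{
    Square sq(2.0);
    print_area_dynamic(sq);   // virtual dispatch; also works through a Shape*
    print_area_static(sq);    // instantiated for Square; can be fully inlined
}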
This results in pros and cons for each. For instance, static polymorphism can check assumptions at compile time, or select among options that would not compile otherwise. It also provides tons of information to the compiler and optimizer, which can inline calls knowing exactly what their targets are. But static polymorphism requires that implementations be available for the compiler to inspect in each translation unit, can result in binary code size bloat (templates are fancy-pants copy-paste), and doesn't allow these decisions to be made at runtime.
For instance, consider something like std::advance:
template<typename Iterator>
void advance(Iterator& it, ptrdiff_t offset)
{
    // If it is a random access iterator:
    //     it += offset;
    // If it is a bidirectional iterator:
    //     for (; offset < 0; ++offset) --it;
    //     for (; offset > 0; --offset) ++it;
    // Otherwise:
    //     for (; offset > 0; --offset) ++it;
}
There's no way to get this to compile using runtime polymorphism; the decision has to be made at compile time. (Typically you would do this with tag dispatch, e.g.:)
#include <iterator>
#include <cstddef>

template<typename Iterator>
void advance_impl(Iterator& it, std::ptrdiff_t offset, std::random_access_iterator_tag)
{
    // Won't compile for bidirectional iterators!
    it += offset;
}

template<typename Iterator>
void advance_impl(Iterator& it, std::ptrdiff_t offset, std::bidirectional_iterator_tag)
{
    // Works for random access, but slow
    for (; offset < 0; ++offset) --it; // Won't compile for forward iterators
    for (; offset > 0; --offset) ++it;
}

template<typename Iterator>
void advance_impl(Iterator& it, std::ptrdiff_t offset, std::forward_iterator_tag)
{
    // Doesn't allow negative offsets! But works for forward iterators...
    for (; offset > 0; --offset) ++it;
}

template<typename Iterator>
void advance(Iterator& it, std::ptrdiff_t offset)
{
    // Use overloading to select the right one!
    advance_impl(it, offset, typename std::iterator_traits<Iterator>::iterator_category());
}
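As a hypothetical usage sketch (assuming the overloads above are in scope), the same-looking call resolves to a different implementation for each container, entirely at compile time:

#include <vector>
#include <list>
#include <iterator>
#include <iostream>

int main()
{
    std::vector<int> v(10, 0);
    std::list<int> l(10, 0);

    std::vector<int>::iterator vi = v.begin();
    std::list<int>::iterator li = l.begin();

    // Qualified with :: so ADL doesn't also pull in std::advance.
    ::advance(vi, 3);   // picks the random access overload: a single +=
    ::advance(li, 3);   // picks the bidirectional overload: three ++ steps

    std::cout << (vi - v.begin()) << '\n';               // 3
    std::cout << std::distance(l.begin(), li) << '\n';   // 3
}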
Similarly, there are cases where you really don't know the type at compile time. Consider:
#include <ostream>

void DoAndLog(std::ostream& out, int parameter)
{
    out << "Logging!";
}
Here, DoAndLog doesn't know anything about the actual ostream implementation it gets -- and it may be impossible to statically determine what type will be passed in. Sure, this can be turned into a template:
template<typename StreamT>
void DoAndLog(StreamT& out, int parameter)
{
    out << "Logging!";
}
But this forces DoAndLog to be implemented in a header file, which may be impractical. It also requires that all possible implementations of StreamT be visible at compile time, which may not be true -- runtime polymorphism can work (although this is not recommended) across DLL or SO boundaries.
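For contrast, here is a hedged sketch of how the runtime-polymorphic version keeps its implementation out of the header (the file names logger.h / logger.cpp are hypothetical):

// logger.h -- callers only need this declaration
#include <iosfwd>
void DoAndLog(std::ostream& out, int parameter);

// logger.cpp -- the definition lives in a single translation unit,
// and could even sit behind a DLL/SO boundary
#include "logger.h"
#include <ostream>
void DoAndLog(std::ostream& out, int parameter)
{
    out << "Logging!";
}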
When should it be used? What are some guidelines?
This is like someone coming to you and saying "when I'm writing a sentence, should I use compound sentences or simple sentences"? Or perhaps a painter saying "should I always use red paint or blue paint?" There is no right answer, and there is no set of rules that can be blindly followed here. You have to look at the pros and cons of each approach, and decide which best maps to your particular problem domain.
As for the CRTP, most use cases for it are to allow the base class to provide something in terms of the derived class; e.g. Boost's iterator_facade. The base class needs to have things like DerivedClass& operator++() { /* Increment and return *this */ } inside -- member functions whose signatures are specified in terms of the derived class.
It can be used for polymorphic purposes, but I haven't seen too many of those.
While there may be cases where static polymorphism is useful (the other answers have listed a few), I would generally see it as a bad thing. Why? Because you cannot actually use a pointer to the base class anymore; you always have to provide a template argument naming the exact derived type. And in that case, you could just as well use the derived type directly. And, to put it bluntly, static polymorphism is not what object orientation is about.
The runtime difference between static and dynamic polymorphism is exactly two pointer dereferences (provided the compiler really inlines the dispatch method in the base class; if it doesn't for some reason, static polymorphism is actually slower). That's not really expensive, especially since the second lookup should virtually always hit the cache. All in all, those lookups are usually cheaper than the function call itself, and are certainly worth it for the real flexibility provided by dynamic polymorphism.