I have a piece of C++11 code similar to the one below:

switch (var) {
case 1: dosomething(std::get<1>(tuple)); break;
case 2: dosomething(std::get<2>(tuple)); break;
// ...
}
I know this thread is quite old, but I stumbled across it while trying to replace virtual dispatch with static dispatch in my code base. In contrast to all solutions presented so far, this one uses a binary search instead of a linear search, so to my understanding the lookup should be O(log n) instead of O(n). Other than that, it is just a modified version of the solution presented by Oktalist.
#include <cassert>
#include <cstddef>
#include <tuple>
template <std::size_t L, std::size_t U>
struct visit_impl
{
    template <typename T, typename F>
    static void visit(T& tup, std::size_t idx, F fun)
    {
        static constexpr std::size_t MEDIAN = (U - L) / 2 + L;
        if (idx > MEDIAN)
            visit_impl<MEDIAN, U>::visit(tup, idx, fun);
        else if (idx < MEDIAN)
            visit_impl<L, MEDIAN>::visit(tup, idx, fun);
        else
            fun(std::get<MEDIAN>(tup));
    }
};

template <typename F, typename... Ts>
void visit_at(const std::tuple<Ts...>& tup, std::size_t idx, F fun)
{
    assert(idx < sizeof...(Ts));
    visit_impl<0, sizeof...(Ts)>::visit(tup, idx, fun);
}

template <typename F, typename... Ts>
void visit_at(std::tuple<Ts...>& tup, std::size_t idx, F fun)
{
    assert(idx < sizeof...(Ts));
    visit_impl<0, sizeof...(Ts)>::visit(tup, idx, fun);
}
/* example code */
#include <utility> // std::make_index_sequence (C++14)

/* dummy template to generate different callbacks */
template <int N>
struct Callback
{
    int Call() const
    {
        return N;
    }
};

template <typename T>
struct CallbackTupleImpl;

template <std::size_t... Indx>
struct CallbackTupleImpl<std::index_sequence<Indx...>>
{
    using type = std::tuple<Callback<Indx>...>;
};

template <std::size_t N>
using CallbackTuple = typename CallbackTupleImpl<std::make_index_sequence<N>>::type;

int main()
{
    CallbackTuple<100> myTuple;
    int value{};
    visit_at(myTuple, 42, [&value](auto& pc) { value = pc.Call(); });
    assert(value == 42);
}
With this solution, the number of calls to visit_impl for this example (index 42 out of 100 elements) is 7. With the linear search approach it would be 58 instead.
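For comparison, the linear approach that the call counts above refer to can be sketched roughly as follows (my own paraphrase of an Oktalist-style recursion, not his exact code): it recurses from the highest index downward until it hits the requested one, so for index 42 of 100 it enters the recursion 58 times.

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>

// Linear-search variant: recurse from index N-1 downward
// until the requested index is reached.
template <std::size_t N>
struct linear_visit_impl
{
    template <typename T, typename F>
    static void visit(T& tup, std::size_t idx, F fun)
    {
        if (idx == N - 1)
            fun(std::get<N - 1>(tup));
        else
            linear_visit_impl<N - 1>::visit(tup, idx, fun);
    }
};

// Base case: reached only if idx was out of range.
template <>
struct linear_visit_impl<0>
{
    template <typename T, typename F>
    static void visit(T&, std::size_t, F) { assert(false); }
};

template <typename F, typename... Ts>
void linear_visit_at(std::tuple<Ts...>& tup, std::size_t idx, F fun)
{
    assert(idx < sizeof...(Ts));
    linear_visit_impl<sizeof...(Ts)>::visit(tup, idx, fun);
}
```

Both variants instantiate the same number of templates at compile time; the difference is only in the number of runtime comparisons.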
Another interesting solution presented here even manages to provide O(1) access, but at the cost of extra storage, since a function table of size O(n) is generated.
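For illustration, one possible shape of such an O(1) dispatch (my own sketch, not necessarily how the linked answer implements it) is to expand an index sequence into a static array of function pointers, one per tuple element, and index into that array at runtime:

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>
#include <utility>

// Build a static table with one function pointer per tuple element,
// each forwarding to std::get<I>, then dispatch with a single array lookup.
template <typename F, typename Tuple, std::size_t... Is>
void table_visit_impl(Tuple& tup, std::size_t idx, F fun,
                      std::index_sequence<Is...>)
{
    using Fn = void (*)(Tuple&, F&);
    // One captureless lambda per index, converted to a plain function pointer.
    static const Fn table[] = {
        +[](Tuple& t, F& f) { f(std::get<Is>(t)); }...
    };
    table[idx](tup, fun);
}

template <typename F, typename... Ts>
void table_visit_at(std::tuple<Ts...>& tup, std::size_t idx, F fun)
{
    assert(idx < sizeof...(Ts));
    table_visit_impl(tup, idx, fun, std::index_sequence_for<Ts...>{});
}
```

This trades the O(n) static table for a constant-time lookup, which is exactly the storage/speed trade-off described above.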