Assume I have a set of unique_ptr:
std::unordered_set<std::unique_ptr<MyClass>> my_set;
I'm not sure what's the safe way to look up an element in the set when all I have is a raw pointer to it.
You can also use a deleter that optionally doesn't do anything.
template<class T>
struct maybe_deleter{
    bool _delete;
    explicit maybe_deleter(bool doit = true) : _delete(doit){}
    void operator()(T* p) const{
        if(_delete) delete p;
    }
};

template<class T>
using set_unique_ptr = std::unique_ptr<T, maybe_deleter<T>>;

// Wraps a raw pointer in a non-owning unique_ptr, suitable as a find() key.
template<class T>
set_unique_ptr<T> make_find_ptr(T* raw){
    return set_unique_ptr<T>(raw, maybe_deleter<T>(false));
}

// ...

int* raw = new int(42);
std::unordered_set<set_unique_ptr<int>> myset;
myset.insert(set_unique_ptr<int>(raw));       // owning entry

auto it = myset.find(make_find_ptr(raw));     // non-owning key; nothing gets deleted
Note that the ability to do heterogeneous lookups on standard containers is the subject of some proposals; http://cplusplus.github.io/LWG/lwg-proposal-status.html lists them, and at least one of them looks like it would cover your use case.
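For illustration, here is roughly what such a heterogeneous lookup could look like once the library support exists (C++20 adds it for the unordered containers, provided both the hash and the equality functor are transparent; the names PtrHash and PtrEqual below are mine, not from your code):

#include <cstddef>
#include <memory>
#include <unordered_set>

struct MyClass {};

// Transparent hash: hashes a unique_ptr and a raw pointer the same way.
struct PtrHash {
    using is_transparent = void;
    std::size_t operator()(std::unique_ptr<MyClass> const& p) const
    { return std::hash<MyClass const*>()(p.get()); }
    std::size_t operator()(MyClass const* p) const
    { return std::hash<MyClass const*>()(p); }
};

// Transparent equality: compares the stored raw pointers.
struct PtrEqual {
    using is_transparent = void;
    static MyClass const* raw(std::unique_ptr<MyClass> const& p) { return p.get(); }
    static MyClass const* raw(MyClass const* p) { return p; }
    template <class A, class B>
    bool operator()(A const& a, B const& b) const { return raw(a) == raw(b); }
};

int main()
{
    std::unordered_set<std::unique_ptr<MyClass>, PtrHash, PtrEqual> my_set;
    auto* p = new MyClass;
    my_set.insert(std::unique_ptr<MyClass>(p));
    return my_set.find(p) != my_set.end() ? 0 : 1;   // no temporary unique_ptr needed
}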
For now, here is an IMO not very pretty but working alternative workaround (O(n)):
#include <iterator>
#include <iostream>
#include <algorithm>
#include <unordered_set>
#include <memory>
#include <cassert>

struct MyClass {};

template <typename T>
struct RawEqualTo
{
    RawEqualTo(T const* raw) : raw(raw) {}

    bool operator()(T const* p) const
    { return raw == p; }

    bool operator()(std::unique_ptr<T> const& up) const
    { return raw == up.get(); }

private:
    T const* raw;
};

using namespace std;

int main()
{
    std::unordered_set<std::unique_ptr<MyClass>> my_set;
    my_set.insert(std::unique_ptr<MyClass>(new MyClass));
    my_set.insert(std::unique_ptr<MyClass>(new MyClass));

    auto raw = my_set.begin()->get();

    bool found = end(my_set) != std::find_if(begin(my_set), end(my_set),
                                             RawEqualTo<MyClass>(raw));
    assert(found);

    raw = new MyClass;
    found = end(my_set) != std::find_if(begin(my_set), end(my_set),
                                        RawEqualTo<MyClass>(raw));
    assert(!found);
    delete raw;
}
Warning: it's also very inefficient, of course.
You can use a std::map<MyClass*, std::unique_ptr<MyClass>> instead of a set. Then you can add elements like this:
std::unique_ptr<MyClass> instance(new MyClass);
map.emplace(instance.get(), std::move(instance));
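Looking the object up again (and removing it) then goes through the raw pointer key. A minimal sketch of how that might look, with illustrative variable names:

#include <map>
#include <memory>

struct MyClass {};

int main()
{
    std::map<MyClass*, std::unique_ptr<MyClass>> map;

    std::unique_ptr<MyClass> instance(new MyClass);
    MyClass* raw = instance.get();
    map.emplace(raw, std::move(instance));

    auto it = map.find(raw);   // O(log n) lookup by raw pointer
    if (it != map.end())
        map.erase(it);         // also destroys the owned MyClass
}

If constant-time lookup matters, an std::unordered_map<MyClass*, std::unique_ptr<MyClass>> can be used the same way.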
If the goal is constant time for the lookup, I don't think that there is a solution: std::unordered_set<std::unique_ptr<MyClass>>::find requires an std::unique_ptr<MyClass> as argument. You will have to either change the container, or change the contained type.
One possibility might be to replace std::unique_ptr with std::shared_ptr, and change the rest of the code so that all MyClass objects are put into a shared_ptr as soon as they are created, and are only manipulated through shared pointers. Logically, this is probably more coherent anyway: unique_ptr pretty much implies (by its name, as well as its semantics) that there aren't other pointers to the object. On the other hand, you may not be able to use shared_ptr if, e.g., MyClass has pointers to other MyClass objects, which may build a cycle.
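To make that concrete, here is a minimal sketch of the shared_ptr variant; the constant-time lookup works because whoever wants to find the element already holds a shared_ptr to it, and the default hash and equality for shared_ptr are based on the stored pointer:

#include <memory>
#include <unordered_set>

struct MyClass {};

int main()
{
    std::unordered_set<std::shared_ptr<MyClass>> my_set;

    auto obj = std::make_shared<MyClass>();   // every MyClass starts life in a shared_ptr
    my_set.insert(obj);

    // Any copy of the shared_ptr serves as the lookup key.
    auto it = my_set.find(obj);
    return it != my_set.end() ? 0 : 1;
}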
Otherwise, if you can accept O(lg n) access, rather than constant access (the difference generally doesn't become noticeable until the tables are fairly large), you can use a sorted std::vector<std::unique_ptr<MyClass>>, using std::lower_bound to keep it sorted. Unlike std::unordered_set<>::find, std::lower_bound does not require the target value to have the same type as the value_type of the sequence; all you have to do is ensure that they are comparable, say by providing a Compare object along the lines of:
class MyClassPtrCompare
{
    std::less<MyClass const*> cmp;
public:
    bool operator()( std::unique_ptr<MyClass> const& lhs,
                     std::unique_ptr<MyClass> const& rhs ) const
    {
        return cmp( lhs.get(), rhs.get() );
    }

    bool operator()( MyClass const* lhs,
                     std::unique_ptr<MyClass> const& rhs ) const
    {
        return cmp( lhs, rhs.get() );
    }

    bool operator()( std::unique_ptr<MyClass> const& lhs,
                     MyClass const* rhs ) const
    {
        return cmp( lhs.get(), rhs );
    }

    bool operator()( MyClass const* lhs,
                     MyClass const* rhs ) const
    {
        return cmp( lhs, rhs );
    }
};
Insertion may involve a number of moves, but moving a std::unique_ptr should be fairly cheap, and the improved locality of this solution might offset the additional runtime costs it otherwise imposes.
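A usage sketch under those assumptions; the helper names insert_sorted and contains are illustrative, not part of the answer above:

#include <algorithm>
#include <memory>
#include <vector>

// assumes MyClass and MyClassPtrCompare from above

void insert_sorted( std::vector<std::unique_ptr<MyClass>>& v,
                    std::unique_ptr<MyClass> p )
{
    auto pos = std::lower_bound( v.begin(), v.end(), p, MyClassPtrCompare() );
    v.insert( pos, std::move( p ) );
}

bool contains( std::vector<std::unique_ptr<MyClass>> const& v,
               MyClass const* raw )
{
    auto pos = std::lower_bound( v.begin(), v.end(), raw, MyClassPtrCompare() );
    return pos != v.end() && pos->get() == raw;
}

The extra pos->get() == raw check is needed because std::lower_bound only returns the first position that is not ordered before raw, which may hold a different pointer.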