I have a collection of std::set. I want to find the intersection of all the sets in this collection, in the fastest manner. The number of sets in the collection is typically very small (~5-10), and the number of elements in each set is usually less than 1000, but can occasionally go up to around 10000. I need to do these intersections tens of thousands of times, as fast as possible. I tried to benchmark a few methods as follows:
- In-place intersection in a std::set object which initially copies the first set. Then, for each subsequent set, it iterates over all elements of itself and of the ith set of the collection, and removes items from itself as needed.
- Using std::set_intersection into a temporary std::set, swapping the contents into a current set, then again finding the intersection of the current set with the next set into the temporary set, and so on.
- Manually iterating over all the elements of all sets as in 1), but using a vector as the destination container instead of std::set.
- Same as in 3), but using a std::list instead of a vector, suspecting a list will provide faster deletions from the middle.
- Using hash sets (std::unordered_set) and checking for all items in all sets.
As it turned out, using a vector is marginally faster when the number of elements in each set is small, and a list is marginally faster for larger sets. In-place intersection using a set is substantially slower than both, followed by set_intersection and hash sets. Is there a faster algorithm/data structure/trick to achieve this? I can post code snippets if required. Thanks!
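For concreteness, here is roughly what method 3 looks like (a simplified sketch, not my exact benchmark code; intersect_in_vector is just an illustrative name):

```cpp
#include <algorithm>
#include <cstddef>
#include <set>
#include <vector>

// Rough sketch of method 3: keep the running intersection in a vector
// and drop every element that is missing from the next set. Here the
// erase-remove idiom batches the deletions; the naive version erases
// from the middle one element at a time, which is what motivated
// trying std::list in method 4.
std::vector<int> intersect_in_vector(const std::vector<std::set<int>>& sets) {
    if (sets.empty()) { return {}; }
    std::vector<int> cur(sets.front().begin(), sets.front().end());
    for (std::size_t i = 1; i < sets.size(); ++i) {
        const std::set<int>& s = sets[i];
        cur.erase(std::remove_if(cur.begin(), cur.end(),
                                 [&s](int v) { return s.find(v) == s.end(); }),
                  cur.end());
    }
    return cur;
}
```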
You might want to try a generalization of std::set_intersection(): the algorithm is to keep one iterator per set:
- If any iterator has reached the end() of its corresponding set, you are done. Thus, it can be assumed that all iterators are valid.
- Take the first iterator's value as the next candidate value x.
- Move through the list of iterators and std::find_if() the first element at least as big as x.
- If that value is bigger than x, make it the new candidate value and search again through the sequence of iterators.
- If all iterators are at value x, you have found an element of the intersection: record it, increment all iterators, and start over.
Night is a good adviser and I think I may have an idea ;)
- Memory is much slower than the CPU these days. If all the data fits in the L1 cache, no big deal, but it easily spills over to L2 or L3: 5 sets of 1000 elements is already 5000 elements, meaning 5000 nodes, and a set node contains at least 3 pointers plus the object (i.e., at least 16 bytes on a 32-bit machine and 32 bytes on a 64-bit machine) => that is at least 80 KB of memory, and recent CPUs only have 32 KB of L1D, so we are already spilling into L2.
- The previous fact is compounded by the problem that set nodes are probably scattered around memory, not tightly packed together, meaning that part of each cache line is filled with completely unrelated stuff. This could be alleviated by providing an allocator that keeps nodes close to each other.
- And this is further compounded by the fact that CPUs are much better at sequential reads (where they can prefetch memory before you need it, so you don't wait for it) than at random reads (and a tree structure unfortunately leads to quite random reads).
This is why, where speed matters, a vector (or perhaps a deque) is such a great structure: it plays very well with memory. As such, I would definitely recommend using vector as our intermediary structure, although care need be taken to only ever insert/delete at an extremity to avoid relocation.
So I thought about a rather simple approach:
#include <cassert>
#include <algorithm>
#include <iterator>
#include <set>
#include <vector>

// Do not call this method if you have a single set...
// And the pointers better not be null either!
std::vector<int> intersect(std::vector< std::set<int> const* > const& sets) {
    for (auto s: sets) { assert(s && "I said no null pointer"); }

    std::vector<int> result; // only return this one, for NRVO to kick in

    // 0. Check obvious cases
    if (sets.empty()) { return result; }

    if (sets.size() == 1) {
        result.assign(sets.front()->begin(), sets.front()->end());
        return result;
    }

    // 1. Merge the first two sets into the result
    std::set_intersection(sets[0]->begin(), sets[0]->end(),
                          sets[1]->begin(), sets[1]->end(),
                          std::back_inserter(result));

    if (sets.size() == 2) { return result; }

    // 2. Merge each consecutive set with the result into a buffer, then swap
    //    them around so that "result" always holds the result at the end of
    //    the loop iteration.
    std::vector<int> buffer; // outside the loop so that we reuse its memory
    for (size_t i = 2; i < sets.size(); ++i) {
        buffer.clear();
        std::set_intersection(result.begin(), result.end(),
                              sets[i]->begin(), sets[i]->end(),
                              std::back_inserter(buffer));
        std::swap(result, buffer);
    }

    return result;
}
It seems correct, I cannot guarantee its speed though, obviously.
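For completeness, here is a minimal usage sketch: a condensed copy of intersect() (so the snippet compiles on its own, without the null-pointer assertions and early-out branches of the full version above) plus an example of building the vector of set pointers it expects.

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <set>
#include <vector>

// Condensed copy of the intersect() above so this sketch is
// self-contained; see the full version for the edge-case handling.
std::vector<int> intersect(std::vector<std::set<int> const*> const& sets) {
    std::vector<int> result;
    if (sets.empty()) { return result; }
    result.assign(sets[0]->begin(), sets[0]->end());
    std::vector<int> buffer;
    for (std::size_t i = 1; i < sets.size(); ++i) {
        buffer.clear();
        std::set_intersection(result.begin(), result.end(),
                              sets[i]->begin(), sets[i]->end(),
                              std::back_inserter(buffer));
        std::swap(result, buffer);
    }
    return result;
}
```

The caller only needs to collect the addresses of its existing sets, e.g. std::vector<std::set<int> const*> ptrs{&a, &b, &c};, so no set is ever copied.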
Source: https://stackoverflow.com/questions/12875993/efficient-set-intersection-of-a-collection-of-sets-in-c