I have an array of n integers (not necessarily distinct!) and I would like to iterate over all subsets of size k. However, I'd like to exclude all duplicate subsets.
The same (or almost the same) algorithm which is used to generate combinations of a set of unique values in lexicographical order can be used to generate combinations of a multiset in lexicographical order. Doing it this way avoids the need to deduplicate, which is horribly expensive, and also avoids the need to keep all previously generated combinations in memory. It does require that the original list of values be sorted.
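For example, the distinct 2-combinations of the multiset {1, 2, 2, 3} are {1,2}, {1,3}, {2,2} and {2,3}; generated directly in lexicographical order, each one is produced exactly once, with no deduplication pass.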
The following simple implementation finds the next k-combination of a multiset of n values in average (and worst-case) time O(n). It expects two ranges: the first range is a sorted k-combination, and the second range is the sorted multiset. (If either range is unsorted, or the values in the first range do not constitute a sub(multi)set of the second range, the behaviour is undefined; no sanity checks are made.)
Only the end iterator from the second range is actually used, but taking a single iterator for that range seemed to make the calling convention a bit odd, so the signature accepts (and ignores) the begin iterator as well.
#include <functional>
#include <iterator>

template<typename BidiIter, typename CBidiIter,
         typename Compare = std::less<typename std::iterator_traits<BidiIter>::value_type>>
bool next_comb(BidiIter first, BidiIter last,
               CBidiIter /* first_value */, CBidiIter last_value,
               Compare comp = Compare()) {
  /* 1. Find the rightmost value which could be advanced, if any */
  auto p = last;
  while (p != first && !comp(*(p - 1), *--last_value)) --p;
  if (p == first) return false;
  /* 2. Find the smallest value which is greater than the selected value */
  for (--p; comp(*p, *(last_value - 1)); --last_value) { }
  /* 3. Overwrite the suffix of the subset with the lexicographically smallest
   *    sequence starting with the new value */
  while (p != last) *p++ = *last_value++;
  return true;
}
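To see the three steps on a concrete case, consider advancing the combination {2, 3} drawn from {1, 2, 2, 3, 3, 3, 3}. Step 1 scans from the right, comparing subset elements against multiset elements: the subset's 3 is not less than the multiset's last 3, so it cannot be advanced, but the subset's 2 is less than the next 3, so it becomes the pivot. Step 2 keeps walking last_value leftwards past the run of 3s until it reaches the first one, the smallest value greater than 2. Step 3 overwrites the suffix starting at the pivot with consecutive values from there, yielding {3, 3}.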
It should be clear that steps 1 and 2 combined make at most n comparisons: both steps walk last_value backwards through the multiset without ever revisiting an element, so each of the n values participates in at most one comparison. Step 3 copies at most k values, and we know that k≤n.
This could be improved to O(k) in the case where no values are repeated, by maintaining the current combination as a container of iterators into the value list rather than actual values. This would also avoid copying values, at the cost of extra dereferences. If in addition we cache the function which associates each value iterator with an iterator to the first instance of the next larger value, we could eliminate step 2 and reduce the algorithm to O(k) even for repeated values, as sketched below. That might be worthwhile if there are a large number of repeats and comparisons are expensive.
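Here's a minimal sketch of that variant. It isn't part of the implementation above: it represents the combination as indices into the sorted vector rather than iterators (which amounts to the same thing), fixes the value type to int for brevity, and the names next_larger_table and next_comb_indexed are mine.

#include <cstddef>
#include <vector>

/* For each index i into the sorted values, precompute the index of the
 * first element strictly greater than values[i] (or n if there is none).
 * One backward pass: equal values share the same successor index. */
std::vector<std::size_t> next_larger_table(const std::vector<int>& values) {
  std::size_t n = values.size();
  std::vector<std::size_t> table(n);
  std::size_t succ = n;
  for (std::size_t i = n; i-- > 0; ) {
    if (i + 1 < n && values[i] != values[i + 1]) succ = i + 1;
    table[i] = succ;
  }
  return table;
}

/* Advance a combination held as strictly increasing indices into the sorted
 * values. Returns false when the last combination has been reached. */
bool next_comb_indexed(std::vector<std::size_t>& comb,
                       const std::vector<std::size_t>& next_larger) {
  std::size_t k = comb.size(), n = next_larger.size();
  /* Find the rightmost position whose value has a strictly larger successor
   * with enough elements after it to fill the rest of the combination. */
  for (std::size_t j = k; j-- > 0; ) {
    std::size_t m = next_larger[comb[j]];
    if (m + (k - 1 - j) < n) {
      /* The successor and the elements immediately following it form the
       * lexicographically smallest suffix starting with the new value. */
      for (std::size_t i = j; i < k; ++i) comb[i] = m + (i - j);
      return true;
    }
  }
  return false;
}

Because the table answers "first instance of the next larger value" in O(1), step 2 disappears entirely and each call does O(k) work no matter how many values are repeated. Starting from comb = {0, 1} over the example vector below, successive calls yield the index sequences for {1,2}, {1,3}, {2,2}, {2,3} and {3,3}.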
Here's a simple use example:
#include <iostream>
#include <vector>

int main() {
  std::vector<int> values = {1,2,2,3,3,3,3};
  /* Since that's sorted, the first subset is just the first k values */
  const int k = 2;
  std::vector<int> subset(values.cbegin(), values.cbegin() + k);
  /* Print each combination */
  do {
    for (auto const& v : subset) std::cout << v << ' ';
    std::cout << '\n';
  } while (next_comb(subset.begin(), subset.end(),
                     values.cbegin(), values.cend()));
}
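With those values and k = 2, each distinct pair is printed exactly once:

1 2
1 3
2 2
2 3
3 3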
Live on coliru