Question:
Order-preserving selection from an index table is trivial in serial code, but less straightforward with multiple threads, particularly if one wants to retain efficiency (the whole point of multi-threading) by avoiding linked lists. Consider the serial code:
template<typename T>
std::vector<T> select_in_order(
    std::vector<std::size_t> const& keys, // permutation of 0 ... keys.size()-1
    std::vector<T> const& data)           // anything copyable
{
    // select data[keys[i]], allowing keys.size() >= data.size()
    std::vector<T> result;
    for (auto key : keys)
        if (key < data.size())
            result.push_back(data[key]);
    return result;
}
How can I do this multi-threaded (say, using TBB or even OpenMP), in particular if data.size() < keys.size()?
Answer 1:
The parallel computing operation you're looking for is called Stream Compaction.
It can be implemented efficiently in parallel, though the algorithm is non-trivial. Your best bet would be to use a library which implements it already, such as Thrust. If you truly want to implement it yourself, though, an explanation of the algorithm can be found in GPU Gems 3, Chapter 39.3.1, or alternatively in Udacity's Intro to Parallel Programming course, Lesson 4.5.
Essentially, it involves defining a boolean predicate over your array (in your example, key < data.size()), mapping it to a separate array, taking a Scan over the predicate array, then doing a Scatter.
Map() and Scatter() are easy to implement in parallel; the implementation of Scan() is the non-trivial part. Most parallel libraries will have a Scan() implementation; if not, the links above both describe several parallel scan algorithms.
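The three phases can be sketched serially as follows (a minimal illustration of the compaction pipeline, shown for int data; in a real parallel implementation each of the three loops becomes a data-parallel pass, and the exclusive scan is replaced by a parallel scan):

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Serial breakdown of stream compaction into its three phases:
// Map (predicate), Scan (exclusive prefix sum), Scatter (indexed write).
std::vector<int> compact(std::vector<std::size_t> const& keys,
                         std::vector<int> const& data)
{
    std::size_t const n = keys.size();

    // Map: evaluate the predicate key < data.size() for every element.
    std::vector<std::size_t> flags(n);
    for (std::size_t i = 0; i < n; ++i)
        flags[i] = keys[i] < data.size() ? 1 : 0;

    // Scan: the exclusive prefix sum of the flags gives each surviving
    // element its position in the compacted output.
    std::vector<std::size_t> offsets(n);
    std::exclusive_scan(flags.begin(), flags.end(), offsets.begin(),
                        std::size_t{0});

    // Scatter: write each surviving element to its computed slot.
    std::size_t const total = n ? offsets.back() + flags.back() : 0;
    std::vector<int> result(total);
    for (std::size_t i = 0; i < n; ++i)
        if (flags[i])
            result[offsets[i]] = data[keys[i]];
    return result;
}
```

Note that the scatter writes to disjoint positions, which is why that phase parallelizes trivially once the offsets are known.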
This all assumes you have many cores, as on a GPU. On a CPU it would probably be faster to do it serially, or to divide the array into large chunks, process the chunks serially (on different cores in parallel), and merge the results back together. Which approach is best depends on your data (the former works better if most keys are expected to survive into the final array).
Answer 2:
Partition your keys among your threads: with N threads, T1 gets the keys {0, keys.size() / N - 1}, T2 gets the keys {keys.size() / N, 2 * keys.size() / N - 1}, etc., and TN gets the keys {(N - 1) * keys.size() / N, keys.size() - 1}. Each thread puts its results in a thread-local container, and you merge the containers when the threads have finished. This way you don't need to perform any synchronization between the threads.
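This chunking strategy can be sketched with standard C++ threads (a minimal sketch for int data; the function name, the default thread count, and the use of std::thread rather than TBB or OpenMP are my own choices for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Each thread filters its own contiguous slice of `keys` into a private
// vector (no synchronization needed), and the per-thread vectors are
// concatenated in thread order afterwards, preserving the key order.
std::vector<int> select_in_order_chunked(std::vector<std::size_t> const& keys,
                                         std::vector<int> const& data,
                                         unsigned num_threads = 4)
{
    std::vector<std::vector<int>> partial(num_threads);
    std::vector<std::thread> workers;
    std::size_t const chunk = (keys.size() + num_threads - 1) / num_threads;

    for (unsigned t = 0; t < num_threads; ++t)
        workers.emplace_back([&, t] {
            std::size_t const begin = t * chunk;
            std::size_t const end = std::min(begin + chunk, keys.size());
            for (std::size_t i = begin; i < end; ++i)
                if (keys[i] < data.size())
                    partial[t].push_back(data[keys[i]]);
        });
    for (auto& w : workers)
        w.join();

    std::vector<int> result; // serial merge, in thread order
    for (auto const& p : partial)
        result.insert(result.end(), p.begin(), p.end());
    return result;
}
```

The final merge here is serial; for large outputs it can itself be parallelized, as described below.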
The easiest containers to merge are linked lists, since appending T2's list to T1's list (and so on) is cheap. However, as you said, it's a good idea to avoid linked lists for efficiency reasons.
Another option is to have each thread store its results in a thread-local array, and then merge these arrays when the threads have completed; you can perform this merge in parallel. The size of each thread's results is T{N}results.size(); given the final merge array final_results, T1 copies its data to final_results[0, T1results.size()-1], T2 copies its data to final_results[T1results.size(), T1results.size() + T2results.size()-1], T3 copies its results to final_results[T1results.size() + T2results.size(), T1results.size() + T2results.size() + T3results.size()-1], etc.
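The parallel merge described above can be sketched as follows (a minimal sketch assuming the per-thread result vectors already exist; the function name and the use of std::thread are illustrative choices):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// After the filtering pass, an exclusive prefix sum over the per-thread
// result sizes gives each thread its start offset in the final array, so
// every thread can copy its block into `final_results` concurrently.
std::vector<int> merge_in_parallel(std::vector<std::vector<int>> const& partial)
{
    std::size_t const n = partial.size();

    // Exclusive prefix sum of the per-thread sizes -> start offsets.
    std::vector<std::size_t> offsets(n + 1, 0);
    for (std::size_t t = 0; t < n; ++t)
        offsets[t + 1] = offsets[t] + partial[t].size();

    std::vector<int> final_results(offsets[n]);
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n; ++t)
        workers.emplace_back([&, t] {
            // Destination ranges are disjoint, so no locking is needed.
            std::copy(partial[t].begin(), partial[t].end(),
                      final_results.begin() + offsets[t]);
        });
    for (auto& w : workers)
        w.join();
    return final_results;
}
```

Computing the offsets is cheap (one pass over N sizes, where N is the thread count), so only the bulk copies need to run in parallel.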
Another option is to use a shared concurrent_hash_map from TBB, with key as the key and data[key] as the value.
Source: https://stackoverflow.com/questions/17089535/a-parallel-algorithm-for-order-preserving-selection-from-an-index-table