C OpenMP parallel quickSort

眼角桃花 2020-12-31 16:38

Once again I'm stuck when using OpenMP in C++. This time I'm trying to implement a parallel quicksort.

Code:

#include 

        
1 answer
  • 2020-12-31 16:49

    I didn't actually run your code, but I see an immediate mistake: p should be private, not shared. Otherwise the parallel invocation of qs, qs(v, p.first, p.second);, will race on p, resulting in unpredictable behavior. The local variables inside qs should be okay because each thread has its own stack. That said, the overall approach is good; you're on the right track.
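    Since the question's code isn't reproduced above, the snippet below is a hypothetical reconstruction of the pattern being described: a worker pops a range from a shared stack s into a pair p. Declaring p inside the parallel region (equivalent to marking it private) gives each thread its own copy; std::sort stands in for the asker's qs, and all names here are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

// Hypothetical reconstruction: pop an inclusive [first, second] range from a
// shared stack and sort it. Declaring p inside the parallel region makes it
// private per thread, which removes the race described above.
void pop_and_sort(std::vector<int>& v, std::stack<std::pair<int, int>>& s)
{
    #pragma omp parallel shared(v, s)
    {
        std::pair<int, int> p;   // private: each thread has its own copy
        bool got = false;
        #pragma omp critical
        {
            if (!s.empty()) { p = s.top(); s.pop(); got = true; }
        }
        if (got)  // stand-in for qs(v, p.first, p.second)
            std::sort(v.begin() + p.first, v.begin() + p.second + 1);
    }
}
```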


    Here are my general comments on implementing parallel quicksort. Quicksort itself is embarrassingly parallel: once an array has been partitioned, the recursive calls of qs operate on disjoint halves and need no synchronization with each other.

    However, the parallelism is exposed in a recursive form. If you simply use OpenMP's nested parallelism, you will end up with thousands of threads within a second, and no speedup will be gained. So, typically, you turn the recursive algorithm into an iterative one and implement a kind of work queue. This is your approach, and it's not easy.
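    A minimal sketch of that iterative, work-queue formulation (all names and structure choices here are mine, not from the question): an explicit stack of [lo, hi) ranges replaces the recursion, a critical section guards the shared stack, and a busy counter lets idle threads detect when no more ranges can appear. Built without -fopenmp, the pragmas are ignored and the code runs serially.

```cpp
#include <cassert>
#include <cstddef>
#include <stack>
#include <utility>
#include <vector>

// Sketch of the iterative work-queue approach. A shared stack of [lo, hi)
// index ranges replaces the recursion; `busy` counts threads that have
// popped a range but not yet pushed its halves back, so idle threads can
// tell whether new work may still appear.
void quicksort_workqueue(std::vector<int>& v)
{
    std::stack<std::pair<std::size_t, std::size_t>> work;
    if (v.size() > 1) work.push({0, v.size()});
    int busy = 0;                         // guarded by the critical section

    #pragma omp parallel
    {
        bool done = false;
        while (!done) {
            std::pair<std::size_t, std::size_t> range{0, 0};
            bool got = false;
            #pragma omp critical(workqueue)
            {
                if (!work.empty()) {
                    range = work.top();
                    work.pop();
                    got = true;
                    ++busy;
                } else if (busy == 0) {
                    done = true;          // empty queue and no range in flight
                }
            }
            if (!got) continue;           // spin until work appears or we are done

            std::size_t lo = range.first, hi = range.second;
            int pivot = v[hi - 1];        // Lomuto partition, last element as pivot
            std::size_t mid = lo;
            for (std::size_t i = lo; i + 1 < hi; ++i)
                if (v[i] < pivot) std::swap(v[i], v[mid++]);
            std::swap(v[mid], v[hi - 1]);

            #pragma omp critical(workqueue)
            {
                if (mid - lo > 1) work.push({lo, mid});
                if (hi - mid > 2) work.push({mid + 1, hi});
                --busy;
            }
        }
    }
}
```

    Note that every thread hammers the single critical section, which is exactly the global-queue contention discussed below.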

    For your approach, there is a good benchmark suite: OmpSCR. You can download it at http://sourceforge.net/projects/ompscr/

    The benchmark contains several versions of OpenMP-based quicksort, most of them similar to yours. However, to increase parallelism, one must minimize contention on the global queue (in your code, s). So there are a couple of possible optimizations, such as having per-thread local queues. Although the algorithm itself is purely parallel, the implementation may require synchronization artifacts, and, above all, it's very hard to gain speedups.


    However, you can still use recursive parallelism in OpenMP directly, in two ways: (1) throttling the total number of threads, and (2) using OpenMP 3.0's task construct.

    Here is pseudocode for the first approach (loosely based on OmpSCR's benchmark):

    void qsort_omp_recursive(int* begin, int* end)
    {
      if (begin != end) {
        // Partition ...
    
        // Throttling
        if (...)  {
          qsort_omp_recursive(begin, middle);
          qsort_omp_recursive(middle + 1, end + 1);
        } else {
    
    #pragma omp parallel sections
          {
    #pragma omp section
            qsort_omp_recursive(begin, middle);
    #pragma omp section
            qsort_omp_recursive(middle + 1, end + 1);  // don't modify middle/end: they're read by the other section
          }
        }
      }
    }
    

    In order to run this code, you need to call omp_set_nested(1) and omp_set_num_threads(2). The code is really simple: we spawn two threads on the division of the work, but insert a simple throttling check to prevent excessive thread creation. Note that my experiments showed decent speedups for this approach.
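    For reference, here is a self-contained, compilable form of the nested-sections version; the sort routine is repeated (with the partition filled in) so the example stands on its own, and CUTOFF is my own untuned choice. Built without -fopenmp, the pragmas are ignored and everything runs serially.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif

static const long CUTOFF = 1000;  // throttle: recurse serially below this size

static void qsort_sections(int* begin, int* end)
{
    if (end - begin > 1) {
        // Partition: move the pivot (the last element) into place at middle.
        int* last = end - 1;
        int pivot = *last;
        int* middle = std::partition(begin, last,
                                     [pivot](int x) { return x < pivot; });
        std::swap(*last, *middle);

        if (end - begin < CUTOFF) {
            qsort_sections(begin, middle);
            qsort_sections(middle + 1, end);
        } else {
            #pragma omp parallel sections
            {
                #pragma omp section
                qsort_sections(begin, middle);
                #pragma omp section
                qsort_sections(middle + 1, end);
            }
        }
    }
}

void sort_nested(std::vector<int>& v)
{
#ifdef _OPENMP
    omp_set_nested(1);       // allow parallel regions inside parallel regions
    omp_set_num_threads(2);  // two threads per region, as described above
#endif
    qsort_sections(v.data(), v.data() + v.size());
}
```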


    Finally, you may use OpenMP 3.0's task, where a task is a unit of logically concurrent work. In all of the OpenMP approaches above, each parallel construct spawns two physical threads, so there is a hard 1-to-1 mapping between a piece of work and a worker thread. A task, by contrast, decouples the logical work from the worker threads.
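    The task version isn't shown in the answer, so here is a minimal sketch under my own naming; it mirrors the Cilk Plus code below, with cilk_spawn replaced by #pragma omp task plus an explicit taskwait, and the if-clause cutoff is an untuned guess. Compiled without -fopenmp, the pragmas are ignored and this is a plain serial quicksort.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

static void qsort_task(int* begin, int* end)
{
    if (end - begin > 1) {
        int* last = end - 1;
        int pivot = *last;
        int* middle = std::partition(begin, last,
                                     [pivot](int x) { return x < pivot; });
        std::swap(*last, *middle);

        // Spawn one half as a task; begin/middle are firstprivate by default.
        // The if clause avoids creating tiny tasks whose overhead dominates.
        #pragma omp task if (end - begin > 1000)
        qsort_task(begin, middle);
        qsort_task(middle + 1, end);
        #pragma omp taskwait   // join the spawned half before returning
    }
}

void qsort_omp_tasks(std::vector<int>& v)
{
    // One parallel region: a single thread seeds the root task, and the
    // team's threads execute the tasks the recursion generates.
    #pragma omp parallel
    #pragma omp single
    qsort_task(v.data(), v.data() + v.size());
}
```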

    Because OpenMP 3.0 is not yet widespread, I will use Cilk Plus, which is great for expressing this kind of nested, recursive parallelism. In Cilk Plus, the parallelization is extremely easy:

    void qsort(int* begin, int* end)
    {
      if (begin != end) {
        --end;
        int* middle = std::partition(begin, end,
          std::bind2nd(std::less<int>(), *end));
        std::swap(*end, *middle);
    
        cilk_spawn qsort(begin, middle);
        qsort(++middle, ++end);
        // cilk_sync; Only necessary at the final stage.
      }
    }
    

    I copied this code from Cilk Plus' example code. You will see that a single keyword, cilk_spawn, is all you need to parallelize quicksort. I'm skipping the explanations of Cilk Plus and the spawn keyword, but it's easy to understand: the two recursive calls are declared as logically concurrent tasks. Whenever the recursion takes place, logical tasks are created, and the Cilk Plus runtime (which implements an efficient work-stealing scheduler) handles all the dirty work: it queues the parallel tasks and maps them onto the worker threads.

    Note that OpenMP 3.0's task is essentially similar to Cilk Plus' approach. My experiments showed that pretty nice speedups were feasible: I got a 3~4x speedup on an 8-core machine, and the speedup scaled. Cilk Plus' absolute speedups were greater than those of OpenMP 3.0.

    The approach of Cilk Plus (and OpenMP 3.0) and your approach are essentially the same: the separation of parallel tasks from workload assignment. However, it's very difficult to implement efficiently; for example, you must reduce contention and may need lock-free data structures.
