No speedup with naive merge sort parallelization in Haskell

Submitted by 徘徊边缘 on 2019-12-21 04:20:54

Question


Note: This post was completely rewritten 2011-06-10; thanks to Peter for helping me out. Also, please don't be offended if I don't accept one answer, since this question seems to be rather open-ended. (But, if you solve it, you get the check mark, of course).

Another user had posted a question about parallelizing a merge sort. I thought I'd write a simple solution, but alas, it is not much faster than the sequential version.

Problem statement

Merge sort is a divide-and-conquer algorithm, where the leaves of computation can be parallelized.

The code works as follows: the list is converted into a tree, representing the computation nodes. Then the merging step returns a list for each node. Theoretically, we should see significant performance gains, since we are going from an O(n log n) algorithm to an O(n) one given infinitely many processors.

The first steps of the computation are parallelized when the parameter l (level) is greater than zero below. This is done (via the variable strat) by selecting the rpar strategy, which makes the sub-computation mergeSort' x run in parallel with mergeSort' y. Then we merge the results and force their evaluation with rdeepseq.

import Control.Arrow ((***))
import Control.DeepSeq (NFData (..), deepseq)
import Control.Parallel.Strategies (Eval, rdeepseq, rpar, rseq, runEval)

data Tree a = Leaf a | Node (Tree a) (Tree a) deriving (Show)

instance NFData a => NFData (Tree a) where
    rnf (Leaf v) = deepseq v ()
    rnf (Node x y) = deepseq (x, y) ()

listToTree [] = error "listToTree -- empty list"
listToTree [x] = Leaf x
listToTree xs = uncurry Node $ listToTree *** listToTree $
    splitAt (length xs `div` 2) xs

-- mergeSort' :: (Ord a, NFData a) => Int -> Tree a -> Eval [a]
mergeSort' l (Leaf v) = return [v]
mergeSort' l (Node x y) = do
    xr <- strat $ runEval $ mergeSort' (l - 1) x
    yr <- rseq $ runEval $ mergeSort' (l - 1) y
    rdeepseq (merge xr yr)
    where
        merge [] y = y
        merge x [] = x
        merge (x:xs) (y:ys) | x < y = x : merge xs (y:ys)
                            | otherwise = y : merge (x:xs) ys
        strat | l > 0 = rpar
              | otherwise = rseq

mergeSort = runEval . mergeSort' 10

By only evaluating a few levels of the computation in parallel, we should have decent parallel communication complexity as well: with l = 10 fewer than 2^10 sparks are created, and the data passed between them stays some constant factor times n.

Results

The source code of the fourth version is available here [ http://pastebin.com/DxYneAaC ]. Build and run it with the following commands to inspect thread usage, or with the command lines shown under Results below for benchmarking:

rm -f ParallelMergeSort; ghc -O2 -O3 -optc-O3 -optc-ffast-math -eventlog --make -rtsopts -threaded ParallelMergeSort.hs
./ParallelMergeSort +RTS -H512m -K512m -ls -N
threadscope ParallelMergeSort.eventlog

Results on a 24-core X5680 @ 3.33GHz show little improvement

> ./ParallelMergeSort 
initialization: 10.461204s sec.
sorting: 6.383197s sec.
> ./ParallelMergeSort +RTS -H512m -K512m -N
initialization: 27.94877s sec.
sorting: 5.228463s sec.

and on my own machine, a quad-core Phenom II,

> ./ParallelMergeSort 
initialization: 18.943919s sec.
sorting: 10.465077s sec.
> ./ParallelMergeSort +RTS -H512m -K512m -ls -N
initialization: 22.92075s sec.
sorting: 7.431716s sec.

Inspecting the result in threadscope shows good utilization for small amounts of data (though, sadly, no perceptible speedup). However, when I try to run it on larger lists, like the ones above, it only uses about 2 CPUs half of the time. It seems like a lot of sparks are getting pruned. The runtime is also sensitive to the memory parameters: 256 MB is the sweet spot, 128 MB gives 9 seconds, 512 MB gives 8.4 seconds, and 1024 MB gives 12.3 seconds!
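
As a quick sanity check on the pruning, the threaded RTS's -s summary prints a SPARKS line with counts of converted (actually run in parallel), pruned/GC'd and fizzled sparks; the exact wording of that line varies between GHC versions. For example, reusing the flags above:

./ParallelMergeSort +RTS -H512m -K512m -N -s
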

Solutions I'm looking for

Finally, if anyone knows some high-power tools to throw at this, I'd appreciate it. (Eden?). My primary interest in Haskell parallelism is to be able to write small supportive tools for research projects, which I can throw on a 24 or 80 core server in our lab's cluster. Since they're not the main point of our group's research, I don't want to spend much time on the parallelization efficiency. So, for me, simpler is better, even if I only end up getting 20% usage.

Further discussion

  • I notice that the second bar in threadscope is sometimes green (cf. its homepage, where the second bar always seems to be garbage collection). What does this mean?
  • Is there any way to sidestep garbage collection? It seems to be taking a lot of time. For example, why can't a subcomputation be forked, return the result in shared memory, and then die?
  • Is there a better way (arrows, applicative) to express parallelism?

Answer 1:


The answer is pretty easy: because you have at no point introduced parallelism. Eval is just a monad for ordering computations; you have to ask for things to be executed in parallel manually. What you probably want is:

do xr <- rpar $ runEval $ mergeSort' x
   yr <- rseq $ runEval $ mergeSort' y
   rseq (merge xr yr)

This will make Haskell actually create a spark for the first computation, instead of trying to evaluate it on the spot.

The standard tips also kind of apply:

  1. The result should be evaluated deeply (e.g. using evalTraversable rseq). Otherwise you will only force the head of the tree, and the bulk of the data will just be returned unevaluated.
  2. Just sparking everything will most likely eat up any gains. It would be a good idea to introduce a parameter that stops sparking at lower recursion levels (a sketch combining both tips follows after this list).
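
A minimal sketch combining both tips (parSortEval is just an illustrative name; it reuses the Tree type from the question, and force comes from Control.DeepSeq): the sparked branch is forced completely by the worker instead of only to WHNF, and sparking stops below a given depth.

import Control.DeepSeq (NFData, force)
import Control.Parallel.Strategies (Eval, rpar, rseq, runEval)

-- Sketch only: depth-limited sparking, with each sparked branch forced
-- in full (force) rather than just to its first cons cell.
parSortEval :: (Ord a, NFData a) => Int -> Tree a -> Eval [a]
parSortEval _ (Leaf v)   = return [v]
parSortEval d (Node x y) = do
    xr <- strat (force (runEval (parSortEval (d - 1) x)))
    yr <- rseq  (force (runEval (parSortEval (d - 1) y)))
    return (merge xr yr)
  where
    merge [] bs = bs
    merge as [] = as
    merge (a:as) (b:bs) | a < b     = a : merge as (b:bs)
                        | otherwise = b : merge (a:as) bs
    strat | d > 0     = rpar   -- spark only in the top d levels
          | otherwise = rseq

Recent versions of the parallel package also provide rparWith, so rparWith rdeepseq expresses the same "force inside the spark" idea without the explicit force.
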

Edit: The following actually doesn't apply anymore after the question edit

But the worst part comes last: your algorithm, as you state it, is very flawed. Your top-level seq only forces the first cons cell of the list, which lets GHC use laziness to great effect. It will never actually construct the result list; it just plows through all the elements searching for the minimum one (that's not even strictly needed, but GHC only produces the cell after the minimum is known).

So don't be surprised when performance actually drops sharply once you start introducing parallelism under the assumption that you need the whole list at some point in the program...
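
To see that point in isolation, a tiny self-contained example: seq stops at the outermost constructor, so a list whose tail is an error still passes through seq untouched.

-- Tiny demo: seq forces only the first (:) cell of the list, never the tail.
main :: IO ()
main = let xs = 1 : error "tail never forced" :: [Int]
       in xs `seq` putStrLn "seq finished without evaluating the tail"
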

Edit 2: Some more answers to the edits

The biggest problem with your program is probably that it is using lists. If you want to make it more than a toy example, consider at least using (unboxed) arrays. If you want to go into serious number-crunching, maybe consider a specialised library like repa.
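
To make the data-structure point concrete, a rough sketch (assuming the vector and vector-algorithms packages; sortUnboxed is just an illustrative name): an unboxed vector stores the elements flat, so there is far less garbage for the GC to trace than with a list of boxed Ints.

import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Algorithms.Merge as VM

-- Sketch only: sort an unboxed vector of Ints in place via U.modify.
sortUnboxed :: U.Vector Int -> U.Vector Int
sortUnboxed = U.modify VM.sort

main :: IO ()
main = print . U.take 5 . sortUnboxed $ U.fromList [100000,99999..0]

VM.sort here is the library's sequential merge sort; the point of the sketch is only the data representation, not parallelism.
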

On "Further Discussion":

  • The colors stand for different GC states; I can't remember which is which. Try looking at the event log for the associated event.

  • The way to "sidestep" garbage collection is to not produce so much garbage in the first place, e.g. by using better data structures.

  • Well, if you are looking for inspiration on robust parallelization, it might be worthwhile to have a look at monad-par, which is relatively new but (I feel) less "surprising" in its parallel behaviour.

With monad-par, your example might become something like:

  do xr <- spawn $ mergeSort' x
     yr <- spawn $ mergeSort' y
     merge <$> get xr <*> get yr

So here the get actually forces you to specify the join points - and the library does the required deepseq automatically behind the scenes.
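
Spelled out a little more, a sketch of a depth-limited version (assuming the monad-par package; Tree and listToTree are the definitions from the question, and parSortPar / parSortList are just illustrative names):

import Control.Monad.Par (Par, runPar, spawn, get)
import Control.DeepSeq (NFData)

-- Plain sequential merge of two ordered lists.
merge :: Ord a => [a] -> [a] -> [a]
merge [] bs = bs
merge as [] = as
merge (a:as) (b:bs) | a < b     = a : merge as (b:bs)
                    | otherwise = b : merge (a:as) bs

-- Sketch only: spawn forks a Par computation (its result is deeply forced
-- when written), and get blocks until that result is available.
parSortPar :: (Ord a, NFData a) => Int -> Tree a -> Par [a]
parSortPar d (Node x y) | d > 0 = do
    xv <- spawn (parSortPar (d - 1) x)
    yv <- spawn (parSortPar (d - 1) y)
    merge <$> get xv <*> get yv
parSortPar _ t = return (seqSort t)          -- below the cutoff: sequential
  where
    seqSort (Leaf v)   = [v]
    seqSort (Node a b) = merge (seqSort a) (seqSort b)

parSortList :: (Ord a, NFData a) => Int -> [a] -> [a]
parSortList depth = runPar . parSortPar depth . listToTree

Here spawn writes its result into an IVar (deeply forced by put behind the scenes), and get is the explicit join point mentioned above.
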




Answer 2:


I had similar luck to what you report in EDIT 3 on a dual-core system with these variants. I used a smaller list length because I'm on a smaller computer, compiled with ghc -O2 -rtsopts -threaded MergePar.hs, and ran with ./MergePar +RTS -H256M -N. This might offer a more structured way to compare performance. Note that the RTS option -qa sometimes helps the simple par variants.

import Control.Applicative
import Control.Parallel
import Control.Parallel.Strategies
import Criterion.Main
import GHC.Conc (numCapabilities)

data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

listToTree [] = error "listToTree -- empty list"
listToTree [x] = Leaf x
listToTree xs = Node (listToTree (take half xs)) (listToTree (drop half xs))
  where half = length xs `div` 2

-- Merge two ordered lists
merge :: Ord a => [a] -> [a] -> [a]
merge [] y = y
merge x [] = x
merge (x:xs) (y:ys) | x < y = x : merge xs (y:ys)
                    | otherwise = y : merge (x:xs) ys

-- Simple merge sort
mergeSort' :: Ord a => Tree a -> [a]
mergeSort' (Leaf v) = [v]
mergeSort' (Node x y) = merge (mergeSort' x) (mergeSort' y)

mergeSort :: Ord a => [a] -> [a]
mergeSort = mergeSort' . listToTree

-- Merge sort with 'par' annotations on every recursive call
mergeSortP' :: Ord a => Tree a -> [a]
mergeSortP' (Leaf v) = [v]
mergeSortP' (Node x y) = let xr = mergeSortP' x
                             yr = mergeSortP' y
                         in xr `par` yr `pseq` merge xr yr

mergeSortP :: Ord a => [a] -> [a]
mergeSortP = mergeSortP' . listToTree

-- Merge sort with 'rpar' annotations on every recursive call
mergeSortR' :: Ord a => Tree a -> [a]
mergeSortR' (Leaf v) = [v]
mergeSortR' (Node x y) = 
  runEval $ merge <$> rpar (mergeSortR' x) <*> rpar (mergeSortR' y)

mergeSortR :: Ord a => [a] -> [a]
mergeSortR = mergeSortR' . listToTree

-- Parallel merge sort that stops looking for parallelism at a certain
-- depth
smartMerge' :: Ord a => Int -> Tree a -> [a]
smartMerge' _ (Leaf v) = [v]
smartMerge' n t@(Node x y)
  | n <= 1 = mergeSort' t
  | otherwise = let xr = smartMerge' (n-1) x
                    yr = smartMerge' (n-2) y
                in xr `par` yr `pseq` merge xr yr

smartMerge :: Ord a => [a] -> [a]
smartMerge = smartMerge' numCapabilities . listToTree

main = defaultMain $ [ bench "original" $ nf mergeSort lst
                     , bench "par" $ nf mergeSortP lst
                     , bench "rpar" $ nf mergeSortR lst
                     , bench "smart" $ nf smartMerge lst ]
  where lst = [100000,99999..0] :: [Int]


Source: https://stackoverflow.com/questions/6299658/no-speedup-with-naive-merge-sort-parallelization-in-haskell
