No speedup with naive merge sort parallelization in Haskell

深忆病人 · 2021-02-14 17:14

Note: This post was completely rewritten 2011-06-10; thanks to Peter for helping me out. Also, please don't be offended if I don't accept one answer, since this quest…

2 answers
  •  佛祖请我去吃肉
    2021-02-14 17:55

    The answer is pretty easy: you have at no point introduced parallelism. Eval is just a monad for ordering computations; you have to ask for things to be executed in parallel manually. What you probably want is:

    do xr <- rpar $ runEval $ mergeSort' x   -- spark the left half; it may run on another capability
       yr <- rseq $ runEval $ mergeSort' y   -- evaluate the right half here, alongside the spark
       rseq (merge xr yr)                    -- then merge once both halves are available
    

    This will make Haskell actually create a spark for the first computation, instead of trying to evaluate it on the spot.
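
    For reference, a complete, compilable version of this approach might look as follows. The asker's mergeSort' and merge are not shown in this copy of the question, so the definitions below (the split in the middle, the merge, and the example main) are a reconstruction rather than the original code. Compile with ghc -O2 -threaded -rtsopts and run with +RTS -N to actually use several cores.

      import Control.Parallel.Strategies (Eval, rpar, rseq, runEval)

      -- Ordinary sequential merge of two sorted lists.
      merge :: Ord a => [a] -> [a] -> [a]
      merge [] ys = ys
      merge xs [] = xs
      merge xxs@(x:xs) yys@(y:ys)
        | x <= y    = x : merge xs yys
        | otherwise = y : merge xxs ys

      -- Merge sort in the Eval monad: spark the left half, evaluate the
      -- right half on the current thread, then merge the results.
      mergeSort' :: Ord a => [a] -> Eval [a]
      mergeSort' []  = return []
      mergeSort' [x] = return [x]
      mergeSort' xs  = do
        let (x, y) = splitAt (length xs `div` 2) xs
        xr <- rpar $ runEval $ mergeSort' x   -- sparked
        yr <- rseq $ runEval $ mergeSort' y   -- evaluated here meanwhile
        rseq (merge xr yr)

      main :: IO ()
      main = print (last (runEval (mergeSort' [100000, 99999 .. 1 :: Int])))

    The last in main is only there to demand the whole list; as point 1 below notes, rpar and rseq by themselves only evaluate to weak head normal form.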

    Standard tips also kind-of apply:

    1. The result should be evaluated deeply (e.g. using evalTraversable rseq). Otherwise you will only force the head of the tree, and the bulk of the data will just be returned unevaluated.
    2. Just sparking everything will most likely eat up any gains. It would be a good idea to introduce a parameter that stops sparking at lower recursion levels (see the sketch after this list).
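
    Here is one hedged sketch (my own, not from the original answer) of what those two tips could look like together: each sparked half is forced completely with force from the deepseq package, and a depth parameter stops sparking once the sublists get small, falling back to Data.List.sort.

      import Control.DeepSeq (NFData, force)
      import Control.Parallel.Strategies (rpar, rseq, runEval)
      import Data.List (sort)

      -- Two-way merge of sorted lists, as before.
      merge :: Ord a => [a] -> [a] -> [a]
      merge [] ys = ys
      merge xs [] = xs
      merge xxs@(x:xs) yys@(y:ys)
        | x <= y    = x : merge xs yys
        | otherwise = y : merge xxs ys

      -- 'depth' controls how many recursion levels still create sparks;
      -- below the cutoff we fall back to a plain sequential sort.
      mergeSortPar :: (Ord a, NFData a) => Int -> [a] -> [a]
      mergeSortPar _ []  = []
      mergeSortPar _ [x] = [x]
      mergeSortPar depth xs
        | depth <= 0 = sort xs
        | otherwise  = runEval $ do
            let (l, r) = splitAt (length xs `div` 2) xs
            lr <- rpar (force (mergeSortPar (depth - 1) l))  -- sparked and fully forced
            rr <- rseq (force (mergeSortPar (depth - 1) r))  -- forced on this thread meanwhile
            rseq (merge lr rr)

      main :: IO ()
      main = print (sum (mergeSortPar 4 [200000, 199999 .. 1 :: Int]))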

    Edit: The following actually doesn't apply anymore after the question edit

    But the worst part comes last: your algorithm, as you state it, is very flawed. Your top-level seq only forces the first cons-cell of the list, which allows GHC to use laziness to great effect. It will never actually construct the result list, just plow through all of the input in search of the minimum element (that's not even strictly needed, but GHC only produces the cell after the minimum is known).

    So don't be surprised when performance actually drops sharply when you start introducing parallelism under the assumption that you need the whole list at some point in the program...
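
    A tiny illustration (mine, not from the post) of how little a top-level seq demands: it evaluates only to weak head normal form, i.e. the outermost cons cell, leaving the rest of the spine and the elements as thunks until something like deepseq, or a consumer of the whole list, asks for them.

      import Control.DeepSeq (deepseq)
      import Debug.Trace (trace)

      lazyList :: [Int]
      lazyList = trace "first cell built" (1 : trace "rest of spine built" [2, 3])

      main :: IO ()
      main = do
        lazyList `seq` putStrLn "after seq"         -- only "first cell built" has been printed so far
        lazyList `deepseq` putStrLn "after deepseq" -- now "rest of spine built" appears as well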

    Edit 2: Some more answers to the edits

    The biggest problem with your program is probably that it is using lists. If you want to make more than a toy example, consider at least using (unboxed) arrays. If you want to go into serious number-crunching, maybe consider a specialised library like repa.
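
    As a hedged illustration of that point (my example; the answer itself only names unboxed arrays and repa), switching to unboxed vectors from the vector package, with an in-place sort from the vector-algorithms package, avoids the per-cons-cell overhead of lists entirely:

      import qualified Data.Vector.Unboxed as U
      import qualified Data.Vector.Algorithms.Intro as Intro

      -- Sort an unboxed vector by copying it and running an in-place introsort.
      sortUnboxed :: U.Vector Int -> U.Vector Int
      sortUnboxed = U.modify Intro.sort

      main :: IO ()
      main = print (U.head (sortUnboxed (U.fromList [100000, 99999 .. 1])))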

    On "Further Discussion":

    • The colors stand for different GC states; I can't remember which. Try to look at the event log for the associated event.

    • The way to "sidestep" garbage collection is to not produce so much garbage in the first place, e.g. by using better data structures.

    • Well, if you are looking for inspiration on robust parallelization, it might be worthwhile to have a look at monad-par, which is relatively new but (I feel) less "surprising" in its parallel behaviour.

    With monad-par, your example might become something like:

      do xr <- spawn $ mergeSort' x
         yr <- spawn $ mergeSort' y
         merge <$> get xr <*> get yr
    

    So here the get actually forces you to specify the join points - and the library does the required deepseq automatically behind the scenes.
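
    Filled out into a complete program, that could look roughly like this (again a sketch: merge and main are mine, and the NFData constraint is what lets spawn do the deepseq mentioned above):

      import Control.DeepSeq (NFData)
      import Control.Monad.Par (Par, runPar, spawn, get)

      merge :: Ord a => [a] -> [a] -> [a]
      merge [] ys = ys
      merge xs [] = xs
      merge xxs@(x:xs) yys@(y:ys)
        | x <= y    = x : merge xs yys
        | otherwise = y : merge xxs ys

      mergeSortP :: (Ord a, NFData a) => [a] -> Par [a]
      mergeSortP []  = return []
      mergeSortP [x] = return [x]
      mergeSortP xs  = do
        let (l, r) = splitAt (length xs `div` 2) xs
        lr <- spawn (mergeSortP l)     -- runs as its own Par computation
        rr <- spawn (mergeSortP r)
        merge <$> get lr <*> get rr    -- 'get' is the explicit join point

      main :: IO ()
      main = print (sum (runPar (mergeSortP [100000, 99999 .. 1 :: Int])))

    In practice a depth cutoff, like the one sketched earlier for the Eval version, still helps here to keep the number of spawned tasks reasonable.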
