Performance considerations of Haskell FFI / C?

backend · open · 4 answers · 1996 views
南方客 2020-12-08 07:09

If using Haskell as a library being called from my C program, what is the performance impact of making calls in to it? For instance, if I have a roughly 20 kB blob of immutable data that the Haskell code needs on every call, is it going to be possible for me to keep that 20 kB of data as a global immutable reference somewhere that is accessed by the Haskell code, rather than marshalling it across on each call? I'd also like to parallelize this.

4 Answers
  • 2020-12-08 07:55

    I use a mix of C and Haskell threads for one of my applications and haven't noticed that much of a performance hit switching between the two. So I crafted a simple benchmark... which is quite a bit faster/cheaper than Don's. This is measuring 10 million iterations on a 2.66GHz i7:

    $ ./foo
    IO  : 2381952795 nanoseconds total, 238.195279 nanoseconds per, 160000000 value
    Pure: 2188546976 nanoseconds total, 218.854698 nanoseconds per, 160000000 value
    

    Compiled with GHC 7.0.3/x86_64 and gcc-4.2.1 on OSX 10.6

    ghc -no-hs-main -lstdc++ -O2 -optc-O2 -o foo ForeignExportCost.hs Driver.cpp
    

    Haskell:

    {-# LANGUAGE ForeignFunctionInterface #-}
    
    module ForeignExportCost where
    
    import Foreign.C.Types
    
    foreign export ccall simpleFunction :: CInt -> CInt
    simpleFunction i = i * i
    
    foreign export ccall simpleFunctionIO :: CInt -> IO CInt
    simpleFunctionIO i = return (i * i)
    

    And an OS X C++ app to drive it; it should be simple to adjust to Windows or Linux:

    #include <stdio.h>
    #include <mach/mach_time.h>
    #include <mach/kern_return.h>
    #include <HsFFI.h>
    #include "ForeignExportCost_stub.h"
    
    static const int s_loop = 10000000;
    
    int main(int argc, char** argv) {
        hs_init(&argc, &argv);
    
        struct mach_timebase_info timebase_info = { };
        kern_return_t err;
        err = mach_timebase_info(&timebase_info);
        if (err != KERN_SUCCESS) {
            fprintf(stderr, "error: %x\n", err);
            return err;
        }
    
        // timing a function in IO
        uint64_t start = mach_absolute_time();
        HsInt32 val = 0;
        for (int i = 0; i < s_loop; ++i) {
            val += simpleFunctionIO(4);
        }
    
        // in nanoseconds per http://developer.apple.com/library/mac/#qa/qa1398/_index.html
        uint64_t duration = (mach_absolute_time() - start) * timebase_info.numer / timebase_info.denom;
        double duration_per = static_cast<double>(duration) / s_loop;
        printf("IO  : %lld nanoseconds total, %f nanoseconds per, %d value\n", duration, duration_per, val);
    
        // run the loop again with a pure function
        start = mach_absolute_time();
        val = 0;
        for (int i = 0; i < s_loop; ++i) {
            val += simpleFunction(4);
        }
    
        duration = (mach_absolute_time() - start) * timebase_info.numer / timebase_info.denom;
        duration_per = static_cast<double>(duration) / s_loop;
        printf("Pure: %lld nanoseconds total, %f nanoseconds per, %d value\n", duration, duration_per, val);
    
        hs_exit();
    }
    
  • 2020-12-08 07:58

    Haskell can peek into that 20k blob if you pass the pointer.
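    A minimal sketch of what that can look like (the name `sumBlob` and the byte-sum workload are hypothetical, not from the question): C passes a pointer and a length, and the exported Haskell function reads the bytes in place with `peekElemOff`, with no marshalling or copying.

```haskell
{-# LANGUAGE ForeignFunctionInterface, BangPatterns #-}
module PeekBlob where

import Foreign.Ptr (Ptr)
import Foreign.Storable (peekElemOff)
import Foreign.C.Types (CChar, CInt)

-- C hands us a pointer into its 20 kB blob plus a length;
-- we read the bytes in place, no copy is ever made.
foreign export ccall sumBlob :: Ptr CChar -> CInt -> IO CInt

sumBlob :: Ptr CChar -> CInt -> IO CInt
sumBlob p len = go 0 0
  where
    go :: CInt -> Int -> IO CInt
    go !acc i
      | i >= fromIntegral len = return acc
      | otherwise = do
          b <- peekElemOff p i          -- read one byte at offset i
          go (acc + fromIntegral b) (i + 1)
```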

  • 2020-12-08 08:08

    what is the performance impact of making calls in to it

    Assuming you start the Haskell runtime up only once (like this), on my machine, making a function call from C into Haskell, passing an Int back and forth across the boundary, takes about 80,000 cycles (31,000 ns on my Core 2) -- determined experimentally via the rdtsc instruction.

    is it going to be possible for me to keep that 20kB of data as a global immutable reference somewhere that is accessed by the Haskell code

    Yes, that is certainly possible. If the data really is immutable, then you get the same result whether you:

    • thread the data back and forth across the language boundary by marshalling;
    • pass a reference to the data back and forth;
    • or cache it in an IORef on the Haskell side.

    Which strategy is best? It depends on the data type. The most idiomatic way would be to pass a reference to the C data back and forth, treating it as a ByteString or Vector on the Haskell side.
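    As a hedged sketch of the ByteString route (the export name and the count-zeros workload are assumptions, not from the question): `unsafePackCStringLen` wraps the C buffer as a `ByteString` with zero copying, so the C side must keep the buffer alive for as long as the `ByteString` is in use.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module WrapBlob where

import qualified Data.ByteString as B
import Data.ByteString.Unsafe (unsafePackCStringLen)
import Foreign.C.String (CString)
import Foreign.C.Types (CInt)

-- View the C buffer as a ByteString without copying it.
-- Zero-copy: the C side owns the memory and must keep it alive.
foreign export ccall countZeros :: CString -> CInt -> IO CInt

countZeros :: CString -> CInt -> IO CInt
countZeros p len = do
  bs <- unsafePackCStringLen (p, fromIntegral len)
  return (fromIntegral (B.count 0 bs))
```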

    I'd like to parallelize this

    I'd strongly recommend inverting the control then, and doing the parallelization from the Haskell runtime -- it'll be much more robust, as that path has been heavily tested.

    Regarding thread safety, it is apparently safe to make parallel calls to foreign exported functions running in the same runtime -- though I'm fairly sure no one has tried this in order to gain parallelism. Calls into Haskell acquire a capability, which is essentially a lock, so multiple calls may block, reducing your chances for parallelism. In the multicore case (e.g. -N4 or so) your results may differ (multiple capabilities are available); however, this is almost certainly a bad way to improve performance.

    Again, making many parallel function calls from Haskell via forkIO is a better-documented, better-tested path, with less overhead than doing the work on the C side, and probably less code in the end.

    Just make a call into your Haskell function, that in turn will do the parallelism via many Haskell threads. Easy!
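    A rough sketch of that shape (all names here are hypothetical; build with `-threaded` and run with `+RTS -N` so the forked threads can actually run on multiple cores): a single exported entry point fans the work out to Haskell threads and collects the results over `MVar`s.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module ParInside where

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Foreign.C.Types (CInt)

-- One call from C; the parallelism happens entirely on the Haskell side.
foreign export ccall decideAll :: CInt -> IO CInt

decideAll :: CInt -> IO CInt
decideAll n = do
  vs <- mapM spawn [1 .. fromIntegral n :: Int]
  rs <- mapM takeMVar vs          -- wait for every worker to finish
  return (fromIntegral (sum rs))
  where
    spawn i = do
      v <- newEmptyMVar
      -- $! forces the work in the forked thread, not in the consumer
      _ <- forkIO (putMVar v $! expensiveDecision i)
      return v

-- Stand-in for the real per-personality work.
expensiveDecision :: Int -> Int
expensiveDecision i = i * i
```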

  • 2020-12-08 08:08

    Disclaimer: I have no experience with the FFI.

    But it seems to me that if you want to reuse the 20 kB of data so you're not passing it every time, then you could simply have a function that takes a list of "personalities" and returns a list of "decisions".

    So if you have a function

    f :: LotsaData -> Personality -> Decision
    f ld p = ...
    

    Then why not make a helper function

    helper :: LotsaData -> [Personality] -> [Decision]
    helper ld ps = map (f ld) ps
    

    And invoke that? If you go this way, though, and want to parallelize, you will need to do it Haskell-side with parallel lists and a parallel map.

    I defer to the experts to explain if/how C arrays can be marshaled into Haskell lists (or similar structure) easily.
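    For what it's worth, a hedged sketch of that marshalling using `peekArray`/`pokeArray` from `Foreign.Marshal.Array` (the function names and the `CInt` encoding of personalities/decisions are assumptions): C passes an input array, its length, and an output buffer of the same length; the input is copied into a Haskell list, mapped over, and written back.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module MarshalArrays where

import Foreign.Ptr (Ptr)
import Foreign.C.Types (CInt)
import Foreign.Marshal.Array (peekArray, pokeArray)

-- C passes an input array of "personality" codes, its length, and an
-- output buffer of the same length for the resulting "decision" codes.
foreign export ccall decideMany :: Ptr CInt -> CInt -> Ptr CInt -> IO ()

decideMany :: Ptr CInt -> CInt -> Ptr CInt -> IO ()
decideMany inp n out = do
  ps <- peekArray (fromIntegral n) inp   -- copy C array into a Haskell list
  pokeArray out (map decide ps)          -- write the mapped results back
-- Stand-in for f applied to the shared LotsaData.
decide :: CInt -> CInt
decide p = p * 2 + 1
```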
