Does anyone know of a quick-starting Haskell interpreter that would be suitable for use in writing shell scripts? Running 'hello world' using Hugs took 400ms on my old laptop.
Using ghc -e is pretty much equivalent to invoking ghci. I believe that GHC's runhaskell compiles the code to a temporary executable before running it, as opposed to interpreting it like ghc -e/ghci, but I'm not 100% certain.
$ time echo 'Hello, world!'
Hello, world!
real 0m0.021s
user 0m0.000s
sys 0m0.000s
$ time ghc -e 'putStrLn "Hello, world!"'
Hello, world!
real 0m0.401s
user 0m0.031s
sys 0m0.015s
$ echo 'main = putStrLn "Hello, world!"' > hw.hs
$ time runhaskell hw.hs
Hello, world!
real 0m0.335s
user 0m0.015s
sys 0m0.015s
$ time ghc --make hw
[1 of 1] Compiling Main ( hw.hs, hw.o )
Linking hw ...
real 0m0.855s
user 0m0.015s
sys 0m0.015s
$ time ./hw
Hello, world!
real 0m0.037s
user 0m0.015s
sys 0m0.000s
How hard is it to simply compile all your "scripts" before running them?
Ah, providing binaries for multiple architectures is a pain indeed. I've gone down that road before, and it's not much fun...
Sadly, I don't think it's possible to do much about any Haskell implementation's startup overhead. The language's declarative nature means the whole program has to be read in before anything can be typechecked, never mind executed, and then you either pay the cost of strictness analysis or suffer unnecessary laziness and thunking.
The popular 'scripting' languages (shell, Perl, Python, etc.) and the ML-based languages need only a single pass... well, okay, ML requires a static typechecking pass and Perl has its amusing five-pass scheme (with two of the passes running in reverse); either way, being procedural means the compiler/interpreter has a much easier job assembling the pieces of the program.
In short, I don't think you can get much better than this. I haven't tested whether Hugs or GHCi starts up faster, but any difference there is still far from what the non-Haskell languages manage.
What about having a ghci daemon and a feeder script that takes the script path and location, tells the already-running ghci process to load and execute the script in the proper directory, and pipes the output back to the feeder script's stdout?
Unfortunately, I have no idea how to write something like this, but it seems like it could be really fast, judging by the speed of :l in ghci; most of runhaskell's cost appears to be in starting up GHC, not in parsing and running the script.
Edit: After some playing around, I found the Hint package (a wrapper around the GHC API) to be perfect for this. The following code loads the given module (here assumed to be in the same directory) and executes its main function. Now 'all' that's left is to make it a daemon and have it accept scripts on a pipe or socket.
import Language.Haskell.Interpreter
import Control.Monad

-- Interpret the named module (from the current directory) and run its main.
run :: String -> IO (Either InterpreterError ())
run = runInterpreter . test

test :: String -> Interpreter ()
test mname = do
    loadModules [mname ++ ".hs"]
    setTopLevelModules [mname]
    res <- interpret "main" (as :: IO ())
    liftIO res
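For what it's worth, here is a minimal sketch of what the daemon loop itself might look like, reusing the test action above. The named-pipe path and the overall wiring are my own assumptions (nothing here is dictated by Hint), and a real daemon would still need the stdin/stdout plumbing discussed below.
import Control.Monad (forever)
import System.IO
import Language.Haskell.Interpreter

-- Sketch of a daemon loop: block on a named pipe (created beforehand with
-- `mkfifo /tmp/hsdaemon`; the path is made up), read one module name per
-- open, and run that module's main via the test action above.
daemon :: IO ()
daemon = forever $ do
    mname <- withFile "/tmp/hsdaemon" ReadMode hGetLine
    r <- runInterpreter (test mname)
    either (hPrint stderr) return r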
Edit 2: As far as stdin/stdout/stderr go, using this specific GHC trick it looks like it would be possible to redirect the client program's standard streams into the feeder program, then into some named pipes (perhaps) that the daemon is connected to at all times, and then have the daemon's stdout go back through another named pipe that the feeder program is listening to. Pseudo-example:
grep ... | feeder my_script.hs | xargs ...
               |        ^---------------+
               v                        |
          named pipe -> daemon -> named pipe
Here the feeder would be a small compiled harness program that just redirects the standard streams into and back out of the daemon and passes the name and location of the script to the daemon.
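And here is a correspondingly small sketch of the feeder harness, again with made-up pipe paths; the daemon sketch above doesn't actually write to the output pipe, and stdin forwarding and exit codes are left out entirely.
import System.Environment (getArgs)
import System.IO

-- Sketch of the feeder: hand the script's path to the daemon over one named
-- pipe, then relay whatever comes back on a second named pipe to our stdout.
-- Both pipe paths are hypothetical.
main :: IO ()
main = do
    (script : _) <- getArgs
    withFile "/tmp/hsdaemon" WriteMode (\h -> hPutStrLn h script)
    readFile "/tmp/hsdaemon-out" >>= putStr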
Why not create a script front-end that compiles the script if it hasn't been compiled before, or if the compiled version is out of date?
Here's the basic idea; this code could be improved a lot: search the path rather than assuming everything is in the same directory, handle other file extensions better, etc. Also, I'm pretty green at Haskell coding (ghc-compiled-script.hs):
import Control.Monad
import System
import System.Directory
import System.IO
import System.Posix.Files
import System.Posix.Process
import System.Process

-- Modification time of a file.
getMTime f = getFileStatus f >>= return . modificationTime

main = do
    -- First argument is the script; the rest are passed through to it.
    scr : args <- getArgs
    let cscr = takeWhile (/= '.') scr   -- name of the compiled executable
    scrExists  <- doesFileExist scr
    cscrExists <- doesFileExist cscr
    -- Recompile if there is no executable yet or the source is newer.
    compile <- if scrExists && cscrExists
        then do
            scrMTime  <- getMTime scr
            cscrMTime <- getMTime cscr
            return $ cscrMTime <= scrMTime
        else
            return True
    when compile $ do
        r <- system $ "ghc --make " ++ scr
        case r of
            ExitFailure i -> do
                hPutStrLn stderr $
                    "'ghc --make " ++ scr ++ "' failed: " ++ show i
                exitFailure
            ExitSuccess -> return ()
    -- Replace this process with the compiled script.
    executeFile cscr False args Nothing
Now we can create scripts such as this (hs-echo.hs):
#! ghc-compiled-script
import Data.List
import System
import System.Environment

main = do
    args <- getArgs
    putStrLn $ foldl (++) "" $ intersperse " " args
And now running it:
$ time hs-echo.hs "Hello, world\!"
[1 of 1] Compiling Main ( hs-echo.hs, hs-echo.o )
Linking hs-echo ...
Hello, world!
hs-echo.hs "Hello, world!" 0.83s user 0.21s system 97% cpu 1.062 total
$ time hs-echo.hs "Hello, world, again\!"
Hello, world, again!
hs-echo.hs "Hello, world, again!" 0.01s user 0.00s system 60% cpu 0.022 total
If you are really concerned with speed, you are going to be hampered by re-parsing the code on every launch. Haskell doesn't need to be run from an interpreter; compile it with GHC and you should get excellent performance.
You have two parts to this question:
If you care about performance, the only serious option is GHC, which is very very fast: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=all
If you want something light for Unix scripting, I'd use GHCi. It is about 30x faster than Hugs, but also supports all the libraries on Hackage.
So install GHC now (and get GHCi for free).
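For the scripting case, note that a GHC install also ships runghc, so a file can be made directly executable with a shebang line; a trivial example (hello.hs is just an illustrative name):
#!/usr/bin/env runghc
-- chmod +x hello.hs, then run it like any other script.
main :: IO ()
main = putStrLn "Hello, world!"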