Downloading large files from the Internet in Haskell

Submitted by 谁都会走 on 2019-12-09 12:58:00

Question


Are there any suggestions about how to download large files in Haskell? I figure http-conduit is a good library for this, but how does it solve the problem? There is an example in its documentation, but I'm not sure it's a good fit for downloading large files; it just downloads a file:

 import Data.Conduit.Binary (sinkFile)  -- streaming sink that writes chunks to a file
 import Network.HTTP.Conduit
 import qualified Data.Conduit as C

 main :: IO ()
 main = do
      -- parseUrl throws if the URL is malformed
      request <- parseUrl "http://google.com/"
      withManager $ \manager -> do
          response <- http request manager
          -- connect the streaming response body to the file sink
          responseBody response C.$$+- sinkFile "google.html"

What I want is to be able to download large files without running out of RAM, i.e. to do it efficiently in terms of memory. Preferably, I'd also like to be able to continue downloading them "later", meaning "some part now, another part later".

I also found the download-curl package on Hackage, but I'm not positive it's a good fit, or even that it downloads files chunk by chunk as I need.


Answer 1:


Network.HTTP.Conduit provides three functions for performing a request:

  • simpleHttp
  • httpLbs
  • http

Of the three, the first two load the entire response body into memory. If you want to operate in constant memory, use the http function, which gives you access to a streaming interface through ResumableSource.
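
Not part of the original answer, but as a minimal sketch of that streaming interface: the following assumes a recent http-conduit (where http returns a conduit source rather than a ResumableSource) and a placeholder URL, and simply counts the bytes as they stream past, so only one chunk is resident in memory at a time.

 import Control.Monad.IO.Class (liftIO)
 import Control.Monad.Trans.Resource (runResourceT)
 import Data.Conduit (runConduit, (.|))
 import qualified Data.Conduit.Combinators as CC
 import Network.HTTP.Conduit

 main :: IO ()
 main = do
      manager <- newManager tlsManagerSettings
      -- placeholder URL, stands in for any large download
      request <- parseRequest "http://example.com/largefile.bin"
      runResourceT $ do
          response <- http request manager
          -- fold over the body chunk by chunk; only one chunk
          -- is in memory at any moment
          len <- runConduit $ responseBody response .| CC.lengthE
          liftIO $ putStrLn ("downloaded " ++ show (len :: Int) ++ " bytes")

Replacing CC.lengthE with sinkFile gives you the same constant-memory download-to-disk behaviour as the example in the question.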

The example you provided in your question uses interleaved IO to write the response body to a file in constant memory space, so you will not run out of memory when downloading a large file.
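
The original answer stops there, but the question also asks about continuing a download "later". One possible approach (my sketch, not from the answer: resumeDownload is a hypothetical helper, it targets a recent http-conduit API, and it assumes the server honours HTTP Range requests) is to check how many bytes are already on disk and request only the remainder:

 {-# LANGUAGE OverloadedStrings #-}
 import Control.Monad.Trans.Resource (runResourceT)
 import qualified Data.ByteString.Char8 as BS
 import Data.Conduit (runConduit, (.|))
 import Data.Conduit.Binary (sinkIOHandle)
 import Network.HTTP.Conduit
 import System.Directory (doesFileExist, getFileSize)
 import System.IO (IOMode (AppendMode), openBinaryFile)

 -- Hypothetical helper: resume a download from wherever a previous
 -- attempt left off, assuming the server supports Range requests.
 resumeDownload :: String -> FilePath -> IO ()
 resumeDownload url path = do
      manager <- newManager tlsManagerSettings
      offset  <- do exists <- doesFileExist path
                    if exists then getFileSize path else return 0
      baseReq <- parseRequest url
      -- ask only for the bytes we do not have yet
      let req = baseReq { requestHeaders =
                    ("Range", BS.pack ("bytes=" ++ show offset ++ "-"))
                        : requestHeaders baseReq }
      runResourceT $ do
          response <- http req manager
          -- append the new chunks to the partial file as they arrive
          runConduit $ responseBody response
                    .| sinkIOHandle (openBinaryFile path AppendMode)

A more careful version would also verify that the server replied with 206 Partial Content before appending; a plain 200 response would mean the whole file was sent again from the start.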



Source: https://stackoverflow.com/questions/24718873/downloading-large-files-from-the-internet-in-haskell
