haskell-pipes

Join two consumers into a single consumer that returns multiple values?

给你一囗甜甜゛ submitted on 2019-12-04 11:01:02
I have been experimenting with the new pipes-http package and I had a thought. I have two parsers for a web page: one returns line items, and another a number from elsewhere in the page. When I grab the page, it'd be nice to string these parsers together and get their results at the same time from the same bytestring producer, rather than fetching the page twice or fetching all the HTML into memory and parsing it twice. In other words, say you have two Consumers:

    c1 :: Consumer a m r1
    c2 :: Consumer a m r2

Is it possible to make a function like this:

    combineConsumers :: Consumer a m r1 ->
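The excerpt cuts off there. One commonly suggested alternative (a sketch, not the thread's own answer) is to express both analyses as Folds from the foldl package, whose Applicative instance lets a single pass over the Producer compute both results; collectItems and countItems below are hypothetical stand-ins for the two page parsers:

    import Pipes
    import qualified Pipes.Prelude as P
    import qualified Control.Foldl as L   -- from the "foldl" package

    -- Hypothetical stand-ins for the two parsers: one collects the items,
    -- the other computes a single number.
    collectItems :: L.Fold a [a]
    collectItems = L.list

    countItems :: L.Fold a Int
    countItems = L.length

    -- Run both folds over one pass of the same Producer; the Applicative
    -- instance of Fold combines them without duplicating the stream.
    bothResults :: Monad m => Producer a m () -> m ([a], Int)
    bothResults = L.purely P.fold ((,) <$> collectItems <*> countItems)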

Haskell Pipes - get return value of last Proxy in pipeline

跟風遠走 submitted on 2019-12-04 09:20:41
Let's say I have two Proxies in Haskell Pipes. They represent external system processes.

    produce :: MonadIO m => Producer ByteString m ExitCode
    consume :: MonadIO m => Consumer ByteString m ExitCode

So I hook them into an Effect, like this:

    effect :: Effect m ExitCode
    effect = produce >-> consume

This Effect is going to give me the ExitCode from the first Proxy that terminates. Ordinarily this will be produce, not consume. What's the idiomatic Pipes way to get the return value of consume even if it does not terminate first? So far I am thinking this is not possible without doing
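One possible workaround (an assumption on my part, not necessarily the thread's answer, and it requires changing the element type): re-yield each element as Just and send a final Nothing, so the consumer can observe end-of-input and return on its own terms, making its return value the one the Effect produces. withEOF and consumeMaybe are illustrative names, not part of pipes:

    import Pipes
    import Control.Monad.IO.Class (liftIO)
    import Data.ByteString (ByteString)
    import System.Exit (ExitCode (..))

    -- Re-yield each element as Just, then mark end-of-stream with Nothing.
    withEOF :: Monad m => Producer a m r -> Producer (Maybe a) m r
    withEOF p = do
      r <- for p (yield . Just)
      yield Nothing
      return r

    -- The consumer terminates itself on Nothing, so its return value is the
    -- one runEffect hands back.
    consumeMaybe :: Consumer (Maybe ByteString) IO ExitCode
    consumeMaybe = do
      mx <- await
      case mx of
        Nothing -> return ExitSuccess
        Just bs -> liftIO (print bs) >> consumeMaybe

    -- runEffect (withEOF produce >-> consumeMaybe) :: IO ExitCode

The producer's own ExitCode is still available inside withEOF (it is the r bound from for), so it could be logged or stashed there if both values are needed.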

using haskell pipes-bytestring to iterate a file by line

*爱你&永不变心* submitted on 2019-12-04 07:45:14
I am using the pipes library and need to convert a ByteString stream to a stream of lines (i.e. String), using ASCII encoding. I am aware that there are other libraries (Pipes.Text and Pipes.Prelude) that perhaps let me yield lines from a text file more easily, but because of some other code I need to be able to get lines as String from a Producer of ByteString. More formally, I need to convert a Producer ByteString IO () into a Producer String IO () which yields lines. I am sure this must be a one-liner for an experienced Pipes programmer, but so far I did not manage to successfully hack
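Not the hoped-for one-liner, but a hand-rolled sketch of the conversion using only the core Pipes API (next), rather than the lens-based Pipes.ByteString.lines machinery, assuming ASCII as stated:

    import Pipes
    import Control.Monad (unless)
    import qualified Data.ByteString.Char8 as BC

    -- Buffer incoming chunks, split on '\n', and yield each complete line
    -- as a String; a trailing unterminated line is yielded at end of input.
    toLines :: Monad m => Producer BC.ByteString m () -> Producer String m ()
    toLines = go BC.empty
      where
        go acc prod = do
          step <- lift (next prod)
          case step of
            Left ()              -> unless (BC.null acc) (yield (BC.unpack acc))
            Right (chunk, prod') -> do
              let (ls, rest) = splitChunk (BC.append acc chunk)
              mapM_ (yield . BC.unpack) ls
              go rest prod'
        splitChunk bs = case BC.elemIndex '\n' bs of
          Nothing -> ([], bs)
          Just i  ->
            let (line, rest) = (BC.take i bs, BC.drop (i + 1) bs)
                (ls, rest')  = splitChunk rest
            in  (line : ls, rest')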

What's the benefit of conduit's leftovers?

独自空忆成欢 submitted on 2019-12-03 02:04:27
I'm trying to understand the differences between conduit and pipes. Unlike pipes, conduit has the concept of leftovers. What are leftovers useful for? I'd like to see some examples where leftovers are essential. And since pipes doesn't have the concept of leftovers, is there any way to achieve similar behavior with it? Gabriel's point that leftovers are always part of parsing is interesting. I'm not sure I would agree, but that may just depend on the definition of parsing. There is a large category of use cases which require leftovers. Parsing is certainly one: any time a parse requires
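A small illustration of what leftover buys you, written against the conduit-1.3 ConduitT type (conduit already ships an equivalent peek combinator; this is only a re-derivation to make the mechanism visible):

    import Data.Conduit

    -- Look at the next input without consuming it: await it, then push it
    -- back with leftover so downstream awaits still see it.
    peek' :: Monad m => ConduitT i o m (Maybe i)
    peek' = do
      mx <- await
      case mx of
        Nothing -> return Nothing
        Just x  -> leftover x >> return (Just x)

On the pipes side, a comparable effect is usually obtained with pipes-parse, whose unDraw pushes an element back onto the underlying Producer held in StateT.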

What is pipes/conduit trying to solve

隐身守侯 submitted on 2019-12-02 14:17:08
I have seen people recommending the pipes/conduit libraries for various lazy-IO-related tasks. What problem do these libraries solve, exactly? Also, when I try to use some Hackage libraries, it is highly likely that there are three different versions, for example: attoparsec, pipes-attoparsec, attoparsec-conduit. This confuses me. For my parsing tasks should I use attoparsec or pipes-attoparsec/attoparsec-conduit? What benefit does the pipes/conduit version give me compared to plain vanilla attoparsec? J. Abrahamson: Lazy IO. Lazy IO works like this:

    readFile :: FilePath -> IO ByteString

where
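For contrast with that lazy readFile, a small sketch (assuming pipes-bytestring) of the streaming version of the same idea: the file handle is only alive inside withFile, and chunks are processed as they are read rather than whenever lazy evaluation happens to demand them:

    import Pipes
    import qualified Pipes.Prelude as P
    import qualified Pipes.ByteString as PB   -- from pipes-bytestring
    import System.IO (IOMode (ReadMode), withFile)

    -- Count the chunks of a file without ever holding the whole file in
    -- memory; the handle's lifetime is bounded by withFile, not by laziness.
    countChunks :: FilePath -> IO Int
    countChunks path = withFile path ReadMode $ \h ->
      P.length (PB.fromHandle h)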

Forking the streaming flow in haskell-pipes

穿精又带淫゛_ submitted on 2019-12-01 10:46:55
Problem: I'm having trouble directing flow through a pipeline with haskell-pipes. Basically, I analyze a bunch of files and then I have to either print results to the terminal in a human-friendly way, or encode results to JSON. The chosen path depends upon a command-line option. In the second case, I have to output an opening bracket, then every incoming value followed by a comma, and then a closing bracket. Currently insertCommas never terminates, so the closing bracket is never output.

    import Pipes
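One way to make the comma-insertion stage terminate (a sketch, not the question's own code): a Pipe can never observe upstream termination, so write the stage as a function on Producers instead and use next to interleave commas between elements:

    import Pipes

    -- Yield the first element as-is, then prefix every later element with a
    -- comma; terminates exactly when the wrapped Producer does.
    insertCommas :: Monad m => Producer String m r -> Producer String m r
    insertCommas prod = do
      step <- lift (next prod)
      case step of
        Left r              -> return r
        Right (first, rest) -> do
          yield first
          for rest (\s -> yield "," >> yield s)

    -- Wrap the whole stream in the JSON array brackets.
    jsonArray :: Monad m => Producer String m r -> Producer String m r
    jsonArray prod = do
      yield "["
      r <- insertCommas prod
      yield "]"
      return r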

Haskell: Splitting pipes (broadcast) without using spawn

半城伤御伤魂 submitted on 2019-11-30 21:55:00
This question is a bit codegolf and a lot newb. I'm using the awesome pipes library in Haskell, and I'd like to split a pipe to send the same data along multiple channels (do a broadcast). The Pipes.Concurrent tutorial suggests using spawn to create mailboxes, taking advantage of Output's Monoid instance. For example, we might do something like this:

    main = do
      (output1, input1) <- spawn Unbounded
      (output2, input2) <- spawn Unbounded
      let effect1 = fromInput input1 >-> pipe1
      let effect2 = fromInput input2 >-> pipe2
      let effect3 = P.stdinLn >-> toOutput (output1 <> output2)
      ...

Is this indirection
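One mailbox-free alternative (a sketch; single-threaded and strictly sequential, unlike spawn): Pipes.Prelude.tee runs one Consumer as a side channel while forwarding every element downstream unchanged:

    import Pipes
    import qualified Pipes.Prelude as P

    -- Every line from stdin is consumed twice: once by the tee'd Consumer
    -- and once by the final Consumer, with no mailboxes or extra threads.
    main :: IO ()
    main = runEffect $
      P.stdinLn >-> P.tee (P.map ("copy 1: " ++) >-> P.stdoutLn)
                >-> (P.map ("copy 2: " ++) >-> P.stdoutLn)

The trade-off is that spawn buys buffering and concurrency between the branches, which tee does not.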

Idiomatic bidirectional Pipes with downstream state without loss

偶尔善良 submitted on 2019-11-30 17:32:25
Say I have a simple producer/consumer model where the consumer wants to pass some state back to the producer. For instance, let the downstream-flowing objects be objects we want to write to a file and the upstream objects be some token representing where the object was written in the file (e.g. an offset). These two processes might look something like this (with pipes-4.0):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Pipes
    import Pipes.Core
    import Control.Monad.Trans.State
    import Control.Monad

    newtype Object   = Obj Int   deriving (Show)
    newtype ObjectId = ObjId Int deriving (Show, Num)
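The excerpt cuts off there; below is a compressed, runnable sketch of the request/respond shape such a pair can take, with simplified names and a plain counter standing in for the real file offset (not the question's own code):

    import Pipes.Core
    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.State.Strict (StateT, evalStateT, get, put)

    newtype Object   = Obj Int   deriving (Show)
    newtype ObjectId = ObjId Int deriving (Show)

    -- The writer is a Server: it is handed an Object, "writes" it (here the
    -- state is just a counter playing the role of a file offset), and
    -- responds with the ObjectId, receiving the next Object in return.
    writer :: Monad m => Object -> Server Object ObjectId (StateT Int m) r
    writer _obj = do
      off <- lift get
      lift (put (off + 1))          -- a real writer would serialise _obj here
      nextObj <- respond (ObjId off)
      writer nextObj

    -- The upstream side is a Client: request hands an Object to the writer
    -- and returns the ObjectId the writer responded with.
    source :: Monad m => Client Object ObjectId m [ObjectId]
    source = mapM (request . Obj) [1 .. 5]

    main :: IO ()
    main = do
      ids <- evalStateT (runEffect (writer +>> source)) 0
      print ids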

Haskell Pipes and Branching

依然范特西╮ submitted on 2019-11-30 12:45:05
Problem: I'm attempting to implement a simple web server with Haskell and the Pipes library. I understand now that cyclic or diamond topologies aren't possible with pipes; however, I thought that what I am trying to do is. My desired topology is thus:

                                              -GET--> handleGET  >-> packRequest >-> socketWriteD
                                             |
    socketReadS >-> parseRequest >-> routeRequest
                                             |
                                              -POST-> handlePOST >-> packRequest >-> socketWriteD

I have HTTPRequest RequestLine Headers Message and HTTPResponse StatusLine Headers Message types which are used in the chain. socketReadS takes bytes from the socket and forwards them to parseRequest,
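One way to express the GET/POST fork without a diamond is to do the branching inside a single stage, so the surrounding pipeline stays linear. Request, Response and the handlers below are simplified hypothetical stand-ins for the question's HTTPRequest/HTTPResponse types, only meant to show the shape:

    import Pipes
    import qualified Pipes.Prelude as P

    data Method   = GET | POST
    data Request  = Request Method String     -- just a method and a path
    data Response = Response String

    handleGET, handlePOST :: Request -> Response
    handleGET  (Request _ path) = Response ("fetched "    ++ path)
    handlePOST (Request _ path) = Response ("created at " ++ path)

    -- Routing is an ordinary per-element case expression inside one Pipe,
    -- so no splitting or re-joining of the pipeline is needed.
    routeRequest :: Monad m => Pipe Request Response m r
    routeRequest = P.map $ \req@(Request method _) -> case method of
      GET  -> handleGET  req
      POST -> handlePOST req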