iterate

How to use IO with Scalaz7 Iteratees without overflowing the stack?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-11-29 12:34:08
Question: Consider this code (taken from here and modified to use bytes rather than lines of characters). import java.io.{ File, InputStream, BufferedInputStream, FileInputStream } import scalaz._, Scalaz._, effect._, iteratee.{ Iteratee => I, _ } import std.list._ object IterateeIOExample { type ErrorOr[+A] = EitherT[IO, Throwable, A] def openStream(f: File) = IO(new BufferedInputStream(new FileInputStream(f))) def readByte(s: InputStream) = IO(Some(s.read()).filter(_ != -1)) def closeStream(s:
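
The excerpt is cut off mid-definition. A minimal sketch completing the truncated helpers, assuming the same scalaz 7 effect.IO API as the excerpt (the closeStream body is how the linked example presumably continues, not something shown above):

```scala
import java.io.{ File, InputStream, BufferedInputStream, FileInputStream }
import scalaz._, Scalaz._, effect._

object IterateeIOHelpers {
  // Errors are carried in EitherT over IO, as in the excerpt above.
  type ErrorOr[+A] = EitherT[IO, Throwable, A]

  def openStream(f: File): IO[InputStream] =
    IO(new BufferedInputStream(new FileInputStream(f)))

  // Read one byte, mapping the -1 end-of-stream sentinel to None.
  def readByte(s: InputStream): IO[Option[Int]] =
    IO(Some(s.read()).filter(_ != -1))

  // Presumably how the truncated closeStream definition continues.
  def closeStream(s: InputStream): IO[Unit] = IO(s.close())
}
```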

Play2 Framework proxy streaming content to client keeps connection open after streaming is done

Submitted by 不问归期 on 2019-11-29 05:19:00
The code below streams content back to the client in what I gather is a more idiomatic way than using Java's IO streams. It has an issue, however: the connection is kept open after the stream is done. def getImage() = Action { request => val imageUrl = "http://hereandthere.com/someimageurl.png" Ok.stream({ content: Iteratee[Array[Byte], Unit] => WS.url(imageUrl).withHeaders("Accept"->"image/png").get { response => content } return }).withHeaders("Content-Type"->"image/png") } This is intended for streaming large (>1 MB) files from an internal API to the requester. The question is, why does it keep the
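
For reference, a minimal sketch (Play 2.0/2.1 iteratee API assumed; not the asker's exact fix): a chunked Ok.stream response only terminates once its enumerator signals EOF, so appending Enumerator.eof is one way to make the connection close when the data ends.

```scala
import play.api.mvc._
import play.api.libs.iteratee.Enumerator

object StreamingExample extends Controller {
  def serve = Action {
    // Stand-in data source; in the proxy case this would be fed from WS.
    val data: Enumerator[Array[Byte]] =
      Enumerator("chunk-1".getBytes, "chunk-2".getBytes)

    // Without the trailing Enumerator.eof the chunked response never ends,
    // which is the "connection kept open" behaviour described above.
    Ok.stream(data >>> Enumerator.eof).withHeaders("Content-Type" -> "text/plain")
  }
}
```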

Is there an Iteratee-like concept which pulls data from multiple sources?

Submitted by 偶尔善良 on 2019-11-29 03:57:35
It is possible to pull on demand from a number of sources (say two, for simplicity) using streams (lazy lists). Iteratees can be used to process data coming from a single source. Is there an Iteratee-like functional concept for processing multiple input sources? I could imagine an Iteratee whose state signals from which source it wants to pull. To do this using pipes you nest the Pipe monad transformer within itself, once for each producer you wish to interact with. For example: import Control.Monad import Control.Monad.Trans import Control.Pipe producerA, producerB :: (Monad m) => Producer
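
A small self-contained sketch of the idea in plain Scala (Source, Step, run and zipOnce are illustrative names, not library API): the continuation state records which source the consumer wants next, and the driver pulls from the corresponding lazy list.

```scala
object MultiSource {
  sealed trait Source
  case object SourceA extends Source
  case object SourceB extends Source

  // Like an iteratee step, but Cont also says which source it wants next.
  sealed trait Step[E, A]
  case class Done[E, A](result: A) extends Step[E, A]
  case class Cont[E, A](wanted: Source, k: E => Step[E, A]) extends Step[E, A]

  // Drive the consumer, pulling from whichever lazy list it asks for.
  def run[E, A](step: Step[E, A], a: Stream[E], b: Stream[E]): Option[A] = step match {
    case Done(result) => Some(result)
    case Cont(SourceA, k) => a match {
      case x #:: rest => run(k(x), rest, b)
      case _          => None // source A exhausted before the consumer finished
    }
    case Cont(SourceB, k) => b match {
      case x #:: rest => run(k(x), a, rest)
      case _          => None
    }
  }

  // Example consumer: pull one element from A, then one from B, and pair them.
  val zipOnce: Step[Int, (Int, Int)] =
    Cont(SourceA, x => Cont(SourceB, y => Done((x, y))))

  // run(zipOnce, Stream(1, 2, 3), Stream(10, 20)) == Some((1, 10))
}
```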

Forward a file upload stream to S3 through Iteratee with Play2 / Scala

Submitted by ℡╲_俬逩灬. on 2019-11-28 22:46:09
Question: I've read some things about the possibility of sending a file to S3 through an Iteratee, which seems to permit sending chunks of a file to S3 as we receive them, avoiding an OutOfMemoryError for large files, for example. I've found this SO post, which is probably almost what I need to do: Play 2.x : Reactive file upload with Iteratees. I don't really understand how to do it, or whether it's really available in Play 2.0.2 (because Sadek Brodi says foldM is available in Play 2.1 only, for example). Can
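
A minimal sketch of the foldM part mentioned above (Play 2.1 iteratee API assumed; uploadPart is a placeholder, not a real S3 client call): foldM waits for each chunk's Future to complete before asking for the next chunk, so only one chunk needs to be held in memory at a time.

```scala
import play.api.libs.iteratee.Iteratee
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object S3UploadSketch {
  // Placeholder for a real S3 multipart "upload part" call.
  def uploadPart(partNumber: Int, bytes: Array[Byte]): Future[Unit] =
    Future(println(s"uploading part $partNumber: ${bytes.length} bytes"))

  // Counts parts while uploading them one at a time; the final result is the
  // number of parts pushed.
  val pushToS3: Iteratee[Array[Byte], Int] =
    Iteratee.foldM(0) { (part: Int, bytes: Array[Byte]) =>
      uploadPart(part + 1, bytes).map(_ => part + 1)
    }
}
```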

Why does calling error or done in a BodyParser's Iteratee make the request hang in Play Framework 2.0?

Submitted by 放肆的年华 on 2019-11-27 22:32:22
Question: I am trying to understand the reactive I/O concepts of the Play 2.0 framework. In order to get a better understanding from the start, I decided to skip the framework's helpers for constructing iteratees of different kinds and to write a custom Iteratee from scratch, to be used by a BodyParser to parse a request body. Starting with the information available in the Iteratees and ScalaBodyParser docs and two presentations about Play reactive I/O, this is what I came up with: import play.api.mvc._ import play
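
Not the asker's hand-rolled iteratee, but a minimal sketch (Play 2.0-era API assumed; the parser name and upload action are illustrative) of how a BodyParser wires an Iteratee that consumes the body chunks and finishes with an Either[Result, A]:

```scala
import play.api.mvc._
import play.api.libs.iteratee.Iteratee

object CustomParserExample extends Controller {
  // A BodyParser is essentially RequestHeader => Iteratee[Array[Byte], Either[Result, A]];
  // this one just folds over the incoming chunks and counts the bytes received.
  val countBytes: BodyParser[Long] = BodyParser("byte counter") { requestHeader =>
    Iteratee.fold[Array[Byte], Long](0L) { (total, chunk) => total + chunk.length }
      .map(total => Right(total))
  }

  def upload = Action(countBytes) { request =>
    Ok("received " + request.body + " bytes")
  }
}
```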

Play 2.x : Reactive file upload with Iteratees

Submitted by 末鹿安然 on 2019-11-27 17:21:06
I will start with the question: how to use the Scala API's Iteratee to upload a file to cloud storage (Azure Blob Storage in my case, but I don't think that's the most important part now). Background: I need to chunk the input into blocks of about 1 MB for storing large media files (300 MB+) as Azure BlockBlobs. Unfortunately, my Scala knowledge is still poor (my project is Java based and the only use for Scala in it will be an Upload controller). I tried with this code: Why does calling error or done in a BodyParser's Iteratee make the request hang in Play Framework 2.0? (as an input Iteratee) - it
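
Along the lines of the linked answer, a hedged sketch (Play 2.1 iteratee API assumed; stageBlock and commitBlockList are placeholders, not real Azure SDK calls): regroup the incoming byte arrays into roughly 1 MB blocks, stage each block, and commit the collected block ids once the input ends.

```scala
import play.api.libs.iteratee.{ Enumeratee, Iteratee, Traversable }
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import java.util.UUID

object AzureBlockUploadSketch {
  // Placeholders for the real PutBlock / PutBlockList calls.
  def stageBlock(blockId: String, bytes: Array[Byte]): Future[Unit] =
    Future(println(s"PutBlock $blockId: ${bytes.length} bytes"))
  def commitBlockList(blockIds: List[String]): Future[Unit] =
    Future(println(s"PutBlockList: ${blockIds.size} blocks"))

  // Consume at most ~1 MB of the incoming byte arrays and concatenate them.
  val consumeAMB = Traversable.takeUpTo[Array[Byte]](1024 * 1024) &>> Iteratee.consume()

  // Turn arbitrary-sized input arrays into a stream of ~1 MB chunks.
  val rechunk: Enumeratee[Array[Byte], Array[Byte]] = Enumeratee.grouped(consumeAMB)

  // Stage each chunk as a block, remember its id, then commit the block list.
  val uploadBlob: Iteratee[Array[Byte], Future[Unit]] =
    rechunk &>> Iteratee.foldM(List.empty[String]) { (ids: List[String], bytes: Array[Byte]) =>
      val id = UUID.randomUUID().toString
      stageBlock(id, bytes).map(_ => ids :+ id)
    }.map(ids => commitBlockList(ids))
}
```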