Play 2.x : Reactive file upload with Iteratees

Backend | Unresolved | 4 answers | 874 views

北恋 2020-12-04 10:18

I will start with the question: how do you use the Scala API's Iteratee to upload a file to cloud storage (Azure Blob Storage in my case, but I don't think…

4 answers
  • 2020-12-04 10:53

    If your goal is to stream to S3, here is a helper that I have implemented and tested:

    import java.io.ByteArrayInputStream
    import com.amazonaws.services.s3.model._
    import play.api.libs.iteratee._
    import scala.concurrent.{ExecutionContext, Future}
    
    def uploadStream(bucket: String, key: String, enum: Enumerator[Array[Byte]])
                    (implicit ec: ExecutionContext): Future[CompleteMultipartUploadResult] = {
      import scala.collection.JavaConversions._
    
      // Start the multipart upload and keep its id for every part request.
      val initRequest = new InitiateMultipartUploadRequest(bucket, key)
      val initResponse = s3.initiateMultipartUpload(initRequest)
      val uploadId = initResponse.getUploadId
    
      // Regroup incoming chunks into 5 MB pieces (S3's minimum part size).
      val rechunker: Enumeratee[Array[Byte], Array[Byte]] = Enumeratee.grouped {
        Traversable.takeUpTo[Array[Byte]](5 * 1024 * 1024) &>> Iteratee.consume()
      }
    
      // Upload each part in turn, accumulating the returned PartETags.
      val uploader = Iteratee.foldM[Array[Byte], Seq[PartETag]](Seq.empty) { case (etags, bytes) =>
        val uploadRequest = new UploadPartRequest()
          .withBucketName(bucket)
          .withKey(key)
          .withPartNumber(etags.length + 1)
          .withUploadId(uploadId)
          .withInputStream(new ByteArrayInputStream(bytes))
          .withPartSize(bytes.length)
    
        val etag = Future { s3.uploadPart(uploadRequest).getPartETag }
        etag.map(etags :+ _)
      }
    
      // Run the stream: rechunk, then upload the parts sequentially.
      val futETags = enum &> rechunker |>>> uploader
    
      futETags.map { etags =>
        val compRequest = new CompleteMultipartUploadRequest(bucket, key, uploadId, etags.toBuffer[PartETag])
        s3.completeMultipartUpload(compRequest)
      }.recoverWith { case e: Exception =>
        // Abort on failure so S3 does not keep the orphaned parts around.
        s3.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, uploadId))
        Future.failed(e)
      }
    }
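Stripped of the AWS and Play dependencies, the rechunk-then-fold shape of this helper can be sketched with plain Scala collections and Futures. This is only an illustrative sketch: `RechunkFoldDemo`, `rechunk`, and `foldM` are hypothetical stand-ins for the iteratee combinators, and tiny 4-byte "parts" replace the real 5 MB minimum.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object RechunkFoldDemo {
  // Regroup arbitrarily sized chunks into pieces of at most `partSize` bytes,
  // mimicking what Enumeratee.grouped + Traversable.takeUpTo does above.
  def rechunk(chunks: List[Array[Byte]], partSize: Int): List[Array[Byte]] =
    chunks.flatten.grouped(partSize).map(_.toArray).toList

  // foldM-style sequential async fold: each step starts only after the
  // previous step's Future completes, like Iteratee.foldM.
  def foldM[S](parts: List[Array[Byte]], zero: S)(f: (S, Array[Byte]) => Future[S]): Future[S] =
    parts.foldLeft(Future.successful(zero)) { (acc, part) => acc.flatMap(s => f(s, part)) }

  def main(args: Array[String]): Unit = {
    val input = List(Array[Byte](1, 2, 3), Array[Byte](4, 5), Array[Byte](6, 7, 8, 9))
    val parts = rechunk(input, partSize = 4) // tiny 4-byte "parts" instead of 5 MB

    // Fake "part upload": accumulate (partNumber, partSize), like the PartETags.
    val uploaded = foldM(parts, Seq.empty[(Int, Int)]) { (etags, bytes) =>
      Future.successful(etags :+ ((etags.length + 1, bytes.length)))
    }
    println(Await.result(uploaded, 5.seconds)) // List((1,4), (2,4), (3,1))
  }
}
```

Note how the last part is allowed to be smaller than `partSize`, exactly as with the 5 MB S3 parts above.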
    
  • 2020-12-04 10:57

    Add the following to your config file to raise Play's in-memory body buffer (the default is 100 KB):

    play.http.parser.maxMemoryBuffer=256K
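If large uploads are buffered to disk (as multipart file parts are), the disk buffer limit may also need raising alongside the memory buffer. `play.http.parser.maxDiskBuffer` is the corresponding Play setting; the 1GB value below is just an illustrative assumption, not a recommendation:

```hocon
# Assumed example values - tune both to your expected upload sizes.
play.http.parser.maxMemoryBuffer = 256K
play.http.parser.maxDiskBuffer   = 1GB
```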

  • 2020-12-04 10:59

    For those who are also trying to figure out a solution to this streaming problem: instead of writing a whole new BodyParser, you can reuse what has already been implemented in parse.multipartFormData, by overriding the default handler handleFilePartAsTemporaryFile with something like the following.

    def handleFilePartAsS3FileUpload: PartHandler[FilePart[String]] = {
      handleFilePart {
        case FileInfo(partName, filename, contentType) =>
    
          (rechunkAdapter &>> writeToS3).map {
            _ =>
              val compRequest = new CompleteMultipartUploadRequest(...)
              amazonS3Client.completeMultipartUpload(compRequest)
              ...
          }
      }
    }
    
    def multipartFormDataS3: BodyParser[MultipartFormData[String]] = multipartFormData(handleFilePartAsS3FileUpload)
    

    I am able to make this work, but I am still not sure whether the whole upload process is streamed. I tried some large files, and it seems the S3 upload only starts once the whole file has been sent from the client side.

    I looked at the parser implementation above, and I think everything is connected using Iteratees, so the file should be streamed. If someone has some insight on this, that would be very helpful.

  • 2020-12-04 11:09

    Basically, what you need first is to rechunk the input into bigger chunks of 1024 * 1024 bytes.

    First, let's have an Iteratee that will consume up to 1 MB of bytes (it's OK for the last chunk to be smaller):

    val consumeAMB = 
      Traversable.takeUpTo[Array[Byte]](1024*1024) &>> Iteratee.consume()
    

    Using that, we can construct an Enumeratee (adapter) that will regroup chunks, using an API called grouped:

    val rechunkAdapter:Enumeratee[Array[Byte],Array[Byte]] =
      Enumeratee.grouped(consumeAMB)
    

    Here grouped uses an Iteratee to determine how much to put in each chunk. It uses our consumeAMB for that, which means the result is an Enumeratee that rechunks the input into 1 MB Array[Byte] chunks.

    Now we need to write the BodyParser, which will use the Iteratee.foldM method to send each chunk of bytes:

    val writeToStore: Iteratee[Array[Byte], ConnectionHandle] =
      Iteratee.foldM[Array[Byte], ConnectionHandle](connectionHandle) { (c, bytes) =>
        // write bytes and return the next handle, probably in a Future;
        // ConnectionHandle stands for whatever handle type your store client uses
      }
    

    foldM passes a state along and feeds it to the function (S, Array[Byte]) => Future[S] it is given, which returns a new Future of state. foldM will not call the function again until that Future has completed and there is an available chunk of input.
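This sequencing guarantee can be observed with plain Futures, without Play. The `foldM` below is a hypothetical stand-in for the Play combinator (built on foldLeft + flatMap), not its real implementation:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FoldMDemo {
  // foldM-style fold: step N+1 runs only after step N's Future completes,
  // so the simulated "writes" happen strictly in input order.
  def foldM[E, S](input: List[E], zero: S)(f: (S, E) => Future[S]): Future[S] =
    input.foldLeft(Future.successful(zero)) { (acc, e) => acc.flatMap(s => f(s, e)) }

  def main(args: Array[String]): Unit = {
    val order = scala.collection.mutable.ListBuffer.empty[Int]
    val result = foldM(List(1, 2, 3, 4), 0) { (sum, n) =>
      Future { order += n; sum + n } // stand-in for a write to the store
    }
    println(Await.result(result, 5.seconds)) // 10
    println(order.toList)                    // List(1, 2, 3, 4)
  }
}
```

Because each step is chained with flatMap, the writes stay in input order even on a multi-threaded execution context, which is exactly why foldM is safe for sequential part uploads.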

    And the body parser will rechunk the input and push it into the store:

    BodyParser( rh => (rechunkAdapter &>> writeToStore).map(Right(_)))
    

    Returning a Right indicates that body parsing succeeded and that you are returning a body at the end of the parse (which here happens to be the store's handle).
