backpressure

Handle back-pressure in FixedThreadPool

Posted by 戏子无情 on 2020-06-01 05:12:26
Question: How do you deal with back-pressure in Java using a thread pool? How can new tasks be rejected so that there are never more than N submitted tasks, where N is the maximum number of allowed tasks in the submission queue, including new, running, and paused (unfinished) tasks? Use case: users submit calculation tasks that run for some time. Sometimes many users submit tasks at the same time. How do we reject new tasks when N tasks are already submitted? In other words, the total number of submitted (not …
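
One standard way to get this behavior from the JDK is a `ThreadPoolExecutor` with a bounded work queue and the `AbortPolicy` rejection handler: once the queue and all workers are full, further submissions throw `RejectedExecutionException`, which is the back-pressure signal to the caller. A minimal sketch (the worker count, queue capacity, and task durations are illustrative, not from the question):

```java
import java.util.concurrent.*;

public class BoundedPool {
    public static void main(String[] args) throws Exception {
        int workers = 2;
        int queueCapacity = 3; // in-flight limit N = workers + queueCapacity = 5
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy()); // throw when saturated

        int accepted = 0, rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.submit(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) { }
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++; // back-pressure: caller learns the pool is full
            }
        }
        System.out.println("accepted=" + accepted + " rejected=" + rejected);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

If throwing is undesirable, `ThreadPoolExecutor.CallerRunsPolicy` instead slows the submitter down by running the overflow task on its own thread.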

node.js: how to handle a fast producer and slow consumer with backpressure

Posted by 戏子无情 on 2020-02-02 14:55:11
Question: I'm a novice at node.js and don't understand the documentation about streams; hoping to get some tips. I'm reading a very large file line by line, and for each line I call an async network API. Naturally, the local file is read much faster than the async calls complete:

var lineReader = require('readline').createInterface({
  input: require('fs').createReadStream(program.input)
});
lineReader.on('line', function (line) {
  client.execute(query, [line], function (err, result) {
    // needs to …
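
The underlying pattern here is language-agnostic: cap the number of in-flight async calls with a counting semaphore, and only consume the next line once a slot frees up. (In node.js itself this would typically be done by pausing/resuming the readline interface.) A sketch of the pattern in plain Java, the language used elsewhere in this digest; the sleeping task stands in for the slow network call:

```java
import java.util.concurrent.*;

public class BoundedInFlight {
    public static void main(String[] args) throws Exception {
        int maxInFlight = 4;
        Semaphore slots = new Semaphore(maxInFlight);
        ExecutorService io = Executors.newCachedThreadPool();

        for (int line = 0; line < 20; line++) {
            slots.acquire();          // "reader" blocks while 4 calls are in flight
            io.submit(() -> {
                try {
                    Thread.sleep(50); // stand-in for the slow network call
                } catch (InterruptedException ignored) {
                } finally {
                    slots.release();  // frees the reader to consume the next line
                }
            });
        }
        io.shutdown();
        io.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("done; in-flight calls were capped at " + maxInFlight);
    }
}
```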

How does Spark Structured Streaming handle backpressure?

Posted by 三世轮回 on 2020-01-12 22:46:15
Question: I'm analyzing the backpressure feature of Spark Structured Streaming. Does anyone know the details? Is it possible to tune how incoming records are processed, in code? Thanks. Answer 1: If you mean dynamically changing the size of each internal batch in Structured Streaming, then no. There are no receiver-based sources in Structured Streaming, so that mechanism is unnecessary. From another point of view, Structured Streaming cannot apply real backpressure because, for example, Spark cannot tell other applications …
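
Although there is no dynamic backpressure, the Structured Streaming Kafka source does let you cap how much data each micro-batch reads via the `maxOffsetsPerTrigger` option. A configuration sketch only (Java API; the server, topic, and limit values are illustrative, and spark-sql plus the spark-sql-kafka connector must be on the classpath):

```java
// Caps each micro-batch at 10,000 offsets across all partitions.
// A static rate cap, not dynamic backpressure.
Dataset<Row> stream = spark
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .option("maxOffsetsPerTrigger", "10000")
    .load();
```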

How to handle backpressure using google cloud functions

Posted by 十年热恋 on 2019-12-20 02:58:16
Question: Using Google Cloud Functions, is there a way to manage execution concurrency the way AWS Lambda does? (https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html) My intent is to design one function that consumes a file of tasks and publishes those tasks to a work queue (Pub/Sub), and another function that consumes tasks from the work queue (Pub/Sub) and executes them. The above could result in a large number of almost-concurrent executions. My downstream consumer service …

How to process an RxJS stream n items at a time and, once an item is done, fill back up to n again?

Posted by 狂风中的少年 on 2019-12-13 16:24:48
Question: I have a stream of events, and for each event I would like to call a function that returns a promise. The problem is that this function is very expensive, so I would like to process at most n events at a time. This marble diagram is probably wrong, but this is what I would like:

---x--x--xxxxxxx-------------x-------------> // Events
---p--p--pppp------p-p-p-----p-------------> // In progress
-------d--d--------d-d-dd------dddd--------> // Promise done
---1--21-2-34-----------3----4-3210- …

Back pressure in Kafka

Posted by 让人想犯罪 __ on 2019-12-13 12:28:10
Question: I have a situation in Kafka where the producer publishes messages at a much higher rate than the consumer consumes them. I need to implement backpressure in Kafka so that consumption and processing can keep up. Please let me know how I can implement this in Spark and also with the plain Java API. Answer 1: Kafka acts as the regulator here. You produce at whatever rate you want into Kafka, scaling the brokers out to accommodate the ingest rate. You then consume as you want to; Kafka …
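
The key point of the answer is that a Kafka consumer paces itself: it only polls for more records when it has capacity, so backpressure is applied by the consumer, not pushed to the producer. That pull-based pacing can be sketched with the JDK alone, using a bounded queue as a stand-in for the poll loop hand-off (no real Kafka client is involved; sizes and delays are illustrative):

```java
import java.util.concurrent.*;

public class ConsumerPacing {
    public static void main(String[] args) throws Exception {
        // Bounded hand-off: when the worker falls behind, the "poll loop"
        // blocks on put() instead of pulling more records — that blocking
        // is exactly how backpressure propagates in a pull model.
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(10);

        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    buffer.take();   // slow consumer
                    Thread.sleep(1); // simulated processing
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        for (int i = 0; i < 100; i++) {
            buffer.put("record-" + i); // fast producer blocks when buffer is full
        }
        worker.join();
        System.out.println("processed 100 records with a buffer of at most 10");
    }
}
```

With the real client, the same effect comes from simply not calling `poll()` again until in-flight work has drained.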

Avoiding data loss when slow consumers force backpressure in stream processing (Spark, AWS)

Posted by 为君一笑 on 2019-12-11 17:13:13
Question: I'm new to distributed stream processing (Spark). I've read some tutorials/examples covering how backpressure causes the producer(s) to slow down in response to overloaded consumers. The classic example is ingesting and analyzing tweets: when an unexpected spike in traffic leaves the consumers unable to handle the load, they apply backpressure and the producer responds by lowering its rate. What I don't really see covered is what approaches are used in …

Selective request-throttling using akka-http stream

Posted by 痞子三分冷 on 2019-12-11 06:45:28
Question: I have one API that calls two downstream APIs. One downstream API (https://test/foo) is really important and very fast. The other, slower downstream API (https://test/bar) has a limitation: it can only handle 50 requests per second. I would like to make sure https://test/foo has higher priority than https://test/bar. For example, if the API thread pool has 75 threads, I only allow 50 parallel incoming connections to go through to https://test/bar. The rest of …
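
One framework-agnostic way to express that policy is a non-blocking permit check in front of the slow endpoint: calls to /bar must grab one of 50 permits or fail fast, while /foo is never throttled. A sketch in plain Java rather than akka-http (the endpoint names and the 50-permit limit come from the question; everything else, including the method names, is hypothetical):

```java
import java.util.concurrent.Semaphore;

public class SelectiveThrottle {
    // At most 50 concurrent calls may be in flight to the slow /bar endpoint.
    private static final Semaphore barPermits = new Semaphore(50);

    // /foo is fast and important: always allowed through.
    static String callFoo() {
        return "foo-ok";
    }

    // /bar is slow: excess calls are rejected up front instead of
    // tying up threads that /foo traffic needs.
    static String callBar() {
        if (!barPermits.tryAcquire()) {
            return "bar-rejected"; // or queue/retry, depending on the policy
        }
        try {
            return "bar-ok"; // stand-in for the real downstream call
        } finally {
            barPermits.release();
        }
    }

    public static void main(String[] args) {
        System.out.println(callFoo());
        System.out.println(callBar());
    }
}
```

In akka-http itself the comparable tools are `Source.throttle` and `mapAsync(parallelism)` on the stream feeding the slow endpoint.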