akka

Is it ever OK to dispatch to a child actor directly, or must all messages be passed from the root actor down in Akka?

我与影子孤独终老i submitted on 2020-07-23 07:47:28
Question: While working through the Akka tutorial, I was wondering: in a scenario where the application holds the ActorSystem reference, can I dispatch messages directly to a child actor such as /user/a in the diagram below, or must all messages destined for that child be passed down from / to /user and then from /user to the child? I ask because in the tutorial the IotSupervisor extends AbstractBehavior<Void>, and I imagine that would need to be changed to extend AbstractBehavior<DeviceManager.Command>.
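For reference, here is a minimal Akka Typed sketch (Scala API, with a made-up Guardian protocol rather than the tutorial's IotSupervisor and DeviceManager) of the "pass it down from the guardian" approach the question describes:

import akka.actor.typed.{ActorRef, ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object Guardian {
  sealed trait Command
  final case class ForwardToA(msg: String) extends Command

  def apply(): Behavior[Command] = Behaviors.setup { context =>
    // the child lives at /user/a
    val childA: ActorRef[String] = context.spawn(child(), "a")

    Behaviors.receiveMessage { case ForwardToA(msg) =>
      childA ! msg // the guardian routes the message down to its child
      Behaviors.same
    }
  }

  private def child(): Behavior[String] =
    Behaviors.receiveMessage { msg =>
      println(s"child /user/a received: $msg")
      Behaviors.same
    }
}

object Main extends App {
  // With Akka Typed the usual pattern is that the application holds the
  // guardian's ActorRef (the ActorSystem itself) and sends it commands,
  // which the guardian passes down to the right child.
  val system: ActorSystem[Guardian.Command] = ActorSystem(Guardian(), "iot")
  system ! Guardian.ForwardToA("hello")
}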

How to clean up substreams in continuous Akka streams

泄露秘密 submitted on 2020-06-25 09:18:14
Question: I have a very long-running stream of events flowing through something like what is shown below. After a long time there will be lots of substreams that are no longer needed. Is there a way to clean up a specific substream at a given time, for example so that the substream created by id 3 is cleaned up and the state in the scan method discarded at 13:00 (the expires property of Wid)? case class Wid(id: Int, v: String, expires: LocalDateTime) test("Substream with scan") { val (pub, sub) =
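One possible direction (a sketch under assumptions, not from the question: the maxSubstreams limit and the takeWhile-based expiry check are invented) is to complete each substream once its elements have expired, which also discards the scan state; with allowClosedSubstreamRecreation a later element with the same id simply starts a fresh substream:

import java.time.LocalDateTime
import akka.stream.scaladsl.Flow

// Names follow the excerpt (Wid with an expires field).
final case class Wid(id: Int, v: String, expires: LocalDateTime)

object SubstreamCleanup {
  val flow =
    Flow[Wid]
      .groupBy(1024, _.id, allowClosedSubstreamRecreation = true)
      // Completing a substream drops its scan state; because closed substreams
      // may be recreated, a later element with the same id starts fresh.
      .takeWhile(w => w.expires.isAfter(LocalDateTime.now()))
      .scan(List.empty[String])((acc, w) => w.v :: acc)
      .mergeSubstreams
}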

Akka-http - configuration of request and host-level pools

China☆狼群 submitted on 2020-06-13 06:12:34
Question: What is the relationship, in terms of pool settings, between the super-pool used by the request-level API and the cached pool created by the host-level API? To provide more context: I need to query the same host/endpoint with fast, responsive requests and with more expensive requests. My current strategy is to use Http().singleRequest() for the cheap queries, and a cached host pool to "isolate" the more expensive queries. I want to make sure that the expensive queries won't use up all the
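One way to read that strategy (a sketch only; the host name, the request-tag type, and the pool sizes are assumptions) is to give the expensive queries their own ConnectionPoolSettings on a cached host pool, while the cheap queries stay on singleRequest and the shared super-pool:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.settings.ConnectionPoolSettings

object Pools {
  implicit val system: ActorSystem = ActorSystem("pools")

  // Cheap queries: request-level API, shared super-pool with default settings.
  def cheap(req: HttpRequest) = Http().singleRequest(req)

  // Expensive queries: host-level API with its own, smaller pool settings,
  // so they cannot exhaust the connections the cheap queries rely on.
  private val expensiveSettings =
    ConnectionPoolSettings(system)
      .withMaxConnections(2)
      .withMaxOpenRequests(8)

  // Assumed host name; the Int is just a request-correlation tag.
  val expensivePool =
    Http().cachedHostConnectionPoolHttps[Int]("api.example.com", settings = expensiveSettings)
}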

Throttle concurrent HTTP requests from Spark executors

梦想的初衷 submitted on 2020-05-27 06:42:37
Question: I want to make some HTTP requests from inside a Spark job to a rate-limited API. To keep track of the number of concurrent requests in a non-distributed system (in Scala), the following works: a throttling actor that maintains a semaphore (counter), incrementing it when a request starts and decrementing it when the request completes. Although Akka is distributed, there are issues (de)serializing the actorSystem in a distributed Spark context. Using parallel streams with fs2: https://fs2.io
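A per-executor variant of that semaphore idea, stripped of Akka so nothing needs to be serialized into the Spark tasks, could look like the sketch below; the permit count and the httpGet helper are assumptions, and each executor JVM throttles independently, so the global concurrency is permits times the number of executors:

import java.util.concurrent.Semaphore

object RequestThrottle {
  // Maximum number of concurrent requests per executor JVM (assumed value).
  private val permits = new Semaphore(4)

  // Runs `call` while holding a permit, blocking if the limit is reached.
  def throttled[A](call: => A): A = {
    permits.acquire()
    try call
    finally permits.release()
  }
}

// Hypothetical usage inside a Spark partition (httpGet is a placeholder):
// rdd.mapPartitions(_.map(url => RequestThrottle.throttled(httpGet(url))))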

Use Case for Database and akka actors

心已入冬 submitted on 2020-05-17 06:12:09
Question: I have certain design issues. I cannot change the main class, which looks like the following: object Main { ... implicit val schema = "schema" implicit val db = Database.forURL("url") val impl = new ImplA() val implB = new ImplB() system.actorOf(ActorA.props(impl, implB)) } The implementation classes look like: class ImplA(implicit val schema: String, db: Database) { val queryA = TableQuery[STable] def getData() = { ... db.run(query) } } class ImplB(implicit val schema: String, db: Database) { val
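Assuming the truncated excerpt continues in the same vein, the actor would only receive the already-constructed implementations through Props, roughly like this hypothetical sketch (the "getData" message and the pipeTo usage are invented, not from the question):

import akka.actor.{Actor, Props}
import akka.pattern.pipe

// Hypothetical: ActorA only delegates to the implementations it was given
// (ImplA / ImplB from the excerpt) and pipes the Future returned by
// db.run(...) back to the asker.
class ActorA(implA: ImplA, implB: ImplB) extends Actor {
  import context.dispatcher

  def receive: Receive = {
    case "getData" => implA.getData().pipeTo(sender())
  }
}

object ActorA {
  def props(implA: ImplA, implB: ImplB): Props =
    Props(new ActorA(implA, implB))
}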

A summary of various Flink errors and their solutions

被刻印的时光 ゝ submitted on 2020-05-07 20:29:34
Table is not an append-only table. Use the toRetractStream() in order to handle add and retract messages. This happens because the dynamic table is not append-only; handling it with toRetractStream() (a retract stream) is enough: tableEnv.toRetractStream[Person].print(). Today, when starting a Flink job, it failed with Caused by: java.lang.RuntimeException: Couldn't deploy Yarn cluster; looking more closely, the log contained the line "system times on machines may be out of sync", meaning the system clocks on the machines may not be synchronized. (1) Install the ntpdate tool: yum -y install ntp ntpdate (2) Synchronize the system time with a network time server: ntpdate cn.pool.ntp.org. After running this on each of the three machines and restarting the job, it worked. Could not retrieve the redirect address of the current leader. Please try to refresh: after the Flink job had been running for a while, the process was still alive, but refreshing the UI page showed this error
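As a rough illustration of that retract-stream conversion (the Person case class and the import paths are assumptions and vary between Flink versions; older releases use org.apache.flink.table.api.scala instead of the bridge package):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object RetractStreamExample {
  // Assumed payload type for the rows in the dynamic table.
  case class Person(name: String, age: Int)

  // Converts the non-append-only table into a retract stream of
  // (isAdd, Person) pairs and prints it, as the note above suggests.
  def printRetract(tableEnv: StreamTableEnvironment, table: Table): Unit =
    tableEnv.toRetractStream[Person](table).print()
}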

Akka advanced (1)

杀马特。学长 韩版系。学妹 submitted on 2020-05-06 02:09:11
When we design concurrent systems, we often dig deep into stability, scalability, and responsiveness. Half a year ago our team started building an Akka-based web crawler, and the project is now live in production. Lately we have been doing more fine-grained work, tuning performance or refactoring code, and in the process have come to know more of Akka's powerful yet simple features: system-level stability -> supervisor strategy; horizontal scalability -> different routers; responsiveness -> different routers and dispatchers. This series of articles will also copy some basic concepts from the official documentation, but is mostly a summary of the lessons from our own project. It reflects personal views from a single project, so it may be a little one-sided; it is neither comprehensive nor precise, but hopefully it helps you avoid some pitfalls.

1. supervisor strategy

In Akka, every actor is the supervisor of its child actors. When a child actor fails, the supervisor has two strategies: OneForOneStrategy acts only on the failing child; AllForOneStrategy acts on all children. The available directives are Resume, Restart, Stop, and Escalate. I cannot think of a particular situation that calls for AllForOneStrategy; usually the parent actor manages its different children in different ways, or simply acts as a router that forwards messages. If it is a router, Kill or PoisonPill can be used to stop and restart it.
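To make the two strategies concrete, here is a minimal classic-Akka sketch of a OneForOneStrategy; the exception-to-directive mapping and the Worker child are invented for illustration, not taken from our project:

import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Escalate, Restart, Resume, Stop}
import scala.concurrent.duration._

// A supervisor choosing a directive per exception type raised by a child.
class CrawlerSupervisor extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: ArithmeticException      => Resume   // keep the child's state
      case _: NullPointerException     => Restart  // fresh child instance
      case _: IllegalArgumentException => Stop     // give up on this child
      case _: Exception                => Escalate // let our own parent decide
    }

  private val worker = context.actorOf(Props[Worker](), "worker")

  def receive: Receive = { case msg => worker forward msg }
}

class Worker extends Actor {
  def receive: Receive = { case msg => println(s"working on $msg") }
}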

The difference between Akka Stop, Kill, and PoisonPill

笑着哭i submitted on 2020-05-04 09:36:54
The difference between Akka Stop, Kill, and PoisonPill. Both stop and PoisonPill will terminate the actor and stop the message queue. They will cause the actor to cease processing messages, send a stop call to all its children, wait for them to terminate, and then call its postStop hook. All further messages are sent to the dead letters mailbox. The difference is in which messages get processed before this sequence starts. In the case of the stop call, the message currently being processed is completed first, with all others discarded. When sending a PoisonPill, this is simply another message in the queue, so the sequence
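A small, self-contained sketch of the alternatives (the Echo actor and the messages are made up for the example):

import akka.actor.{Actor, ActorSystem, Kill, PoisonPill, Props}

// Toy actor and the three ways of ending it that the excerpt contrasts.
class Echo extends Actor {
  def receive: Receive = { case msg => println(s"echo: $msg") }
  override def postStop(): Unit = println("echo stopped")
}

object StopDemo extends App {
  val system = ActorSystem("demo")
  val echo   = system.actorOf(Props[Echo](), "echo")

  echo ! "queued before the pill"

  // PoisonPill is just another message: everything already queued (the line
  // above) is processed first, then the actor stops.
  echo ! PoisonPill

  // Alternatives:
  //   system.stop(echo)  - finishes only the message being processed right now,
  //                        discards the rest of the mailbox, then stops.
  //   echo ! Kill        - throws ActorKilledException inside the actor, so the
  //                        supervisor strategy decides what happens (default: stop).

  system.terminate()
}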