akka

akka-rpc (an RPC implementation based on Akka)

Submitted by ≯℡__Kan透↙ on 2020-01-08 16:45:24
Code: http://git.oschina.net/for-1988/Simples My current work is building a data service bus on top of Akka (Java). Akka 2.3 provides Cluster Sharding and Persistence, which make it fairly simple to build the architecture of a large distributed cluster. One piece of that functionality is RPC (remote procedure call). RPC: Remote Procedure Call (RPC) is a computer communication protocol that allows a program running on one computer to invoke a subroutine on another computer without the programmer writing extra code for that interaction. If the software involved is object-oriented, a remote procedure call may also be called remote invocation or remote method invocation, e.g. Java RMI. How it works: the whole RPC call passes objects via Akka. Since network communication is involved, our interface implementation classes, call arguments, and return values all need to implement the Java serialization interface. The client and the server are both nodes in the same Akka cluster. The client first initializes an RpcClient object; during that initialization we start the ActorSystem, join the cluster, and create the Actor responsible for communicating with the server
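The mechanism described above can be sketched with plain Akka actors and the ask pattern. The code below is only an illustrative sketch, not the akka-rpc project's actual API: the RpcRequest/RpcServer names are made up, and in a real cluster the client would resolve the server actor through the cluster rather than creating it locally.

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

// everything that crosses the wire must be serializable
final case class RpcRequest(service: String, method: String, args: Seq[Any]) extends Serializable
final case class RpcResponse(result: Any) extends Serializable

// server-side actor: receives a request, invokes the local implementation, replies
class RpcServer extends Actor {
  def receive: Receive = {
    case RpcRequest(service, method, args) =>
      // look up and invoke the registered implementation here (omitted)
      sender() ! RpcResponse(s"$service.$method(${args.mkString(",")}) executed")
  }
}

object RpcClientDemo extends App {
  val system = ActorSystem("rpc")        // in the real setup this system joins the cluster
  import system.dispatcher
  val server = system.actorOf(Props[RpcServer], "rpcServer")

  implicit val timeout: Timeout = Timeout(3.seconds)
  val reply = server ? RpcRequest("UserService", "findById", Seq(42)) // ask returns a Future
  reply.foreach(println)
}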

How to implement paged queries in Akka Persistence

Submitted by 99封情书 on 2020-01-08 16:38:18
In Akka Persistence, data is cached in the service's memory (as state), and the backend stores only persisted event logs, so there is no SQL-like DSL for paged queries. Using Akka Streams and Actors we can implement paging in code, and this paged query can even run distributed and in parallel… EventSourcedBehavior: Akka Persistence's EventSourcedBehavior implements the CQRS model, decoupling command handling from event handling via commandHandler and eventHandler. The commandHandler processes an incoming command and returns an event, optionally persisting it; if the event is to be persisted, it is passed to the eventHandler, which returns a "new" state (or it can skip the update and simply return the original state). def apply[Command, Event, State]( persistenceId: PersistenceId, emptyState: State, commandHandler: (State, Command) => Effect[Event, State], eventHandler: (State, Event) =>
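As an illustration of that apply signature, here is a minimal counter written against the Akka Persistence Typed (2.6-era) API; the Counter entity and its commands/events are invented for the example, and PersistenceId.ofUniqueId is assumed to be available in the version in use.

import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Counter {
  sealed trait Command
  case object Increment extends Command

  sealed trait Event
  case object Incremented extends Event

  final case class State(value: Int)

  def apply(id: String): EventSourcedBehavior[Command, Event, State] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId  = PersistenceId.ofUniqueId(id),
      emptyState     = State(0),
      commandHandler = (_, cmd) => cmd match {
        case Increment => Effect.persist(Incremented)  // persist, then the eventHandler runs
      },
      eventHandler   = (state, evt) => evt match {
        case Incremented => State(state.value + 1)     // return the new state
      }
    )
}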

Scala, Akka: pattern matching for object in trait issue

Submitted by 喜夏-厌秋 on 2020-01-07 05:42:31
Question: Good day. I'm making a simple program which checks some server state, and I've run into an issue with pattern matching. Here is the code: Entry point: object Run extends App with StateActor.Api{ private implicit val system = ActorSystem() implicit val blockingDispatcher: MessageDispatcher = system.dispatchers.lookup("blocking-dispatcher") protected val log: LoggingAdapter = Logging(system, getClass) protected implicit val materializer: ActorMaterializer = ActorMaterializer() import scala.concurrent
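The question is cut off above, so the exact cause is uncertain, but a common reason pattern matching "in a trait" fails is matching on a case object that is defined inside a trait: every class that mixes the trait in gets its own copy of the object, so a message built from one mixin never matches the pattern written in another. A minimal illustration (the Api/Ping names are made up):

trait Api {
  case object Ping            // path-dependent: each Api instance has its own Ping
}

object A extends Api
object B extends Api

// A.Ping and B.Ping are different singletons, so this match falls through:
val matched = (A.Ping: Any) match {
  case B.Ping => true
  case _      => false
}
// matched == false

// usual fix: declare the messages once, at the top level or in a companion object
object Protocol {
  case object Ping
}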

Share message types among two projects in Akka

Submitted by 流过昼夜 on 2020-01-07 04:44:55
Question: I am developing a reactive library that is internally implemented using Akka actors. Let's call this library server. The library exposes a single type of Actor as its API. This Actor accepts different types of messages, which define its public interface. Let's say these messages are defined as follows. sealed trait Request case class Creation(name: String) extends Request sealed trait Response case class CreationAck(name: String) extends Response Now, I also have to implement a
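One common way to share these message types between the server library and its clients is to move them into a small, separately compiled "protocol" module that both projects depend on. A build sketch under that assumption (the module names are invented):

// build.sbt
lazy val protocol = (project in file("protocol"))
  .settings(name := "server-protocol")
  // Request/Response and friends live here as plain case classes,
  // so this module needs no Akka dependency of its own.

lazy val server = (project in file("server")).dependsOn(protocol)
lazy val client = (project in file("client")).dependsOn(protocol)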

akka-http not showing metrics in NewRelic

Submitted by 我只是一个虾纸丫 on 2020-01-07 01:49:29
Question: I'm trying to monitor my akka-http REST web service with NewRelic. The application has only one GET URL (defined with akka-http). I have the following configuration in plugins.sbt: logLevel := Level.Warn addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.4") addSbtPlugin("com.gilt.sbt" % "sbt-newrelic" % "0.1.4") And the following configuration in build.sbt: scalaVersion := "2.11.7" name := "recommender-api" ...blablabla... libraryDependencies += "com.typesafe.akka" % "akka
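For context, the single-GET akka-http service being monitored might look roughly like the sketch below; the route path and port are invented, and the pre-10.2 bindAndHandle/ActorMaterializer style is assumed to match the Scala 2.11 build above. The NewRelic side itself is configured through the sbt-newrelic plugin and the agent, not in this code.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object RecommenderApi extends App {
  implicit val system: ActorSystem = ActorSystem("recommender-api")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // the one GET endpoint exposed by the service (path name is illustrative)
  val route =
    path("recommendations" / Segment) { userId =>
      get {
        complete(s"recommendations for user $userId")
      }
    }

  Http().bindAndHandle(route, "0.0.0.0", 8080)
}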

Akka actors scaling across JVMs and servers, and example at akka.io

Submitted by 亡梦爱人 on 2020-01-06 15:58:11
Question: I find the introductory example of Akka remoting supplied on the Akka landing page a bit hard to swallow as an introduction, and the documentation needed to learn the ins and outs of remoting poorly structured for introductory purposes. Below is the code from the mentioned example, and I'd like to ask for a walkthrough of what that code means, with some fair context, while also addressing whether any actor can be messaged remotely as if it were local requiring
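For readers who just want the gist: with classic Akka remoting (the 2.5-era API), an actor is not transparently remote; you enable the remote provider in configuration and then address the actor from another node by its full path. A condensed sketch under those assumptions (system name, host, and port are arbitrary, and akka-remote must be on the classpath):

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class Worker extends Actor {
  def receive: Receive = { case msg => sender() ! s"done: $msg" }
}

object RemoteNode extends App {
  // enable classic remoting so the actor is reachable from other JVMs
  val config = ConfigFactory.parseString(
    """
      |akka.actor.provider = remote
      |akka.remote.netty.tcp.hostname = "127.0.0.1"
      |akka.remote.netty.tcp.port = 2552
    """.stripMargin)

  val system = ActorSystem("RemoteSystem", config)
  system.actorOf(Props[Worker], "worker")

  // from another node, the same actor is reached by its full remote path:
  //   system.actorSelection("akka.tcp://RemoteSystem@127.0.0.1:2552/user/worker") ! "job"
}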

Register bean (with a custom bean name) programmatically

Submitted by 泄露秘密 on 2020-01-06 08:01:19
Question: My goal is to register a bean (with a custom bean name) programmatically. @ComponentScan({ "com.test" }) public class AppConfiguration { //@Bean("test-bean") @Bean public Definition definition() { return () -> Test.class; } } @Named @Scope("prototype") public class Test extends DefinitionActor<?> { .... } Here I use Akka, hence I have to go with @Scope("prototype"). I don't want to hard-code the bean name to test-bean for some reason, so I am using a BeanPostProcessor. @Component
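One programmatic alternative, offered only as a sketch: Spring Framework 5's GenericApplicationContext.registerBean lets you pick the bean name and scope at runtime. It is written in Scala here to stay consistent with the other examples on this page, and MyService is a hypothetical stand-in for the real Definition/Test beans (an Akka actor bean would still need to be produced through Akka, not with plain new).

import java.util.function.Supplier
import org.springframework.beans.factory.config.{BeanDefinition, BeanDefinitionCustomizer}
import org.springframework.context.support.GenericApplicationContext

class MyService  // hypothetical bean class standing in for the real one

object RegisterBeanDemo extends App {
  val ctx = new GenericApplicationContext()

  // register under a name chosen at runtime, keeping prototype scope
  ctx.registerBean("test-bean", classOf[MyService],
    new Supplier[MyService] { def get(): MyService = new MyService },
    new BeanDefinitionCustomizer {
      def customize(bd: BeanDefinition): Unit =
        bd.setScope(BeanDefinition.SCOPE_PROTOTYPE)
    })

  ctx.refresh()
  println(ctx.getBean("test-bean"))  // a fresh MyService per lookup
}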

Information and management of the akka cluster using JMX console

Submitted by 倖福魔咒の on 2020-01-06 07:25:47
Question: I am working on a project based on Akka Cluster, where I have to implement a JMX console to manage the Akka clusters. When I looked at the Akka documentation I found only minimal information. Then I tried Java VisualVM and found an option to add a new JMX connection as below; what should the connection URL be there? I tried localhost:8080 but was unsuccessful. What else should be configured to get a JMX console for my Akka cluster? Answer 1: In the application.conf for the node(s)
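For reference, Akka Cluster registers an MBean named akka:type=Cluster on each node when akka.cluster.jmx.enabled = on (the default), but the JVM's JMX connector still has to be exposed with the usual com.sun.management.jmxremote system properties before VisualVM, or code like the sketch below, can connect. Port 9999 and the attribute read are just an example, and the attribute name is my assumption about the Cluster MBean.

import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

object ClusterJmxDemo extends App {
  // assumes the cluster node's JVM was started with, e.g.:
  //   -Dcom.sun.management.jmxremote.port=9999
  //   -Dcom.sun.management.jmxremote.authenticate=false
  //   -Dcom.sun.management.jmxremote.ssl=false
  val url       = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi")
  val connector = JMXConnectorFactory.connect(url)
  val server    = connector.getMBeanServerConnection

  // the MBean Akka Cluster registers for the node
  val cluster = new ObjectName("akka:type=Cluster")
  println(server.getAttribute(cluster, "ClusterStatus"))  // members, leader, reachability

  connector.close()
}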

How to put contents of stream into a val?

Submitted by 眉间皱痕 on 2020-01-06 04:13:30
Question: I have a stream like this: def myStream[T: AS: MAT](source: Source[T, NotUsed]): Future[Seq[T]] = { return source.runWith(Sink.seq) } def myMethod(colorStream: Source[Color, NotUsed]) { val allColors = myStream(colorStream).map(_.toList) //how can I actually extract the things from allColors //so that I can call my method below? myOtherMethod if I do println(allColors.map(println _)) I can print the elements fine } def myOtherMethod(colors: Seq[Color] = List.empty()) { ... } Answer 1: allColors is
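Answer 1 is cut off above, but the usual approach is to stay inside the Future and pass the result on once it completes, rather than trying to pull it out into a plain val. A sketch following the question's names (Color is invented here; the pre-2.6 ActorMaterializer style is assumed):

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object ColorsDemo extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  final case class Color(name: String)

  def myStream(source: Source[Color, NotUsed]): Future[Seq[Color]] =
    source.runWith(Sink.seq)

  def myOtherMethod(colors: Seq[Color]): Unit = colors.foreach(println)

  // compose on the Future instead of extracting a value from it
  def myMethod(colorStream: Source[Color, NotUsed]): Unit =
    myStream(colorStream).foreach(colors => myOtherMethod(colors.toList))

  myMethod(Source(List(Color("red"), Color("green"), Color("blue"))))
}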

Send data from InputStream over Akka/Spring stream

Submitted by China☆狼群 on 2020-01-05 09:21:32
Question: Here is my previous question: Send big file over reactive stream. I have managed to send a file over an Akka stream using FileIO.fromPath(Paths.get(file.toURI())) and it works fine. However, I would like to compress and encrypt the file before sending it. I have created a method that opens a FileInputStream, routes it through a compression stream and then through an encryption stream, and now I would like to direct it into a socket using an Akka stream. File -> FileInputStream -> CompressedInputStream ->
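One way to avoid the chain of InputStreams altogether is to stay in ByteString land: Akka Streams ships a gzip Flow (akka.stream.scaladsl.Compression), and encryption can be expressed as another Flow stage. The sketch below is an assumption-laden outline rather than a drop-in answer: the cipher stage is simplified (a real one must also emit cipher.doFinal() when the stream completes), and the host/port are placeholders.

import java.nio.file.Paths
import javax.crypto.Cipher

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Compression, FileIO, Flow, Sink, Tcp}
import akka.util.ByteString

object SendFileDemo {
  // naive encryption stage: encrypt each chunk as it flows through
  // (ignores the final cipher.doFinal() block for brevity)
  def encrypt(cipher: Cipher): Flow[ByteString, ByteString, _] =
    Flow[ByteString].map { chunk =>
      Option(cipher.update(chunk.toArray)).fold(ByteString.empty)(ByteString(_))
    }

  def send(path: String, cipher: Cipher)(implicit system: ActorSystem,
                                          mat: ActorMaterializer) =
    FileIO.fromPath(Paths.get(path))      // file as a Source[ByteString, _]
      .via(Compression.gzip)              // compress
      .via(encrypt(cipher))               // then encrypt
      .via(Tcp().outgoingConnection("127.0.0.1", 9000)) // then onto the socket
      .runWith(Sink.ignore)
}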