flink-cep

java.io.NotSerializableException using Apache Flink with Lagom

寵の児 submitted on 2019-12-02 09:59:48
I am writing a Flink CEP program inside a Lagom microservice implementation. My Flink CEP program runs perfectly fine as a plain Scala application, but when I use the same code inside the Lagom service implementation I am receiving a java.io.NotSerializableException. Lagom service implementation:

    override def start = ServiceCall[NotUsed, String] {
      val env = StreamExecutionEnvironment.getExecutionEnvironment
      var executionConfig = env.getConfig
      env.setParallelism(1)
      executionConfig.disableSysoutLogging()
      var topic_name = "topic_test"
      var props = new Properties
      props.put("bootstrap.servers", "localhost:9092")
      props
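In Flink, an exception of this kind usually means that an inline or anonymous operator function has captured its enclosing instance, and inside a Lagom ServiceCall that enclosing service implementation (which is not serializable) gets dragged into the closure Flink ships to the task managers. A minimal Scala sketch of the common workaround, assuming the operators can be expressed as standalone classes; the class name and element type below are illustrative placeholders, not taken from the original post:

    import org.apache.flink.api.common.functions.MapFunction

    // Defined at the top level, outside the Lagom service implementation, so
    // serializing this function does not pull in the service instance.
    class NormalizeEvent extends MapFunction[String, String] {
      override def map(value: String): String = value.trim.toLowerCase
    }

Inside the ServiceCall the pipeline then only references serializable pieces, e.g. env.addSource(consumer).map(new NormalizeEvent).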

Apache Flink: How to count the total number of events in a DataStream

白昼怎懂夜的黑 submitted on 2019-12-01 13:11:00
I have two raw streams that I am joining, and I want to count how many events have been joined and how many have not. I am doing this with a map on joinedEventDataStream, as shown below:

    joinedEventDataStream.map(new RichMapFunction<JoinedEvent, Object>() {
        @Override
        public Object map(JoinedEvent joinedEvent) throws Exception {
            number_of_joined_events += 1;
            return null;
        }
    });

Question #1: Is this the appropriate way to count the number of events in the stream?
Question #2: I have noticed some weird behavior, which some of you might not believe.
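For context, incrementing a plain field inside a RichMapFunction does not give a reliable total: each parallel instance keeps its own copy and the value never leaves the task manager. One alternative worth sketching is Flink's metrics API, which exposes a per-task counter in the web UI and through metric reporters. A small Scala sketch (the question's snippet is Java, but the API is the same); JoinedEvent and the metric name are placeholders:

    import org.apache.flink.api.common.functions.RichMapFunction
    import org.apache.flink.configuration.Configuration
    import org.apache.flink.metrics.Counter

    // Placeholder standing in for the JoinedEvent type from the question.
    case class JoinedEvent(id: String)

    class CountingMap extends RichMapFunction[JoinedEvent, JoinedEvent] {
      @transient private var joinedEvents: Counter = _

      override def open(parameters: Configuration): Unit = {
        // Register a counter metric; it is tracked per parallel task.
        joinedEvents = getRuntimeContext.getMetricGroup.counter("numJoinedEvents")
      }

      override def map(event: JoinedEvent): JoinedEvent = {
        joinedEvents.inc()
        event // pass the event through instead of returning null
      }
    }

The counter can then be read from the Flink dashboard or any configured metrics reporter instead of from a field whose value stays on the worker.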

Flink: How to handle external app configuration changes in flink

梦想与她 submitted on 2019-11-29 20:57:35
My requirement is to stream millions of records a day, and the job depends heavily on external configuration parameters. For example, a user can change the required settings at any time in the web application, and after the change is made the streaming has to continue with the new application config parameters. These are app-level configurations, and we also have some dynamic exclude parameters that every record has to be passed through and filtered against. I see that Flink doesn't have global state
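Since roughly Flink 1.5 the usual approach to this kind of requirement is the broadcast state pattern: configuration changes are read as a second stream and broadcast to every parallel instance, which stores them in broadcast state and consults them while processing the main stream, so updates take effect without restarting the job. A rough Scala sketch, assuming config updates arrive as key/value pairs; all names and types here are illustrative, not from the original post:

    import org.apache.flink.api.common.state.MapStateDescriptor
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo
    import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
    import org.apache.flink.util.Collector

    object ConfigAwareFilter {
      // Descriptor for the broadcast state that holds the latest config values.
      val configDescriptor = new MapStateDescriptor[String, String](
        "app-config", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
    }

    class ConfigAwareFilter
        extends BroadcastProcessFunction[String, (String, String), String] {
      import ConfigAwareFilter.configDescriptor

      override def processElement(
          event: String,
          ctx: BroadcastProcessFunction[String, (String, String), String]#ReadOnlyContext,
          out: Collector[String]): Unit = {
        // Look up a hypothetical "exclude" setting and filter against it.
        val excluded = ctx.getBroadcastState(configDescriptor).get("exclude")
        if (excluded == null || !event.contains(excluded)) out.collect(event)
      }

      override def processBroadcastElement(
          update: (String, String),
          ctx: BroadcastProcessFunction[String, (String, String), String]#Context,
          out: Collector[String]): Unit = {
        // Every parallel instance receives each config change and stores it.
        ctx.getBroadcastState(configDescriptor).put(update._1, update._2)
      }
    }

Wiring, roughly: events.connect(configUpdates.broadcast(ConfigAwareFilter.configDescriptor)).process(new ConfigAwareFilter).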