Spark Streaming Accumulated Word Count

臣服心动 2021-02-14 13:58

This is a Spark Streaming program written in Scala. It counts the number of words from a socket every 1 second. The result is a per-batch word count, for example, the word count from time 0 to 1, then the word count from time 1 to 2. Is there a way to get an accumulated word count, i.e. the count from time 0 onwards?

2 Answers
  • 2021-02-14 14:05

    I have a very simple answer, and it's just a few lines of code. You can find this in most Spark books. Note that I have used localhost and port 9999.

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    
    sc = SparkContext(appName="PythonStreamingNetworkWordCount")
    # Streaming context with a 1-second batch interval
    ssc = StreamingContext(sc, 1)
    # Read lines of text from the socket source
    lines = ssc.socketTextStream("localhost", 9999)
    # Split lines into words and count each word within the current batch
    counts = lines.flatMap(lambda line: line.split(" ")) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
    # Print the first elements of each batch's counts
    counts.pprint()
    ssc.start()
    ssc.awaitTermination()
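
    For example, if you type "hello world hello" into the socket during one batch interval, pprint() prints per-batch output that looks roughly like this (the timestamp will differ, and note that each batch interval is counted independently here):

    -------------------------------------------
    Time: 2021-02-14 14:05:01
    -------------------------------------------
    ('hello', 2)
    ('world', 1)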
    

    And to stop it you can use a simple

    ssc.stop()
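
    If you also want to shut down the underlying SparkContext and let in-flight batches finish first, stop() takes optional flags in the PySpark StreamingContext API (a sketch with both enabled):

    # Also stop the SparkContext and finish processing received data first
    ssc.stop(stopSparkContext=True, stopGraceFully=True)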

    This is very basic code, but it is helpful for a basic understanding of Spark Streaming, and of DStreams more specifically.

    To send input to localhost, type the following in your terminal (Mac terminal):

    nc -l 9999

    It will then listen to everything you type after that, and the words will be counted, as in the illustrative session below.
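
    An illustrative session on the nc side might look like this (each line you type is pushed into the stream):

    $ nc -l 9999
    hello world
    hello spark streaming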

    Hope this is helpful.

  • 2021-02-14 14:15

    You can use a StateDStream for this. There is an example of a stateful word count in Spark's examples.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    
    object StatefulNetworkWordCount {
      def main(args: Array[String]) {
        if (args.length < 2) {
          System.err.println("Usage: StatefulNetworkWordCount <hostname> <port>")
          System.exit(1)
        }
    
        // Utility from the Spark examples package that sets a sensible log level
        StreamingExamples.setStreamingLogLevels()
    
        // Merge this batch's new counts for a key with its previous running total
        val updateFunc = (values: Seq[Int], state: Option[Int]) => {
          val currentCount = values.foldLeft(0)(_ + _)
    
          val previousCount = state.getOrElse(0)
    
          Some(currentCount + previousCount)
        }
    
        val sparkConf = new SparkConf().setAppName("StatefulNetworkWordCount")
        // Create the context with a 1 second batch size
        val ssc = new StreamingContext(sparkConf, Seconds(1))
        // Checkpointing is required for stateful transformations like updateStateByKey
        ssc.checkpoint(".")
    
        // Create a NetworkInputDStream on the target ip:port and count the
        // words in the input stream of \n delimited text (e.g. generated by 'nc')
        val lines = ssc.socketTextStream(args(0), args(1).toInt)
        val words = lines.flatMap(_.split(" "))
        val wordDstream = words.map(x => (x, 1))
    
        // Update the cumulative count using updateStateByKey.
        // This gives a DStream of state (the cumulative count of each word).
        val stateDstream = wordDstream.updateStateByKey[Int](updateFunc)
        stateDstream.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }
    

    The way it works is that you get a Seq[T] for each batch, and then you update an Option[T], which acts like an accumulator. The reason it is an Option is that on the first batch it will be None, and it stays that way unless it's updated. In this example the count is an Int; if you are dealing with a lot of data, you may even want a Long or BigInt (see the sketch below).
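
    As a minimal sketch of that suggestion, assuming the same wordDstream as above, the update function can accumulate into a Long instead of an Int:

    // Hypothetical variant of updateFunc that keeps the running total as a Long
    // to avoid Int overflow on long-running streams
    val updateFuncLong = (values: Seq[Int], state: Option[Long]) => {
      Some(state.getOrElse(0L) + values.sum)
    }
    val stateDstream = wordDstream.updateStateByKey[Long](updateFuncLong)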
