How to add the timestamp from Kafka to Spark Streaming when converting to a DataFrame

Submitted by 别说谁变了你拦得住时间么 on 2019-12-11 17:17:36

Question


I am doing Spark Streaming from Kafka and want to convert the RDD I get from Kafka into a DataFrame. I am using the following approach:

    val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(4))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "dofff2.dl.uk.feefr.com:8002",
      "security.protocol" -> "SASL_PLAINTEXT",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "1",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("csv")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    val strmk = stream.map(record => record.value)
    val rdd1 = strmk
      .map(line => line.split(','))
      .map(s => (s(0), s(1), s(2), s(3), s(4), s(5), s(6), s(7)))

    rdd1.foreachRDD((rdd, time) => {
      val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
      import sqlContext.implicits._
      val requestsDataFrame = rdd.map(w => Record(w._1, w._2, w._3, w._4, w._5, w._6, w._7, w._8)).toDF()
      requestsDataFrame.createOrReplaceTempView("requests")
      val word_df = sqlContext.sql("select * from requests")
      println(s"========= $time =========")
      word_df.show()
    })

But I want to include the timestamp from Kafka in the DataFrame as well. Can someone help me do this?


Answer 1:


Kafka records carry several attributes beyond the key and value, including a timestamp.

See https://spark.apache.org/docs/2.2.0/structured-streaming-kafka-integration.html

Note that Spark supports both a Structured Streaming and a batch approach to Kafka; a batch sketch follows the streaming example below.

An example:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

val sparkSession = SparkSession.builder
  .master("local")
  .appName("example")
  .getOrCreate()

import sparkSession.implicits._
sparkSession.sparkContext.setLogLevel("ERROR")

sparkSession.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "AAA")
  .option("startingOffsets", "earliest")
  .load()
  // timestamp is a regular column; cast it if you want a string:
  // .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "CAST(timestamp AS STRING)")
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
  .writeStream
  .format("console")
  .option("truncate", "false")
  .outputMode(OutputMode.Append())
  .start()
  .awaitTermination()

My sample output is as follows:

-------------------------------------------
Batch: 0
-------------------------------------------
+----+-----+-----------------------+
|key |value|timestamp              |
+----+-----+-----------------------+
|null|RRR  |2019-02-07 04:37:34.983|
|null|HHH  |2019-02-07 04:37:36.802|
|null|JJJ  |2019-02-07 04:37:39.1  |
+----+-----+-----------------------+
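
For completeness, the batch side looks almost identical. A minimal sketch, reusing the hypothetical broker and topic from the example above (run it as a separate job, since the streaming query blocks on awaitTermination()):

// Batch read from Kafka: the same timestamp column is exposed here too.
val batchDf = sparkSession.read
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "AAA")
  .option("startingOffsets", "earliest")
  .option("endingOffsets", "latest")
  .load()
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

batchDf.show(false)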

For the older DStream (non-structured streaming) API, you just need to expand your map above to pull the timestamp off the ConsumerRecord:

stream.map { record => (record.timestamp(), record.key(), record.value()) }
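
Applied to the code in the question, here is a minimal sketch of carrying the timestamp through to the DataFrame. TimedRecord is a hypothetical stand-in for the original Record case class, widened with a timestamp field; note that ConsumerRecord.timestamp() returns epoch milliseconds:

import java.sql.Timestamp

// Hypothetical case class: the original eight CSV fields plus the Kafka timestamp.
case class TimedRecord(ts: Timestamp, f1: String, f2: String, f3: String, f4: String,
                       f5: String, f6: String, f7: String, f8: String)

val withTs = stream.map(record => (record.timestamp(), record.value))

withTs.foreachRDD { rdd =>
  val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
  import sqlContext.implicits._
  val df = rdd.map { case (ts, line) =>
    val s = line.split(',')
    // record.timestamp() is epoch millis; wrap it in java.sql.Timestamp
    TimedRecord(new Timestamp(ts), s(0), s(1), s(2), s(3), s(4), s(5), s(6), s(7))
  }.toDF()
  df.createOrReplaceTempView("requests")
  sqlContext.sql("select * from requests").show()
}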


Source: https://stackoverflow.com/questions/54563312/how-to-add-timestamp-from-kafka-to-spark-streaming-during-converting-to-df
