Deserializing Spark structured stream data from Kafka topic

Posted on 2019-12-13 00:34:29

Question


I am working with Kafka 2.3.0 and Spark 2.3.4. I have already built a Kafka Connector which reads a CSV file and posts each line from the CSV to the relevant Kafka topic. A line looks like this: "201310,XYZ001,Sup,XYZ,A,0,Presales,6,Callout,0,0,1,N,Prospect". The CSV contains thousands of such lines. The Connector successfully posts them to the topic and I am able to receive the messages in Spark. I am not sure how I can deserialize each message to match my schema. Note that the messages are headerless, so the key part of the Kafka message is null. The value part contains the complete CSV string as above. My code is below.

I looked at this question - How to deserialize records from Kafka using Structured Streaming in Java? - but was unable to port it to my CSV case. In addition, I have tried other Spark SQL mechanisms to retrieve the individual fields from the 'value' column, but to no avail. If I do manage to get a compiling version (e.g. a map over the indivValues Dataset or over dsRawData), I get errors similar to: "org.apache.spark.sql.AnalysisException: cannot resolve 'IC' given input columns: [value];". If I understand correctly, this is because value is a comma-separated string and Spark isn't going to magically map it to my schema without me doing 'something'.

//build the spark session
SparkSession sparkSession = SparkSession.builder()
    .appName(seCfg.arg0AppName)
    .config("spark.cassandra.connection.host",config.arg2CassandraIp)
    .getOrCreate();

...
//my target schema is this:
StructType schema = DataTypes.createStructType(new StructField[] {
    DataTypes.createStructField("timeOfOrigin",  DataTypes.TimestampType, true),
    DataTypes.createStructField("cName", DataTypes.StringType, true),
    DataTypes.createStructField("cRole", DataTypes.StringType, true),
    DataTypes.createStructField("bName", DataTypes.StringType, true),
    DataTypes.createStructField("stage", DataTypes.StringType, true),
    DataTypes.createStructField("intId", DataTypes.IntegerType, true),
    DataTypes.createStructField("intName", DataTypes.StringType, true),
    DataTypes.createStructField("intCatId", DataTypes.IntegerType, true),
    DataTypes.createStructField("catName", DataTypes.StringType, true),
    DataTypes.createStructField("are_vval", DataTypes.IntegerType, true),
    DataTypes.createStructField("isee_vval", DataTypes.IntegerType, true),
    DataTypes.createStructField("opCode", DataTypes.IntegerType, true),
    DataTypes.createStructField("opType", DataTypes.StringType, true),
    DataTypes.createStructField("opName", DataTypes.StringType, true)
    });
...

 Dataset<Row> dsRawData = sparkSession
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", config.arg3Kafkabootstrapurl)
    .option("subscribe", config.arg1TopicName)
    .option("failOnDataLoss", "false")
    .load();

//getting individual terms like '201310', 'XYZ001', ... from "value"
Dataset<String> indivValues = dsRawData
    .selectExpr("CAST(value AS STRING)")
    .as(Encoders.STRING())
    .flatMap((FlatMapFunction<String, String>) x -> Arrays.asList(x.split(",")).iterator(), Encoders.STRING());

//indivValues, when printed to the console, looks like below, which confirms
//that I receive the data correctly and completely
/*
When printed on console, looks like this:
                +--------------------+
                |               value|
                +--------------------+
                |              201310|
                |              XYZ001|
                |                 Sup|
                |                 XYZ|
                |                   A|
                |                   0|
                |            Presales|
                |                   6|
                |             Callout|
                |                   0|
                |                   0|
                |                   1|
                |                   N|
                |            Prospect|
                +--------------------+
*/

StreamingQuery sq = indivValues.writeStream()
    .outputMode("append")
    .format("console")
    .start();
//await termination
sq.awaitTermination();
  • I require the data to be typed per my custom schema shown above, since I will be running mathematical calculations over it (every new row combined with some older rows).
  • Is it better to synthesize headers in the Kafka Connector source task before pushing lines onto the topic? Would having headers make this simpler to resolve?

Thanks!


Answer 1:


Given your existing code, the easiest way to parse the input from dsRawData is to convert it to a Dataset<String> and then use the native CSV reader API:

//dsRawData has raw incoming data from Kafka...
Dataset<String> indivValues = dsRawData
                .selectExpr("CAST(value AS STRING)")
                .as(Encoders.STRING());

Dataset<Row>    finalValues = sparkSession.read()
                .schema(schema)
                .option("delimiter",",")
                .csv(indivValues);

With this construct you can use exactly the same CSV parsing options that are available when reading a CSV file directly with Spark.
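To make concrete what that reader does with one record, here is a plain-Java sketch of the same field-to-type mapping (parseLine is a hypothetical helper for illustration, not Spark API; the timestamp field is left as text for brevity):

```java
public class CsvLineSketch {
    // Split on the delimiter, then convert each field to its schema type,
    // roughly mirroring what the Spark CSV reader does per record.
    static Object[] parseLine(String line) {
        String[] f = line.split(",", -1); // -1 keeps trailing empty fields
        return new Object[] {
            f[0],                           // timeOfOrigin (left as text here)
            f[1], f[2], f[3], f[4],         // cName, cRole, bName, stage
            Integer.parseInt(f[5].trim()),  // intId
            f[6],                           // intName
            Integer.parseInt(f[7].trim()),  // intCatId
            f[8],                           // catName
            Integer.parseInt(f[9].trim()),  // are_vval
            Integer.parseInt(f[10].trim()), // isee_vval
            Integer.parseInt(f[11].trim()), // opCode
            f[12], f[13]                    // opType, opName
        };
    }

    public static void main(String[] args) {
        Object[] row = parseLine("201310,XYZ001,Sup,XYZ,A,0,Presales,6,Callout,0,0,1,N,Prospect");
        System.out.println(row[5] + " " + row[7] + " " + row[13]); // prints "0 6 Prospect"
    }
}
```

The advantage of the `sparkSession.read().csv(indivValues)` form over hand-rolled splitting is that malformed records, quoting, and type coercion are handled by options you already know from batch CSV reads.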




Answer 2:


I have been able to resolve this now using Spark SQL. The solution code is below.

//dsRawData has raw incoming data from Kafka...
Dataset<String> indivValues = dsRawData
                .selectExpr("CAST(value AS STRING)")
                .as(Encoders.STRING());

//create new columns, parse out the orig message and fill column with the values
Dataset<Row> dataAsSchema2 = indivValues
                    .selectExpr("value",
                            "split(value,',')[0] as time",
                            "split(value,',')[1] as cname",
                            "split(value,',')[2] as crole",
                            "split(value,',')[3] as bname",
                            "split(value,',')[4] as stage",
                            "split(value,',')[5] as intid",
                            "split(value,',')[6] as intname",
                            "split(value,',')[7] as intcatid",
                            "split(value,',')[8] as catname",
                            "split(value,',')[9] as are_vval",
                            "split(value,',')[10] as isee_vval",
                            "split(value,',')[11] as opcode",
                            "split(value,',')[12] as optype",
                            "split(value,',')[13] as opname")
                    .drop("value");

//remove any whitespaces as they interfere with data type conversions
dataAsSchema2 = dataAsSchema2
                    .withColumn("intid", functions.regexp_replace(functions.col("intid"),
                            " ", ""))
                    .withColumn("intcatid", functions.regexp_replace(functions.col("intcatid"),
                            " ", ""))
                    .withColumn("are_vval", functions.regexp_replace(functions.col("are_vval"),
                            " ", ""))
                    .withColumn("isee_vval", functions.regexp_replace(functions.col("isee_vval"),
                            " ", ""))
                    .withColumn("opcode", functions.regexp_replace(functions.col("opcode"),
                            " ", ""));

//change data types, ready for calculations
dataAsSchema2 = dataAsSchema2
                    .withColumn("intcatid",functions.col("intcatid").cast(DataTypes.IntegerType))
                    .withColumn("intid",functions.col("intid").cast(DataTypes.IntegerType))
                    .withColumn("are_vval",functions.col("are_vval").cast(DataTypes.IntegerType))
                    .withColumn("isee_vval",functions.col("isee_vval").cast(DataTypes.IntegerType))
                    .withColumn("opcode",functions.col("opcode").cast(DataTypes.IntegerType));


//build a POJO dataset
Encoder<Pojoclass2> encoder = Encoders.bean(Pojoclass2.class);
Dataset<Pojoclass2> pjClass = dataAsSchema2.as(encoder);
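For Encoders.bean to work, the bean class needs a public no-argument constructor and getter/setter pairs whose names match the DataFrame column names. A minimal sketch of what Pojoclass2 presumably looks like (trimmed to three of the columns above for brevity; the field names are assumptions based on the selectExpr aliases):

```java
public class Pojoclass2 implements java.io.Serializable {
    private String time;
    private Integer intid;
    private Integer opcode;

    public Pojoclass2() {} // Encoders.bean requires a public no-arg constructor

    public String getTime() { return time; }
    public void setTime(String time) { this.time = time; }
    public Integer getIntid() { return intid; }
    public void setIntid(Integer intid) { this.intid = intid; }
    public Integer getOpcode() { return opcode; }
    public void setOpcode(Integer opcode) { this.opcode = opcode; }
}
```

With the typed Dataset<Pojoclass2> in hand, downstream calculations can use compile-time-checked getters instead of column-name strings.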


Source: https://stackoverflow.com/questions/57057159/deserializing-spark-structured-stream-data-from-kafka-topic
