Why use Avro with Kafka - How to handle POJOs

Happy的楠姐 2021-02-15 18:24

I have a Spring application that is my Kafka producer, and I was wondering why Avro is the best way to go. I read about it and all it has to offer, but why can't I just serialize the POJO I created myself (with Jackson, for example) and send it to Kafka?

3 Answers
  •  北荒 · 2021-02-15 19:23

    You don't need an AVSC file; you can use an AVDL file, which basically looks the same as a POJO with only the fields:

    @namespace("com.example.mycode.avro")
    protocol ExampleProtocol {
       record User {
         long id;
         string name;
       }
    }
    

    When run through the idl-protocol goal of the Avro Maven plugin, this will create the following AVSC for you, rather than you writing it yourself.

    {
      "type" : "record",
      "name" : "User",
      "namespace" : "com.example.mycode.avro",
      "fields" : [ {
        "name" : "id",
        "type" : "long"
      }, {
        "name" : "name",
        "type" : "string"
      } ]
    }
    

    And it'll also place a SpecificRecord POJO, User.java, on your classpath for use in your code.
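
    For example, once the class is generated, a producer can build instances with the type-safe builder. A small sketch, assuming an already-configured Producer<String, User> and an illustrative topic name:

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import com.example.mycode.avro.User;   // generated by the Avro Maven plugin

    public class UserRecords {
        // 'producer' is assumed to be configured elsewhere with an Avro-capable value serializer
        static void sendUser(Producer<String, User> producer) {
            // the generated class ships with a builder, so required fields can't be silently forgotten
            User user = User.newBuilder()
                    .setId(42L)
                    .setName("alice")
                    .build();

            producer.send(new ProducerRecord<>("users", String.valueOf(user.getId()), user));
        }
    }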


    If you already have a POJO, you don't need AVSC or AVDL files at all; there are libraries that can convert POJOs directly. For example, you can use Jackson, which is not only for JSON: you would likely just need to write a JacksonAvroSerializer for Kafka, or find out whether one already exists, as sketched below.
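
    One possible shape for such a serializer, assuming Jackson's jackson-dataformat-avro module is on the classpath (the class name JacksonAvroSerializer here is illustrative, not an existing library class):

    import java.util.Map;

    import org.apache.kafka.common.serialization.Serializer;

    import com.fasterxml.jackson.dataformat.avro.AvroMapper;
    import com.fasterxml.jackson.dataformat.avro.AvroSchema;
    import com.fasterxml.jackson.dataformat.avro.schema.AvroSchemaGenerator;

    // Illustrative only: serializes any POJO to Avro binary using a schema
    // that Jackson derives from the class itself.
    public class JacksonAvroSerializer<T> implements Serializer<T> {

        private final AvroMapper mapper = new AvroMapper();
        private final Class<T> type;
        private AvroSchema schema;

        public JacksonAvroSerializer(Class<T> type) {
            this.type = type;
        }

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
            try {
                // generate an Avro schema from the POJO's fields
                AvroSchemaGenerator gen = new AvroSchemaGenerator();
                mapper.acceptJsonFormatVisitor(type, gen);
                schema = gen.getGeneratedSchema();
            } catch (Exception e) {
                throw new IllegalStateException("Could not generate Avro schema for " + type, e);
            }
        }

        @Override
        public byte[] serialize(String topic, T data) {
            if (data == null) {
                return null;
            }
            try {
                return mapper.writer(schema).writeValueAsBytes(data);
            } catch (Exception e) {
                throw new RuntimeException("Avro serialization failed", e);
            }
        }

        @Override
        public void close() {
            // nothing to release
        }
    }

    Since it needs the target class, you would pass an instance of it directly to the KafkaProducer constructor rather than configuring it by class name.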

    Avro also has a built-in library based on reflection.
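
    For instance, org.apache.avro.reflect can derive a schema from an existing class and write it without any generated code; a minimal sketch:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    import org.apache.avro.Schema;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.avro.reflect.ReflectData;
    import org.apache.avro.reflect.ReflectDatumWriter;

    public class ReflectAvro {
        // derives a schema from any existing POJO class via reflection and writes Avro binary
        static <T> byte[] toAvroBytes(T pojo) throws IOException {
            Schema schema = ReflectData.get().getSchema(pojo.getClass());
            ReflectDatumWriter<T> writer = new ReflectDatumWriter<>(schema);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            writer.write(pojo, encoder);
            encoder.flush();
            return out.toByteArray();
        }
    }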


    So to the question - why Avro (for Kafka)?

    Well, having a schema is a good thing. Think of RDBMS tables: you can describe the table and see all of its columns. Move to NoSQL document databases, and they can contain literally anything; that is the JSON world of Kafka.

    Let's assume you have consumers in your Kafka cluster that have no idea what is in a topic; they would have to know exactly who/what has produced into it. They can try the console consumer, and if the payload is plaintext like JSON, they then have to figure out which fields they are interested in and perform flaky, HashMap-like .get("name") operations again and again, only to run into an NPE when a field doesn't exist. With Avro, you clearly define defaults and nullable fields.
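
    In consumer code the difference looks roughly like this (illustrative; the field name matches the User record above, and the getter comes from the generated class):

    import java.util.Map;

    import com.fasterxml.jackson.databind.ObjectMapper;

    import com.example.mycode.avro.User;

    public class ConsumerContrast {
        // JSON world: the consumer guesses at field names and types at runtime
        static String nameFromJson(byte[] messageBytes) throws Exception {
            Map<?, ?> fields = new ObjectMapper().readValue(messageBytes, Map.class);
            return (String) fields.get("name");   // null (and an NPE later) if the field was renamed or dropped
        }

        // Avro world: the generated class is the contract, enforced when the record is deserialized
        static String nameFromAvro(User user) {
            return user.getName().toString();
        }
    }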

    You aren't required to use a Schema Registry, but it provides that kind of "describe the topic" semantics from the RDBMS analogy. It also saves you from sending the schema along with every message, and therefore the expense of extra bandwidth on the Kafka topic. The registry is not only useful for Kafka, though; it can also be used by Spark, Flink, Hive, etc. for all the data-science analysis around streaming data ingest.
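
    For example, with Confluent's Avro serializer (one option among several) the producer side is mostly configuration; a sketch with placeholder URLs:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    import com.example.mycode.avro.User;

    public class RegistryProducer {
        static KafkaProducer<String, User> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            // registers the schema in the registry and sends only a small schema id with each record
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "http://localhost:8081");
            return new KafkaProducer<>(props);
        }
    }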


    Assuming you did want to use plain JSON, try using MsgPack instead and you'll likely see an increase in your Kafka throughput and save disk space on the brokers.
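
    For instance, with Jackson's MessagePack dataformat module (org.msgpack:jackson-dataformat-msgpack) you keep the same POJO/map code but emit a binary payload; a sketch, with savings depending on your actual data:

    import java.util.Map;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.msgpack.jackson.dataformat.MessagePackFactory;

    public class MsgPackSize {
        public static void main(String[] args) throws Exception {
            Map<String, Object> payload = Map.of("id", 42L, "name", "alice");

            byte[] asJson = new ObjectMapper().writeValueAsBytes(payload);
            byte[] asMsgPack = new ObjectMapper(new MessagePackFactory()).writeValueAsBytes(payload);

            // MessagePack encodes the same structure more compactly than JSON text
            System.out.println("json=" + asJson.length + " bytes, msgpack=" + asMsgPack.length + " bytes");
        }
    }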


    You can also use other binary formats like Protobuf or Thrift, as Uber has compared.
