Kafka Connect JDBC vs Debezium CDC

终归单人心 2021-02-14 00:26

What are the differences between JDBC Connector and Debezium SQL Server CDC Connector (or any other relational database connector), and when should I choose one over another?

1 Answer
  孤街浪徒
    2021-02-14 00:47

    This explanation focuses on the differences between the Debezium SQL Server CDC Connector and the JDBC Connector, with a more general interpretation of Debezium and CDC.

    tl;dr- scroll down :)


    Debezium

    Debezium is used only as a source connector; it records all row-level changes.
    Debezium Documentation says:

    Debezium is a set of distributed services to capture changes in your databases so that your applications can see those changes and respond to them. Debezium records all row-level changes within each database table in a change event stream, and applications simply read these streams to see the change events in the same order in which they occurred.

    Debezium Connector for SQL Server first records a snapshot of the database and then sends row-level change records to Kafka, with each table going to a different Kafka topic.
    Debezium Connector for SQL Server Documentation says:

    Debezium’s SQL Server Connector can monitor and record the row-level changes in the schemas of a SQL Server database.

    The first time it connects to a SQL Server database/cluster, it reads a consistent snapshot of all of the schemas. When that snapshot is complete, the connector continuously streams the changes that were committed to SQL Server and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.
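
    To make this concrete, a minimal Debezium SQL Server source configuration could look roughly like the following sketch (host, credentials, database and table names are placeholders, and exact property names vary between Debezium versions):

    ```
    # Debezium SQL Server source connector (sketch; adjust names and credentials for your environment)
    name=sqlserver-source-connector
    connector.class=io.debezium.connector.sqlserver.SqlServerConnector
    database.hostname=sqlserver-host
    database.port=1433
    database.user=debezium_user
    database.password=********
    database.dbname=inventory
    # logical server name used as the prefix of every change topic
    database.server.name=server1
    # each captured table gets its own topic, e.g. server1.dbo.customers
    table.include.list=dbo.customers,dbo.orders
    # Debezium keeps the history of schema changes in its own Kafka topic
    database.history.kafka.bootstrap.servers=kafka:9092
    database.history.kafka.topic=schema-changes.inventory
    ```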


    Kafka Connect JDBC

    Kafka Connect JDBC can be used either as a source or a sink connector to Kafka, and it supports any database with a JDBC driver.
    JDBC Connector Documentation says:

    You can use the Kafka Connect JDBC source connector to import data from any relational database with a JDBC driver into Apache Kafka® topics. You can use the JDBC sink connector to export data from Kafka topics to any relational database with a JDBC driver. The JDBC connector supports a wide variety of databases without requiring custom code for each one.

    They have some specifications about installing on Microsoft SQL Server, which I find not relevant for this discussion.

    So since the JDBC Connector supports both source and sink, while Debezium supports only source (not sink), we can conclude that in order to write data from Kafka to a database with a JDBC driver (sink), including SQL Server, the JDBC Connector is the way to go.
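
    As a rough sketch of that sink path (topic names, table layout and connection details here are assumptions, not taken from the question), a JDBC sink that upserts Kafka records into SQL Server could be configured along these lines:

    ```
    # Confluent JDBC sink connector (sketch; adjust URL, credentials and topics for your setup)
    name=jdbc-sink-connector
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    # topics to export and the target database
    topics=customers
    connection.url=jdbc:sqlserver://sqlserver-host:1433;databaseName=inventory
    connection.user=sink_user
    connection.password=********
    # write each record as an insert or update keyed on the record key
    insert.mode=upsert
    pk.mode=record_key
    pk.fields=id
    # create the target table from the record schema if it does not exist yet
    auto.create=true
    ```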

    Now the comparison can be narrowed down to sources only.
    JDBC Source Connector Documentation doesn't say much more at first sight:

    Data is loaded by periodically executing a SQL query and creating an output record for each row in the result set. By default, all tables in a database are copied, each to its own output topic. The database is monitored for new or deleted tables and adapts automatically. When copying data from a table, the connector can load only new or modified rows by specifying which columns should be used to detect new or modified data.
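
    That polling behaviour is driven by a handful of properties; a hypothetical source configuration (column names, connection details and the polling interval below are assumptions) could look like this:

    ```
    # Confluent JDBC source connector (sketch; table and column names are placeholders)
    name=jdbc-source-connector
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:sqlserver://sqlserver-host:1433;databaseName=inventory
    connection.user=source_user
    connection.password=********
    # detect new or modified rows via an auto-incrementing id plus a last-updated timestamp
    mode=timestamp+incrementing
    incrementing.column.name=id
    timestamp.column.name=last_updated
    # how often the connector re-runs its query against the source database
    poll.interval.ms=5000
    # prefix for the per-table output topics, e.g. jdbc-customers
    topic.prefix=jdbc-
    table.whitelist=customers
    ```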


    Searching a little further to understand their differences, this Debezium blog, which uses the Debezium MySQL Connector as a source and the JDBC Connector as a sink, explains the differences between the two. Generally, it tells us that Debezium provides records with more information about the database changes, while the JDBC Connector provides records that are focused on converting the database changes into simple insert/upsert commands:

    The Debezium MySQL Connector was designed to specifically capture database changes and provide as much information as possible about those events beyond just the new state of each row. Meanwhile, the Confluent JDBC Sink Connector was designed to simply convert each message into a database insert/upsert based upon the structure of the message. So, the two connectors have different structures for the messages, but they also use different topic naming conventions and behavior of representing deleted records.

    Moreover, they use different topic naming conventions and handle deletes differently:

    Debezium uses fully qualified naming for target topics representing each table it manages. The naming follows the pattern [logical-name].[database-name].[table-name]. Kafka Connect JDBC Connector works with simple names [table-name].

    ...

    When the Debezium connector detects a row is deleted, it creates two event messages: a delete event and a tombstone message. The delete message has an envelope with the state of the deleted row in the before field, and an after field that is null. The tombstone message contains same key as the delete message, but the entire message value is null, and Kafka’s log compaction utilizes this to know that it can remove any earlier messages with the same key. A number of sink connectors, including the Confluent’s JDBC Sink Connector, are not expecting these messages and will instead fail if they see either kind of message.
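
    One common way to bridge this gap is Debezium's ExtractNewRecordState single message transform, which flattens the change envelope and can drop tombstones or rewrite delete events so a plain sink can consume them. A sketch of the relevant properties added to the source connector (option names vary across Debezium versions):

    ```
    # Flatten Debezium's change envelope before records reach downstream sinks (sketch)
    transforms=unwrap
    transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
    # drop the separate tombstone message that follows each delete event
    transforms.unwrap.drop.tombstones=true
    # rewrite delete events as flattened records carrying a "__deleted" marker field
    transforms.unwrap.delete.handling.mode=rewrite
    ```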

    This Confluent blog explains in more detail how CDC and the JDBC Connector work: the JDBC Connector executes queries against the source database at a fixed interval, which is not a very scalable solution, while CDC offers higher frequency by streaming from the database transaction log:

    The connector works by executing a query, over JDBC, against the source database. It does this to pull in all rows (bulk) or those that changed since previously (incremental). This query is executed at the interval defined in poll.interval.ms. Depending on the volumes of data involved, the physical database design (indexing, etc.), and other workload on the database, this may not prove to be the most scalable option.

    ...

    Done properly, CDC basically enables you to stream every single event from a database into Kafka. Broadly put, relational databases use a transaction log (also called a binlog or redo log depending on DB flavour), to which every event in the database is written. Update a row, insert a row, delete a row – it all goes to the database’s transaction log. CDC tools generally work by utilising this transaction log to extract at very low latency and low impact the events that are occurring on the database (or a schema/table within it).

    This blog also describes the differences between CDC and the JDBC Connector, mainly saying that the JDBC Connector doesn't support syncing deleted records and therefore fits prototyping, while CDC fits more mature systems:

    The JDBC Connector cannot fetch deleted rows. Because, how do you query for data that doesn’t exist?

    ...

    My general steer on CDC vs JDBC is that JDBC is great for prototyping, and fine low-volume workloads. Things to consider if using the JDBC connector:

    - Doesn’t give true CDC (capture delete records, want before/after record versions)
    - Latency in detecting new events
    - Impact of polling the source database continually (and balancing this with the desired latency)
    - Unless you’re doing a bulk pull from a table, you need to have an ID and/or timestamp that you can use to spot new records. If you don’t own the schema, this becomes a problem.


    tl;dr Conclusion

    The main differences between Debezium and JDBC Connector are:

    1. Debezium is used only as a Kafka source, while the JDBC Connector can be used as both a Kafka source and sink.

    For sources:

    1. JDBC Connector doesn't support syncing deleted records, while Debezium does.
    2. JDBC Connector queries the database at a fixed interval, which is not a very scalable solution, while CDC has a higher frequency, streaming from the database transaction log.
    3. Debezium provides records with more information about the database changes, while JDBC Connector provides records that are more focused on converting the database changes into simple insert/upsert commands.
    4. They use different topic naming conventions.
