Purge Kafka Topic

慢半拍i 2020-11-28 00:06

Is there a way to purge the topic in kafka?

I pushed a message that was too big into a kafka message topic on my local machine, now I'm getting an

19 answers
  • 2020-11-28 00:48

    A lot of great answers here, but I didn't find one about Docker. I spent some time figuring out why this fails from inside the broker container (obviously, in hindsight):

    ## this is wrong!
    docker exec broker1 kafka-topics --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
    
    Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
            at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:258)
            at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
            at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
            at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:254)
            at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:112)
            at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1826)
            at kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:280)
            at kafka.admin.TopicCommand$.main(TopicCommand.scala:53)
            at kafka.admin.TopicCommand.main(TopicCommand.scala)
    

    As per my compose file, I should have used --zookeeper zookeeper:2181 (the ZooKeeper service name) instead of --zookeeper localhost:2181.

    ## this might be an option, but not every zookeeper image ships the kafka-topics script
    docker exec zookeper1 kafka-topics --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
    

    The correct command was:

    docker exec broker1 kafka-configs --zookeeper zookeeper:2181 --alter --entity-type topics --entity-name dev_gdn_urls --add-config retention.ms=12800000
    

    Hope this saves someone some time.

    Also, be aware that the messages won't be deleted immediately; deletion happens only when the log segment is closed.
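    On newer Kafka versions (roughly 2.3+), kafka-configs can talk to the broker directly with --bootstrap-server, which avoids the ZooKeeper hostname confusion entirely. Here is a rough sketch of the whole purge cycle under that assumption; the broker address, topic name, and original retention value are placeholders for your cluster, and by default the script only prints the plan (set PURGE_EXECUTE=1 to actually run the commands):

    ```shell
    #!/bin/sh
    # Sketch: purge a topic by shrinking retention.ms, waiting for the
    # broker's retention checker, then restoring the old value.
    # BOOTSTRAP, TOPIC, and ORIGINAL_RETENTION_MS are assumptions.
    BOOTSTRAP=${BOOTSTRAP:-localhost:9092}
    TOPIC=${TOPIC:-mytopic}
    ORIGINAL_RETENTION_MS=${ORIGINAL_RETENTION_MS:-604800000}  # 7 days

    run() {
      # Log the command; only execute it when PURGE_EXECUTE=1.
      echo "+ $*" | tee -a purge_plan.log
      if [ "${PURGE_EXECUTE:-0}" = "1" ]; then "$@"; fi
    }

    # 1. Make every existing segment older than the retention window.
    run kafka-configs --bootstrap-server "$BOOTSTRAP" --alter \
      --entity-type topics --entity-name "$TOPIC" --add-config retention.ms=1000

    # 2. Wait for the retention checker to delete old segments
    #    (log.retention.check.interval.ms, 5 minutes by default).
    run sleep 300

    # 3. Put the original retention back.
    run kafka-configs --bootstrap-server "$BOOTSTRAP" --alter \
      --entity-type topics --entity-name "$TOPIC" \
      --add-config retention.ms="$ORIGINAL_RETENTION_MS"
    ```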

  • 2020-11-28 00:49
    ./kafka-topics.sh --describe --zookeeper zkHost:2181 --topic myTopic
    

    This shows the configured retention.ms. You can then use the alter command above to change it to 1 second (and later revert it to the default).

    Topic:myTopic   PartitionCount:6        ReplicationFactor:1     Configs:retention.ms=86400000
    
  • 2020-11-28 00:50

    Kafka doesn't have a direct method for purging/cleaning up a topic (queue), but you can delete the topic and recreate it.

    First, make sure the server.properties file has delete.topic.enable=true (add it if it doesn't).

    Then delete the topic: bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic myTopic

    Then create it again:

    bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic myTopic --partitions 10 --replication-factor 2
    
  • 2020-11-28 00:51

    Sometimes, if you have a saturated cluster (too many partitions, encrypted topic data, SSL in use, the controller on a bad node, or a flaky connection), it will take a long time to purge the topic.

    I follow these steps, particularly when using Avro.

    1: Run with Kafka tools:

    bash kafka-configs.sh --alter --entity-type topics --zookeeper zookeeper01.kafka.com --add-config retention.ms=1 --entity-name <topic-name>
    

    2: Run on Schema registry node:

    kafka-avro-console-consumer --consumer-property security.protocol=SSL \
      --consumer-property ssl.truststore.location=/etc/schema-registry/secrets/trust.jks \
      --consumer-property ssl.truststore.password=password \
      --consumer-property ssl.keystore.location=/etc/schema-registry/secrets/identity.jks \
      --consumer-property ssl.keystore.password=password \
      --consumer-property ssl.key.password=password \
      --bootstrap-server broker01.kafka.com:9092 --topic <topic-name> \
      --new-consumer --from-beginning

    3: Set topic retention back to the original setting, once topic is empty.

    bash kafka-configs.sh --alter --entity-type topics --zookeeper zookeeper01.kafka.com --add-config retention.ms=604800000 --entity-name <topic-name>
    

    Hope this helps someone, as it isn't easily advertised.

  • 2020-11-28 00:51

    Another, rather manual, approach for purging a topic is:

    in the brokers:

    1. stop kafka broker
      sudo service kafka stop
    2. delete all partition log files (should be done on all brokers)
      sudo rm -R /kafka-storage/kafka-logs/<some_topic_name>-*

    in zookeeper:

    1. run zookeeper command line interface
      sudo /usr/lib/zookeeper/bin/zkCli.sh
    2. use zkCli to remove the topic metadata
      rmr /brokers/topics/<some_topic_name>

    in the brokers again:

    1. restart broker service
      sudo service kafka start
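    The steps above can be consolidated into one sketch. Since these commands are destructive, the script below only prints them (the paths, service name, and topic are assumptions; drop the plan wrapper to run them for real, on the appropriate hosts):

    ```shell
    #!/bin/sh
    # Sketch: the manual purge sequence as one plan. TOPIC and LOG_DIR
    # are placeholders; commands are printed, not executed.
    TOPIC=${TOPIC:-some_topic_name}
    LOG_DIR=${LOG_DIR:-/kafka-storage/kafka-logs}

    plan() { echo "+ $*" | tee -a manual_purge.log; }

    # On every broker: stop Kafka and remove the topic's partition directories.
    plan sudo service kafka stop
    plan sudo rm -R "$LOG_DIR/$TOPIC"-*

    # On the ZooKeeper host: drop the topic metadata (note the plural "topics").
    plan sudo /usr/lib/zookeeper/bin/zkCli.sh rmr "/brokers/topics/$TOPIC"

    # On every broker: start Kafka again.
    plan sudo service kafka start
    ```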
  • 2020-11-28 00:51

    The following command can be used to delete all existing messages in a Kafka topic:

    kafka-delete-records --bootstrap-server <kafka_server:port> --offset-json-file delete.json
    

    The structure of the delete.json file should be as follows:

    {
      "partitions": [
        { "topic": "foo", "partition": 1, "offset": -1 }
      ],
      "version": 1
    }

    where offset -1 deletes all the records. (This command has been tested with Kafka 2.0.1.)
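    Writing delete.json by hand gets tedious for topics with many partitions. A minimal sketch that generates one entry per partition (the topic name and partition count are assumptions; set PARTITIONS to your topic's real count):

    ```shell
    #!/bin/sh
    # Sketch: build a delete.json that truncates every partition of a topic.
    TOPIC=${TOPIC:-foo}
    PARTITIONS=${PARTITIONS:-3}

    {
      printf '{"version":1,"partitions":['
      i=0
      while [ "$i" -lt "$PARTITIONS" ]; do
        [ "$i" -gt 0 ] && printf ','
        printf '{"topic":"%s","partition":%d,"offset":-1}' "$TOPIC" "$i"
        i=$((i + 1))
      done
      printf ']}\n'
    } > delete.json

    # Sanity-check the file before handing it to kafka-delete-records:
    python3 -m json.tool delete.json
    ```

    Then run the kafka-delete-records command above with --offset-json-file delete.json.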
