Question
Currently I use a Redis -> S3 -> Elasticsearch -> Kibana stack to pipe and visualise my logs, but due to the large volume of data in Elasticsearch I can only retain logs for up to 7 days.
I want to bring a Kafka cluster into this stack and retain logs for a longer period. I am thinking of the following stack:
app nodes piping logs to Kafka -> Kafka cluster -> Elasticsearch cluster -> Kibana
How can I use Kafka to retain logs for a longer period?
Answer 1:
Looking through the Apache Kafka broker configs, there are two properties that determine when a log segment will get deleted: one by time and the other by size.
log.retention.{ms,minutes,hours}
log.retention.bytes
Note that if both log.retention.hours and log.retention.bytes are set, a segment is deleted when either limit is exceeded.
Those two settings dictate when logs are deleted in Kafka. log.retention.bytes defaults to -1 (no size limit), and leaving it at -1 means the time-based config alone determines when a log segment gets deleted.
So to directly answer your question: set log.retention.hours to however many hours you wish to retain your data, and leave log.retention.bytes at its default.
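As a minimal sketch of what that would look like in the broker's server.properties (the 14-day value below is a placeholder, not something from the question; pick whatever retention target fits your disk budget):

# server.properties (broker-wide defaults; values are illustrative)
# Keep log segments for 14 days (336 hours).
log.retention.hours=336
# -1 is the default: no size-based limit, so time alone triggers deletion.
log.retention.bytes=-1

If you only need longer retention for the log topic rather than for every topic on the cluster, Kafka also supports a per-topic retention.ms override (settable with the kafka-configs tool), which keeps other topics from consuming the same disk budget.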
Source: https://stackoverflow.com/questions/33565895/how-can-i-use-kafka-to-retain-logs-in-logstash-for-longer-period