How can I use Kafka to retain logs in Logstash for a longer period?

Submitted by 你离开我真会死 on 2019-12-02 06:29:43

Question


Currently I use a Redis -> S3 -> Elasticsearch -> Kibana stack to pipe and visualise my logs. But due to the large volume of data in Elasticsearch, I can retain logs for only up to 7 days.

I want to bring a Kafka cluster into this stack and retain logs for a longer period. I am thinking of the following stack.

app nodes piping logs to Kafka -> Kafka cluster -> Elasticsearch cluster -> Kibana

How can I use Kafka to retain logs for a longer period?
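For context, a minimal sketch of how Logstash could sit between Kafka and Elasticsearch in this stack. The topic name, broker addresses, and Elasticsearch host below are placeholders, and the option names follow the newer logstash-input-kafka plugin syntax (older Logstash versions used different option names):

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"  # assumed broker addresses
    topics => ["app-logs"]                          # assumed topic name
    group_id => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["http://es1:9200"]                    # assumed Elasticsearch host
    index => "logs-%{+YYYY.MM.dd}"                  # daily indices for easier pruning
  }
}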


Answer 1:


Looking through the Apache Kafka broker configs, there are two properties that determine when a log segment gets deleted: one by time and the other by size.

log.retention.{ms,minutes,hours}
log.retention.bytes

Also note that if both log.retention.hours and log.retention.bytes are set, a segment is deleted when either limit is exceeded.

Those two settings dictate when logs are deleted in Kafka. log.retention.bytes defaults to -1, and leaving it at -1 lets the time-based config alone determine when a log segment gets deleted.

So to directly answer your question, set log.retention.hours to however many hours you wish to retain your data and don't change the log.retention.bytes configuration.
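To make this concrete, here is a minimal sketch of the relevant lines in each broker's server.properties. The 30-day window is an assumption; adjust it to your retention needs:

# server.properties on each Kafka broker -- a minimal sketch
# Assumption: keep logs for 30 days (720 hours)
log.retention.hours=720
# -1 (the default) disables size-based retention, so only time triggers deletion
log.retention.bytes=-1

Retention can also be overridden per topic via the topic-level retention.ms property, which lets you keep the raw log topic longer than other topics on the same cluster. For example, on recent Kafka versions (assuming a topic named app-logs; 2592000000 ms is 30 days):

kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name app-logs --alter --add-config retention.ms=2592000000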



Source: https://stackoverflow.com/questions/33565895/how-can-i-use-kafka-to-retain-logs-in-logstash-for-longer-period
