Kubernetes logs split in kibana

Submitted by 两盒软妹~ on 2021-01-28 10:32:49

Question


I have a Kubernetes cluster in Azure and used the following instructions to install Fluentd, Elasticsearch, and Kibana: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch I can see my pods' logs in Kibana, but when I send a log message longer than 16k characters, it gets split.

If I send 35k characters, it is split into three log entries.

How can I increase the size limit of a single log entry? I want to see all 35k characters in one log entry.



Answer 1:


https://github.com/fluent-plugins-nursery/fluent-plugin-concat

did the job and combined the parts into one log entry. It solves Docker's max log line of 16KB: long lines in container logs get split into multiple lines, since the maximum message size is 16KB, so an 85KB message ends up as six messages in different chunks.
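As a sketch of how this plugin can be wired in, the fluent-plugin-concat filter can reassemble Docker's 16KB chunks by concatenating records until it sees a trailing newline (Docker only appends the newline on the final chunk of a split message). This is an illustrative Fluentd configuration, not the exact config used by the answerer; the `key log` field name assumes the standard Docker json-file log format:

```
# Fluentd filter: merge Docker-split log chunks back into one record.
# Chunks of an oversized message lack a trailing newline; only the
# final chunk ends with "\n", which marks the end of the logical line.
<filter kubernetes.**>
  @type concat
  key log                        # the field Docker's json-file driver writes
  use_first_timestamp true       # keep the timestamp of the first chunk
  multiline_end_regexp /\n$/     # a trailing newline closes the record
  separator ""                   # join chunks with no extra characters
</filter>
```

With this in place, a single 35k-character application line is emitted to Elasticsearch as one event instead of three.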




Answer 2:


I dug into this topic half a year ago. In short, this is expected behavior.

Docker chunks log messages at 16K because it uses a 16K buffer for log messages. If a message exceeds 16K, the json-file logger splits it, and the pieces must be merged back together at the receiving endpoint.

It looks like the Docker_Mode option for Fluent Bit might help, but I'm not sure how exactly you are parsing the container logs.
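For reference, here is a minimal sketch of a Fluent Bit tail input with Docker mode enabled, which tells Fluent Bit to rejoin lines that Docker split at the 16KB boundary. The path and tag are assumptions based on a typical Kubernetes node layout, not taken from the question:

```
# Fluent Bit input: tail container logs and rejoin Docker-split lines.
[INPUT]
    Name          tail
    Path          /var/log/containers/*.log   # typical kubelet symlink path
    Parser        docker                      # parse the json-file format
    Docker_Mode   On                          # merge 16KB partial lines
    Tag           kube.*
```

Docker_Mode only applies when the logs are in Docker's json-file format; with containerd/CRI logs, a different multiline strategy is needed.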



Source: https://stackoverflow.com/questions/64087056/kubernetes-logs-split-in-kibana
