fluentd loses milliseconds and now log messages are stored out of order in elasticsearch

Submitted by Deadly on 2019-11-30 03:44:31

Question


I am using fluentd to centralize log messages in Elasticsearch and view them with Kibana. When I view log messages, messages that occurred within the same second are out of order, and the milliseconds in @timestamp are all zeros:

2015-01-13T11:54:01.000-06:00   DEBUG   my message

How do I get fluentd to store milliseconds?


Answer 1:


fluentd does not currently support sub-second resolution: https://github.com/fluent/fluentd/issues/461
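To see why this causes zeroed milliseconds: Fluentd (before v0.14) carried event time as integer epoch seconds, so the fractional part of a parsed timestamp was dropped before the record ever reached Elasticsearch. A minimal Ruby sketch of that truncation (illustrative only, not Fluentd's actual code):

```ruby
require 'time'

# Parse a log timestamp that carries milliseconds.
parsed = Time.parse("2015-01-13T11:54:01.123-06:00")

# Fluentd's pre-v0.14 event time: integer seconds since the epoch.
event_time = parsed.to_i

# Reconstructing a Time from that integer shows what Elasticsearch
# effectively receives: the sub-second part is gone.
stored = Time.at(event_time)
puts stored.strftime("%H:%M:%S.%L")  # milliseconds come back as .000
```

Any two events parsed within the same second end up with identical event times, which is why their relative order is lost.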

I worked around this by adding a new field to all of the log messages with record_reformer to store nanoseconds since the epoch.

For example, if your fluentd config has some inputs like so:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

Then change them to look like this, adding a record_reformer match that appends a nanosecond field:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag cleanup.syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag cleanup.tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

<match cleanup.**>
    type record_reformer
    time_nano ${t = Time.now; ((t.to_i * 1000000000) + t.nsec).to_s}
    tag ${tag_suffix[1]}
</match>

Then add the time_nano field to your Kibana dashboards and use it to sort instead of @timestamp, and everything will be in order.
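For clarity, the inline Ruby that record_reformer evaluates for time_nano does the following arithmetic (shown here standalone, with Time.now standing in for the time the placeholder is expanded):

```ruby
# Build a single sortable integer: whole seconds scaled to nanoseconds,
# plus the sub-second nanosecond component, rendered as a string so it
# survives JSON serialization unchanged.
t = Time.now
time_nano = ((t.to_i * 1_000_000_000) + t.nsec).to_s

puts time_nano  # e.g. a 19-digit value for present-day dates
```

One caveat worth noting: lexicographic sorting of the string value happens to work because all present-day epoch-nanosecond values have the same digit count; if you control the Elasticsearch mapping, indexing time_nano as a numeric (long) field is the safer choice.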



Source: https://stackoverflow.com/questions/27928479/fluentd-loses-milliseconds-and-now-log-messages-are-stored-out-of-order-in-elast
