Logs shipped with wrong timestamp and timekey ignored

Posted by 早过忘川 on 2019-12-25 00:23:22

Question


I want to ship my Vault logs to S3. Based on this issue, I did this:

## vault input
<source>
  @type tail
  path /var/log/vault_audit.log
  pos_file /var/log/td-agent/vault.audit_log.pos
  <parse>
    @type json
  </parse>
  tag s3.vault.audit
</source>

## s3 output
<match s3.*.*>
  @type s3

  s3_bucket vault
  path logs/

  <buffer time>
    @type file
    path /var/log/td-agent/s3
    timekey 30m
    timekey_wait 5m
    chunk_limit_size 256m
  </buffer>

  time_slice_format %Y/%m/%d/%H%M
</match>

What I'd expect is for my logs to be shipped to S3 every 30 minutes and organized into directories such as logs/2019/05/01/1030.

Instead, my logs are shipped roughly every 2-3 minutes, and the time in the S3 output path starts from the epoch, e.g. logs/1970/01/01/0030_0.gz.

(The time is set correctly on my system.)


Answer 1:


Here is a sample configuration that worked fine for me.

You need to make sure you pass "time" to the buffer section, and you should also explicitly specify what format the output should use.

Check the agent startup logs to verify that your match expression works; for example, fluentd --dry-run -c /etc/td-agent/td-agent.conf (the default td-agent config path) validates the configuration without starting the agent. Also, try <match s3.**>.

<match s3.**>
  @type s3

  s3_bucket somebucket
  s3_region "us-east-1"
  # time placeholders in path only expand when "time" is among the buffer chunk keys
  path "logs/%Y/%m/%d/%H"
  s3_object_key_format "%{path}/%{time_slice}_%{index}.%{file_extension}"
  include_time_key true
  time_format "%Y-%m-%dT%H:%M:%S.%L"

  # chunk by tag and by 30-minute time windows; each window is flushed
  # about 5 minutes after it closes
  <buffer tag,time>
    @type file
    path /fluentd/buffer/s3
    timekey_wait 5m
    timekey 30m
    chunk_limit_size 64m
    flush_at_shutdown true
    total_limit_size 256m
    overflow_action block
  </buffer>
  <format>
    @type json
  </format>
  # format used for the %{time_slice} placeholder in s3_object_key_format
  time_slice_format %Y%m%d%H%M%S
</match>
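
For comparison, here is a minimal sketch of those fixes applied to the original Vault pipeline from the question. The bucket name, tag pattern, and buffer path come from the question; the %H%M granularity in path matches the logs/2019/05/01/1030 layout the asker wanted. Treat it as a starting point rather than a verified configuration:

## s3 output, with "time" in the chunk keys and time placeholders in path
<match s3.**>
  @type s3

  s3_bucket vault
  # expands to e.g. logs/2019/05/01/1000 for the window starting at 10:00
  path "logs/%Y/%m/%d/%H%M"
  s3_object_key_format "%{path}/%{time_slice}_%{index}.%{file_extension}"

  <buffer tag,time>
    @type file
    path /var/log/td-agent/s3
    timekey 30m
    timekey_wait 5m
    chunk_limit_size 256m
  </buffer>

  <format>
    @type json
  </format>
  time_slice_format %Y%m%d%H%M%S
</match>

With timekey 30m and timekey_wait 5m, the chunk covering the 10:00-10:30 window should be uploaded around 10:35 under a key like logs/2019/05/01/1000/20190501100000_0.gz (gzip is the plugin's default store_as).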


Source: https://stackoverflow.com/questions/56025747/logs-shipped-with-wrong-timestamp-and-timekey-ignored
