fluentd

fluentd loses milliseconds and now log messages are stored out of order in elasticsearch

Deadly submitted on 2019-11-30 03:44:31
Question: I am using fluentd to centralize log messages in Elasticsearch and view them with Kibana. When I view log messages, messages that occurred in the same second are out of order, and the milliseconds in @timestamp are all zeros: 2015-01-13T11:54:01.000-06:00 DEBUG my message. How do I get fluentd to store milliseconds? Answer 1: fluentd does not currently support sub-second resolution: https://github.com/fluent/fluentd/issues/461 I worked around this by adding a new field to all of the log messages with …
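The excerpt is cut off, but a workaround of that kind can be sketched with the `record_transformer` filter that ships with fluentd: add a high-resolution timestamp field to every record and sort on it in Kibana. The field name `time_precise` below is my own placeholder, not from the original answer:

```
<filter **>
  @type record_transformer
  enable_ruby true
  <record>
    # Hypothetical field name: stores a millisecond-resolution timestamp
    # so Kibana can order events that share the same second.
    time_precise ${Time.now.strftime('%Y-%m-%dT%H:%M:%S.%L%z')}
  </record>
</filter>
```

Note that fluentd v0.14 and later (v1.x) added native sub-second event time, so on a modern fluentd this workaround should no longer be necessary.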

Moved my blog to CSDN

橙三吉。 submitted on 2019-11-28 18:41:26
mkdir -p /srv/volume/fluentd/
cd /srv/volume/fluentd/
mkdir -p plugins/
cat > Dockerfile << 'EOF'
FROM fluent/fluentd:v1.2.5-debian-onbuild
ENV TZ=Asia/Shanghai
RUN apt-get update \
 && apt-get -y install tzdata \
 && apt-get -y install curl \
 && ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
 && echo $TZ > /etc/timezone
RUN buildDeps="sudo make gcc g++ libc-dev ruby-dev" \
 && apt-get update \
 && apt-get install -y --no-install-recommends $buildDeps \
 && sudo gem install \
      fluent-plugin-elasticsearch \
 && SUDO_FORCE_REMOVE=yes \
      apt-get purge -y --auto-remove \
      -o APT::AutoRemove: …
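The image above installs fluent-plugin-elasticsearch, so the container presumably ships a matching output section. A minimal sketch of such a `<match>` block follows; the host, port, and index prefix are placeholders of my own, not taken from the original post:

```
# Hedged sketch: minimal fluent-plugin-elasticsearch output section.
# elasticsearch.example.local and the prefix "fluentd" are placeholders.
<match **>
  @type elasticsearch
  host elasticsearch.example.local
  port 9200
  logstash_format true
  logstash_prefix fluentd
  <buffer>
    flush_interval 10s
  </buffer>
</match>
```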

Fluentd SSL/TLS secured TCP output plugin to generic receiver (Logstash)?

泪湿孤枕 submitted on 2019-11-28 13:09:38
I've been looking for a while for a fluentd TCP output plugin that is also SSL secured and doesn't force my receiver to be of a specific kind. In my case, my receiver is Logstash. Here are a few of the plugins that came close (close but no cigar): Forward Output - does not support SSL connections. Secure Forward Output - sends data only to another fluentd receiver. Some were HTTPS plugins and some were specific service plugins (which required a token/user/password of some kind). Is there any other plugin I can use? Maybe with some workaround? After spending days on searching for an …
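One generic workaround for this situation (my suggestion, not the answer from the truncated thread) is to keep fluentd's output as plain TCP and let stunnel handle the TLS leg to Logstash, whose `tcp` input can be configured with `ssl_enable => true`. A sketch of the stunnel client side; hostnames, ports, and file paths are placeholders:

```
; Hedged sketch: stunnel client wrapping a plaintext TCP stream
; from fluentd in TLS toward a Logstash tcp input with SSL enabled.
; All addresses and paths below are placeholders.
[fluentd-to-logstash]
client = yes
; fluentd writes plaintext to this local port
accept = 127.0.0.1:5140
; Logstash tcp input listening with SSL
connect = logstash.example.com:5000
verifyChain = yes
CAfile = /etc/stunnel/ca.pem
```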

k8s Log Collection Architecture

本秂侑毒 submitted on 2019-11-27 08:14:05
Log collection. Besides building a monitoring and alerting system for a Kubernetes cluster, another very important operations task is log collection. Introduction. Application and system logs help us understand what is happening inside the cluster; they are very useful for debugging problems and monitoring cluster activity. Most applications produce logs of some kind, and traditional applications mostly write them to local log files. Containerized applications are even simpler: they only need to write log output to stdout and stderr, and by default the container runtime writes that output to a JSON file on the host, which we can then read with docker logs or kubectl logs. However, what a container engine or runtime provides is usually not enough for a complete logging solution. For example, if a container crashes, a Pod is evicted, or a node goes down, we still want access to the application's logs. Logs should therefore be independent of the lifecycle of nodes, Pods, and containers. This design is called cluster-level logging: it is completely independent of Kubernetes itself and requires its own log storage backend plus analysis and query tools. Basic logging in Kubernetes. The example below is a basic Kubernetes logging example that writes data directly to the standard output stream:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec: …
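The excerpt cuts off at `spec:`. For reference, here is a complete version of that counter Pod manifest as it appears, to the best of my recollection, in the Kubernetes documentation's logging examples; treat the exact values as illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Prints a counter line to stdout once per second; the output
    # can then be read with: kubectl logs counter
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```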