fluentd

Fluentd gives the error "Log file is not writable" when starting the server

Submitted by 筅森魡賤 on 2019-12-10 17:18:25
Question: Here's my td-agent.conf file:

    <source>
      @type http
      port 8888
    </source>
    <match whatever.access>
      @type file
      path /var/log/what.txt
    </match>

But when I try to start the server using sudo /etc/init.d/td-agent start, it gives the following error:

    2016-02-01 10:45:49 +0530 [error]: fluent/supervisor.rb:359:rescue in main_process: config error file="/etc/td-agent/td-agent.conf" error="out_file: /var/log/what.txt.20160201_0.log is not writable"

Can someone explain what's wrong?

Answer 1: If you installed …
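The usual cause (an assumption based on the stock td-agent package, not stated in the excerpt) is that the daemon runs as the td-agent user, which cannot create files directly under /var/log. One hedged fix is to point the file output at a directory that user owns:

```
# Sketch, assuming the default td-agent install where the daemon runs as
# the "td-agent" user and owns /var/log/td-agent/:
<match whatever.access>
  @type file
  path /var/log/td-agent/what
</match>
```

Alternatively, granting the td-agent user write access to the chosen directory (for example with chown) achieves the same thing.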

12Factor App: Capturing stdout/stderr logs with Fluentd

Submitted by 耗尽温柔 on 2019-12-09 15:00:07
Question: After reading the following post from 12factor, I came up with a question and would like to check how you all handle this. Basically, an app should write directly to stdout/stderr. Is there any way to redirect these streams directly to fluentd (not bound to rsyslog/syslog)? As I become more aware of fluentd, I believe it would be a great tool for log aggregation from multiple apps/platforms. The main reasoning for this is that if the app is cross-platform, rsyslog/syslog may not be available, and as I …
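One hedged sketch of such a setup: run an in_forward source in Fluentd and pipe the app's stdout into the fluent-cat utility that ships with Fluentd. The tag name and port below are illustrative assumptions, not from the question:

```
# Fluentd side: accept events over the forward protocol.
<source>
  @type forward
  port 24224
</source>

# App side (shell): pipe stdout/stderr into fluent-cat, tagging the events.
# fluent-cat expects JSON input by default; check its --format option if
# your app emits plain-text lines.
#   ./myapp 2>&1 | fluent-cat app.stdout
</match-less comment block ends here>
```

This keeps the app itself ignorant of the logging backend, in the 12factor spirit: the process only writes to stdout, and the pipeline decides where the stream goes.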

腾讯云多Kubernetes的多维度监控实践

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-06 13:54:09
Welcome to the Tencent Cloud Community for more of Tencent's technical deep dives. This article is based on the talk given by Tencent Cloud senior engineer Wang Tianfu at the K8S Geek Gathering meetup in Shenzhen on November 4, 2017. It covers the top-level design of Tencent Cloud's container service, including its product features and the additional capabilities it provides, and then introduces our current clustered deployment scheme for the Masters. Understanding this up front makes the container-monitoring material that follows easier to follow, because all of the monitoring design depends on the clustered Master deployment. The talk closes with the future work planned for Tencent Cloud container service monitoring.

Take a look at this diagram: it shows the top-level design of the Tencent Cloud container service PaaS platform. At the very top is the cloud Portal, which is where users manage their clusters and create their containers. The first piece is the MC, which you can think of as the management console. Kubernetes concepts are complex, and the learning curve for someone who has just picked up Kubernetes is steep, so we wrap Kubernetes concepts in a layer that is much easier for newcomers to understand, and expose them through visual pages. The second piece is support for the native K8S API. The whole container service is built on K8S, and to ensure users can work with different Kubernetes versions, we avoid making large changes to the Kubernetes trunk. We did try that early on, but we found that making very large changes to K8S …

Python's SyslogHandler and TCP

Submitted by 纵饮孤独 on 2019-12-06 08:53:18
Question: I'm trying to understand why the SyslogHandler class from Python's logging framework (logging.handlers) does not implement either of the framing mechanisms described by RFC 6587:

- Octet Counting: "prepends" the message length to the syslog frame.
- Non-Transparent-Framing: a trailer character separates messages. This is what most servers understand.

This "problem" can be easily solved by adding an LF character to the end of the messages; however, I would expect that the SyslogHandler …
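Since the standard library leaves framing to the caller, one common workaround, sketched here under the assumption of a TCP syslog server that understands LF-terminated (non-transparent) framing, is a small SysLogHandler subclass that appends the trailer:

```python
import logging
import logging.handlers
import socket

class LFSysLogHandler(logging.handlers.SysLogHandler):
    """SysLogHandler that appends an LF trailer to each formatted message,
    i.e. the RFC 6587 non-transparent framing most TCP servers expect."""

    def format(self, record):
        msg = super().format(record)
        # Add the trailer only if the formatter did not already supply one.
        return msg if msg.endswith("\n") else msg + "\n"

# Hypothetical usage; the host and port are placeholders:
# handler = LFSysLogHandler(address=("logs.example.com", 6514),
#                           socktype=socket.SOCK_STREAM)
```

Octet counting could be sketched the same way by returning f"{len(msg)} {msg}" instead, if your server prefers that framing.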

How can I send the data from fluentd in kubernetes cluster to the elasticsearch in remote standalone server outside cluster?

Submitted by 旧时模样 on 2019-12-06 06:08:16
I have three kubernetes cluster environments set up in GCP. I have installed Fluentd as a daemonset in all these environments to collect the logs from all the pods. I have also installed elasticsearch and kibana on a separate server outside the cluster. I need to feed the logs from fluentd to the elasticsearch on the remote server and thereby run a centralised logging platform. How can I send the data from fluentd to the elasticsearch on the remote server? The error received is:

    error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster"

There are two …
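For reference, a minimal sketch of the output side, assuming the fluent-plugin-elasticsearch plugin is installed in the daemonset image and that the hostname below (a placeholder) is reachable from the cluster nodes:

```
<match **>
  @type elasticsearch
  host elasticsearch.example.com   # placeholder for the remote server
  port 9200
  logstash_format true
</match>
```

A ConnectionFailure like the one quoted usually means the nodes cannot reach that host and port at all; a firewall rule, or Elasticsearch bound only to localhost on the remote server, are common culprits to check before touching the Fluentd config.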

Parse nginx ingress logs in fluentd

Submitted by 这一生的挚爱 on 2019-12-06 02:28:10
Question: I'd like to parse ingress nginx logs using fluentd in Kubernetes. That was quite easy in Logstash, but I'm confused by the fluentd syntax. Right now I have the following rules:

    <source>
      type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
      read_from_head true
      keep_time_key true
    </source>
    <filter kubernetes.**>
      type kubernetes_metadata
    </filter>

And as a result I get this log, but it is unparsed: 127.0 …
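One hedged approach (not from the question itself) is to add a parser filter after the kubernetes_metadata filter, re-parsing the field that holds the raw nginx line with Fluentd's built-in nginx parser. The tag pattern and field name below are assumptions to adjust to your cluster:

```
<filter kubernetes.var.log.containers.nginx-ingress-**>
  @type parser
  key_name log          # the field that holds the raw nginx line
  reserve_data true     # keep the kubernetes metadata fields
  <parse>
    @type nginx         # built-in parser for the standard nginx format
  </parse>
</filter>
```

If the ingress controller uses a customized log_format, the built-in nginx parser will not match and a regexp parser with your own pattern would be needed instead.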

Can you use environment variables in config file for fluentd

Submitted by 百般思念 on 2019-12-05 22:10:45
Question: I was wondering how to use env vars in the Fluentd config. I tried:

    <match **>
      type elasticsearch
      logstash_format true
      logstash_prefix $ENV_VAR
      host ***
      port ***
      include_tag_key true
      tag_key _key
    </match>

but it doesn't work. Any idea?

Answer 1: EDIT: Here is a much better solution: if you pass the "--use-v1-config" option to Fluentd, this is possible with "#{ENV['env_var_name']}", like this:

    <match foobar.**>
      # ENV["FOO"] is foobar
      type elasticsearch
      logstash_prefix "#{ENV['FOO']}"
      logstash_format …

Is it possible to use stdout as a fluentd source to capture specific logs and write them to elasticsearch?

Submitted by 末鹿安然 on 2019-12-05 19:03:15
I'm a noob to both fluentd and elasticsearch, and I'm wondering if it's possible for fluentd to capture specific logs (in this case, custom audit logs generated by our apps) from stdout, using stdout as a source, and write them to a specific index in elasticsearch. Many thanks in advance for your replies. Yes, you could use fluentd's exec input plugin to launch your apps and capture their stdout. Note this means fluentd would be in charge of launching your application, which may not be desirable; in that case, if the application already writes to a log file, you can set fluentd up to tail that …
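To make the tail alternative concrete, here is a hedged sketch with hypothetical paths, tag, and index name; it assumes the app writes one JSON audit record per line and that the fluent-plugin-elasticsearch plugin is installed:

```
<source>
  @type tail
  path /var/log/myapp/audit.log          # hypothetical audit log path
  pos_file /var/log/td-agent/audit.pos
  tag audit.myapp
  <parse>
    @type json                           # one JSON object per line
  </parse>
</source>

<match audit.**>
  @type elasticsearch
  host localhost                         # placeholder
  port 9200
  index_name audit                       # write to a specific index
</match>
```

Tagging the events (audit.myapp here) is what lets the match block route just the audit stream to its own index while other logs go elsewhere.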

Fluentd, Kubernetes, and Google Cloud Platform: a solution for handling log streams

Submitted by 纵饮孤独 on 2019-12-05 18:51:15
Perhaps you have already heard of Fluentd's unified logging layer. You are probably also familiar with the idea that logs are streams, not files, so let's apply that way of thinking to the logging layer. In the end, the decisive point is how fluentd is configured. It all comes down to how we handle the different elements of the stream: where we get the data from, what we do with it once we have it, where we send the processed data, and how we treat it while it is in transit. In this post we review these concepts and apply them to the following cases:

1. Emitting logs from a Docker container's output (while preserving the configuration when the container stops)
2. Processing JSON logs
3. Classifying messages by level
4. Splitting a stream to two destinations

As it turns out, Google Cloud Platform and Kubernetes already include a fluentd logging layer by default, so you can do exactly these things. But first, let's look at the directives in the fluentd.conf file:

1. The source directive determines the input sources
2. The match directive determines the output destinations
3. The filter directive determines the event processing pipelines
4. The system directive sets system-wide configuration
5. The label directive groups outputs and filters for internal routing
6. The @include directive includes other files

Basic scenario (logging a Docker container's standard output). For our purposes here we will mainly consider the source and match directives. Below is a sample configuration for logging …
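A minimal sketch of that basic source/match pairing, with assumed paths, for tailing Docker's per-container JSON logs and echoing the events to stdout:

```
<source>
  @type tail
  path /var/lib/docker/containers/*/*.log   # assumed Docker log location
  pos_file /var/log/fluentd-docker.pos
  tag docker.*
  <parse>
    @type json
  </parse>
</source>

<match docker.**>
  @type stdout
</match>
```

The pos_file is what preserves the read position across restarts, which is the "keep the configuration when the container stops" behavior mentioned above.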

Understanding K8s logging system design and practice in one article

Submitted by 扶醉桌前 on 2019-12-05 02:24:08
In the previous article we covered why a logging system is needed, why logging systems matter so much under cloud native, and the difficulties of building one in a cloud-native environment; DevOps, SRE, and operations engineers will know these pains well. This article gets straight to the point and shares how to build a flexible, powerful, reliable, and scalable logging system for cloud-native scenarios.

Requirements-driven architecture design. A technical architecture is the process of turning product requirements into a technical implementation. For any architect, being able to analyze the product requirements thoroughly is both fundamental and critical. Many systems are torn down not long after they are built, and the root cause is usually a failure to solve the product's real requirements.

Our log service team has nearly ten years of experience in logging and serves almost every team inside Alibaba, spanning e-commerce, payments, logistics, cloud computing, gaming, instant messaging, IoT, and more; years of product optimization and iteration have been driven by the changing logging needs of all those teams.

In recent years we have productized the service on Alibaba Cloud, serving tens of thousands of enterprise users, including top internet customers in live streaming, short video, news media, and gaming. Going from serving one company to serving tens of thousands brings a qualitative difference, and moving to the cloud pushed us to think more deeply: which capabilities does a logging platform really need to solve for its users, what are the core demands on logging, and how do we satisfy different industries and different business roles...

Requirement decomposition and feature design. In the previous section we analyzed the logging needs of the different roles in a company. They boil down to the following points:

- Support collecting logs in all kinds of formats and from all kinds of data sources, including non-K8s ones
- Make it fast to search for and locate problem logs …