Fluentd

Getting Started with K8s from Scratch | Application Orchestration and Management: Job & DaemonSet

Submitted by 那年仲夏 on 2019-12-29 15:28:16
1. Why Job: where the need comes from

Background. First, let's look at where the need for Job comes from. In Kubernetes the smallest schedulable unit is the Pod, and we can run task processes directly through Pods. Doing so raises several problems:

- How do we ensure that the process inside a Pod terminates correctly?
- How do we ensure that the process is retried after it fails?
- How do we manage multiple tasks that have dependencies between them?
- How do we run tasks in parallel while managing the size of the task queue?

Job: a controller for managing tasks. Let's look at what the Kubernetes Job provides:

- A Kubernetes Job is a controller for managing tasks: it creates one or more Pods at a specified count and monitors whether they run and terminate successfully;
- Based on Pod status, we can set how the Job restarts and how many times it retries;
- Based on dependencies, we can ensure that the next task only runs after the previous one has completed;
- We can also control the degree of parallelism, which determines how many Pods run concurrently and the total number of completions.

Use case walkthrough

Let's use an example to see how a Job accomplishes the application below.

Job syntax

The figure above shows the simplest YAML for a Job. The main new element is a kind called Job, which is one of the types handled by the job-controller. Then the name in metadata
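The figure referenced above is not reproduced here. As a stand-in, the following is a minimal Job manifest of the kind the passage describes; the name pi, the image, and the command are illustrative placeholders, not taken from the original:

    apiVersion: batch/v1
    kind: Job                  # the new kind introduced above
    metadata:
      name: pi                 # the metadata name the text is about to discuss
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
          restartPolicy: Never # a Job's Pods must not restart in place
      backoffLimit: 4          # the retry count mentioned in the text

With restartPolicy: Never, a failed Pod is replaced (up to backoffLimit times) rather than restarted in place, which is how the Job tracks success and retries.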

Kubernetes log collection

Submitted by 谁说我不能喝 on 2019-12-25 23:59:54
Log collection in Kubernetes. This article covers two main log-collection schemes. To be clear, in Kubernetes any approach to handling container logs is called cluster-level logging. For a container, once the application writes its logs to stdout and stderr, the container runtime by default writes them to a JSON file on the host; that is what makes them visible through kubectl logs. The two schemes deploy the agent in DaemonSet and sidecar modes, respectively:

- The DaemonSet approach runs only one log agent per node, so its resource footprint is much smaller, but it cannot be configured per Pod and is less customizable; it suits clusters with a single function or few workloads.
- The sidecar approach deploys a log agent for each Pod, which uses more resources but allows per-Pod configuration and strong customizability; it is recommended for large Kubernetes clusters or clusters serving multiple teams.

Scheme one

Deploy a logging agent on each Node and ship log files to a backend for storage. The core of this pattern is to run the logging agent on each node as a DaemonSet, mount the host's container logs into it, and let the agent ship the logs out. The biggest advantage of this pattern is that each node needs only one agent, and it is completely non-invasive to applications and Pods. Here we use fluentd as the logging
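As a rough illustration of this pattern (the paths and tag below are common conventions, not taken from the article), a DaemonSet-deployed fluentd would tail the host's container log files along these lines:

    <source>
      @type tail
      path /var/log/containers/*.log               # host directory mounted into the agent Pod
      pos_file /var/log/fluentd-containers.log.pos # remembers read positions across restarts
      tag kubernetes.*
      <parse>
        @type json                                 # the runtime writes one JSON object per line
      </parse>
    </source>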

Fluentd tail plugin: tail all files in a directory

Submitted by 蓝咒 on 2019-12-25 04:03:15
Question

Fluentd accepts a comma-separated list of filenames to tail, but that too implies prior knowledge of the file names:

    <source>
      type tail
      path /var/log/nginx/*.log
      tag logging
      format /^(?<time>.+) \[(?<level>[^\]]+)\] *(?<message>.*)$/
      time_format %Y/%m/%d %H:%M:%S
    </source>

Is there an option or a hack to do something logically equivalent to:

    path /var/log/*.log

Answer 1

Check out the tail_ex plugin. It allows you to use file globbing in the path.

Source: https://stackoverflow.com/questions/21703413/fluentd-tail-plugin-tail
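For what it's worth, current Fluentd versions support glob patterns natively in in_tail's path parameter, so something logically equivalent can be had without an extra plugin; a minimal sketch, with the pos_file location and tag chosen for illustration:

    <source>
      @type tail
      path /var/log/*.log                # in_tail re-expands the glob periodically
      pos_file /var/log/td-agent/all.pos
      tag logging
      <parse>
        @type none                       # placeholder; use a real parser for structured logs
      </parse>
    </source>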

Logs shipped with wrong timestamp and timekey ignored

Submitted by 早过忘川 on 2019-12-25 00:23:22
Question

I want to ship my Vault logs to S3. Based on this issue I did this:

    ## vault input
    <source>
      @type tail
      path /var/log/vault_audit.log
      pos_file /var/log/td-agent/vault.audit_log.pos
      <parse>
        @type json
      </parse>
      tag s3.vault.audit
    </source>

    ## s3 output
    <match s3.*.*>
      @type s3
      s3_bucket vault
      path logs/
      <buffer time>
        @type file
        path /var/log/td-agent/s3
        timekey 30m
        timekey_wait 5m
        chunk_limit_size 256m
      </buffer>
      time_slice_format %Y/%m/%d/%H%M
    </match>

What I'd expect is for my logs to be shipped
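The question is cut off above, but the buffer parameters are worth unpacking, since they are a common source of "timekey ignored" confusion; the annotations here are mine, not from the original post:

    <buffer time>      # chunk events by their event time
      timekey 30m      # one chunk per 30-minute window
      timekey_wait 5m  # hold a chunk 5 minutes past its window before flushing,
                       # to catch late-arriving events
    </buffer>

With a time-keyed buffer, flush timing and the timestamp placeholders in the S3 object path come from timekey, so a time_slice_format that disagrees with it can appear to be ignored.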

fluentd create tag based on key value

Submitted by ♀尐吖头ヾ on 2019-12-24 18:42:45
Question

Shipping logs from Kubernetes to a Fluentd aggregator: is there a way to transform one of the key values into the tag value? For example, there is a key value for application_name. If this could be transformed into the tag value, it would be possible to direct records to different outputs. Thanks.

Answer 1

There is no way to edit the tag once the record is created. The way to do this is to re-emit the record with the rewrite tag filter. You could do something like this:

    <match kubernetes_logs>
      @type rewrite_tag
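The answer's snippet is truncated above. For reference, a complete rule in the fluent-plugin-rewrite-tag-filter plugin's current syntax would look roughly like this (the app. tag prefix is illustrative):

    <match kubernetes_logs>
      @type rewrite_tag_filter
      <rule>
        key application_name   # record key to read
        pattern /^(.+)$/       # capture its value
        tag app.$1             # re-emit the record under the new tag
      </rule>
    </match>

Records re-emitted as app.<application_name> can then be routed to different outputs with ordinary <match app.something> blocks.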

How does fluentd benefit this scenario?

Submitted by 北战南征 on 2019-12-24 16:28:17
Question

I've come across Fluentd. Why would you use such a thing when it's easy enough to store raw data in a DB directly? I might be misunderstanding the use of the technology here; glad to hear some feedback. Why would anyone want to go through another layer when it's easy enough to capture and store raw data in your own data store? Consider this scenario: I want to store page views; raw data is stored in an RDBMS and formatted data is stored in MongoDB. This is a short description of my current

Fixing a Docker container iptables problem --- docker: Error response from daemon: driver failed programming external connectivity on endpoint quizzical_thompson

Submitted by 眉间皱痕 on 2019-12-23 04:47:36
1. Symptom

While recently researching Docker container log management, starting a container produced an iptables-related error. Specifically, running the container:

    [root@node-11 ~]# docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd

produced the following error:

    docker: Error response from daemon: driver failed programming external connectivity on endpoint quizzical_thompson (c2b238f6b003b1f789c989db0d789b4bf3284ff61152ba40dacd0e01bd984653): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.3 --dport 24224 -j ACCEPT: iptables: No chain/target/match by that name. (exit status 1)).

2. Fix

Research showed the cause lies with the docker0 bridge; resolving the error above requires the following steps: 1
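The article's steps are cut off above. A common remediation for this class of error, offered here as a suggestion rather than as the article's exact procedure, is to restart the Docker daemon so it recreates its iptables chains:

    # Docker rebuilds the DOCKER chain (and related chains) on startup
    systemctl restart docker

This error typically appears after something flushes iptables, for example a firewalld restart, so Docker must be restarted afterwards to reprogram its rules.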

Fluentd: Multiple formats in one match

Submitted by 我的未来我决定 on 2019-12-23 04:08:37
Question

I'm new to Fluentd. I have a problem regarding the <match> tag and its format. Our system emits two different formats, format1 and format2, under the same tag, tag. Using fluent.conf we are able to catch the provided tag, but we are unable to separate those two formats. I tried the fluent-plugin-multi-format-parser, but it does not allow me to add prefixes:

    <match tag>
      @type parser
      format multi
      <pattern>
        format format1
        add_prefix pattern1
        ...
      </pattern>
      <pattern>
        format format2
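The question is truncated above. For context, the multi-format parser's current syntax nests <pattern> blocks inside a <parse> section; a sketch of separating two formats arriving under one tag, with placeholder format definitions:

    <filter tag>
      @type parser
      key_name log          # the record field holding the raw text
      <parse>
        @type multi_format
        <pattern>
          format json       # try JSON first
        </pattern>
        <pattern>
          format regexp     # fall back to a regexp-based format
          expression /^(?<message>.*)$/
        </pattern>
      </parse>
    </filter>

Patterns are tried in order and the first one that parses wins; the parser does not change the tag, so routing to different outputs afterwards still needs something like the rewrite_tag_filter shown in the previous entry.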
