fluentd

Rancher 2.2.4 released: CVE fixes and project monitoring is back!

时光怂恿深爱的人放手 submitted on 2019-12-02 19:14:08
June 20, Beijing: free registration is now open for the 2019 Enterprise Container Innovation Conference hosted by Rancher Labs! A full day of 18 talks, with invited IT leaders from well-known companies such as China Life, China Unicom, Ping An Technology, New Oriental, Alibaba Cloud and Baidu Cloud sharing their experience with enterprise adoption of container technology. At the conference, a Rancher Labs R&D manager will also give a live demo of the Istio and Windows container features in the upcoming Rancher 2.3, and there will be on-site discussions of K3s, Rio and more. Visit http://hdxu.cn/hMsQ8 for details and online registration. On June 6, 2019, Rancher Labs released Rancher 2.2.4. This version fixes two recently discovered security vulnerabilities, CVE-2019-12303 and CVE-2019-12274, brings back project-level monitoring, and includes a series of other features and optimizations. CVE fixes: version 2.2.4 contains fixes for the two security vulnerabilities CVE-2019-12303 and CVE-2019-12274. The first vulnerability affects versions v2.0.0 through v2.2.3: when integrating a logging system, a project administrator could inject extra Fluentd configuration parameters to read files inside the Fluentd container or execute arbitrary commands, for example against the Elasticsearch endpoint configured by another project administrator. Details: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274 The second vulnerability affects v1.6

Google container engine logging to Stackdriver Error Reporting

耗尽温柔 submitted on 2019-12-02 10:20:41
I'm currently trying to log errors to Stackdriver Error Reporting from Google Container Engine. I'm using the built-in fluentd-based Stackdriver Logging agent from GKE, which works great. However, when I log an error according to the specification ( https://cloud.google.com/error-reporting/docs/formatting-error-messages ), I do not see it appear in Stackdriver Error Reporting. The payload I see in Stackdriver Logging is { insertId: "xatjb4fltv246" jsonPayload: { stream: "event" message: "path was incorrect" environment: "production" event_type: "RAILS_ERROR" context: { path: "/2", reportLocation:

Fluentd apache format [warn]: pattern not match:

蹲街弑〆低调 submitted on 2019-12-02 01:08:22
In my /etc/fluent/fluent.conf <source> @type tail format apache2 path /var/log/apache2/other_vhosts_access.log tag apache2.access </source> Error / warn: 2016-02-11 00:59:10 +0100 [warn]: pattern not match: "mybebsite.dz:443 105.101.114.234 - - [11/Feb/2016:00:59:10 +0100] \"POST /__es/_all/_search HTTP/1.1\" 200 794 \" https://mywebsite.net/ \" \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0\"" Why doesn't this pattern match? Best. It seems that the tail plugin does not support the apache "vhost_combined" log format, only "combined". How about changing the
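
One way around this is to replace "format apache2" with a custom regular expression that accepts the leading "host:port" field of vhost_combined lines. A minimal sketch (the regexp and field names below are illustrative, not taken from the original answer):

```
<source>
  @type tail
  path /var/log/apache2/other_vhosts_access.log
  tag apache2.access
  # vhost_combined lines start with "host:port ", which the built-in apache2
  # format does not expect, so parse them with a custom regexp instead.
  format /^(?<vhost>[^ ]+) (?<remote>[^ ]+) (?<ident>[^ ]+) (?<user>[^ ]+) \[(?<time>[^\]]+)\] "(?<method>\S+) (?<path>[^ ]*) (?<protocol>[^"]*)" (?<code>[^ ]+) (?<size>[^ ]+)(?: "(?<referer>[^"]*)" "(?<agent>[^"]*)")?$/
  time_format %d/%b/%Y:%H:%M:%S %z
</source>
```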

Building a unified DevOps operations monitoring platform: start with log monitoring

馋奶兔 submitted on 2019-12-01 14:34:39
Preface: As DevOps, cloud computing, microservices, containers and similar ideas gradually take hold and develop rapidly, there are more and more machines and applications, services keep getting smaller, and the environments applications run in grow ever more diverse: containers, virtual machines, physical machines and more. Faced with hundreds or thousands of virtual machines and containers and dozens of kinds of objects to monitor, can the existing monitoring system still keep up? How can application logs and system service logs from containers, VMs and physical machines be collected and searched quickly and completely with a single solution? What architecture and technology stack best fits such large and complex monitoring needs? This article shares the author's experience with log monitoring from the following angles. Table of contents: 1. Monitoring challenges brought by the DevOps wave 2. Architecture of a unified monitoring platform 3. The log-monitoring technology stack 4. ELK, the classic log-monitoring solution 5. Log monitoring in practice for microservices and container clouds: Journald + fluentd + elasticsearch 6. How to choose the log-monitoring solution that fits you? 1. Monitoring challenges brought by the DevOps wave: With DevOps, cloud computing, microservices and containers taking hold, there are more machines, more applications, ever smaller services, and more diverse runtime environments such as containers, so the pressure on monitoring keeps growing. The main challenges are: Diversity of monitoring sources: business systems, applications, network devices, storage devices, physical machines, virtual machines, containers, databases, and all kinds of system software need to be monitored, with a wide variety of metrics. How do you monitor all of this data from a single, unified viewpoint? Analyzing and processing massive amounts of data: more and more devices, more and more applications
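
As a rough illustration of the Journald + fluentd + elasticsearch approach named in item five of the outline, a minimal fluentd pipeline might look like the sketch below (assuming the fluent-plugin-systemd and fluent-plugin-elasticsearch plugins are installed; the host name and tag are placeholders):

```
# Read system service logs from the systemd journal (fluent-plugin-systemd).
<source>
  @type systemd
  path /var/log/journal
  tag journal
</source>

# Ship the events to Elasticsearch in logstash-style daily indices
# (fluent-plugin-elasticsearch) so they can be searched from Kibana.
<match journal.**>
  @type elasticsearch
  host elasticsearch.example.local   # placeholder host
  port 9200
  logstash_format true
</match>
```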

Kibana - How to extract fields from existing Kubernetes logs

笑着哭i submitted on 2019-12-01 11:07:04
I have a sort of ELK stack, with fluentd instead of logstash, running as a DaemonSet on a Kubernetes cluster and sending all logs from all containers, in logstash format, to an Elasticsearch server. Out of the many containers running on the Kubernetes cluster, some are nginx containers which output logs of the following format: 121.29.251.188 - [16/Feb/2017:09:31:35 +0000] host="subdomain.site.com" req="GET /data/schedule/update?date=2017-03-01&type=monthly&blocked=0 HTTP/1.1" status=200 body_bytes=4433 referer="https://subdomain.site.com/schedule/2589959/edit?location=23092&return=monthly"
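
One way to get those fields into Elasticsearch is to parse the message inside fluentd before it is shipped, for example with the parser filter and a custom regexp. A sketch (the tag pattern, key name, and field names are assumptions for illustration, not taken from the original answer):

```
# Parse nginx access lines arriving from the Kubernetes container log files
# (tag pattern and field names are illustrative).
<filter kubernetes.var.log.containers.nginx**>
  @type parser
  key_name log
  reserve_data true
  format /^(?<remote>[^ ]+) - \[(?<time>[^\]]+)\] host="(?<host>[^"]*)" req="(?<request>[^"]*)" status=(?<status>\d+) body_bytes=(?<body_bytes>\d+) referer="(?<referer>[^"]*)"/
  time_format %d/%b/%Y:%H:%M:%S %z
</filter>
```

With reserve_data enabled, the original record keys are kept and the extracted fields are added alongside them.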

Can't log from (fluentd) logdriver using service name in compose

a 夏天 submitted on 2019-12-01 06:30:49
Question: I have the following setup in docker: Application (httpd) Fluentd ElasticSearch Kibana The application's logdriver configuration points at the fluentd container. The logs are saved in ES and shown in Kibana. When the logdriver is configured like this, it works: web: image: httpd container_name: httpd ports: - "80:80" links: - fluentd logging: driver: "fluentd" options: fluentd-address: localhost:24224 tag: httpd.access And fluentd is mapping its exposed port 24224 on port
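
For context, the fluentd container that the Docker fluentd logdriver sends to usually just needs a forward input listening on 24224 plus a match that writes to Elasticsearch. A minimal sketch (the Elasticsearch host name below is a placeholder):

```
# Accept records sent by the Docker fluentd log driver.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Index the httpd access records into Elasticsearch so Kibana can display them.
<match httpd.access>
  @type elasticsearch
  host elasticsearch        # placeholder: the ES service name from the compose file
  port 9200
  logstash_format true
</match>
```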

How to setup error reporting in Stackdriver from kubernetes pods?

那年仲夏 submitted on 2019-12-01 03:30:13
I'm a bit confused about how to set up error reporting in kubernetes, so that errors are visible in Google Cloud Console / Stackdriver "Error Reporting"? According to the documentation https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine we need to enable fluentd's "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had set up fluentd ourselves, but it's already pre-installed on every node in a pod that just runs the gcr.io/google_containers/fluentd-gcp docker image. How do we enable forward input on those pods and make sure that

fluentd loses milliseconds and now log messages are stored out of order in elasticsearch

℡╲_俬逩灬. submitted on 2019-11-30 20:11:22
I am using fluentd to centralize log messages in elasticsearch and view them with kibana. When I view log messages, messages that occurred in the same second are out of order and the milliseconds in @timestamp are all zeros: 2015-01-13T11:54:01.000-06:00 DEBUG my message How do I get fluentd to store milliseconds? fluentd does not currently support sub-second resolution: https://github.com/fluent/fluentd/issues/461 I worked around this by adding a new field to all of the log messages with record_reformer to store nanoseconds since epoch. For example, if your fluentd has some inputs like so: # #
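
A sketch of that record_reformer workaround (assuming the fluent-plugin-record-reformer plugin is installed; the tag pattern and field name are illustrative):

```
# Re-emit each event with an extra field holding nanoseconds since epoch.
# Time.now is the time the record passes through fluentd, since the original
# event time has already lost its sub-second precision.
<match app.**>
  @type record_reformer
  tag reformed.${tag}
  <record>
    time_nano ${(Time.now.to_f * 1_000_000_000).to_i}
  </record>
</match>
```

The extra field can then be used as a tie-breaker when sorting messages that share the same @timestamp second.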

Getting started with K8s from scratch | Application orchestration and management: Job & DaemonSet

∥☆過路亽.° submitted on 2019-11-30 15:02:34
1. Job: where the need comes from. Background: first, let's look at why Job exists. We know that in K8s the smallest schedulable unit is the Pod, and we could run task processes directly in Pods. Doing so raises several problems: How do we make sure the process inside the Pod terminates correctly? How do we retry when the process fails? How do we manage multiple tasks with dependencies between them? How do we run tasks in parallel and manage the size of the task queue? Job: a controller for managing tasks. Let's see what a Kubernetes Job provides: first, a Kubernetes Job is a controller for managing tasks; it can create one or more Pods, specify the number of Pods, and monitor whether they run or terminate successfully. We can configure how the Job restarts and how many times it retries based on the Pod's status. We can also use dependencies to ensure the next task only runs after the previous one has finished. And we can control the degree of parallelism, using it to determine how many Pods run concurrently and the total number of completions. Example walk-through: let's use an example to see how a Job runs the application below. Job syntax: the figure above shows the simplest YAML for a Job. The main new thing introduced here is a kind called Job, which is one of the types handled by the job-controller. Then name under metadata specifies the name of this Job, and below that spec.template