graphite

Spark application monitoring: using Prometheus and Grafana to monitor Spark applications

℡╲_俬逩灬. Submitted on 2020-01-06 05:11:33
After a Spark job is launched, we usually go through a jump host to the Spark UI to check on it; once there are many jobs, this becomes a headache. Being able to gather all job information into one central monitoring view would be ideal. The official Spark documentation shows that Spark supports only the following sinks. Each instance can report to zero or more sinks. Sinks are contained in the org.apache.spark.metrics.sink package:

- ConsoleSink: logs metrics information to the console.
- CSVSink: exports metrics data to CSV files at regular intervals.
- JmxSink: registers metrics for viewing in a JMX console.
- MetricsServlet: adds a servlet within the existing Spark UI to serve metrics data as JSON data.
- GraphiteSink: sends metrics to a Graphite node.
- Slf4jSink: sends metrics to slf4j
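
For reference, enabling the GraphiteSink takes only a few lines in conf/metrics.properties (a minimal sketch; the host, period, and prefix below are placeholders to adjust):

    # send metrics from all Spark instances to Graphite
    *.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
    # placeholder Graphite host and the standard plaintext port
    *.sink.graphite.host=graphite.example.com
    *.sink.graphite.port=2003
    # report every 10 seconds
    *.sink.graphite.period=10
    *.sink.graphite.unit=seconds
    # optional prefix prepended to all metric names
    *.sink.graphite.prefix=spark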

Unknown number of metrics received with statsd and graphite

扶醉桌前 Submitted on 2020-01-05 08:48:24
Question: I'm trying to gather some data on the performance of Graphite and the carbon daemon. Luckily for me, the carbon daemon reports to Graphite every 60 seconds with stats on its own workings, such as the number of metrics received. I'm using statsd to aggregate stats and flush them to the carbon daemon every second, but noticed some weird behavior when setting up a graph of the number of metrics received in a certain time interval. I'm using Grafana to connect to my Graphite instance and pull data
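
As a sanity check on the interval math, note that metricsReceived is already a count per carbon report interval (60 seconds here), so summing it over a window yields the total received in that window. A sketch, assuming the stock carbon self-metric names:

    summarize(sumSeries(carbon.agents.*.metricsReceived), "10min", "sum")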

Migrating from graphite to graph-explorer

痴心易碎 Submitted on 2020-01-05 02:49:23
Question: The graphite-webapp does not encourage ad-hoc graphing. Graphiti et al. are just fancy UIs that, while improving the UI/UX, do not do much about the inherent linear metric search that plagues the graphite-webapp. Correct me if I'm wrong here, but the only option I came across that encourages ad-hoc graphing has been Graph-Explorer. Assuming that Graph-Explorer is the only way ahead: I currently have some 1000 distinct metrics, named in the following fashion- stats.beta.pluto.ip-10-0-1-81.helios.pa

Calculate percentage in Graphite for groupByNode() results

放肆的年华 Submitted on 2020-01-01 18:16:13
Question: I have two groups of Graphite series, both in this format (the second group is identical, except that it has an "x.y" prefix instead of "a.b"): a.b.ccc.a1.hr, a.b.ccc.a2.hr, a.b.ccc.a3.hr, a.b.ddd.a1.hr, a.b.ddd.a4.hr. To group by the 3rd node I use groupByNode(a.b.*.*.hr, 2, "sumSeries"), which gets me two series: ccc and ddd. I would like to divide the ccc and ddd series from the first group by the corresponding series in the second group. How do I use the result of groupByNode in the map/reduce functions?
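
One possible approach, sketched here under the assumption that the Graphite instance is new enough to ship divideSeriesLists (graphite-web 1.0+), which divides two series lists pairwise by position. Since both groups yield ccc and ddd in the same order, positional matching works (line breaks added for readability only):

    divideSeriesLists(
        groupByNode(a.b.*.*.hr, 2, "sumSeries"),
        groupByNode(x.y.*.*.hr, 2, "sumSeries")
    )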

Operations monitoring for the Fabric blockchain with Prometheus and StatsD

梦想的初衷 Submitted on 2019-12-27 19:16:06
Hyperledger Fabric is a blockchain that takes operations seriously; since version 1.4, Fabric has shipped features for operating peer and orderer nodes. This tutorial shows how to configure the operations service on Fabric network nodes, and how to use Prometheus and statsD/Graphite to visualize the real-time runtime metrics of each node in a Hyperledger Fabric network. Related tutorials: Fabric blockchain Java development | Fabric blockchain Node.js development. 1. Configuring the operations service of Hyperledger Fabric nodes. Hyperledger Fabric 1.4 provides the following operations service APIs for peer and orderer nodes: log level management (/logspec), node health check (/healthz), and runtime metrics (/metrics). Configuring the operations service of a Fabric node is not exactly rocket science, but it is easy to stumble if you miss a detail. First, edit core.yaml to configure the peer node's operations service, chiefly the listen address and TLS (which we will disable for now). Open core.yaml in an editor: $ vi ~/fabric-samples/config/core.yaml. The figure below shows the default value of the peer node's operations listen address, listenAddress:
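
For orientation, the relevant part of core.yaml looks roughly like this (a sketch based on the Fabric 1.4 defaults; adjust listenAddress and the metrics provider for your deployment):

    operations:
      # host:port that serves /logspec, /healthz and /metrics
      listenAddress: 127.0.0.1:9443
      tls:
        # TLS disabled for now, as in this tutorial
        enabled: false

    metrics:
      # one of: disabled | prometheus | statsd
      provider: prometheus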

augeas in puppet does not change file

眉间皱痕 Submitted on 2019-12-25 07:16:50
Question: I want to manage the contents of the carbon.conf file using Augeas from Puppet. I have used Augeas in Puppet before to manage an XML file, and that worked great. This time, however, when the Puppet catalog is applied, nothing happens to the carbon.conf file. There is also no error in the log. Here's my code in the Puppet manifest file: augeas { 'cache config': notify => Service[carbon-cache], incl => '/opt/graphite/conf/carbon.conf', context => '/cache', lens => 'Carbon.lns', changes => [
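
A likely culprit, based on how Puppet's augeas type handles incl: when incl and lens are given, the file's tree is mounted under /files plus the file path, so a context of '/cache' points at a node that does not exist, and Augeas silently changes nothing. A sketch of the corrected resource (the setting name and value in changes are illustrative):

    augeas { 'cache config':
      incl    => '/opt/graphite/conf/carbon.conf',
      lens    => 'Carbon.lns',
      # context must live under /files<incl-path>
      context => '/files/opt/graphite/conf/carbon.conf/cache',
      changes => [
        'set MAX_CREATES_PER_MINUTE 100',
      ],
      notify  => Service['carbon-cache'],
    }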

Scripted Dashboards for Graphite

感情迁移 Submitted on 2019-12-24 04:23:15
Question: I am trying to generate dashboards for some metrics using Graphite. Ideally, I would like to display metrics such as CPU usage, memory, and log statistics stored in the Graphite whisper DB. Is there any tool (and documentation), such as Kibana 3, that supports scripted dashboards? Thanks. Answer 1: Try Grafana (http://grafana.org); it is based on Kibana. Answer 2: Generated graphs can be configured and saved in the following ways- 1. Dashboard. The dashboard can be accessed at- http://graphite-url/dashboard
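
If URL-level scripting is enough, Graphite's render API is also worth knowing about; graphs and raw data can be fetched programmatically (a sketch; the host and target are placeholders). The first URL returns a PNG graph of the last day, the second returns the same data as JSON for scripted consumption:

    http://graphite-url/render?target=servers.web1.cpu.usage&from=-1d&format=png
    http://graphite-url/render?target=servers.web1.cpu.usage&from=-1d&format=json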

Grafana: How to have the duration for a selected period

橙三吉。 Submitted on 2019-12-23 16:18:18
Question: I can't find the correct mathematical formula to compute an SLA (availability) with Grafana. I have a graph that shows the duration of downtime for each day. From this, I would like to compute the SLA (e.g. 99.5%). On the graph, for the selected period (last 7 days), I can get this data: 71258 is the sum of the downtime durations in seconds; I get this with summarize(1day, max, false). I also need the total duration of the selected period (here 7 days = 604800 seconds). But how? If I
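
For reference, the availability percentage follows directly once both numbers are known (assuming the downtime series is measured in seconds):

    availability = (1 - downtime / period) * 100
                 = (1 - 71258 / 604800) * 100
                 ≈ 88.22 %

Expressed as a single Graphite target (my.downtime.metric is a placeholder; the scale factor is -100/604800), the same calculation becomes:

    offset(scale(summarize(my.downtime.metric, "7d", "sum", true), -0.000165344), 100)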

How to change the x axis in Graphite/Grafana (to graph by day)?

早过忘川 Submitted on 2019-12-23 10:04:55
Question: I would like to have a bar graph in Graphite/Grafana that has a single bar per day over the week. Ideally the x axis would be labeled with the days of the week (Monday, Tuesday, etc.), with seven bars in the graph, one for each day. I can't seem to change the x axis at all, though. Thoughts: I could cook the time data and send a fixed time-since-epoch value, but this results in a very thin bar on the Grafana page. I could write a script to send a huge amount of metrics
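
One commonly suggested building block (a sketch; my.metric is a placeholder): Graphite's summarize() can roll the data up into one bucket per day, which Grafana can then draw as bars. The fourth argument aligns buckets to the start of the query range rather than to the epoch:

    summarize(my.metric, "1d", "sum", true)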

Graphite. Some metrics lost, but seen in tcpdump

独自空忆成欢 Submitted on 2019-12-22 17:45:16
Question: I have been using Graphite for quite a long time, and this is the first time I'm facing an issue with some metrics getting… lost? Through tcpdump -nA dst port 2003 I can see that the metrics are delivered to the Graphite node. Some of them do get created in the whisper database and are seen in /var/log/carbon/updates.log. But most of them do not appear anywhere. So my question is: how do I debug this? How do I prove that Graphite really receives these metrics from eth0? I couldn't find any debug logs except for updates.log
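
One cause worth ruling out here, sketched against stock carbon.conf settings (the value shown is the usual default): carbon rate-limits the creation of new whisper files, so metrics that arrive on eth0 can be received yet lag far behind on getting a .wsp file, and they never show up in updates.log because there is nothing to update yet:

    [cache]
    # cap on new whisper files created per minute; new metric names
    # beyond this limit sit in the create queue
    MAX_CREATES_PER_MINUTE = 50
    # log every datapoint write to /var/log/carbon/updates.log
    LOG_UPDATES = True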