datadog

How to Inspect the Queue Processing a Celery Task

我的梦境 submitted on 2020-06-27 16:56:24

Question: I'm currently leveraging Celery for periodic tasks. I am new to Celery. I have two workers running two different queues: one for slow background jobs and one for jobs users queue up in the application. I am monitoring my tasks on Datadog because it's an easy way to confirm my workers are running appropriately. What I want to do is, after each task completes, record which queue the task was completed on. @after_task_publish.connect() def on_task_publish(sender=None, headers=None, body=None, *
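The signal handler above is cut off, but the queue name typically travels with the publish metadata as a routing key. A minimal stdlib sketch of that extraction, assuming a delivery-info dict shaped like Celery's (`routing_key` field and default queue name `celery` are assumptions here, not taken from the question):

```python
# Hypothetical helper: given the delivery metadata attached to a task,
# return the queue (routing key) it was published to. The dict shape
# mirrors Celery's delivery_info convention and is an assumption.
def queue_for_task(delivery_info, default="celery"):
    if not delivery_info:
        return default
    return delivery_info.get("routing_key", default)

# Usage: a task routed to the slow background queue vs. the default queue.
print(queue_for_task({"exchange": "", "routing_key": "slow_background"}))  # slow_background
print(queue_for_task({}))                                                  # celery
```

Recording this value from the handler (e.g. as a Datadog tag) would let each completed task be attributed to its queue.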

Datadog alert when Amazon RDS is created

妖精的绣舞 submitted on 2020-01-25 07:20:32

Question: I have an alert in Datadog for when CPU Credits are low. The problem is that when I create a new RDS instance in Amazon, it initially has 0 CPU credits and I receive this alert. How can I avoid this? I tried to find a "time since creation" metric, but with no success. Answer 1: Have you tried composite monitors? You should be able to combine your low CPU Credit monitor with another monitor that looks at events from RDS. Two monitors such as: A: CPU Credit < 10 B: Number of events received about RDS creation > 1
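A sketch of how those two sub-monitors might be combined into a composite monitor. The exact metric and event queries below are assumptions (illustrative only, not confirmed by the answer); the letters refer to the sub-monitors selected in Datadog's composite monitor UI:

```
# Hypothetical composite expression: alert only when credits are low (a)
# AND no recent RDS-creation event was seen (b), so freshly created
# instances don't trigger the page.
a && !b

# where, roughly:
#   a: avg(last_5m):avg:aws.rds.cpucredit_balance{*} by {dbinstanceidentifier} < 10
#   b: events('sources:rds event_source:db-instance').rollup('count').last('1d') >= 1
```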

debugging imbalanced kafka message_in rate

久未见 submitted on 2020-01-16 10:31:31

Question: I have a 4-node Kafka cluster in production where we use a custom partitioner that takes an id mod 64 to determine the partition. Since last week, the Kafka messages_in rate has been imbalanced on one of our nodes, as can be seen in the attached graph. The pink line shows the messages-in rate on the kafka01 node and the bluish-yellow lines show the messages-in rate on the other 3 boxes. I'm using Datadog for monitoring, with the metric kafka.messages_in.rate. Assuming that there has been
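One cause worth ruling out with a mod-64 partitioner is key skew: it only balances load if the ids are uniformly distributed modulo 64. A small sketch (the partition count matches the question; the id pattern is a hypothetical example) showing how a patterned id space collapses onto a few partitions:

```python
from collections import Counter

NUM_PARTITIONS = 64  # mod-64 partitioner, as described in the question

def partition_for(record_id: int) -> int:
    # Same scheme as the custom partitioner: partition = id % 64.
    return record_id % NUM_PARTITIONS

# Hypothetical id pattern: ids that are all multiples of 16 land on only
# 4 of the 64 partitions, overloading whichever brokers lead them.
ids = [i * 16 for i in range(1000)]
load = Counter(partition_for(i) for i in ids)
print(sorted(load))  # [0, 16, 32, 48]
print(len(load))     # 4
```

Comparing per-partition message rates (or a histogram of `id % 64` over recent traffic) against the partition-to-broker assignment would show whether the hot node simply leads the hot partitions.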

datadog agent not reachable from inside docker container

不羁的心 submitted on 2019-12-18 13:24:27

Question: I installed dd-agent on an Amazon Linux EC2 instance. If I run my Python script directly on the host machine (I used the SDK named "dogstatsd-python"), all the metrics are sent to Datadog (I logged in to datadoghq.com and saw the metrics there). The script is something like: from statsd import statsd statsd.connect('localhost', 8125) statsd.increment('mymetrics') However, when I launched a docker container and ran the same script from inside the container: from statsd import statsd statsd.connect('172.14.0.1', 8125) statsd.increment('my metrics') '172.14.0.1' is the IP of the host, which was extracted with
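Because dogstatsd speaks plain UDP, a misconfigured address fails silently: nothing errors, the metric just never arrives. A stdlib-only sketch of what the SDK sends under the hood (the host IP below is a hypothetical docker bridge gateway, not the questioner's; the agent must also have non_local_traffic enabled to accept packets from non-localhost sources):

```python
import socket

def format_counter(metric: str, value: int = 1) -> bytes:
    # dogstatsd wire format for a counter increment, e.g. b"mymetrics:1|c"
    return f"{metric}:{value}|c".encode()

def send_counter(metric: str, host: str = "172.17.0.1", port: int = 8125) -> bytes:
    # Hypothetical host IP (default docker bridge gateway); adjust to your setup.
    payload = format_counter(metric)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))  # fire-and-forget: no delivery check
    except OSError:
        pass  # an unreachable network is swallowed here, just as the SDK stays silent
    finally:
        sock.close()
    return payload

print(send_counter("mymetrics"))  # b'mymetrics:1|c'
```

This is why checking that the agent is bound to 0.0.0.0:8125 on the host (rather than 127.0.0.1 only) is usually the first debugging step from inside a container.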

Datadog event trigger returns no data instead of 0

主宰稳场 submitted on 2019-12-08 03:34:51

Question: I have created an event monitor (for example events('sources:rds event_source:db-instance').by('dbinstanceidentifier').rollup('count').last('1d') >= 1 ), but it returns "NO DATA" when there are no events. How can I make it return 0 when there are no events? Source: https://stackoverflow.com/questions/58873617/datadog-event-trigger-returns-no-data-instead-of-0
