cloudwatch-alarms

AWS CloudWatch Math Expressions: removing Insufficient Data: is there a “coalesce” function like SQL?

Submitted by 好久不见 on 2021-01-29 09:39:48
Question: Can I replace a None/Insufficient-data point with a value (a constant is fine) in a CloudWatch math expression? I am using a math expression over several metrics: IFs, arithmetic, etc. The problem is that the expression is now bound by all of the variables having sufficient data. If one is missing a data point, WHAM! Insufficient data for the whole math expression. Ideally, I'd like to do something like the following, based on the standard SQL coalesce function: coalesce(m1, m2, 15) + coalesce(m3, 25) / coalesce
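
CloudWatch metric math has no multi-argument coalesce, but the built-in FILL() function covers the two-argument case: it substitutes a constant (or REPEAT / LINEAR) wherever a metric is missing a data point, so the rest of the expression keeps evaluating. A minimal sketch of the question's expression rewritten with FILL, keeping its metric IDs and fallback constants (the three-argument coalesce(m1, m2, 15), which falls back to another metric before the constant, has no direct single-function equivalent):

FILL(m1, 15) + FILL(m3, 25)

If the goal is just to keep an alarm out of INSUFFICIENT_DATA, the alarm's "treat missing data" setting (missing, ignore, breaching, notBreaching) is the other lever to consider.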

CloudWatch alarm for a list of servers

Submitted by 北慕城南 on 2021-01-05 07:25:53
Question: I am trying to set a few alerts across a list of servers. I have my servers defined in locals as below:

locals {
  my_list = [
    "server1",
    "server2"
  ]
}

I then defined my CloudWatch alerts like so (this is one such alert):

resource "aws_cloudwatch_metric_alarm" "ec2-high-cpu-warning" {
  for_each            = toset(local.my_list)
  alarm_name          = "ec2-high-cpu-warning-for-${each.key}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "1"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
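
The excerpt above cuts off mid-resource. Below is a sketch of how the complete resource might look; the threshold, period, statistic, and the aws_instance data-source lookup are assumptions for illustration, added because CPUUtilization is dimensioned by InstanceId, so each server name (assumed here to be the instance's Name tag) has to be resolved to an instance ID.

# Hypothetical lookup: resolve each name in local.my_list (assumed to be the
# EC2 Name tag) to an instance ID for use in the alarm's dimensions.
data "aws_instance" "servers" {
  for_each = toset(local.my_list)

  filter {
    name   = "tag:Name"
    values = [each.key]
  }
}

resource "aws_cloudwatch_metric_alarm" "ec2-high-cpu-warning" {
  for_each            = toset(local.my_list)
  alarm_name          = "ec2-high-cpu-warning-for-${each.key}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "1"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300        # assumed 5-minute evaluation window
  statistic           = "Average"
  threshold           = 80         # assumed warning threshold, in percent

  # CPUUtilization in AWS/EC2 is reported per instance ID, not per name.
  dimensions = {
    InstanceId = data.aws_instance.servers[each.key].id
  }

  # alarm_actions (e.g. an SNS topic ARN) would go here; omitted in this sketch.
}

With for_each, each alarm is addressed as aws_cloudwatch_metric_alarm.ec2-high-cpu-warning["server1"], so adding or removing a name in local.my_list only touches that server's alarm.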

AWS CloudWatch Alarm to add capacity to EC2 autoscaling group has been in alarm forever

Submitted by 家住魔仙堡 on 2020-01-03 05:44:20
Question: I set a CloudWatch alarm to add 1 capacity unit to an EC2 Auto Scaling group when memory reservation is > 70%. The alarm was triggered at the right moment, but it has since been in the ALARM state for 16+ hours with no change at all in the Auto Scaling group. What could possibly be going wrong? Here's my ECS CloudFormation template:

ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Ref EnvironmentName
ECSAutoScalingGroup:
  DependsOn: ECSCluster
  Type: AWS::AutoScaling::AutoScalingGroup
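
One thing worth checking, assuming the rest of the template follows the usual ECS-on-EC2 pattern: a CloudWatch alarm only changes its own state; the Auto Scaling group only gains capacity if the alarm's AlarmActions reference a scaling policy attached to that group. A hedged sketch of that wiring is below; the logical IDs, threshold, period, and cooldown are illustrative, not taken from the question's template.

# Scaling policy that adds one instance to the Auto Scaling group.
MemoryScaleUpPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref ECSAutoScalingGroup
    AdjustmentType: ChangeInCapacity
    ScalingAdjustment: 1
    Cooldown: 300

# Alarm on ECS memory reservation; AlarmActions ties it to the policy above.
MemoryReservationAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Scale out when ECS memory reservation exceeds 70%
    Namespace: AWS/ECS
    MetricName: MemoryReservation
    Dimensions:
      - Name: ClusterName
        Value: !Ref ECSCluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 1
    Threshold: 70
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref MemoryScaleUpPolicy

If a policy like this is already in place and the alarm still sits in ALARM with no effect, the other common culprit is the group's MaxSize already matching its DesiredCapacity, which leaves the policy nothing to add.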