amazon-cloudwatchlogs

API Gateway CloudWatch advanced logging

Submitted by 爱⌒轻易说出口 on 2019-12-11 16:30:45
Question: I am trying to get to the point of billing for API calls made to our services. This includes creating metrics for each API key's usage, but before I even start that I would like to understand a certain aspect of the CloudWatch logs first. In this first image, you'll notice 1.06 million hits recorded on the graph over a six-week range with a 30-day period. My understanding of this is that the 1.06m is the number of hits that have taken place on this API, and the "custom (6w)" is the time period, i.e. over six weeks …
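Not part of the original question, but one way to cross-check a figure like the 1.06m is to query API Gateway's per-call Count metric directly. A minimal sketch with the aws-sdk v2 Node.js client; the API name "my-api" and the region are placeholder assumptions:

    var AWS = require("aws-sdk");
    var cloudwatch = new AWS.CloudWatch({ region: "us-east-1" });

    cloudwatch.getMetricStatistics({
      Namespace: "AWS/ApiGateway",
      MetricName: "Count",                  // one data point per API call
      Dimensions: [{ Name: "ApiName", Value: "my-api" }],
      StartTime: new Date(Date.now() - 42 * 24 * 3600 * 1000), // six weeks back
      EndTime: new Date(),
      Period: 86400,                        // daily buckets
      Statistics: ["Sum"]
    }, function (err, data) {
      if (err) return console.error(err);
      var total = data.Datapoints.reduce(function (acc, dp) { return acc + dp.Sum; }, 0);
      console.log("Total hits over six weeks:", total); // should roughly match the graphed figure
    });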

CloudWatch custom metrics not working as expected

Submitted by 橙三吉。 on 2019-12-11 05:57:12
Question: I had already created 7 other metrics based on some log files I send to CloudWatch with no problems. Some time ago we had a problem with the MongoDB connection, and I identified it through the logs, so I'd like to create a metric so that I can create an alarm based on it. I did create the metric, but (of course) there is no data being fed into that metric, because no more "MongoError" messages exist. But does that also mean that I can't even access the metric to create the alarm? Because this is …
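As a sketch of one possible way around this (the metric namespace and name below are placeholder assumptions): PutMetricAlarm does not require the metric to have existing data points, so the alarm can be created by naming the metric explicitly instead of picking it from the console, with TreatMissingData covering the quiet periods.

    var AWS = require("aws-sdk");
    var cloudwatch = new AWS.CloudWatch({ region: "us-east-1" });

    cloudwatch.putMetricAlarm({
      AlarmName: "mongo-connection-errors",
      Namespace: "LogMetrics",           // assumed namespace of the metric filter
      MetricName: "MongoErrorCount",     // assumed metric name
      Statistic: "Sum",
      Period: 300,
      EvaluationPeriods: 1,
      Threshold: 1,
      ComparisonOperator: "GreaterThanOrEqualToThreshold",
      TreatMissingData: "notBreaching"   // no "MongoError" lines => alarm stays OK
    }, function (err) {
      if (err) console.error(err);
    });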

CloudWatch Logs agent can't assume role to send logs to different account

Submitted by 陌路散爱 on 2019-12-11 04:45:38
Question: I have 2 AWS accounts. Account A: EC2 instances with the awslogs client from Amazon. Account B: centralized logging account. I want to send logs from the EC2 instances with the awslogs client (in account A) to CloudWatch Logs in another account (account B). It works fine when I create an IAM user in Account B and set up the AWS credential key in awscli.conf, but I do not want keys to be hardcoded, so I'm trying to assume a role as follows: IAM role in Account B (the CloudWatch …
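For reference, a hedged sketch of the cross-account wiring this is aiming at (account IDs and role names below are placeholders, not taken from the question). The role in Account B trusts the instance role in Account A:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111111111111:role/ec2-awslogs-role" },
        "Action": "sts:AssumeRole"
      }]
    }

and, since the awslogs agent is built on botocore, /etc/awslogs/awscli.conf in Account A can point at that role without any hardcoded keys, using botocore's role_arn and credential_source options:

    [default]
    region = us-east-1
    role_arn = arn:aws:iam::222222222222:role/central-logging-role
    credential_source = Ec2InstanceMetadata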

AWS CloudWatch Logs limit

Submitted by 不想你离开。 on 2019-12-10 19:10:40
Question: I am trying to find a centralized solution to move my application logging off the database (RDS). I was thinking of using CloudWatch Logs, but noticed that there is a limit for PutLogEvents requests: "The maximum rate of a PutLogEvents request is 5 requests per second per log stream." Even if I break my logs into many streams (based on EC2 instance and log type: error, info, warning, debug), the limit of 5 requests per second per stream is still very restrictive for an active application. The other solution is to somehow …
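One common way to live with that limit, sketched below with placeholder group and stream names: buffer lines locally and flush them as one PutLogEvents batch per interval, so the per-stream request rate counts batches rather than individual lines.

    var AWS = require("aws-sdk");
    var logs = new AWS.CloudWatchLogs({ region: "us-east-1" });

    var buffer = [];
    var sequenceToken = null; // returned by the previous PutLogEvents call

    function log(message) {
      // Date.now() stamps keep the batch in the chronological order the API requires
      buffer.push({ timestamp: Date.now(), message: message });
    }

    setInterval(function () {
      if (buffer.length === 0) return;
      var params = {
        logGroupName: "/app/example",        // placeholder names
        logStreamName: "web-1",
        logEvents: buffer.splice(0, 10000)   // API cap: 10,000 events per call
      };
      if (sequenceToken) params.sequenceToken = sequenceToken;
      logs.putLogEvents(params, function (err, data) {
        if (err) return console.error(err);
        sequenceToken = data.nextSequenceToken;
      });
    }, 1000); // one request per second per stream, well under the limit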

How does Amazon CloudWatch batch logs when streaming to AWS Lambda?

Submitted by 允我心安 on 2019-12-10 18:28:16
Question: The AWS documentation indicates that multiple log event records are provided to Lambda when streaming logs from CloudWatch: "logEvents — The actual log data, represented as an array of log event records. The 'id' property is a unique identifier for every log event." How does CloudWatch group these logs? By time? By count? Randomly, from my perspective? Answer 1: Some AWS services allow you to configure the log intervals, such as Elastic Load Balancing. There's a choice between five- and sixty-minute log …
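For reference, a decoded subscription payload looks roughly like the sketch below (all values are placeholders). However CloudWatch chose to group the events, the batch arrives as a single logEvents array:

    {
      "messageType": "DATA_MESSAGE",
      "owner": "123456789012",
      "logGroup": "/aws/lambda/example",
      "logStream": "2019/12/10/[$LATEST]abcd1234",
      "subscriptionFilters": ["example-filter"],
      "logEvents": [
        { "id": "eventid-1", "timestamp": 1575988096000, "message": "first line" },
        { "id": "eventid-2", "timestamp": 1575988096042, "message": "second line" }
      ]
    }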

AWS CloudWatch log subscription filters decode

Submitted by ぃ、小莉子 on 2019-12-10 08:40:49
Question: I am using a CloudWatch Logs subscription filter to stream to Lambda and publish a message to an SNS topic, but it outputs a garbled message that can't be decoded successfully. My output: k %" jVbB. Without decoding, the event looks like this:

    { "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }

My code is below, using Node.js (completed so it runs: the payload is gzip-compressed as well as base64-encoded, so it must be gunzipped after the base64 decode, which is what the garbled output was missing):

    console.log("Loading function");
    var AWS = require("aws-sdk"); // kept for the SNS publish step
    var zlib = require("zlib");
    exports.handler = function (event, context) {
      // base64-decode first, then gunzip, then the JSON payload is readable
      var compressed = Buffer.from(event.awslogs.data, "base64");
      zlib.gunzip(compressed, function (err, decompressed) {
        if (err) return context.fail(err);
        console.log("Decoded payload:", decompressed.toString("utf8"));
        context.succeed();
      });
    };

Create a logStream for each log file in CloudWatch Logs

Submitted by 北战南征 on 2019-12-07 04:45:18
Question: I use the AWS CloudWatch Logs agent to push my application logs to CloudWatch. In the CloudWatch Logs config file inside my EC2 instance, I have this entry:

    [/scripts/application]
    datetime_format = %Y-%m-%d %H:%M:%S
    file = /workingdir/customer/logfiles/*.log
    buffer_duration = 5000
    log_stream_name = {instance_id}
    initial_position = start_of_file
    log_group_name = /scripts/application

According to this configuration, all log files in the workingdir directory are being sent to CloudWatch Logs in the …
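As far as I know, the classic agent has no per-file substitution token for log_stream_name ({instance_id}, {hostname} and {ip_address} are the documented ones), so one workaround is a separate stanza per file, each naming its own stream. A sketch with placeholder paths and names:

    [/scripts/application-customer-a]
    file = /workingdir/customer/logfiles/customer-a.log
    log_stream_name = {instance_id}-customer-a
    log_group_name = /scripts/application
    datetime_format = %Y-%m-%d %H:%M:%S
    initial_position = start_of_file

    [/scripts/application-customer-b]
    file = /workingdir/customer/logfiles/customer-b.log
    log_stream_name = {instance_id}-customer-b
    log_group_name = /scripts/application
    datetime_format = %Y-%m-%d %H:%M:%S
    initial_position = start_of_file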

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

Submitted by ↘锁芯ラ on 2019-12-06 20:44:36
Question: I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example is:

    2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
    {
      "key": "lessonrecords-create",
      "correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
      "result": {
        "id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
        "status": "NA",
        "private": false,
        "note": "Test note",
        "time": "2016-02-01T01:24:00.000Z", …
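One relevant constraint: a JSON metric filter pattern only matches events that are valid JSON in their entirety, so the leading text line would defeat it; either the payload has to be logged as a single self-contained JSON line, or the filter has to fall back to a plain term pattern such as "Command completed lessonrecords-create". A sketch of the JSON variant (group, filter, and metric names are placeholders; the field name is taken from the example above):

    aws logs put-metric-filter \
      --log-group-name /my/app \
      --filter-name lessonrecords-create-count \
      --filter-pattern '{ $.key = "lessonrecords-create" }' \
      --metric-transformations metricName=LessonRecordsCreated,metricNamespace=LogMetrics,metricValue=1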

AWS CloudWatch: EndpointConnectionError: Could not connect to the endpoint URL

Submitted by 时光毁灭记忆、已成空白 on 2019-12-06 03:57:32
Question: I just followed these instructions (Link) to get AWS CloudWatch Logs installed on my EC2 instance:

    1. I updated my repositories: sudo yum update -y
    2. I installed the awslogs package: sudo yum install -y awslogs
    3. I edited /etc/awslogs/awscli.conf, confirming that my AZ is us-west-2b on the EC2 page
    4. I left the default configuration of the /etc/awslogs/awslogs.conf file as is, confirming that the default path indeed has logs being written to it
    5. I checked the /var/log/awslogs.log file and it is …
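A note on step 3 that likely explains the error: awscli.conf takes a region, not an availability zone, so entering us-west-2b makes the agent build an endpoint hostname that does not exist, which surfaces as EndpointConnectionError. A sketch of the expected file for this setup:

    [plugins]
    cwlogs = cwlogs
    [default]
    region = us-west-2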

Missing log lines when writing to cloudwatch from ECS Docker containers

Submitted by 老子叫甜甜 on 2019-12-05 23:36:51
Question: (Docker container on AWS ECS exits before all the logs are printed to CloudWatch Logs.) Why are some streams of a CloudWatch Logs group incomplete (i.e., the Fargate Docker container exits successfully but the logs stop being updated abruptly)? I'm seeing this intermittently in almost all log groups, though not on every log stream/task run. I'm running on version 1.3.0. Description: A Dockerfile runs Node.js or Python scripts using the CMD command. These are not servers/long-running processes, …
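One hedged mitigation for short-lived containers, assuming a Node.js entrypoint with buffered stdout (main() below is a hypothetical entry function, not from the question): wait for stdout to drain before letting the process exit, so the awslogs log driver has a chance to ship the final lines.

    // A minimal sketch, not a guaranteed fix: give stdout a chance to drain
    // before the container exits, so the last log lines reach the driver.
    function flushAndExit(code) {
      if (process.stdout.writableLength === 0) {
        process.exit(code);
      } else {
        process.stdout.once("drain", function () { process.exit(code); });
      }
    }

    main()
      .then(function () { flushAndExit(0); })
      .catch(function (err) { console.error(err); flushAndExit(1); });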