amazon-cloudwatchlogs

AWS Cloudwatch logs with Docker Container - NoCredentialProviders: no valid providers in chain

Submitted by 痴心易碎 on 2019-12-05 03:58:51
My docker-compose file:

    version: '2'
    services:
      scraper:
        build: ./Scraper/
        logging:
          driver: "awslogs"
          options:
            awslogs-region: "eu-west-1"
            awslogs-group: "doctors-logs"
            awslogs-stream: "scrapers-stream"
        volumes:
          - ./Scraper/spiders:/spiders

I have added my AWS credentials to my Mac using the aws configure command, and the credentials are stored correctly in ~/.aws/credentials. When I run docker-compose up I get the following error:

    ERROR: for scraper Cannot start service scraper: Failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
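The awslogs driver runs inside the Docker daemon, not in the client that executes docker-compose, so the ~/.aws/credentials file written by aws configure is never consulted (on Docker for Mac the daemon even lives inside a VM). A minimal sketch of one common fix on a systemd-based Linux host, placing credentials in the daemon's environment — the drop-in path is an assumption and the inline keys are placeholders:

    # Hypothetical systemd drop-in that exposes AWS credentials to the
    # Docker daemon so the awslogs logging driver can authenticate.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
    [Service]
    Environment="AWS_ACCESS_KEY_ID=AKIA..."
    Environment="AWS_SECRET_ACCESS_KEY=..."
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker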

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

Submitted by 孤者浪人 on 2019-12-05 01:27:38
I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example:

    2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
    {
      "key": "lessonrecords-create",
      "correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
      "result": {
        "id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
        "status": "NA",
        "private": false,
        "note": "Test note",
        "time": "2016-02-01T01:24:00.000Z",
        "updatedAt": "2016-07-24T21:08:07.879Z",
        "createdAt": "2016-07-24T21:08:07.879Z",
        "authorId": null,
        "lessonId":
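One caveat worth knowing here: CloudWatch's JSON filter syntax ({ $.key = "..." }) only matches events that are valid JSON in their entirety, so mixed text-plus-JSON events like the one above can only be matched with plain term patterns. A sketch of a metric filter built on the text prefix via the AWS CLI — the log group, filter, and metric names are made up:

    # Count completions by matching the quoted text term; JSON selectors
    # would not parse this mixed text/JSON event.
    aws logs put-metric-filter \
        --log-group-name my-app-logs \
        --filter-name lessonrecords-create-completed \
        --filter-pattern '"Command completed lessonrecords-create"' \
        --metric-transformations \
            metricName=LessonRecordsCreated,metricNamespace=MyApp,metricValue=1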

Missing log lines when writing to cloudwatch from ECS Docker containers

Submitted by 只谈情不闲聊 on 2019-12-04 04:03:18
(Docker container on AWS ECS exits before all the logs are printed to CloudWatch Logs.) Why are some streams of a CloudWatch Logs group incomplete (i.e., the Fargate Docker container exits successfully but the logs stop being updated abruptly)? I see this intermittently, in almost all log groups, but not on every log stream/task run. I'm running on platform version 1.3.0.

Description: a Dockerfile runs Node.js or Python scripts using the CMD command. These are not servers/long-running processes, and my use case requires the containers to exit when the task completes. Sample Dockerfile:

    FROM node:6
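A mitigation often suggested for short-lived awslogs-driven containers (an assumption here, not a confirmed answer from this thread) is to keep the container alive for a moment after the script finishes, giving the logging driver time to flush its buffered stdout. A sketch of such an entrypoint wrapper, with /app/task.js standing in for the real script:

    #!/bin/bash
    # Hypothetical entrypoint: run the short-lived task, pause so the last
    # buffered log lines can be shipped, then propagate the exit status.
    node /app/task.js
    status=$?
    sleep 10
    exit $status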

My AWS Cloudwatch bill is huge. How do I work out which log stream is causing it?

Submitted by 蹲街弑〆低调 on 2019-12-03 19:52:30
Question: I got a $1,200 invoice from Amazon for CloudWatch services last month (specifically for 2 TB of log data ingestion in "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the CloudWatch section of the AWS Console and can see that one of my log groups used about 2 TB of data, but there are thousands of different log streams in that log group; how can I tell which one used that amount of data?

Answer 1: On the CloudWatch console, use the IncomingBytes
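The IncomingBytes metric referenced in the answer is reported with a LogGroupName dimension, so it pins down the expensive group rather than the individual stream; stream-level usage then has to be inferred within that group. A sketch of summing one month of ingestion for a single group via the CLI — the group name and date range are placeholders:

    # Sum 30 days of ingested bytes for one log group; repeat per group
    # to find the one driving the PutLogEvents charge.
    aws cloudwatch get-metric-statistics \
        --namespace AWS/Logs \
        --metric-name IncomingBytes \
        --dimensions Name=LogGroupName,Value=my-log-group \
        --start-time 2019-11-01T00:00:00Z \
        --end-time 2019-12-01T00:00:00Z \
        --period 2592000 \
        --statistics Sum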

CloudWatch logs acting weird

Submitted by 只愿长相守 on 2019-12-03 10:35:21
Question: I have two log files with multi-line log statements. Both of them have the same datetime format at the beginning of each log statement. The configuration looks like this:

    state_file = /var/lib/awslogs/agent-state

    [/opt/logdir/log1.0]
    datetime_format = %Y-%m-%d %H:%M:%S
    file = /opt/logdir/log1.0
    log_stream_name = /opt/logdir/logs/log1.0
    initial_position = start_of_file
    multi_line_start_pattern = {datetime_format}
    log_group_name = my.log.group

    [/opt/logdir/log2-console.log]
    datetime_format = %Y-%m-
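If the {datetime_format} reference is misbehaving, one thing to try (an assumption, not this thread's confirmed fix) is spelling the start-of-statement pattern out as an explicit regex, so multi-line grouping no longer depends on the datetime_format substitution:

    # Explicit regex equivalent for lines starting with "YYYY-MM-DD HH:MM:SS"
    multi_line_start_pattern = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}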

Set expiration of CloudWatch Log Group for Lambda Function

Submitted by ≯℡__Kan透↙ on 2019-12-03 09:36:32
By default when I create a Lambda function, the CloudWatch Log Group is set to Never Expire. Is it possible to set the expiration (say, 14 days) so I don't have to set it manually from the console after creation?

Update #1: thanks to @jens walter's answer, this is a code snippet of how to solve the problem:

    Resources:
      LambdaFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs6.10
          CodeUri: <your code uri>
          Policies: <your policies>
      LambdaFunctionLogGroup:
        Type: "AWS::Logs::LogGroup"
        DependsOn: "LambdaFunction"
        Properties:
          RetentionInDays: 14
          LogGroupName:
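The excerpt cuts off at LogGroupName; for this log group to take over the one Lambda would otherwise create, its name has to match the /aws/lambda/<function name> convention. A minimal sketch of that last resource, assuming the LambdaFunction resource above (the !Join expression is one common way to derive the name, not necessarily the answer's exact wording):

    LambdaFunctionLogGroup:
      Type: "AWS::Logs::LogGroup"
      DependsOn: "LambdaFunction"
      Properties:
        RetentionInDays: 14
        # Assumption: Lambda logs to /aws/lambda/<function name> by default
        LogGroupName: !Join ["", ["/aws/lambda/", !Ref LambdaFunction]]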

AWS Elastic Beanstalk: Add custom logs to CloudWatch?

Submitted by 半城伤御伤魂 on 2019-12-03 03:47:40
Question: How do I add custom logs to CloudWatch? Default logs are sent, but how do I add a custom one? I already added a file like this (in .ebextensions):

    files:
      "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf":
        mode: "000755"
        owner: root
        group: root
        content: |
          /var/app/current/logs/*
      "/opt/elasticbeanstalk/tasks/taillogs.d/cloud-init.conf":
        mode: "000755"
        owner: root
        group: root
        content: |
          /var/app/current/logs/*

As I added bundlelogs.d and taillogs.d, these custom logs are now tailed or retrieved
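Those task files only affect the tail/bundle logs you can request from the Elastic Beanstalk console; shipping a custom file to CloudWatch goes through the awslogs agent instead. A sketch of an .ebextensions config for that, assuming instance log streaming is enabled and that /var/app/current/logs/app.log is the file to stream — the file name and the group-naming scheme are assumptions modeled on AWS's documented examples:

    # Hypothetical .ebextensions/custom-logs.config
    files:
      "/etc/awslogs/config/applogs.conf":
        mode: "000644"
        owner: root
        group: root
        content: |
          [/var/app/current/logs/app.log]
          log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/logs/app.log"]]}`
          log_stream_name = {instance_id}
          file = /var/app/current/logs/app.log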

How to Send Kubernetes Logs to AWS CloudWatch?

Submitted by 拟墨画扇 on 2019-12-03 00:15:24
AWS CloudWatch Logs in Docker: setting an AWS CloudWatch Logs driver in Docker is done with log-driver=awslogs and log-opt, for example:

    #!/bin/bash
    docker run \
        --log-driver=awslogs \
        --log-opt awslogs-region=eu-central-1 \
        --log-opt awslogs-group=whatever-group \
        --log-opt awslogs-stream=whatever-stream \
        --log-opt awslogs-create-group=true \
        wernight/funbox \
        fortune

My problem: I would like to use AWS CloudWatch Logs in a Kubernetes cluster, where each pod contains a few Docker containers. Each deployment would have a separate log group, and each container would have a separate stream. I
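One common way to get Kubernetes logs into CloudWatch (an addition here, not something the excerpt confirms) is a fluentd DaemonSet that tails container logs on every node and forwards them; per-deployment groups and per-container streams then become fluentd routing configuration rather than a docker run flag. A minimal sketch, assuming the fluent/fluentd-kubernetes-daemonset CloudWatch image and its AWS_REGION/LOG_GROUP_NAME variables, with the node IAM role granted the necessary logs permissions:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-cloudwatch
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: fluentd-cloudwatch
      template:
        metadata:
          labels:
            app: fluentd-cloudwatch
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch
              env:
                - name: AWS_REGION
                  value: eu-central-1
                - name: LOG_GROUP_NAME
                  value: kubernetes-logs  # assumption: one group for the cluster
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                - name: containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: containers
              hostPath:
                path: /var/lib/docker/containers

Splitting groups per deployment would mean overriding the image's default fluent.conf with custom match rules keyed on Kubernetes metadata, rather than relying on the single LOG_GROUP_NAME default.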