AWS CloudWatchLog limit

不想你离开。 Submitted 2019-12-10 19:10:40

Question


I am trying to find a centralized solution to move my application logging out of the database (RDS).

I was thinking of using CloudWatch Logs, but noticed that there is a limit on PutLogEvents requests:

The maximum rate of a PutLogEvents request is 5 requests per second per log stream.

Even if I break my logs into many streams (based on EC2 instance and log type: error, info, warning, debug), the limit of 5 requests per second is still very restrictive for an active application.

The other solution is to somehow accumulate logs and send PutLogEvents with a batch of log records, but that would force me to use a database to accumulate those records.

So the questions are:

  1. Maybe I'm wrong and the limit of 5 requests per second is not so restrictive?
  2. Is there any other solution that I should consider, for example DynamoDB?

Answer 1:


PutLogEvents is designed to put several events per request by definition (as per its name: PutLogEvent"S") :) The CloudWatch Logs agent does this batching on its own, so you don't have to worry about it.
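To illustrate the batching point: the 5-requests-per-second limit applies to PutLogEvents *calls*, and each call can carry many events, so an in-memory buffer (no database needed) already gets you well past the question's concern. Below is a minimal sketch; `sender` is a placeholder for the real call, e.g. (assumption about wiring, not a prescribed API) `lambda events: boto3.client("logs").put_log_events(logGroupName=..., logStreamName=..., logEvents=events)`.

```python
import time


class LogBuffer:
    """Accumulate log records in memory and flush them as one batch.

    `sender` is any callable taking a list of event dicts; in a real
    deployment it would wrap a PutLogEvents call (assumption).
    """

    def __init__(self, sender, max_batch=100):
        self.sender = sender
        self.max_batch = max_batch
        self.events = []

    def log(self, message):
        # CloudWatch event timestamps are milliseconds since the epoch.
        self.events.append(
            {"timestamp": int(time.time() * 1000), "message": message})
        if len(self.events) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.events:
            # PutLogEvents requires events sorted by timestamp.
            self.events.sort(key=lambda e: e["timestamp"])
            self.sender(self.events)
            self.events = []
```

With `max_batch=100`, one hundred log records cost a single request, so 5 requests/second covers 500 events/second per stream under these assumptions.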

However, please note: I don't recommend generating too much log output (e.g. don't run debug mode in production), as CloudWatch Logs can become quite expensive as your log volume grows.




Answer 2:


My advice would be to use a Logstash setup on an AWS instance.

Alternatively, you can run Logstash on another existing instance or in a container.

https://www.elastic.co/products/logstash

It is designed for exactly this purpose and does it wonderfully.

CloudWatch is not primarily designed for this use case.

I hope this helps.




Answer 3:


If you are calling this API directly from your application: the short answer is that you need to batch your log events (the limit of 5 requests per second applies to PutLogEvents calls, not to individual events).

If you are writing the logs to disk and then pushing them, there is already an agent that knows how to push them (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/QuickStartEC2Instance.html).
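For reference, the agent linked above is driven by an INI-style configuration file; a minimal sketch is below (the file path, group name, and tuning values are placeholders, not recommendations). Note the `batch_count`/`batch_size` settings, which control exactly the batching discussed here.

```
[/var/log/app.log]
file = /var/log/app.log
log_group_name = my-app
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
buffer_duration = 5000
batch_count = 1000
batch_size = 32768
```

The agent buffers for up to `buffer_duration` milliseconds and ships up to `batch_count` events (or `batch_size` bytes) per PutLogEvents call, which is why the per-stream request limit rarely bites when using it.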

Meta: I would suggest that you prototype this and ensure that it works for the log volume you have. Also keep in mind that, because of how the CloudWatch API works, only one application/user can push to a log stream at a time (see the sequence token you have to pass in), so you probably need to use multiple streams, e.g. one per user or per log type, to ensure your applications are not competing for the same stream.
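The sequence-token contention mentioned above can also be handled defensively: when CloudWatch rejects a stale token, the error reports the token it expected, and the caller can retry with it. The sketch below abstracts the actual API call behind a `send` callable and a hypothetical `TokenMismatch` exception; with boto3 you would map this onto the service's invalid-sequence-token error (an assumption about how you wire it up, not a definitive implementation).

```python
class TokenMismatch(Exception):
    """Stands in for CloudWatch's invalid-sequence-token error (assumption);
    carries the token the service reported as expected."""

    def __init__(self, expected):
        super().__init__(f"expected sequence token {expected}")
        self.expected = expected


def send_with_retry(send, token=None, max_attempts=3):
    """Call send(token); on a token mismatch, retry with the expected token.

    `send` is any callable send(sequence_token) -> next_sequence_token.
    """
    for _ in range(max_attempts):
        try:
            return send(token)
        except TokenMismatch as exc:
            # Another writer advanced the stream; resynchronize and retry.
            token = exc.expected
    raise RuntimeError("could not synchronize sequence token")
```

This is a band-aid for occasional contention; if two writers fight constantly, the per-writer-per-stream advice in the answer above is the real fix.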

Meta meta: think about how your application behaves if the logging subsystem fails, and whether you can live with the possibility of losing logs (i.e. is it critical that you are always guaranteed to get the logs?). This will probably drive what you do and which solution you ultimately pick.



Source: https://stackoverflow.com/questions/36642047/aws-cloudwatchlog-limit
