How to view AWS logs in real time (like tail -f)

执笔经年 2021-01-31 01:13

I can view the log using the following command.

aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100

What can I use to follow the log in real time, like tail -f?
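In the meantime, a rough tail -f can be built on top of get-log-events by polling with the nextForwardToken from each response; CloudWatch returns the same token when there are no new events, so repeated polls just yield nothing new. The group/stream names below are placeholders and the helper function is my own sketch, not part of the AWS CLI:

```shell
#!/bin/sh
# One polling step: fetch a page of events as JSON.
# Pass the previous call's nextForwardToken to fetch only newer events.
poll_log_events() {
  group=$1; stream=$2; token=$3
  if [ -n "$token" ]; then
    set -- --next-token "$token"   # resume where the last poll stopped
  else
    set -- --start-from-head       # first poll: start at the oldest event
  fi
  aws logs get-log-events --log-group-name "$group" \
    --log-stream-name "$stream" --output json "$@"
}

# tail -f style loop (Ctrl-C to stop); requires the AWS CLI and jq:
#   TOKEN=""
#   while true; do
#     RESP=$(poll_log_events groupName streamName "$TOKEN")
#     echo "$RESP" | jq -r '.events[].message'
#     TOKEN=$(echo "$RESP" | jq -r '.nextForwardToken')
#     sleep 5
#   done
```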

11 Answers
  • 2021-01-31 01:59

    The aws cli does not provide a live tail -f option.

    The other tools mentioned here do provide a tailing feature; however, I tried them all (awslogs, cwtail) and found them frustrating. They were slow to download events, often unreliable, unhelpful when displaying JSON log data, and primitive in their query options.

    I wanted an extremely fast, simple log viewer that would let me instantly and easily see application errors and status. The CloudWatch Logs console viewer is slow, and CloudWatch Insights can take more than a minute for some pretty basic queries.

    So I created SenseLogs, a free AWS CloudWatch Logs viewer that runs entirely in your browser; no server-side services are required. SenseLogs transparently downloads log data and stores events in the browser application cache for immediate viewing, smooth infinite scrolling, and full-text queries. It has live tail with infinite back-scrolling. See https://github.com/sensedeep/senselogs/blob/master/README.md for details.

  • 2021-01-31 02:02

    Note that tailing an AWS log is now a supported feature of the official awscli, albeit only in awscli v2, which is not yet released. Tailing and following the logs (like tail -f) can now be accomplished with:

    aws logs tail $group_name --follow
    

    To install the v2 version, see the instructions on this page. It was implemented in this PR. To see it demonstrated at the last re:Invent conference, see this video.

    In addition to tailing the logs, it allows viewing the logs back to a specified time using the --since parameter, which accepts an absolute or relative time:

    aws logs tail $group_name --since 5d
    

    To keep the v1 and v2 versions of awscli separate, I installed awscli v2 into a separate python virtual environment and activate it only when I need to use awscli v2.
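    For what it's worth, the v2 tail subcommand also takes --filter-pattern and --format options that combine well with --follow (flag names taken from the awscli v2 help; double-check against aws logs tail help on your version):

    ```shell
    # follow only events matching a CloudWatch filter pattern, compact output
    # (--filter-pattern and --format are assumed from awscli v2's help text)
    aws logs tail $group_name --follow --filter-pattern ERROR --format short
    ```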

  • 2021-01-31 02:04

    To tail CloudWatch Logs effectively I created a tool called cw.

    It's super easy to install (it supports brew, snap, and scoop), fast (it's a native binary for your hardware architecture, with no intermediate runtime), and it has a set of features that make life easier.

    Your example with cw would be:

    cw tail -f groupName:streamName
    
  • 2021-01-31 02:05

    Here's a bash script that you can use. The script requires the AWS CLI and jq.

    #!/bin/bash
    
    # Bail out if anything fails, or if we do not have the required variables set
    set -o errexit -o nounset
    
    LOG_GROUP_NAME=$1
    LOG_BEGIN=$(date --date "${2-now}" +%s)
    LOG_END=$(date --date "${3-2 minutes}" +%s)
    LOG_INTERVAL=5
    LOG_EVENTIDS='[]'
    
    while (( $(date +%s) < $LOG_END + $LOG_INTERVAL )); do
      sleep $LOG_INTERVAL
      LOG_EVENTS=$(aws logs filter-log-events --log-group-name $LOG_GROUP_NAME --start-time "${LOG_BEGIN}000" --end-time "${LOG_END}000" --output json)
      echo "$LOG_EVENTS" | jq -rM --argjson eventIds "$LOG_EVENTIDS" '.events[] as $event | select($eventIds | contains([$event.eventId]) | not) | $event | "\(.timestamp / 1000 | todateiso8601) \(.message)"'
      LOG_EVENTIDS=$(echo "$LOG_EVENTS" | jq -crM --argjson eventIds "$LOG_EVENTIDS" '$eventIds + [.events[].eventId] | unique')
    done
    

    Usage: save the file, chmod +x it, and then run it: ./cloudwatch-logs-tail.sh log-group-name. The script also takes parameters for begin and end times, which default to now and 2 minutes respectively. You can specify any strings which can be parsed by date --date for these parameters.

    How it works: the script keeps a list of event IDs that have been displayed, which is empty to begin with. It queries CloudWatch Logs to get all log entries in the specified time interval, and displays those which do not match our list of event IDs. Then it saves all of the event IDs for the next iteration.
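    The dedup step can be exercised locally with hand-written JSON (the two sample events below are made up): event "a" is already in the seen list, so only event "b" should be printed.

    ```shell
    #!/bin/sh
    # Fake response: two events, one of which ("a") was already displayed.
    LOG_EVENTS='{"events":[
      {"eventId":"a","timestamp":1612051200000,"message":"first"},
      {"eventId":"b","timestamp":1612051205000,"message":"second"}]}'
    LOG_EVENTIDS='["a"]'

    # Same jq filter as in the script; only event "b" survives the select().
    echo "$LOG_EVENTS" | jq -rM --argjson eventIds "$LOG_EVENTIDS" \
      '.events[] as $event | select($eventIds | contains([$event.eventId]) | not)
       | $event | "\(.timestamp / 1000 | todateiso8601) \(.message)"'
    # prints: 2021-01-31T00:00:05Z second
    ```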

    The script polls every few seconds (set by LOG_INTERVAL in the script), and keeps polling for one more interval past the end time to account for the delay between log ingestion and availability.

    Note that this script is not going to be great if you want to keep tailing the logs for more than a few minutes at a time, because the query results that it gets from AWS will keep getting bigger with every added log item. It's fine for quick runs though.

  • 2021-01-31 02:07

    I was really disappointed with awslogs and cwtail, so I made my own tool called Saw that efficiently streams CloudWatch logs to the console (and colorizes the JSON output).

    You can install it on macOS with:

    brew tap TylerBrock/saw
    brew install saw
    

    It has a bunch of nice features like the ability to automatically expand (indent) the JSON output (try running the tool with --expand):

    saw watch my_log_group --expand
    

    Got a Lambda you want to see error logs for? No problem:

    saw watch /aws/lambda/my_func --filter error 
    

    Saw is great because the output is easily readable and you can stream logs from an entire log group, not just a single stream in the group. Filtering and watching streams with a certain prefix is just as easy!
