how to view aws log real time (like tail -f)

执笔经年 2021-01-31 01:13

I can view the log using the following command.

aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100

What I want is to view the log in real time, like tail -f. How can I do that?

11 answers
  •  清酒与你
    2021-01-31 02:05

    Here's a bash script that you can use. The script requires the AWS CLI and jq.

    #!/bin/bash
    
    # Bail out if anything fails, or if we do not have the required variables set
    set -o errexit -o nounset
    
    LOG_GROUP_NAME=$1
    LOG_BEGIN=$(date --date "${2-now}" +%s)
    LOG_END=$(date --date "${3-2 minutes}" +%s)
    LOG_INTERVAL=5
    LOG_EVENTIDS='[]'
    
    while (( $(date +%s) < $LOG_END + $LOG_INTERVAL )); do
      sleep $LOG_INTERVAL
      LOG_EVENTS=$(aws logs filter-log-events --log-group-name "$LOG_GROUP_NAME" --start-time "${LOG_BEGIN}000" --end-time "${LOG_END}000" --output json)
      echo "$LOG_EVENTS" | jq -rM --argjson eventIds "$LOG_EVENTIDS" '.events[] as $event | select($eventIds | contains([$event.eventId]) | not) | $event | "\(.timestamp / 1000 | todateiso8601) \(.message)"'
      LOG_EVENTIDS=$(echo "$LOG_EVENTS" | jq -crM --argjson eventIds "$LOG_EVENTIDS" '$eventIds + [.events[].eventId] | unique')
    done
    

    Usage: save the file, chmod +x it, and then run it: ./cloudwatch-logs-tail.sh log-group-name. The script also takes parameters for begin and end times, which default to now and 2 minutes respectively. You can specify any strings which can be parsed by date --date for these parameters.
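
    Since the time arguments are just GNU date --date expressions, here is a quick sanity check of how the script turns them into epoch seconds (this assumes GNU date, as the script itself does):

```shell
#!/bin/bash
# Both window boundaries accept any expression GNU date can parse with
# --date; the script converts them to epoch seconds exactly like this:
begin=$(date --date "5 minutes ago" +%s)
end=$(date --date "10 minutes" +%s)
# "5 minutes ago" to "10 minutes" from now is a 15-minute (900 s) window.
window=$((end - begin))
echo "$window"
```

    So an invocation like ./cloudwatch-logs-tail.sh my-log-group "5 minutes ago" "10 minutes" (my-log-group is a made-up name) would cover that 15-minute window.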

    How it works: the script keeps a list of event IDs that have been displayed, which is empty to begin with. It queries CloudWatch Logs to get all log entries in the specified time interval, and displays those which do not match our list of event IDs. Then it saves all of the event IDs for the next iteration.
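
    To make the dedup step concrete, here is the same bookkeeping sketched in plain bash (the script does this with jq instead); the input format and event IDs e1..e3 are made up for illustration:

```shell
#!/bin/bash
# Input lines are "eventId message"; a message is printed only the first
# time its eventId is seen, mirroring the jq filter's eventId check.
declare -A seen
emit_new_events() {
  local id msg
  while read -r id msg; do
    if [[ -z "${seen[$id]:-}" ]]; then
      echo "$msg"
      seen[$id]=1
    fi
  done
}
# Two simulated polls with overlapping results (e2 appears in both),
# run in one subshell so the seen array persists between calls:
output=$(
  emit_new_events <<< $'e1 hello\ne2 world'
  emit_new_events <<< $'e2 world\ne3 again'
)
echo "$output"
```

    The second poll re-fetches e2, but only the new event e3 is printed.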

    The script polls every few seconds (set by LOG_INTERVAL in the script), and keeps polling for one more interval past the end time to account for the delay between log ingestion and availability.
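
    The loop's timing can be seen in isolation with a toy run that makes no AWS calls, using a 1-second interval and a window that ends 2 seconds from now; the condition keeps polling for one extra interval past the end time:

```shell
#!/bin/bash
# Same loop shape as the script, with the AWS call removed and the
# numbers shrunk so the demo finishes in about 3 seconds.
LOG_INTERVAL=1
LOG_END=$(( $(date +%s) + 2 ))
polls=0
while (( $(date +%s) < LOG_END + LOG_INTERVAL )); do
  sleep $LOG_INTERVAL
  polls=$((polls + 1))
done
echo "polled $polls times"
```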

    Note that this script is not going to be great if you want to keep tailing the logs for more than a few minutes at a time, because the query results that it gets from AWS will keep getting bigger with every added log item. It's fine for quick runs though.
