How to trigger alert in Kubernetes using Prometheus Alert Manager

Submitted by 扶醉桌前 on 2020-05-16 20:11:28

Question


I have set up kube-prometheus in my cluster (https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus). It contains some default alerts, such as CoreDNSDown. How do I create my own alert?

Could anyone provide a sample example of an alert that will send an email to my Gmail account?

I followed Alert when docker container/pod is in Error or CrashLoopBackOff kubernetes, but I couldn't make it work.


Answer 1:


To send an alert to your Gmail account, you need to set up the Alertmanager configuration in a file, say alertmanager.yaml:

cat <<EOF > alertmanager.yaml
route:
  group_by: ['alertname']
  # Send all notifications to me.
  receiver: email-me

receivers:
- name: email-me
  email_configs:
  - to: $GMAIL_ACCOUNT
    from: $GMAIL_ACCOUNT
    smarthost: smtp.gmail.com:587
    auth_username: "$GMAIL_ACCOUNT"
    auth_identity: "$GMAIL_ACCOUNT"
    auth_password: "$GMAIL_AUTH_TOKEN"
EOF
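
Because the heredoc delimiter (EOF) is unquoted, the shell expands the $GMAIL_* variables when writing the file, so export them first. A minimal sketch with placeholder values (for Gmail, the auth token is typically an app password rather than your normal login password):

export GMAIL_ACCOUNT="your.name@gmail.com"   # placeholder address
export GMAIL_AUTH_TOKEN="your-app-password"  # placeholder credential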

Now, since you're using kube-prometheus, you will already have a secret named alertmanager-main that holds the default Alertmanager configuration. You need to recreate that secret with the new configuration.
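
Note that kubectl create will fail if the secret already exists, and a default kube-prometheus install already ships an alertmanager-main secret, so delete the old one first (this step is spelled out in Answer 2 below):

kubectl delete secret alertmanager-main -n monitoring

Then create the new secret: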

kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml -n monitoring

Now your Alertmanager is set to send an email whenever it receives an alert from Prometheus.

Now you need to set up an alert that will trigger the email. You can set up the DeadMansSwitch alert, which fires unconditionally and is used to check that your alerting pipeline works end to end:

groups:
- name: meta
  rules:
    - alert: DeadMansSwitch
      expr: vector(1)
      labels:
        severity: critical
      annotations:
        description: This is a DeadMansSwitch meant to ensure that the entire Alerting
          pipeline is functional.
        summary: Alerting DeadMansSwitch

After that, the DeadMansSwitch alert will fire and an email should arrive in your inbox.
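
To verify that the alert is actually firing, you can port-forward the Prometheus UI and check its Alerts page; this sketch assumes the default kube-prometheus service name prometheus-k8s:

kubectl port-forward -n monitoring svc/prometheus-k8s 9090
# then open http://localhost:9090/alerts in a browser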

Reference link:

https://coreos.com/tectonic/docs/latest/tectonic-prometheus-operator/user-guides/configuring-prometheus-alertmanager.html

EDIT:

The DeadMansSwitch alert should go in a ConfigMap that your Prometheus is reading. I will share the relevant snippets from my Prometheus configuration here:

"spec": {
        "alerting": {
            "alertmanagers": [
                {
                    "name": "alertmanager-main",
                    "namespace": "monitoring",
                    "port": "web"
                }
            ]
        },
        "baseImage": "quay.io/prometheus/prometheus",
        "replicas": 2,
        "resources": {
            "requests": {
                "memory": "400Mi"
            }
        },
        "ruleSelector": {
            "matchLabels": {
                "prometheus": "prafull",
                "role": "alert-rules"
            }
        },

The above config is from my prometheus.json file; it holds the name of the Alertmanager to use and the ruleSelector, which selects rules based on the prometheus and role labels. So my rule ConfigMap looks like:

kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-rules
  namespace: monitoring
  labels:
    role: alert-rules
    prometheus: prafull
data:
  alert-rules.yaml: |+
    groups:
    - name: alerting_rules
      rules:
      - alert: LoadAverage15m
        expr: node_load15 >= 0.50
        labels:
          severity: major
        annotations:
          summary: "Instance {{ $labels.instance }} - high load average"
          description: "{{ $labels.instance }} (measured by {{ $labels.job }}) has high load average ({{ $value }}) over 15 minutes."

Replace the LoadAverage15m example rule in the above ConfigMap with the DeadMansSwitch rule, as shown below.
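
For illustration, the data section with the DeadMansSwitch rule from earlier substituted in would look like:

data:
  alert-rules.yaml: |+
    groups:
    - name: meta
      rules:
      - alert: DeadMansSwitch
        expr: vector(1)
        labels:
          severity: critical
        annotations:
          description: This is a DeadMansSwitch meant to ensure that the entire Alerting pipeline is functional.
          summary: Alerting DeadMansSwitch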




Answer 2:


If you are using kube-prometheus, it ships with an alertmanager-main secret and a Prometheus kind set up by default.

Step 1: Remove the default alertmanager-main secret:

kubectl delete secret alertmanager-main -n monitoring

Step 2: As Praful explained, create the secret with the new configuration:

cat <<EOF > alertmanager.yaml
route:
  group_by: ['alertname']
  # Send all notifications to me.
  receiver: email-me

receivers:
- name: email-me
  email_configs:
  - to: $GMAIL_ACCOUNT
    from: $GMAIL_ACCOUNT
    smarthost: smtp.gmail.com:587
    auth_username: "$GMAIL_ACCOUNT"
    auth_identity: "$GMAIL_ACCOUNT"
    auth_password: "$GMAIL_AUTH_TOKEN"
EOF

kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml -n monitoring
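
To confirm that the new configuration was stored, you can decode the secret as a sanity check (the backslash escapes the dot in the key name for jsonpath):

kubectl get secret alertmanager-main -n monitoring -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d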

Step 3: Add a new PrometheusRule:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: prometheus-podfail-rules
  namespace: monitoring
spec:
  groups:
  - name: ./podfail.rules
    rules:
    - alert: PodFailAlert
      expr: sum(kube_pod_container_status_restarts_total{container="ffmpeggpu"}) BY (container) > 10
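
Save the manifest to a file and apply it (the file name here is arbitrary):

kubectl apply -f prometheus-podfail-rules.yaml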

NB: The labels must include role: alert-rules, matching the ruleSelector specified in the Prometheus kind. To check that, use:

kubectl get prometheus k8s -n monitoring -o yaml
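
In a default kube-prometheus install, the relevant part of that output should look something like the following (your labels may differ):

  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules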


Source: https://stackoverflow.com/questions/53558057/how-to-trigger-alert-in-kubernetes-using-prometheus-alert-manager
