Logs sent to console using logback configuration in java app, not visible in Kubernetes using kubectl logs

Posted by 拜拜、爱过 on 2020-01-02 23:04:50

Question


I read in the Kubernetes docs that Kubernetes reads application logs from stdout and stderr in pods. I created a new application and configured it to send logs both to a remote Splunk HEC endpoint (using the splunk-logback jars) and to the console. By default, console logs in logback go to System.out, so they should be visible with kubectl logs. But that's not happening in my application.

My logback file:

<?xml version="1.0" encoding="UTF-8"?>

<configuration>

    <Appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
        <url>${splunk_hec_url}</url>
        <token>${splunk_hec_token}</token>
        <index>${splunk_app_token}</index>
        <disableCertificateValidation>true</disableCertificateValidation>
        <batch_size_bytes>1000000</batch_size_bytes>
        <batch_size_count>${batch_size_count}</batch_size_count>
        <send_mode>sequential</send_mode>

        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%msg</pattern>
        </layout>
    </Appender>

    <Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg</pattern>
        </encoder>
    </Appender>

    <Appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT" />
    </Appender>

    <root level="INFO">
        <appender-ref ref="SPLUNK"/>
        <appender-ref ref="ASYNC"/>
    </root>

</configuration>

I am able to see the logs in Splunk, and if I log in to the container and start my Java application manually, I can also see the logs in the terminal. But if I let the container start on its own by default, the logs only go to Splunk and I can't view them using kubectl logs <POD_NAME>

The Kubernetes YAML file for my logger app:

apiVersion: v1
kind: Pod
metadata:
  name: logging-pod
  labels:
    app: logging-pod
spec:
  containers:
  - name: logging-container
    image: logger-splunk:latest
    command: ["java", "-jar", "logger-splunk-1.0-SNAPSHOT.jar"]
    resources:
      requests:
        cpu: 1
        memory: 1Gi
      limits:
        cpu: 1
        memory: 1Gi

Answer 1:


According to the Kubernetes documentation, everything a containerized application writes to stdout and stderr is redirected to a JSON file by default. You can access it by using kubectl logs.

Let's test this feature by creating a simple pod that outputs numbers to stdout:

kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml

counter-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

where:
counter - name of the pod
count - name of the container inside "counter" pod

You can access the content of that file by running:

$ kubectl logs counter

You can access the log file of a previously crashed container in a pod with the following command:

$ kubectl logs counter --previous

In case of multiple containers in the pod, you should add the name of the container as follows:

$ kubectl logs counter -c count

When the pod is removed from the cluster, all its logs (current and previous) are also removed.

Make sure stdout is configured correctly in your application, and that output to stdout is not being silently dropped for any reason.
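As a sanity check on that last point, it is worth verifying that the ConsoleAppender both terminates each event with a newline and flushes it promptly. The fragment below is a sketch, not taken from the original question; %n and immediateFlush are standard logback options (immediateFlush defaults to true on the appender in recent logback versions):

```xml
<!-- Sketch: a ConsoleAppender that emits one complete, flushed line per event.
     Line-based log collectors (like the container runtime's logging driver)
     generally only forward finished lines, so the trailing %n matters. -->
<Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <immediateFlush>true</immediateFlush>
    <encoder>
        <pattern>%msg%n</pattern>
    </encoder>
</Appender>
```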




Answer 2:


OK, so this finally got resolved. The issue was that the logs were not being flushed as complete lines.

The %n was missing from the PatternLayout, so log events were written without a trailing newline. The output was sitting in a line buffer and never reached the console as finished log lines.
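Applied to the configuration from the question, the fix is just appending %n to the console encoder's pattern:

```xml
<Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <!-- %n appends the line separator so each log event is
             flushed to stdout as a complete line -->
        <pattern>%msg%n</pattern>
    </encoder>
</Appender>
```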



Source: https://stackoverflow.com/questions/50263724/logs-sent-to-console-using-logback-configuration-in-java-app-not-visible-in-kub
