Forwarding logs from Kubernetes to Splunk


Question


I'm pretty new to Kubernetes and don't have hands-on experience with it.

My team is facing an issue with the log format that Kubernetes pushes to Splunk.

The application writes its logs to stdout in this format:

{"logname" : "app-log", "level" : "INFO"}

Splunk eventually receives this format (splunkforwarder is used):

{
  "log" : "{\"logname\": \"app-log\", \"level\": \"INFO \"}",
  "stream" : "stdout",
  "time" : "2018-06-01T23:33:26.556356926Z" 
 }

This format makes it harder to query by properties in Splunk.

Is there any option in Kubernetes to forward the raw logs from the app rather than wrapping them in another JSON object?

I came across a post from Splunk about this, but the configuration there is done on the Splunk side.

Please let me know if there is any option on the Kubernetes side to send raw logs from the application.


Answer 1:


Kubernetes architecture provides three ways to gather logs:

1. Use a node-level logging agent that runs on every node.

You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.

The log format depends on the Docker settings. You need to set the log-driver parameter in /etc/docker/daemon.json on every node.

For example,

{
  "log-driver": "syslog"
}

or

{
  "log-driver": "json-file"
}
  • none - no logs are available for the container and docker logs does not return any output.
  • json-file - the logs are formatted as JSON. The default logging driver for Docker.
  • syslog - writes logging messages to the syslog facility.

For more options, check Docker's logging driver documentation.

2. Include a dedicated sidecar container for logging in an application pod.

You can use a sidecar container in one of the following ways:

  • The sidecar container streams application logs to its own stdout.
  • The sidecar container runs a logging agent, which is configured to pick up logs from an application container.

By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or journald, and each sidecar container prints the logs to its own stdout or stderr stream.
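
A minimal sketch of the first variant (the names, image, and file path below are illustrative, not from the question): the application writes its log file to a shared emptyDir volume, and a sidecar tails that file to its own stdout, where the node-level agent can pick it up.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar    # illustrative name
spec:
  volumes:
    - name: app-logs
      emptyDir: {}                   # shared scratch volume for the log file
  containers:
    - name: app
      image: my-app:latest           # placeholder for your application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-sidecar
      image: busybox
      # Stream the application's log file to this container's own stdout
      args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app

Note that logs streamed this way still pass through the node's logging driver, so with json-file they will again be wrapped in the {"log": ..., "stream": ..., "time": ...} envelope.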

3. Push logs directly to a backend from within an application.

You can implement cluster-level logging by exposing or pushing logs directly from every application.

For more information, check the official Kubernetes documentation on cluster-level logging.




Answer 2:


This week we had the same issue.

  1. Use a Splunk forwarder DaemonSet (a minimal sketch is shown after this list).

  2. Installing the plugin https://splunkbase.splunk.com/app/3743/ on Splunk will solve your issue.
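
A minimal sketch of the DaemonSet approach (the image name and mounted paths are assumptions, not from the answer; the splunkbase app above ships its own manifests and configuration): one forwarder pod runs on each node and watches the node's container log directory.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder             # illustrative name
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
        - name: forwarder
          image: splunk/universalforwarder:latest   # assumed image
          # Real deployments also need Splunk configuration (license acceptance,
          # outputs.conf / deployment server) via env vars or mounted config
          volumeMounts:
            # stdout/stderr of all containers on the node, as written by the runtime
            - name: container-logs
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: container-logs
          hostPath:
            path: /var/log/containers

On Docker-based nodes /var/log/containers holds symlinks into /var/lib/docker/containers, so a real deployment typically mounts that path as well.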




Answer 3:


Just want to share the solution we tried; this worked for our log structure:

SEDCMD-1_unjsonify = s/{"log":"(?:\\u[0-9]+)?(.*?)\\n","stream.*/\1/g
SEDCMD-2_unescapequotes = s/\\"/"/g
BREAK_ONLY_BEFORE={"logname":
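
For context: SEDCMD and BREAK_ONLY_BEFORE are props.conf settings applied on the Splunk side for the relevant sourcetype. The first SEDCMD strips the Docker JSON wrapper and keeps only the original application log line, the second unescapes the embedded quotes, and BREAK_ONLY_BEFORE tells Splunk to start a new event at each {"logname": marker.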


Source: https://stackoverflow.com/questions/50639744/forwarding-logs-from-kubernetes-to-splunk
