Question
I am using the ELK stack with Filebeat. I send logs from Filebeat to Logstash, from there to Elasticsearch, and visualize them in Kibana. Below is the JSON for one event as Kibana displays it:
{
  "_index": "filebeat-6.4.2-2018.10.30",
  "_type": "doc",
  "_source": {
    "@timestamp": "2018-10-30T09:15:31.697Z",
    "fields": {
      "server": "server1"
    },
    "prospector": {
      "type": "log"
    },
    "host": {
      "name": "kushmathapa"
    },
    "message": "{ \"datetime\": \"2018-10-23T18:04:00.811660Z\", \"level\": \"ERROR\", \"message\": \"No response from remote. Handshake timed out or transport failure detector triggered.\" }",
    "source": "C:\\logs\\batch-portal\\error.json",
    "input": {
      "type": "log"
    },
    "beat": {
      "name": "kushmathapa",
      "hostname": "kushmathapa",
      "version": "6.4.2"
    },
    "offset": 0,
    "tags": [
      "lighthouse1",
      "controller",
      "trt"
    ]
  },
  "fields": {
    "@timestamp": [
      "2018-10-30T09:15:31.697Z"
    ]
  }
}
I want it to show up as:
{
  "_index": "filebeat-6.4.2-2018.10.30",
  "_type": "doc",
  "_source": {
    "@timestamp": "2018-10-30T09:15:31.697Z",
    "fields": {
      "server": "server1"
    },
    "prospector": {
      "type": "log"
    },
    "host": {
      "name": "kushmathapa"
    },
    "datetime": "2018-10-23T18:04:00.811660Z",
    "log_level": "ERROR",
    "message": "{ \"No response from remote. Handshake timed out or transport failure detector triggered.\" }",
    "source": "C:\\logs\\batch-portal\\error.json",
    "input": {
      "type": "log"
    },
    "beat": {
      "name": "kushmathapa",
      "hostname": "kushmathapa",
      "version": "6.4.2"
    },
    "offset": 0,
    "tags": [
      "lighthouse1",
      "controller",
      "trt"
    ]
  },
  "fields": {
    "@timestamp": [
      "2018-10-30T09:15:31.697Z"
    ]
  }
}
My beats.config currently looks like this:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug { metadata => true }
  }
}
I have applied filters, but I seem to be missing something.
Answer 1:
You can go with a config file that looks something like the one below. In the grok filter, add a pattern that matches the format of the log you want to ingest into Elasticsearch (refer to the example config; a sample line that this pattern would match is shown after it).
input {
  beats {
    port => 5044
    id => "my_plugin_id"
    tags => ["logs"]
    type => "abc"
  }
}
filter {
  if [type] == "abc" {
    mutate {
      gsub => [ "message", "\r", "" ]
    }
    grok {
      break_on_match => true
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log_level}%{SPACE}%{GREEDYDATA:message}"
        ]
      }
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}
output {
  if [type] == "abc" {
    elasticsearch {
      hosts => ["ip of elasticsearch:port_number of elasticsearch"]
      index => "logfiles"
    }
  }
  else {
    elasticsearch {
      hosts => ["ip of elasticsearch:port_number of elasticsearch"]
      index => "task_log"
    }
  }
  stdout {
    codec => rubydebug { metadata => true }
  }
}
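For reference, the grok pattern above expects plain-text lines in "timestamp level message" form. A hypothetical line it would match (not the JSON-wrapped message from the question) looks like this:
2018-10-23T18:04:00.811660Z ERROR No response from remote. Handshake timed out or transport failure detector triggered.
Since the message field in the question is itself a JSON string rather than a line in this form, the json filter approach in the next answer may be a closer fit.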
Answer 2:
Logstash needs to know that the message field you are receiving is in JSON format. You can use the json filter here and get almost all of what you're looking for out of the box:
filter {
  json {
    source => "message"
  }
}
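With that in place, the parsed event should carry roughly these extra top-level fields (an approximate sketch based on the message shown in the question):
"datetime": "2018-10-23T18:04:00.811660Z",
"level": "ERROR",
"message": "No response from remote. Handshake timed out or transport failure detector triggered."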
You can then use the mutate filter (or add/remove fields) to rename things such as level to log.level or datetime to @datetime, if those names are necessary; a sketch of such a rename follows.
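A minimal sketch of the combined filter, assuming the json filter puts the parsed keys (datetime, level, message) at the root of the event and using the log_level name from the desired output in the question:
filter {
  json {
    # parse the JSON string carried in the "message" field;
    # its keys are added at the top level of the event
    source => "message"
  }
  mutate {
    # rename "level" to the "log_level" field name wanted in the question
    rename => { "level" => "log_level" }
  }
}
The datetime field can be kept as-is, or fed to a date filter if it should also drive @timestamp.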
Source: https://stackoverflow.com/questions/53045258/customize-logs-from-filebeat-in-the-lostashs-beats-config