Question
I have a Logstash event, which has the following fields:
{
  "_index": "logstash-2016.08.09",
  "_type": "log",
  "_id": "AVZvz2ix",
  "_score": null,
  "_source": {
    "message": "function_name~execute||line_no~128||debug_message~id was not found",
    "@version": "1",
    "@timestamp": "2016-08-09T14:57:00.147Z",
    "beat": {
      "hostname": "coredev",
      "name": "coredev"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "offset": 22299196,
    "source": "/project_root/project_1/log/core.log",
    "type": "log",
    "host": "coredev",
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      1470754620147
    ]
  },
  "sort": [
    1470754620147
  ]
}
I am wondering how to use a filter (kv maybe?) to extract core.log from "source": "/project_root/project_1/log/core.log" and put it in e.g. [@metadata][log_type], so that later on I can use log_type in the output to create a unique index composed of hostname + log_type + timestamp, e.g.:
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][_source][host]}-%{[@metadata][log_type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}
Answer 1:
You can leverage the mutate/gsub filter to achieve this:
filter {
  # add the log_type metadata field, initialized with the full path
  mutate {
    add_field => { "[@metadata][log_type]" => "%{source}" }
  }
  # remove everything up to and including the last slash,
  # leaving only the file name (e.g. core.log)
  mutate {
    gsub => [ "[@metadata][log_type]", "^.*\/", "" ]
  }
}
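As a side note, the same extraction can be done in a single grok filter instead of two mutates. This is just an equivalent sketch, assuming your Logstash version supports nested field references in grok capture names; since GREEDYDATA is greedy, the unnamed pattern consumes everything up to the last slash:

filter {
  # capture the final path segment (the file name) straight into the metadata field
  grok {
    match => { "source" => "%{GREEDYDATA}/%{GREEDYDATA:[@metadata][log_type]}" }
  }
}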
Then you can modify your elasticsearch output like this:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{host}-%{[@metadata][log_type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}
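With the sample event above, this produces an index named coredev-core.log-2016.08.09. Dots are legal in index names (your existing logstash-2016.08.09 index already contains them), but if you want to drop the .log suffix you can add one more gsub on [@metadata][log_type] before the output. Note also that fields under [@metadata] are never written to Elasticsearch, so they are safe to use purely for routing like this.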
Source: https://stackoverflow.com/questions/38900150/logstash-splits-event-field-values-and-assign-to-metadata-field