logstash-configuration

Filter/grok method on logstash

馋奶兔 submitted on 2019-12-11 11:43:59
Question: Suppose I have this log file:

Jan 1 22:54:17 drop %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;
Jan 1 22:54:22 drop %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 61.164.41.144; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 5060; s_port: 5069;
Jan 1 22:54:23 drop %LOGSOURCE% >eth1 rule: 7; rule_uid:
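The question is cut off above; as a rough sketch of how lines like these could be parsed, a grok filter along the following lines might work. The pattern and field names are assumptions, not taken from the original question, and %LOGSOURCE% / %DSTIP% are literal placeholders in the sample, so they are matched as plain data:

filter {
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:timestamp} %{WORD:action} %{DATA:logsource} >%{WORD:interface} rule: %{INT:rule}; rule_uid: \{%{DATA:rule_uid}\}; src: %{IP:src}; dst: %{DATA:dst}; proto: %{WORD:proto}; product: %{DATA:product}; service: %{INT:service}; s_port: %{INT:s_port};"
    }
  }
}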

ELK - Kibana doesn't recognize geo_point field

南笙酒味 submitted on 2019-12-11 11:34:13
Question: I'm trying to create a Tile map in Kibana with GEO location points. For some reason, when I try to create the map, I get the following message in Kibana:

No Compatible Fields: The "logs" index pattern does not contain any of the following field types: geo_point

My settings, Logstash (version 2.3.1):

filter {
  grok {
    match => { "message" => "MY PATTERN" }
  }
  geoip {
    source => "ip"
    target => "geoip"
    add_field => [ "location", "%{[geoip][latitude]}, %{[geoip][longitude]}" ]
    #added this extra
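The config is truncated above, but this Kibana error usually means the index mapping never declares any field as geo_point. A minimal sketch of one way to address that, assuming the index is simply called "logs" and that a custom template file is acceptable (the template name and file path below are made up for illustration):

# logs-template.json (sketch): map [geoip][location] as geo_point
{
  "template": "logs*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs"
    template => "/path/to/logs-template.json"
    template_overwrite => true
  }
}

Existing documents keep their old mapping, so the index typically has to be deleted and re-ingested (or reindexed) after the template is in place.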

Logstash-filter-rest sends field references incorrectly: it always uses the first field value it referenced

冷暖自知 submitted on 2019-12-11 10:35:04
Question: I recently started using logstash-filter-rest and configured it like below:

rest {
  url => "http://example.com/api"
  sprintf => true
  method => "post"
  params => {
    "post_key" => "%{a_field_in_log}"
  }
  response_key => "my_key"
}

After this, logstash makes a POST request to my API, but something is wrong: the value of a_field_in_log is identical in every request (I checked the API access log; every request carries the first field value that was sent to the API). It seems like there is a cache for referenced fields. Does someone
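The question is truncated above. One way to narrow this down, using only core plugins, is to echo the same sprintf reference per event and compare it with what reaches the API; the debug_post_key field name below is made up for illustration:

filter {
  mutate {
    # copy the same sprintf reference the rest filter uses into a visible field
    add_field => { "debug_post_key" => "%{a_field_in_log}" }
  }
}
output {
  stdout { codec => rubydebug }
}

If debug_post_key varies per event while the API still receives the same value every time, the problem sits in the rest filter's parameter handling rather than in the events themselves.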

Delete old documents from Elasticsearch using logstash

这一生的挚爱 submitted on 2019-12-11 09:38:00
Question: I am using logstash to index data from Postgres (jdbc input plugin) into Elasticsearch. I don't have any time-based information in the database. The Postgres table users to import has 2 columns: userid (unique) and uname. Elasticsearch export: _id = userid. I am exporting this data every hour using a cron schedule in logstash.

input {
  jdbc {
    schedule => "0 */1 * * *"
    statement => "SELECT userid, uname FROM users"
  }
}
output {
  elasticsearch {
    hosts => ["elastic_search_host"]
    index => "user_data"
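The question is cut short above, but since _id is set from userid, a common pattern for spotting rows that have disappeared from Postgres is to stamp every document on each run and clean up stale ones outside logstash. A rough sketch, where the last_seen field name is an assumption:

filter {
  mutate {
    # record when this document was last produced by the hourly jdbc run
    add_field => { "last_seen" => "%{@timestamp}" }
  }
}
output {
  elasticsearch {
    hosts => ["elastic_search_host"]
    index => "user_data"
    document_id => "%{userid}"
  }
}

Documents whose last_seen is older than the most recent run can then be removed with a delete-by-query against the user_data index.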

Logstash is not reading from Kafka

百般思念 submitted on 2019-12-11 06:15:23
Question: I am testing a simple pipeline: Filebeat > Kafka > Logstash > File. Logstash is not reading from Kafka, but I can see that Kafka has messages when I use this command:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic MyTopic --from-beginning

My Filebeat configuration:

filebeat.prospectors:
- input_type: log
  paths:
    - /root/LogData/input.log
output.kafka:
  hosts: ["10.247.186.14:9092"]
  topic: MyTopic
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: none
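The Logstash side of the question is cut off above; for reference, a minimal kafka input matching this Filebeat setup might look roughly like the sketch below. The group_id, codec, and output path are assumptions; the broker address and topic are taken from the Filebeat config:

input {
  kafka {
    bootstrap_servers => "10.247.186.14:9092"
    topics => ["MyTopic"]
    group_id => "logstash-test"        # assumed consumer group
    auto_offset_reset => "earliest"    # read existing messages, like --from-beginning
    codec => "json"                    # Filebeat publishes JSON-encoded events to Kafka
  }
}
output {
  file { path => "/tmp/kafka-out.log" }   # hypothetical path
}

A mismatch between the topic name here and in output.kafka, or a consumer group that has already committed offsets past the existing messages, are the usual reasons nothing appears.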

logstash: split event field values and assign them to @metadata fields

不羁的心 submitted on 2019-12-11 05:16:34
Question: I have a logstash event which has the following fields:

{
  "_index": "logstash-2016.08.09",
  "_type": "log",
  "_id": "AVZvz2ix",
  "_score": null,
  "_source": {
    "message": "function_name~execute||line_no~128||debug_message~id was not found",
    "@version": "1",
    "@timestamp": "2016-08-09T14:57:00.147Z",
    "beat": {
      "hostname": "coredev",
      "name": "coredev"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "offset": 22299196,
    "source": "/project_root/project_1/log/core.log",
    "type": "log",
    "host":
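The message field above is "||"-separated with "~" between each key and value, so one way to break it apart and keep the result out of the indexed document is a kv filter writing under @metadata. A sketch; putting the target under @metadata is an assumption based on the title:

filter {
  kv {
    source      => "message"
    field_split => "||"
    value_split => "~"
    target      => "[@metadata][msg]"
  }
  # [@metadata][msg][line_no] would then hold "128", [@metadata][msg][function_name] "execute", etc.
}

Fields under @metadata are available to later filters and outputs but are not sent to Elasticsearch by default.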

Correct regular expression for the input log

半世苍凉 submitted on 2019-12-11 04:38:42
Question: The input log looks like this; it contains data that are "|" separated. The data contain id | type | request | response:

110000|read|<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:web="http://webservices.lookup.sdp.bharti.ibm.com"> <soapenv:Header/> <soapenv:Body><web:getLookUpServiceDetails> <getLookUpService> <serviceRequester>iOBD</serviceRequester> <lineOfBusiness>mobility</lineOfBusiness> <lookupAttribute> <searchAttrValue>911425152231426</searchAttrValue>
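The sample is truncated above. Since only the first two fields are simple scalars and the rest is free-form XML, one hedged starting point is a grok pattern that splits on the pipes and leaves the XML intact. The field names are assumptions based on "id | type | request | response", and this assumes the request XML itself contains no "|" characters:

filter {
  grok {
    match => {
      "message" => "^%{INT:id}\|%{WORD:type}\|%{DATA:request}\|%{GREEDYDATA:response}$"
    }
  }
}

If the XML payloads can themselves contain "|", the request/response boundary would need a more specific anchor than a bare pipe.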

Logstash "in" check for array only works with more than 1 element

谁说我不能喝 submitted on 2019-12-10 23:32:35
Question: This is mainly because I could not find an answer to this, and I want to know how/why it works. Here are my filter examples:

(1):
if [message] in ["a","b"] { mutate { add_field => { "tet" => "world2" } } }
This works perfectly fine for messages that are "a" or "b". A new field is added. Perfect.

(2):
if [message] == "a" { mutate { add_field => { "tet" => "world2" } } }
Works perfectly fine when the message is "a".

(3):
if [message] in ["a"] { mutate { add_field => { "tet" => "world2" }
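The third example is cut off above. As a heavily hedged note (behaviour may differ between Logstash versions): a bracketed list with a single element in a conditional is often reported to be treated as a plain string rather than an array, so the "in" test no longer behaves as list membership. Two workarounds that are commonly suggested are sketched below; neither is taken from the original thread:

# compare directly instead of using a one-element list
if [message] == "a" {
  mutate { add_field => { "tet" => "world2" } }
}

# or keep the 'in' form but avoid a single-element list by repeating the value
if [message] in ["a", "a"] {
  mutate { add_field => { "tet" => "world2" } }
}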

logstash Input painfully slow while fetching messages from activemq topic

我的梦境 submitted on 2019-12-10 22:59:51
Question: I have configured the JMS input in logstash to subscribe to JMS topic messages and push them to Elasticsearch.

input {
  jms {
    id => "my_first_jms"
    yaml_file => "D:\softwares\logstash-6.4.0\config\jms-amq.yml"
    yaml_section => "dev"
    use_jms_timestamp => true
    pub_sub => true
    destination => "mytopic"
    # threads => 50
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout { codec => json }
  elasticsearch {
    hosts => ['http://localhost:9401']
    index => "jmsindex"
  }
}

System specs: RAM: 16 GB, Type:
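The specs are cut off above. When a pipeline like this is slow, the usual first steps (a sketch, not advice from the original thread) are to drop the stdout output, which serialises every event to the console, and to experiment with the pipeline settings in logstash.yml; the values below are illustrative assumptions:

# logstash.yml (sketch)
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 250   # events each worker takes per batch (default 125)
pipeline.batch.delay: 50   # ms to wait while filling a batch

Whether the commented-out threads option on the jms input can also help depends on the plugin's support for multi-threaded consumption in pub/sub mode, which should be checked against its documentation.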

Logstash - import nested JSON into Elasticsearch

久未见 submitted on 2019-12-08 12:33:20
I have a large amount (~40000) of nested JSON objects I want to insert into an Elasticsearch index. The JSON objects are structured like this:

{
  "customerid": "10932",
  "date": "16.08.2006",
  "bez": "xyz",
  "birthdate": "21.05.1990",
  "clientid": "2",
  "address": [
    {
      "addressid": "1",
      "tile": "Mr",
      "street": "main str",
      "valid_to": "21.05.1990",
      "valid_from": "21.05.1990"
    },
    {
      "addressid": "2",
      "title": "Mr",
      "street": "melrose place",
      "valid_to": "21.05.1990",
      "valid_from": "21.05.1990"
    }
  ]
}

So a JSON field (address in this example) can have an array of JSON objects. What would a logstash config
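The question breaks off above. As a minimal sketch of one way such documents are commonly ingested, assuming one JSON object per line and with the file path and index name made up for illustration:

input {
  file {
    path => "/path/to/customers/*.json"   # hypothetical path
    start_position => "beginning"
    sincedb_path => "/dev/null"           # re-read the files on every run
    codec => "json"                       # parse each line as a JSON object
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "customers"                  # assumed index name
    document_id => "%{customerid}"
  }
}

Nested arrays like address are indexed as-is by Elasticsearch; no extra filter is needed unless each address has to become a separate document, in which case the split filter would come in.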