logstash-configuration

Drop log messages containing a specific string

[亡魂溺海] submitted on 2019-11-30 18:04:35
Question: I have log messages of the format:

    [INFO] <blah.blah> 2016-06-27 21:41:38,263 some text
    [INFO] <blah.blah> 2016-06-28 18:41:38,262 some other text

Now I want to drop all logs that do not contain a specific string "xyz" and keep all the rest. I also want to index the timestamp. grokdebug is not helping much. This is my attempt:

    input {
      file {
        path => "/Users/username/Desktop/validateLogconf/logs/*"
        start_position => "beginning"
      }
    }
    filter {
      grok {
        match => { "message" => '%{SYSLOG5424SD
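
A minimal sketch of a filter that does both; the grok pattern and field names here are my own guesses from the sample lines above, not from the question:

    filter {
      # keep only events whose message contains "xyz"
      if "xyz" not in [message] {
        drop { }
      }
      # pull the level, logger name, and timestamp out of the line
      grok {
        match => { "message" => "\[%{LOGLEVEL:level}\] <%{DATA:logger}> %{TIMESTAMP_ISO8601:logtime} %{GREEDYDATA:msg}" }
      }
      # make the parsed timestamp the event's @timestamp so it is indexed
      date {
        match => [ "logtime", "yyyy-MM-dd HH:mm:ss,SSS" ]
      }
    }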

How to get logs and their data having the word "error" in them, and how to configure the logstashPipeLine.conf file for the same?

爱⌒轻易说出口 submitted on 2019-11-30 14:49:23
Currently I am working on an application where I need to create documents from particular data in a file at a specific location. I have set up the Logstash pipeline configuration. Here is what it looks like currently:

    input {
      file {
        path => "D:\ELK_Info\logstashInput.log"
        start_position => "beginning"
      }
    }
    #Possible IF condition here in the filter
    output {
      #Possible IF condition here
      http {
        url => "http://localhost:9200/<index_name>/<type_name>"
        http_method => "post"
        format => "json"
      }
    }

I want to provide an IF condition in the output before calling the API. The condition should be like, "If data from input
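
One way to express that condition, sketched under the assumption that the test is simply "the line contains the word error" (the index and type placeholders are kept from the question):

    output {
      # only ship events whose message contains "error"
      if "error" in [message] {
        http {
          url => "http://localhost:9200/<index_name>/<type_name>"
          http_method => "post"
          format => "json"
        }
      }
    }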

Logstash SQL Server Data Import

偶尔善良 submitted on 2019-11-28 23:43:00
    input {
      jdbc {
        jdbc_driver_library => "sqljdbc4.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://192.168.2.126\\SQLEXPRESS2014:1433;databaseName=test"
        jdbc_password => "sa@sa2015"
        schedule => "0 0-59 0-23 * * *"
        statement => "SELECT ID, Name, City, State, ShopName FROM dbo.Shops"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
      }
    }
    filter {
    }
    output {
      stdout { codec => rubydebug }
      elasticsearch {
        protocol => "http"
        index => "shops"
        document_id => "%{id}"
      }
    }

I want to import data into Elasticsearch using Logstash using
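
Note that the elasticsearch output above uses the pre-2.x protocol option. A sketch of the same output on a current Logstash, assuming Elasticsearch runs locally:

    output {
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["http://localhost:9200"]  # replaces the removed `protocol` option; host is assumed
        index => "shops"
        document_id => "%{id}"              # the jdbc input lowercases column names, so ID arrives as id
      }
    }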

Logstash sprintf formatting for elasticsearch output plugin not working

夙愿已清 submitted on 2019-11-28 14:43:44
I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin, and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:

    {
      "beat" => {
        "hostname" => "ca86fed16953",
        "name" => "ca86fed16953",
        "version" => "6.5.1"
      },
      "@timestamp" => 2018-12-02T05:13:21.879Z,
      "host" => {
        "name" => "ca86fed16953"
      },
      "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
      ],
      "fields" => {
        "env" => "DEV"
      },
      "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
      "@version" => "1",
      "prospector" =
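
The question is cut off here, but a frequent cause of this symptom (an assumption on my part) is nested-field syntax: env sits under fields, so a sprintf reference has to spell out the full path:

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]                   # assumed host
        index => "logstash-%{[fields][env]}-%{+YYYY.MM.dd}"  # [fields][env], not %{env}
      }
    }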

How should I use sql_last_value in logstash?

限于喜欢 submitted on 2019-11-28 01:52:07
Question: I'm quite unclear about what sql_last_value does when I give my statement as:

    statement => "SELECT * from mytable where id > :sql_last_value"

I can somewhat understand the reason behind using it: instead of browsing through the whole DB table to update fields, it only updates the newly added records. Correct me if I'm wrong. So what I'm trying to do is create the index using Logstash, like this:

    input {
      jdbc {
        jdbc_connection_string => "jdbc:mysql:/
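
That reading is essentially right. A sketch of how sql_last_value is typically wired up, with the tracking options spelled out (connection details are placeholders):

    input {
      jdbc {
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"     # placeholder
        jdbc_user => "user"                                              # placeholder
        jdbc_password => "pass"                                          # placeholder
        statement => "SELECT * FROM mytable WHERE id > :sql_last_value"
        use_column_value => true   # track the id column rather than the last run time
        tracking_column => "id"
        schedule => "* * * * *"    # re-run every minute; only rows newer than the stored value match
      }
    }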

logstash http_poller: first URL request's response should be input to the second URL's request param

那年仲夏 submitted on 2019-11-28 00:25:15
I have two URLs (due to security concerns I will explain using dummy ones):

    a) https://xyz.company.com/ui/api/token
    b) https://xyz.company.com/request/transaction?date=2016-01-21&token=<tokeninfo>

When you hit the URL in point 'a', it generates a token, say a string of 16 characters. That token should then be used in the second request of point 'b', as the token param. Update: the second URL's response is what matters to me, i.e. it is a JSON response; I need to filter the JSON data, extract the required fields, and output them to standard output and Elasticsearch. Is there any way of doing so in
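
http_poller alone cannot chain two requests, but one approach is to poll the token endpoint and make the second call from the http filter. A sketch, assuming the token comes back as the raw response body and that the http filter interpolates the %{message} reference in its url:

    input {
      http_poller {
        urls => { token => "https://xyz.company.com/ui/api/token" }
        schedule => { every => "1h" }   # assumed polling interval
        codec => "plain"                # the raw 16-character token lands in [message]
      }
    }
    filter {
      # second request: pass the token along as a query parameter
      http {
        url => "https://xyz.company.com/request/transaction?date=2016-01-21&token=%{message}"
        verb => "GET"
        target_body => "transaction"    # the JSON response is stored under this field
      }
    }
    output {
      stdout { codec => rubydebug }
      elasticsearch { hosts => ["http://localhost:9200"] }  # assumed host
    }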

multiple inputs on logstash jdbc

天涯浪子 submitted on 2019-11-27 19:32:00
I am using the Logstash JDBC input to keep things synced between MySQL and Elasticsearch. It's working fine for one table. But now I want to do it for multiple tables. Do I need to run multiple instances in the terminal,

    logstash agent -f /Users/logstash/logstash-jdbc.conf

each with its own select query, or do we have a better way of doing it so we can have multiple tables being updated? My config file:

    input {
      jdbc {
        jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
        jdbc_user =>
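
One common pattern, sketched here with placeholder table names and credentials, is to declare one jdbc block per table in the same pipeline and tag each with a type, then route on it in the output:

    input {
      jdbc {
        jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
        jdbc_user => "user"                     # placeholder
        jdbc_password => "pass"                 # placeholder
        statement => "SELECT * FROM table_one"  # placeholder table
        type => "table_one"
      }
      jdbc {
        jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
        jdbc_user => "user"
        jdbc_password => "pass"
        statement => "SELECT * FROM table_two"  # placeholder table
        type => "table_two"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]      # assumed host
        index => "%{type}"                      # one index per table
      }
    }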

logstash check if field exists

妖精的绣舞 submitted on 2019-11-27 10:45:34
Question: I have log files coming into an ELK stack. I want to copy a field (foo) in order to perform various mutations on it; however, the field (foo) isn't always present. If foo doesn't exist, then bar still gets created, but is assigned the literal string "%{foo}". How can I perform a mutation only if the field exists? I'm trying to do something like this:

    if ["foo"] {
      mutate {
        add_field => "bar" => "%{foo}
      }
    }

Answer 1: To check if field foo exists:

1) For numeric type fields use:

    if ([foo]) { ... }

2)
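
Putting the existence check together with the copy, a working sketch (field names taken from the question):

    filter {
      # only create bar when foo actually exists on the event
      if [foo] {
        mutate {
          add_field => { "bar" => "%{foo}" }
        }
      }
    }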
