Question:
I get the error java.lang.OutOfMemoryError: Java heap space when running Logstash with a large dictionary (353 MB) in a translate filter.
I use the dictionary to do a lookup on my input data.
I tried to give the JVM more memory (with java -Xmx2048m), but I suppose I am doing it wrong because it has no effect.
I tested my config file with a smaller dictionary and it worked fine. Any help please? How do I give Logstash enough RAM so it does not die?
My config file looks like this:
input {
  file {
    type => "MERGED DATA"
    path => "C:\logstash-1.4.1\bin\..."
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => [ "message", "..." ]
  }
  if (...) {
    translate {
      dictionary_path => "C:\logstash-1.4.1\bin\DICTIONARY.yaml"
      field => "Contact_ID"
      destination => "DATA"
      fallback => "no match"
      refresh_interval => 60
    }
    grok { match => [ "DATA", "..." ] }
    mutate { remove_field => ... }
  }
  else if ...
  else if ...
  mutate { ... }
}
output {
  if [rabbit] == "INFO" {
    elasticsearch {
      host => "localhost"
    }
    stdout {}
  }
}
Thanks a lot.
Answer 1:
To increase the heap size, set the LS_HEAP_SIZE environment variable before launching Logstash:
LS_HEAP_SIZE=2048m
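For example (a sketch, assuming the 1.4.x launcher scripts on your platform honor LS_HEAP_SIZE; the config file name your-config.conf is a placeholder):

REM Windows (cmd.exe): set the heap size, then launch Logstash from the same shell
set LS_HEAP_SIZE=2048m
C:\logstash-1.4.1\bin\logstash.bat agent -f your-config.conf

# Linux/macOS equivalent, in one line
LS_HEAP_SIZE=2048m bin/logstash agent -f your-config.conf

Passing -Xmx to java directly has no effect here because the launcher script starts its own JVM and assembles the JVM options itself; LS_HEAP_SIZE is how the 1.4.x scripts let you override the heap.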
Answer 2:
I was having a similar issue. Mine looked like:
logstash <Sequel::DatabaseError: Java::JavaLang::OutOfMemoryError: Java heap space
To solve this, I had to add some settings to my Logstash config file. I added the settings below in the jdbc input section:
jdbc_paging_enabled => true
jdbc_page_size => 200000
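In context, a minimal jdbc input sketch showing where these settings go (the connection details are placeholders, not from the original answer):

input {
  jdbc {
    # Placeholder connection details -- substitute your own
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM my_table"
    # Fetch results in pages instead of loading the whole result set
    # into memory at once, which is what triggers the heap-space error
    jdbc_paging_enabled => true
    jdbc_page_size => 200000
  }
}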
Source: https://stackoverflow.com/questions/27156145/heap-space-error-while-executing-logstash-with-a-large-dictionary-translate-fil