Question
I have a field whose values are dynamic. I want to store space-separated token suffixes in an array field for use with the completion suggester.
Let's say my field value is "hi how are you"; then I want an array like ["hi how are you", "how are you", "are you", "you"].
I tried the split filter, since my data is in CSV format, but I couldn't achieve this. Is there any way to do this with only Elasticsearch and Logstash?
Answer 1:
Based on the solution I linked to, you can achieve what you need as follows.
First, create an ingest pipeline that leverages the script processor to build the desired input array:
PUT _ingest/pipeline/csv-parser
{
  "processors": [
    {
      "csv": {
        "field": "message",
        "target_fields": [
          "val",
          "val_type",
          "id"
        ]
      }
    },
    {
      "script": {
        "source": """
          // Split the value into whitespace-separated tokens
          def tokens = new ArrayList(Arrays.asList(/\s+/.split(ctx.val)));
          def nbTokens = tokens.size();
          // Build every suffix: emit the current token list, then drop its first token
          def input = [];
          for (def i = nbTokens; i > 0; i--) {
            input.add(tokens.join(" "));
            tokens.remove(0);
          }
          // Replace val with the structure expected by a completion field with contexts
          ctx.val = [
            'input': input,
            'contexts': [
              'type': [ctx.val_type]
            ]
          ]
        """
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}
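Note that for the contexts section produced by the script to be accepted at index time, the target index needs a completion mapping with a matching category context. This mapping was not shown in the original answer; it is a minimal sketch, assuming the index is called index, the completion field is val, and the context is the category context named type used above:

```
PUT index
{
  "mappings": {
    "properties": {
      "val": {
        "type": "completion",
        "contexts": [
          {
            "name": "type",
            "type": "category"
          }
        ]
      }
    }
  }
}
```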
Then you can index documents like this:
PUT index/_doc/1?pipeline=csv-parser
{
  "message": "hi how are you,seller,10223667"
}
And the resulting document will look like this:
GET index/_doc/1
->
{
  "val" : {
    "input" : [
      "hi how are you",
      "how are you",
      "are you",
      "you"
    ],
    "contexts" : {
      "type" : [
        "seller"
      ]
    }
  },
  "val_type" : "seller",
  "id" : "10223667"
}
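Once documents are indexed this way, the suffixes can be matched by a context-aware completion suggest request. The following is an illustrative sketch (the suggestion name val-suggest and the prefix value are assumptions, not part of the original answer); filtering on the type context restricts suggestions to seller documents:

```
POST index/_search
{
  "suggest": {
    "val-suggest": {
      "prefix": "are",
      "completion": {
        "field": "val",
        "contexts": {
          "type": [ "seller" ]
        }
      }
    }
  }
}
```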
Source: https://stackoverflow.com/questions/62281609/elasticsearch-split-by-comma-split-filter-logstash