elasticsearch-mapping

Fields not getting sorted in alphabetical order in elasticsearch

Submitted by 大兔子大兔子 on 2019-12-23 18:29:56
Question: I have a few documents with a name field. I use the analyzed version of the name field for search and a not_analyzed version for sorting. The sorting only works at the first level: the names are grouped alphabetically by first letter, but within each letter the names are sorted lexicographically rather than alphabetically. Here is the mapping I have used: { "mappings": { "seing": { "properties": { "name": { "type": "string", "fields": { "raw": { "type": "string", "index"
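A not_analyzed string field sorts by raw byte order, where every uppercase letter precedes every lowercase one, which typically produces exactly this "lexicographic" ordering. A common fix is to sort on a sub-field run through a keyword tokenizer plus a lowercase filter. A minimal sketch, assuming the index can be recreated (the index name my_index and the sub-field name sortable are illustrative, not from the question):

# Recreate the index with a case-insensitive sort analyzer: the keyword
# tokenizer keeps the whole value as a single token, and lowercase
# normalizes the case before sorting.
curl -XPUT "localhost:9200/my_index" -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "case_insensitive_sort": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "seing": {
      "properties": {
        "name": {
          "type": "string",
          "fields": {
            "raw":      { "type": "string", "index": "not_analyzed" },
            "sortable": { "type": "string", "analyzer": "case_insensitive_sort" }
          }
        }
      }
    }
  }
}'

# Then sort on the lowercased sub-field instead of the raw one:
curl -XPOST "localhost:9200/my_index/_search" -d '{ "sort": [ { "name.sortable": "asc" } ] }'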

Filebeat date field mapped as type keyword

Submitted by 假如想象 on 2019-12-13 03:28:54
Question: Filebeat is reading logs from a file, where the logs are in the following format: {"logTimestamp":"2019-11-29T16:39:43.027Z","@version":"1","message":"Hello world","logger_name":"se.lolotron.App","thread_name":"thread-1","level":"INFO","level_value":40000,"application":"my-app"} So there is a field logTimestamp logged in ISO 8601 format. The problem is that this field is mapped as a keyword in the Elasticsearch Filebeat index: "logTimestamp": { "type": "keyword", "ignore_above": 1024 }, On the
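Dynamic mapping turns unknown JSON strings into keyword, so a common fix is to give Elasticsearch an explicit mapping for the field before the index is created. A minimal sketch using a legacy index template, assuming Filebeat 7.x writing to filebeat-* indices (the template name is illustrative):

# Applies to the next filebeat-* index that gets created; existing indices
# keep their old mapping until they roll over or are reindexed.
curl -XPUT "localhost:9200/_template/filebeat-logtimestamp" -H 'Content-Type: application/json' -d '{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "mappings": {
    "properties": {
      "logTimestamp": {
        "type": "date",
        "format": "strict_date_optional_time"
      }
    }
  }
}'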

PySpark - Retain null values when using collect_list

Submitted by 此生再无相见时 on 2019-12-12 10:58:16
Question: According to the accepted answer in pyspark collect_set or collect_list with groupby, when you do a collect_list on a certain column, the null values in this column are removed. I have checked, and this is true. But in my case, I need to keep the null values -- how can I achieve this? I did not find any info on this kind of variant of the collect_list function. Background context to explain why I want nulls: I have a dataframe df as below:

cId | eId | amount | city
1   | 2   | 20.0   | Paris
1   | 2   |
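A well-known workaround is to wrap the nullable column in a struct before aggregating: collect_list drops null elements, but a struct containing a null field is itself non-null, so it survives. A minimal sketch, assuming a DataFrame shaped like the one above (the sample rows are illustrative):

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, 2, 20.0, "Paris"), (1, 2, 30.0, None), (1, 3, 10.0, "Berlin")],
    ["cId", "eId", "amount", "city"],
)

result = (
    df.groupBy("cId", "eId")
      # Wrap the nullable column in a struct so null cities are not dropped...
      .agg(F.collect_list(F.struct("city")).alias("cities"))
      # ...then unwrap the struct field to get back a plain array with nulls.
      .withColumn("cities", F.col("cities.city"))
)
result.show(truncate=False)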

Enable _size for an existing index

Submitted by 。_饼干妹妹 on 2019-12-12 04:12:56
Question: I need to enable "_size" for an existing index. This question says that it is possible, but provides no example of how to do it. Following the "Put Mapping API", I executed the query curl -XPUT "localhost:9200/my_index/_mapping/my_type?pretty" -d '{ "properties": { "_size": { "enabled": true, "store" : true } } }' and got this error: { "error" : { "root_cause" : [ { "type" : "mapper_parsing_exception", "reason" : "No type specified for field [_size]" } ], "type" : "mapper_parsing_exception", "reason" : "No
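The mapper_parsing_exception happens because _size was placed under "properties", where Elasticsearch expects regular fields. _size is a metadata field provided by the mapper-size plugin, so it belongs at the top level of the type mapping. A minimal sketch, assuming the plugin is installed:

# _size sits next to "properties", not inside it:
curl -XPUT "localhost:9200/my_index/_mapping/my_type?pretty" -d '{
  "_size": {
    "enabled": true
  }
}'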

Elasticsearch _timestamp

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-12-09 16:19:48
Question: I tried to define the _timestamp property on an index. First, I created the index: curl -XPUT 'http://elasticsearch:9200/ppe/' The response from the server: {"ok":true,"acknowledged":true} Then I tried to define the mapping with a _timestamp: curl -Xput 'http://elasticsearch:9200/ppe/log/_mapping' -d '{ "log": { "properties": { "_ttl": { "enabled": true }, "_timestamp": { "enabled": true, "store": "yes" }, "message": { "type": "string", "store": "yes" }, "appid": { "type": "string", "store": "yes" }, "level": { "type": "integer", "store": "yes" }, "logdate": { "type": "date", "format": "date_time
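As with _size above, _ttl and _timestamp are root-level metadata fields, so they must sit next to "properties" rather than inside it. A minimal sketch of the corrected mapping call (for the pre-2.x Elasticsearch this question targets; both metadata fields were removed in later versions):

# _ttl and _timestamp go next to "properties", not inside it:
curl -XPUT 'http://elasticsearch:9200/ppe/log/_mapping' -d '{
  "log": {
    "_ttl": { "enabled": true },
    "_timestamp": { "enabled": true, "store": "yes" },
    "properties": {
      "message": { "type": "string", "store": "yes" },
      "appid":   { "type": "string", "store": "yes" },
      "level":   { "type": "integer", "store": "yes" },
      "logdate": { "type": "date", "format": "date_time" }
    }
  }
}'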

Elasticsearch Mapping - Rename existing field

Submitted by 大憨熊 on 2019-12-04 21:59:18
Question: Is there any way I can rename an element in an existing Elasticsearch mapping without having to add a new element? If so, what is the best way to do it in order to avoid breaking the existing mapping? e.g. from fieldCamelcase to fieldCamelCase { "myType": { "properties": { "timestamp": { "type": "date", "format": "date_optional_time" }, "fieldCamelcase": { "type": "string", "index": "not_analyzed" }, "field_test": { "type": "double" } } } } Answer 1: You could do this by creating an ingest pipeline that contains a rename processor, in combination with the Reindex API.
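A sketch of that approach, in the same console syntax the answer starts with (the pipeline name my_rename comes from the answer; the source and destination index names are illustrative):

PUT _ingest/pipeline/my_rename
{
  "description": "rename fieldCamelcase to fieldCamelCase",
  "processors": [
    {
      "rename": {
        "field": "fieldCamelcase",
        "target_field": "fieldCamelCase"
      }
    }
  ]
}

POST _reindex
{
  "source": { "index": "my_index" },
  "dest": { "index": "my_index_v2", "pipeline": "my_rename" }
}

Mappings are immutable for existing fields, so the rename cannot be done in place; reindexing into a new index through the pipeline is what actually avoids breaking the old mapping.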

No handler for type [string] declared on field [name]

Submitted by 元气小坏坏 on 2019-12-03 06:26:01
Question: When a type is declared as string, Elasticsearch 6.0 shows this error. "name" => [ "type" => "string", "analyzer" => "ik_max_word" ] Answer 1: Elasticsearch has dropped the string type and now uses text. So your code should be something like this: "name" => [ "type" => "text", "analyzer" => "ik_max_word" ] Source: https://stackoverflow.com/questions/47452770/no-handler-for-type-string-declared-on-field-name
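If the field also needs exact matching or sorting, the usual replacement for the old not_analyzed string is a keyword sub-field next to the analyzed text field. A sketch in the question's PHP-array style (the sub-field name raw is illustrative, not from the question):

// keyword replaces the old not_analyzed string for exact matches and sorting
"name" => [
    "type" => "text",
    "analyzer" => "ik_max_word",
    "fields" => [
        "raw" => [ "type" => "keyword" ]
    ]
]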

Store Date Format in elasticsearch

Submitted by 一个人想着一个人 on 2019-12-03 05:39:15
Question: I ran into a problem when adding a datetime string to Elasticsearch. The document is below: {"LastUpdate" : "2013/07/24 00:00:00"} This document raised a "NumberFormatException" error: [For input string: \"20130724 00:00:00\"] I know that I can use a date format in Elasticsearch, but I don't know how to use it, even after reading the documentation on the website. I tried: {"LastUpdate": { "properties": { "type": "date", "format": "yyyy-MM-dd"} } } and {"LastUpdate": { "type": "date", "format": "yyyy
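The second attempt has the right shape (the first one nests type and format under "properties", which defines sub-fields instead of configuring the field), but the format pattern must match the stored string, including the time part. A minimal sketch, assuming an index named my_index and a type named my_type:

# "yyyy/MM/dd HH:mm:ss" matches "2013/07/24 00:00:00":
curl -XPUT "localhost:9200/my_index/_mapping/my_type" -d '{
  "properties": {
    "LastUpdate": {
      "type": "date",
      "format": "yyyy/MM/dd HH:mm:ss"
    }
  }
}'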