Elasticsearch “pattern_replace”, replacing whitespaces while analyzing

Submitted by 左心房为你撑大大i on 2019-11-30 18:24:52

Question


Basically I want to remove all whitespaces and tokenize the whole string as a single token. (I will use nGram on top of that later on.)

These are my index settings:

"settings": {
 "index": {
  "analysis": {
    "filter": {
      "whitespace_remove": {
        "type": "pattern_replace",
        "pattern": " ",
        "replacement": ""
      }
    },
    "analyzer": {
      "meliuz_analyzer": {
        "filter": [
          "lowercase",
          "whitespace_remove"
        ],
        "type": "custom",
        "tokenizer": "standard"
      }
    }
  }
 }
}

Instead of "pattern": " ", I also tried "pattern": "\\u0020" and "\\s".

But when I analyze the text "beleza na web", it still creates three separate tokens: "beleza", "na" and "web", instead of one single "belezanaweb".


Answer 1:


An analyzer first tokenizes the input string and only then applies its chain of token filters. Because you specified the standard tokenizer, the input is already split into separate tokens before the pattern_replace filter runs, and the filter is then applied to each token individually. Since none of those individual tokens contains a space, the filter has nothing to replace.
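To see why the spaces survive, here is a minimal Python sketch (not Elasticsearch code, just a toy model of the tokenizer-then-filter order) of what the original analyzer does:

```python
import re

def analyze(text, tokenizer, token_filters):
    """Toy model of an Elasticsearch analysis chain:
    tokenize first, then run each token filter over every token."""
    tokens = tokenizer(text)
    for f in token_filters:
        tokens = [f(t) for t in tokens]
    return tokens

# The standard tokenizer splits on whitespace (among other things),
# so pattern_replace only ever sees "beleza", "na", "web" --
# none of which contains a space to replace.
standard = lambda s: s.split()
lowercase = str.lower
whitespace_remove = lambda t: re.sub(" ", "", t)

print(analyze("beleza na web", standard, [lowercase, whitespace_remove]))
# -> ['beleza', 'na', 'web']
```

The pattern_replace filter is working correctly; it simply runs too late in the chain to merge tokens that the tokenizer has already split apart.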

Use the keyword tokenizer instead of the standard tokenizer; the rest of the settings are fine. You can change your settings as below:

"settings": {
 "index": {
  "analysis": {
    "filter": {
      "whitespace_remove": {
        "type": "pattern_replace",
        "pattern": " ",
        "replacement": ""
      }
    },
    "analyzer": {
      "meliuz_analyzer": {
        "filter": [
          "lowercase",
          "whitespace_remove",
          "nGram"
        ],
        "type": "custom",
        "tokenizer": "keyword"
      }
    }
  }
 }
}
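The same toy sketch (again plain Python, not Elasticsearch code) shows why switching the tokenizer fixes it:

```python
import re

def analyze(text, tokenizer, token_filters):
    """Toy model of an Elasticsearch analysis chain:
    tokenize first, then run each token filter over every token."""
    tokens = tokenizer(text)
    for f in token_filters:
        tokens = [f(t) for t in tokens]
    return tokens

# The keyword tokenizer emits the whole input as one single token,
# so pattern_replace now sees the inner spaces and can strip them.
keyword = lambda s: [s]
lowercase = str.lower
whitespace_remove = lambda t: re.sub(" ", "", t)

print(analyze("Beleza na Web", keyword, [lowercase, whitespace_remove]))
# -> ['belezanaweb']
```

The nGram filter in the settings above then breaks that single "belezanaweb" token into n-grams as the final step of the chain.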


Source: https://stackoverflow.com/questions/29873344/elasticsearch-pattern-replace-replacing-whitespaces-while-analyzing
