Dump all documents of Elasticsearch

囚心锁ツ 2020-12-23 11:04

Is there any way to create a dump file that contains all the data of an index along with its settings and mappings?

A similar way to what MongoDB does with mongodump.

8 Answers
  • 2020-12-23 11:58

    The data itself is one or more Lucene indices, since you can have multiple shards. What you also need to back up is the cluster state, which contains all sorts of information regarding the cluster, the available indices, their mappings, the shards they are composed of, etc.

    It's all within the data directory though, so you can just copy it; its structure is pretty intuitive. Right before copying, it's better to disable automatic flush (in order to back up a consistent view of the index and avoid writes on it while copying the files), issue a manual flush, and disable shard allocation as well. Remember to copy the directory from all nodes.
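    As a rough sketch of that preparation (the setting names below are from recent Elasticsearch versions and localhost:9200 is just an example, so adjust both for your cluster):

    # disable shard allocation so shards don't move while you copy
    curl -X PUT "localhost:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{ "transient": { "cluster.routing.allocation.enable": "none" } }'

    # force a manual flush so the translog is committed to the Lucene segments
    curl -X POST "localhost:9200/_flush"

    # after copying, re-enable allocation
    curl -X PUT "localhost:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{ "transient": { "cluster.routing.allocation.enable": "all" } }'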

    Also, the next major version of Elasticsearch is going to provide a new snapshot/restore API that will allow you to perform incremental snapshots and restore them too, via the API. Here is the related GitHub issue: https://github.com/elasticsearch/elasticsearch/issues/3826.
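    That API did ship as the snapshot/restore API in Elasticsearch 1.0. A minimal sketch, assuming a shared-filesystem repository whose path is whitelisted via path.repo in elasticsearch.yml (the repository and snapshot names here are illustrative):

    # register a filesystem snapshot repository
    curl -X PUT "localhost:9200/_snapshot/my_backup" \
      -H 'Content-Type: application/json' \
      -d '{ "type": "fs", "settings": { "location": "/mount/backups/my_backup" } }'

    # take a snapshot and wait for it to complete
    curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

    # restore it later (the affected indices must be closed or absent)
    curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"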

  • 2020-12-23 12:00

    We can use elasticdump to take a backup and restore it; it also lets us move data from one server/cluster to another.
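    elasticdump is an npm package, so it requires Node.js and is typically installed globally (the same package also provides the multielasticdump command used below):

    npm install -g elasticdump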

    1. Commands to move a single index's data from one server/cluster to another using elasticdump:

    # Copy an index from production to staging with analyzer and mapping:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=analyzer
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=data
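
    Since the question asks for a dump file: elasticdump can also read from and write to local JSON files instead of a second cluster (the /data paths below are just examples), and swapping --input and --output loads the files back into a cluster:

    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index_mapping.json \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index.json \
      --type=data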
    

    2. Commands to move all indices' data from one server/cluster to another using multielasticdump:

    Backup

    multielasticdump \
      --direction=dump \
      --match='^.*$' \
      --limit=10000 \
      --input=http://production.es.com:9200 \
      --output=/tmp 
    

    Restore

    multielasticdump \
      --direction=load \
      --match='^.*$' \
      --limit=10000 \
      --input=/tmp \
      --output=http://staging.es.com:9200 
    

    Note:

    • If --direction is dump (the default), --input MUST be a URL for the base location of an Elasticsearch server (i.e. http://localhost:9200) and --output MUST be a directory. Each index that matches will have a data, mapping, and analyzer file created.

    • For loading files that you have dumped with multielasticdump, --direction should be set to load, --input MUST be the directory of a multielasticdump dump, and --output MUST be an Elasticsearch server URL.

    • The multielasticdump commands in step 2 back up the settings, mappings, templates, and the data itself as JSON files.

    • --limit should not be more than 10000; otherwise it will throw an exception.

    • See the elasticdump documentation for more details.