I am running Elasticsearch from within a Docker container.
While configuring Elasticsearch for SSL and Shield, my elasticsearch.yml
file got an illegal entry in it.
Tabs are not allowed in YAML files. You can edit the file with any editor, such as nano, vim, or vi.
Replacing or editing the elasticsearch.yml file will not lead to data loss.
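Before editing, it helps to locate the offending tabs. A minimal sketch, using a sample file in /tmp to stand in for the real elasticsearch.yml (GNU grep's -P flag is assumed to be available):

```shell
# Create a sample config with an illegal tab on line 2 (illustration only)
printf 'cluster.name: my-cluster\n\tnode.name: node-1\n' > /tmp/elasticsearch.yml

# -P enables Perl-style regexes so \t matches a literal tab; -n prints line numbers
grep -Pn '\t' /tmp/elasticsearch.yml
```

Each reported line number points at a line containing a tab that the YAML parser will reject.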
Docker images are trimmed down to the bare minimum, so no editor ships with the container. That's why you need to install one manually. First open a shell in the container:
docker exec -it <container> bash
and run:
apt-get update
apt-get install vim
or use the following Dockerfile:
FROM confluent/postgres-bw:0.1
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]
For more, see: How to edit a file after I shell into a Docker container?
There are several cases:
elasticsearch.yml resides in a volume data directory. A volume data directory is a special data storage backend for Docker containers, called the vfs backend. The directories are essentially normal directories mapped into the host file system, and thus provide no copy-on-write capability. The mapped directories usually live under /var/lib/docker/vfs/dir/{container_id}, but this is configurable. To be sure, you can use docker inspect {container_name}
to check the location:
$> docker inspect my_container
..... (omitted output)
"Volumes": {
"/datadir": "/var/lib/docker/vfs/dir/b2479214c25cd39c901c3211ed14cb9668eef822a125ca85de81425d53c9ccee"
},
As you can see, /datadir, which is a volume data directory in the container, is mapped to /var/lib/docker/vfs/dir/b2479214c25cd39c901c3211ed14cb9668eef822a125ca85de81425d53c9ccee of the host file system. Under such circumstances, the answer to your question is quite easy: just copy the files, as normal files, into the mapped host directory.
Since Docker can use multiple storage backends for non-volume directories, there is no simple answer to your question.
If you happen to use AUFS as the backend, the container file system is mounted onto the host file system, somewhat similar to the vfs case. You can locate the mapped directory in the host file system and access the files there. For detailed information about AUFS in Docker, please refer to Docker and AUFS in practice.
If you use other backends, e.g. devicemapper or btrfs, I guess there is no simple way to access container files from the host. Maybe you can try @VonC's method.
You can copy files out of and back into a container (even when the container is stopped): use docker cp $cont_name:/path/in/container /path/on/host
to copy out, and docker cp /path/on/host $cont_name:/path/in/container
to copy back in.
"replace it without losing data"
Ideally, that data should be stored in paths mounted from separate data volume containers (which do not run; they are just created). That way, your main service container (the elasticsearch
one) can crash and be replaced at will.
In that configuration (mounting data from volume containers), you could rebuild your elasticsearch image with the new config file, and resume from there.
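The rebuild step above can be sketched with a minimal Dockerfile; the base image tag and config path are assumptions here (the official images keep their config under /usr/share/elasticsearch/config), so substitute your own:

```dockerfile
# Sketch only: base image tag and config path are assumptions
FROM elasticsearch:2.4
# Bake the corrected config file into the new image
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
```

Build it with docker build -t my-elasticsearch . and start a new container from my-elasticsearch with the same volume mounts as before, so the data volumes are reattached.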
In your current config, if that data is not in a VOLUME declared by your Dockerfile, what you can do is:
docker commit <stopped_container_id> newimage