I am attempting to create a container that can access the host's Docker remote API via the Docker socket file (/var/run/docker.sock on the host machine).
I figured it out. You can simply pass the socket file through the volume argument:
docker run -v /var/run/docker.sock:/container/path/docker.sock
As @zarathustra points out, this may not be the greatest idea, however. See: https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
If you intend to use Docker from within a container, you should clearly understand the security implications.
Accessing Docker from within a container is simple: use the official docker image, or install the Docker client inside your container (you may also download an archive with the docker client binary as described here). That's why
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
should do the trick.
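To sanity-check the mount, you can run a one-off container and confirm the client inside it reaches the host daemon (a sketch; it assumes the host daemon is running and the official docker image is pullable):

```shell
# run the official docker CLI image with the host's socket mounted;
# `docker version` prints both Client and Server sections when the
# socket is reachable, so it doubles as a connectivity check
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker version
```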
Alternatively, you may expose the Docker socket into the container and use the Docker REST API directly.
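That REST variant can be sketched with curl speaking HTTP over the mounted socket (the /containers/json endpoint is the REST equivalent of docker ps; this assumes your curl is built with unix-socket support):

```shell
# list running containers via the Engine REST API, no docker CLI needed;
# --unix-socket makes curl send the HTTP request over the mounted socket
curl --silent --unix-socket /var/run/docker.sock http://localhost/containers/json
```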
UPD: A former version of this answer (based on a previous version of jpetazzo's post) advised bind-mounting the docker binary from the host into the container. This is not reliable anymore, because the Docker Engine is no longer distributed as an (almost) static binary.
Other approaches, like exposing /var/lib/docker to the container, are likely to cause data corruption. See do-not-use-docker-in-docker-for-ci for more details.
In this container (and probably in many others) the jenkins process runs as a non-root user and therefore has no permission to interact with the docker socket. So a quick-and-dirty solution is running
docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)
after starting the container. That sets the setuid bit, allowing all users in the container to run the docker binary with root permissions. A better approach would be to allow running the docker binary via passwordless sudo, but the official Jenkins CI image seems to lack the sudo subsystem.
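A cleaner alternative (a sketch, not from the original answer) is to skip the setuid bit entirely and start the container with the socket's owning group added as a supplementary group, so the non-root jenkins user can use the socket directly; the image tag and container name here are illustrative:

```shell
# GID that owns the docker socket on the host (often the `docker` group)
DOCKER_GID="$(stat -c %g /var/run/docker.sock)"

# start Jenkins with that GID as a supplementary group; the non-root
# jenkins user can then read/write the mounted socket without setuid
docker run -d --name jenkins \
  --group-add "$DOCKER_GID" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```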
I stumbled across this page while trying to make docker socket calls work from a container that is running as the nobody user.
In my case, I was getting access-denied errors when my-service tried to make calls to the docker socket to list available containers.
I ended up using docker-socket-proxy to proxy the docker socket to my-service. This is a different approach to accessing the docker socket within a container, so I thought I would share it.
I made my-service able to receive the docker host it should talk to (docker-socket-proxy in this case) via the DOCKER_HOST environment variable.
Note that docker-socket-proxy will need to run as the root user to be able to proxy the docker socket to my-service.
Example docker-compose.yml:
version: "3.1"
services:
  my-service:
    image: my-service
    environment:
      - DOCKER_HOST=tcp://docker-socket-proxy:2375
    networks:
      - my-network
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - SERVICES=1
      - TASKS=1
      - NETWORKS=1
      - NODES=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  my-network:
    driver: overlay
Note that the above compose file is swarm-ready (docker stack deploy my-service), but it should work in compose mode as well (docker-compose up -d). The nice thing about this approach is that my-service does not need to run on a swarm manager anymore.
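For completeness, this is roughly how my-service sees the daemon once DOCKER_HOST is set: any docker client call inside the service container goes to the proxy instead of a local socket (the hostname and port mirror the compose file above):

```shell
# inside a container on the same network as docker-socket-proxy:
export DOCKER_HOST=tcp://docker-socket-proxy:2375

# this now goes to the proxy over TCP; only the API sections enabled in
# the proxy's environment (SERVICES, TASKS, NETWORKS, NODES) are allowed,
# so e.g. listing swarm services works while other endpoints return 403
docker service ls
```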