I have a Docker container "A" that will start another container "B" (by volume mounting /var/run/docker.sock). Now, these containers need to share files.
When you mount the docker socket, it's not really docker-in-docker: you just have a client making API requests to the daemon on the host, and that daemon doesn't know where the requests came from. So you can simplify this question to "can you mount files from one container into another container?". Unfortunately there's no easy answer to that without using volumes that are external to both containers. Container filesystems depend on the graph driver being used to assemble the various image and container layers, so even a solution that works for overlay2 would break on other drivers, and it would depend on docker internals that could change without warning.
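For context, starting container "B" from inside "A" typically looks like this (image and container names here are placeholders):

```shell
# Run container "A" with access to the host's docker daemon.
# Anything "A" starts is a sibling container on the host, not a child.
docker run -d --name A \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-image-a

# Run from inside "A": this creates container "B" directly on the host,
# so "B" has no access to "A"'s filesystem.
docker run -d --name B my-image-b
```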
Once you get into an external volume, there are several possible solutions I can think of.
Option A: a common host directory. I use this fairly often with what I consider transparent containers on my laptop, hiding the fact that I'm running commands inside of a container. I mount a common directory with the full path in my container, e.g. -v $HOME:$HOME. This same technique works from inside of containers "A" and "B" if you mount the same host directory in each. If you use a volume mount like the above for container "A", this also works with a compose file since the path is the same inside the container as it is on the host.
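A sketch of this setup; the host path and image names are illustrative:

```shell
# Both containers mount the same host directory at the same path.
docker run -d --name A -v /home/user/shared:/home/user/shared my-image-a
docker run -d --name B -v /home/user/shared:/home/user/shared my-image-b

# A file written by "A" under /home/user/shared is visible to "B",
# since both container paths resolve to the same directory on the host.
```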
Option B: volumes_from. I hesitate to even mention this as an option because it's being phased out as users adopt swarm mode, but there is an option to mount all volumes from container "A" into container "B". This still requires that you define a volume in container "A", but now you do not care about the source of the volume; it could be a host, named, or anonymous volume.
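With the plain docker CLI that looks like the following (image names are placeholders):

```shell
# "A" defines an anonymous volume at /data (this could also come from a
# VOLUME instruction in its Dockerfile, or be a host or named volume).
docker run -d --name A -v /data my-image-a

# "B" mounts every volume defined in "A", at the same paths.
docker run -d --name B --volumes-from A my-image-b
```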
Option C: shared named volume. Named volumes let docker manage the storage of the data, by default under /var/lib/docker/volumes on the host. You can run both containers with the same named volume, which allows you to pass data between the containers. You do need to know the name of the volume used by container "A" so you can run your command for container "B" with the same name. A named volume is also initialized from the image's content at that path the first time it is used, which may be beneficial, especially for file ownership and permissions. Just be aware that on subsequent uses of the same named volume it will not reinitialize over existing data; instead, the previous data persists. With a compose file, you would need to define the named volume as external.
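A minimal sketch of this, assuming a volume named shared-data and placeholder image names:

```shell
# Create the volume up front (docker would also create it implicitly
# on first use, initializing it from the image content at /data).
docker volume create shared-data

# Both containers mount the same named volume at /data.
docker run -d --name A -v shared-data:/data my-image-a
docker run -d --name B -v shared-data:/data my-image-b
```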
Option D: manually created named volume. If you are only trying to inject some files into container "B" from container "A", there are a variety of ways to inject that over the docker API. I've seen files saved into environment variables on "A" and then the environment variable written back out to a file in the entrypoint for "B". For larger files, or to avoid changing the entrypoint of "B", you can create a named volume and populate it by passing the data over docker's stdin/stdout pipes to a running container, and packing/unpacking that data with tar to send over the I/O pipes. This will work from inside of container "A" since one half of the tar command runs inside of that container's filesystem. Then container "B" would mount that named volume. To import data from container "A" to a named volume, that looks like:
tar -cC source_dir . | \
docker run --rm -i -v target_vol:/target busybox tar -xC /target
And to get data back out of a named volume, the process is reversed:
docker run --rm -v source_vol:/source busybox tar -cC /source . | \
tar -xC target_dir
Similar to option C, you would need to define this named volume as external in your compose file.
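For reference, the compose side of options C and D might look like this (service and volume names are assumptions); the external flag tells compose the volume is created and populated outside this file:

```yaml
# docker-compose.yml for container "B"
version: "2"
services:
  b:
    image: my-image-b
    volumes:
      - shared-data:/data
volumes:
  shared-data:
    external: true
```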