I'm thinking of using Docker to build my dependencies on a Continuous Integration (CI) server, so that I don't have to install all the runtimes and libraries on the agents.
This can also be done with the Docker SDK for Python. If you already have a container built, you can look up its name in the console (docker ps -a); the name is usually a concatenation of an adjective and a scientist's name (e.g. "relaxed_pasteur").
Check out help(container.get_archive):
Help on method get_archive in module docker.models.containers:

get_archive(path, chunk_size=2097152) method of docker.models.containers.Container instance
    Retrieve a file or folder from the container in the form of a tar
    archive.

    Args:
        path (str): Path to the file or folder to retrieve
        chunk_size (int): The number of bytes returned by each iteration
            of the generator. If ``None``, data will be streamed as it is
            received. Default: 2 MB

    Returns:
        (tuple): First element is a raw tar data stream. Second element is
            a dict containing ``stat`` information on the specified ``path``.

    Raises:
        :py:class:`docker.errors.APIError`
            If the server returns an error.

    Example:
        >>> f = open('./sh_bin.tar', 'wb')
        >>> bits, stat = container.get_archive('/bin/sh')
        >>> print(stat)
        {'name': 'sh', 'size': 1075464, 'mode': 493,
        'mtime': '2018-10-01T15:37:48-07:00', 'linkTarget': ''}
        >>> for chunk in bits:
        ...     f.write(chunk)
        >>> f.close()
So something like this will pull the specified path (/output) out of the container to your host machine and unpack the tar:
import docker
import os
import tarfile

# Docker client
client = docker.from_env()

# container object (use your container's name or ID here)
container = client.containers.get("relaxed_pasteur")

# get the bits and write them to a tar file on the host
bits, stat = container.get_archive('/output')
with open(os.path.join(os.getcwd(), "output.tar"), "wb") as f:
    for chunk in bits:
        f.write(chunk)

# unpack the tar into the current directory
with tarfile.open("output.tar") as tar:
    tar.extractall()
If you don't have a running container, just an image, and assuming you want to copy just a text file, you could do something like this:
docker run the-image cat path/to/container/file.txt > path/to/host/file.txt
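If you prefer to stay in the Python SDK for this case as well, here is a rough sketch of the same idea (the image name and file path are just the placeholders from the command above): create a stopped container from the image and pull the file out with get_archive, which also works for binary files.
import docker
import tarfile

client = docker.from_env()

# create (but do not start) a throwaway container from the image
container = client.containers.create("the-image")
try:
    # get_archive works on a created/stopped container as well
    bits, stat = container.get_archive("path/to/container/file.txt")
    with open("file.tar", "wb") as f:
        for chunk in bits:
            f.write(chunk)
    # unpack the single-file tar next to the script
    with tarfile.open("file.tar") as tar:
        tar.extractall()
finally:
    # clean up the throwaway container
    container.remove()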
I am posting this for anyone that is using Docker for Mac. This is what worked for me:
$ mkdir mybackup # local directory on Mac
$ docker run --rm --volumes-from <containerid> \
-v `pwd`/mybackup:/backup \
busybox \
cp /data/mydata.txt /backup
Note that when I mount using -v, that backup directory is automatically created.
I hope this is useful to someone someday. :)
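For comparison, a hedged Python SDK sketch of the same backup trick (the container ID and paths are placeholders), using the volumes_from and volumes parameters of containers.run:
import docker
import os

client = docker.from_env()

# attach the data container's volumes plus a ./mybackup bind mount,
# then copy the file across inside a throwaway busybox container
client.containers.run(
    "busybox",
    "cp /data/mydata.txt /backup",
    volumes_from=["<containerid>"],
    volumes={os.path.join(os.getcwd(), "mybackup"): {"bind": "/backup", "mode": "rw"}},
    remove=True,
)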
Another good option is to first build the image and then run it, using the shell interpreter's -c flag to execute some commands:
docker run --rm -i -v <host_path>:<container_path> <mydockerimage> /bin/sh -c "cp -r /tmp/homework/* <container_path>"
The above command does this:
-i = runs the container in interactive mode
--rm = removes the container after execution
-v = shares a folder as a volume from your host path to the container path
Finally, /bin/sh -c lets you pass a command as a parameter, and that command copies your homework files to the container path.
I hope this additional answer helps you.
As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.
It will mount the workspace into the container as a volume (with the appropriate user), set it as your working directory, and run whatever commands you request inside the container. You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command.
Basically all of this, baked into your CI/CD server and then some.
Create a data directory on the host system (outside the container) and mount it to a directory visible from inside the container. This places the files in a known location on the host system and makes it easy for tools and applications on the host system to access them:
docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag
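The same bind mount can also be set up from the Python SDK; a minimal sketch (the image name and paths are the placeholders from the command above), using the volumes and detach parameters of containers.run:
import docker

client = docker.from_env()

# run detached with a host directory bind-mounted into the container,
# so anything written to /path/to/docker_dir ends up on the host
container = client.containers.run(
    "docker_image:tag",
    detach=True,
    volumes={"/path/to/Local_host_dir": {"bind": "/path/to/docker_dir", "mode": "rw"}},
)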