Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull? I am blocked by the company firewall.
Just an alternative: this is what I did in my organization for the couchbase image, where I was blocked by a proxy.
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then I uploaded the compressed tarball to Dropbox and downloaded it on my work machine. For some reason Dropbox was not blocked :)
$ docker load < couchbase.tar.xz
By definition, the docker pull client command needs to talk to a Docker daemon, because it is the daemon that assembles an image's layers one by one for you.
Think of it as a POST request: it causes a mutation of state in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can fetch all the individual layers over REST from the Docker registry, but that won't have the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
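For illustration, this is roughly what those raw registry calls look like against Docker Hub's v2 API (the auth.docker.io token endpoint and registry-1.docker.io host are specific to Docker Hub, and jq is assumed to be installed). It only fetches a manifest and a single layer blob; it is not equivalent to a pull:

# 1. Obtain an anonymous pull token for the library/ubuntu repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)

# 2. Fetch the image manifest, which lists the layer digests
#    (for multi-arch images the registry typically falls back to a default platform with this Accept header)
curl -s -H "Authorization: Bearer $TOKEN" \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     "https://registry-1.docker.io/v2/library/ubuntu/manifests/latest" -o manifest.json

# 3. Download the first layer blob referenced by the manifest
DIGEST=$(jq -r '.layers[0].digest' manifest.json)
curl -sL -H "Authorization: Bearer $TOKEN" \
     "https://registry-1.docker.io/v2/library/ubuntu/blobs/$DIGEST" -o layer0.tar.gz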
I just had to deal with this issue myself: downloading an image on a restricted machine with Internet access but no Docker client, for use on another restricted machine with a Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site.
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
It turns out that the Moby Project has a shell script in the moby/moby GitHub repository (contrib/download-frozen-image-v2.sh) which can download images from Docker Hub in a format that can be imported into Docker:
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][@digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would first have to copy the data from the Internet-connected client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
Use Skopeo. It is a tool made specifically for that purpose (among others).
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu dir:ubuntu
Copy these files and import as you like.
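As a variation (a sketch, assuming a skopeo version that supports the docker-archive transport), you can skip the directory format and produce a tarball that docker load accepts directly on the offline machine:

skopeo --insecure-policy copy docker://ubuntu docker-archive:ubuntu.tar:ubuntu:latest
# copy ubuntu.tar over, then on the machine that has Docker:
docker load -i ubuntu.tar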
The answer and solution to my original question was that I found I could download the Dockerfile and all the necessary supporting files and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has already been mentioned in the question and comments above; I just wanted to pin it here.
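For completeness, a minimal sketch of such a rebuild, assuming the Dockerfile and any files it references have been fetched into a local directory (the directory and tag names here are made up):

mkdir myimage-build && cd myimage-build
# place the downloaded Dockerfile and its supporting files in this directory
docker build -t myimage:rebuilt .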
This is, however, no longer an issue for me, since my company has changed its policy and now allows docker pull commands to work.
Another possibility might be an option for you if your company firewall (and policy) allows connecting to a remote SSH server. In that case you can simply set up an SSH tunnel and route the traffic to the Docker registry through it, as sketched below.
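A rough sketch of that setup, assuming you are allowed to SSH to some reachable host (user@remote-host and the port are placeholders) and that your Docker daemon honors the HTTPS_PROXY environment variable (socks5:// URLs require a reasonably recent Docker):

# open a dynamic (SOCKS) tunnel through the reachable SSH server
ssh -f -N -D 1080 user@remote-host

# tell the Docker daemon to use the tunnel as its proxy, e.g. via a systemd drop-in
# /etc/systemd/system/docker.service.d/http-proxy.conf:
#   [Service]
#   Environment="HTTPS_PROXY=socks5://127.0.0.1:1080"
sudo systemctl daemon-reload
sudo systemctl restart docker

# docker pull now goes out through the SSH tunnel
docker pull ubuntu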