I need to map the ports on the host to the ports on the container. I can achieve this by running the "docker run" command with the -p option. How do I do this from within the Dockerfile?
EXPOSE a list of ports in your Dockerfile, then run:
docker run -d -P --name app_name app_image_name
After the previous steps succeed, run docker port app_name, which will display output like the following:
80/tcp -> 0.0.0.0:32769
443/tcp -> 0.0.0.0:32768
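For concreteness, here is a minimal Dockerfile sketch that would lead to output like the above (assuming a hypothetical image built on nginx and serving on ports 80 and 443):

FROM nginx:latest
# EXPOSE declares the container ports; it does not bind anything on the host.
# The -P flag at run time is what maps each EXPOSEd port to a random high host port.
EXPOSE 80
EXPOSE 443

In that output, 32769 and 32768 are the randomly assigned host ports; the container ports 80 and 443 stay fixed.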
You can't. Which ports are published on the Docker host is strictly a decision for the local administrator, not for the image being run; letting the image decide would be (a) a security problem (hey, I just opened up ssh access to your system!) and (b) prone to failure (my webserver container can't bind to port 80 because I'm already running a server on port 80).
If you want to avoid long docker run command lines, consider using something like docker-compose to automate the process. You can then pass docker-compose a configuration like:
mywebserver:
  image: myname/mywebserver
  ports:
    - 80:8080
And then a simple docker-compose up will start your container with container port 8080 bound to host port 80.
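For comparison, a roughly equivalent plain docker run invocation for that compose entry would be something like the following (myname/mywebserver is just the placeholder image name from the example above, and the --name is illustrative, since compose generates its own container names):

docker run -d --name mywebserver -p 80:8080 myname/mywebserver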
Update 2017-03-11
In response to Willa's comment:
Using docker-compose will not help with the port collision issue. The port collision issue is a reason why images should not be able to specify host port bindings; I was simply offering docker-compose as an alternative to long docker run command lines with multiple port bindings. The port collision issue would potentially allow a container to perform a denial-of-service attack on your host: if, for example, a container starts and binds to port 80 before an Apache server on your host (or in another container), then you have just lost your web service.
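As a rough illustration of such a collision, using the stock nginx image as a stand-in for any two services that both want host port 80:

docker run -d --name web1 -p 80:80 nginx
docker run -d --name web2 -p 80:80 nginx
# The second command fails with a "port is already allocated" error,
# because host port 80 can be bound by only one process at a time.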
Regarding the security issue: if an image were able to specify host port bindings, it would be possible for a container to open up access to itself without your knowledge. Permitting a remote user to access a container on your host opens you up to the possibility of a host compromise in the event that the namespace features in the kernel fail to completely isolate the container, and even if you completely trust the isolation it opens you up to potential legal problems if that container is used for illicit purposes. In either case it's a bad idea.
There's a difference between expose and publish.
Expose means to open the port on the container side; publish means to open it on the Docker host to the outside world.
For example, if your docker run command includes -p 80:8080, it exposes port 8080 on the container and publishes it as port 80 on the host.
You can only really expose ports in the Dockerfile. If you want the flexibility to publish the ports, you'll need to use the -P option with docker run, like so:
docker run -P your_app
What you can do is EXPOSE a list of ports in your Dockerfile and later run docker run -P your_app to publish all the ports exposed in the Dockerfile.
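Putting the two modes side by side (your_app is the placeholder image name used above):

# Publish explicitly: the person running the container picks the host port (80 -> container 8080)
docker run -d -p 80:8080 your_app

# Publish everything the Dockerfile EXPOSEs: Docker assigns random high host ports,
# which you can then look up with docker port <container>
docker run -d -P your_app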