Question
I have two Dockerfiles: one for adonis (based on the node image from Docker Hub) and another for mongo (based on the mongo image from Docker Hub).
The mongo_service must depend on the adonis service because I only want to run adonis after all the mongo instances have started.
Therefore, at the end of the mongo Dockerfile I run a script which eventually executes:
adonis seed
adonis serve
The error that I'm getting is: adonis: command not found
I understand that somehow the mongo_service does not have access to the adonis_service, which has adonis installed.
My question is: how can I access something that I installed in another container? I made this separation to keep the work more organized.
version: '3'
services:
  mongo_service:
    build:
      context: .
      dockerfile: docker_mongo_context/Dockerfile
    tty: true
    hostname: mongo
    ports:
      - "27017:27017"
    depends_on:
      - adonis_service
  adonis_service:
    build:
      context: .
      dockerfile: docker_adonis_context/Dockerfile
    tty: true
    hostname: adonis
    ports:
      - "3333:3333"
    volumes:
      - .:/app
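As a side note on the compose file above: `depends_on` only controls start-up order, and given the stated goal (run adonis only after mongo is up) the dependency usually points the other way. A minimal sketch of that direction, reusing the service names from the file above:

```yaml
version: '3'
services:
  mongo_service:
    build:
      context: .
      dockerfile: docker_mongo_context/Dockerfile
    ports:
      - "27017:27017"
  adonis_service:
    build:
      context: .
      dockerfile: docker_adonis_context/Dockerfile
    # depends_on only orders container start-up; it does not wait
    # until mongo is actually ready to accept connections.
    depends_on:
      - mongo_service
    ports:
      - "3333:3333"
```

With this layout, `adonis seed` / `adonis serve` run inside the adonis container (where the command exists), rather than inside the mongo container.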
Answer 1:
For those who are interested in how to use ssh, I've added a small example which allows using ssh between containers without:
- dealing with passwords for authentication
- exposing private/public keys to the outside environment or the host
- allowing access from the outside (only Docker containers within the same Docker network have access)
Description
docker-compose.yml
The docker-compose file consists of the following configuration:
- I have assigned my containers static IPs, which allows easier access.
- I have added a volume (sshdata) to share the SSH keys between the containers (for authentication).
version: "3.8"
services:
  first-service:
    build:
      context: .
      dockerfile: Dockerfile-1
    networks:
      vpcbr:
        ipv4_address: 10.5.0.2
    environment:
      - SECOND_SERVICE=10.5.0.3
    volumes:
      - sshdata:/home/developer/.ssh/
  second-service:
    build:
      context: .
      dockerfile: Dockerfile-2
    networks:
      vpcbr:
        ipv4_address: 10.5.0.3
    volumes:
      - sshdata:/home/developer/.ssh/
    depends_on:
      - first-service
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
volumes:
  sshdata:
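The static IPs are a convenience, not a requirement: Compose's embedded DNS resolves service names on a user-defined network, so the first service could address the second by name instead. A sketch of the variation (only the changed part is shown):

```yaml
services:
  first-service:
    environment:
      # Resolved via Docker's embedded DNS instead of a fixed IP
      - SECOND_SERVICE=second-service
```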
Dockerfiles
The Dockerfiles for the services are the same; only the entrypoint.sh scripts differ (see below).
FROM ubuntu:latest
# We need some tools
RUN apt-get update && apt-get install -y ssh sudo net-tools
# We want to have another user than `root`
RUN adduser developer
## USER SETUP
# We want to have passwordless sudo access
RUN \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## Removed the include directive ##/g' && \
echo "developer ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers; su - developer -c id
# Run now with user developer
USER developer
ADD ./entrypoint-1.sh /entrypoint-1.sh
RUN sudo chmod +x /entrypoint-1.sh
ENTRYPOINT [ "/entrypoint-1.sh" ]
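Since only the entrypoints differ, Dockerfile-2 would look the same except for its last lines. A sketch (assuming the second script is named entrypoint-2.sh, by analogy with entrypoint-1.sh above):

```dockerfile
# Dockerfile-2: identical to Dockerfile-1 up to this point,
# only the entrypoint script changes
ADD ./entrypoint-2.sh /entrypoint-2.sh
RUN sudo chmod +x /entrypoint-2.sh
ENTRYPOINT [ "/entrypoint-2.sh" ]
```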
Entrypoint-Scripts
Now we come to the important stuff: the entrypoint.sh scripts, which perform the needed setup steps. Our first container (first-service) should be able to ssh to our second container (second-service).
There is no special setup for our first service. We just change the owner of the ~/.ssh folder so we have write access to ~/.ssh/known_hosts (but you can just disable strict host key checking if you don't want to do this).
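If you'd rather disable strict host key checking instead of maintaining known_hosts, a client-side config sketch would be (note this trades away man-in-the-middle protection, which is usually acceptable inside a private Docker network):

```
# ~/.ssh/config (or pass these as -o options on the ssh command line)
Host 10.5.0.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```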
#!/bin/bash
# ENTRYPOINT FOR SERVICE first-service
# We can now ssh to our other container
# Change the owner of the .ssh folder and its content
sudo chown -R developer:developer ~/.ssh
# Perform your command
while ! ssh-keyscan -H ${SECOND_SERVICE} >> ~/.ssh/known_hosts
do
echo "Host not up, trying again..."
sleep 1;
done
# -------------------------------------
# Here we can run our command
ssh developer@${SECOND_SERVICE} "ls -l /"
echo "DONE!"
# -------------------------------------
# Here you can do other stuff
tail -f /dev/null
One remarkable line is the while loop: we do not really know when our second service is ready for an ssh connection. We could simply wait a fixed amount of time, but that's not very elegant. Instead, we periodically try to connect to the second container until the command succeeds; afterwards the script continues with the actual command.
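The same retry-until-ready pattern can be factored into a small reusable helper. A sketch (`wait_for` and `flaky` are hypothetical names, not part of the original scripts):

```shell
#!/bin/bash
# Generic retry helper: runs the given command until it succeeds
# or the attempt limit is exhausted.
wait_for() {
  local max_attempts=$1
  shift
  local attempt
  for ((attempt = 1; attempt <= max_attempts; attempt++)); do
    if "$@"; then
      return 0
    fi
    echo "Attempt ${attempt} failed, trying again..."
    sleep 1
  done
  return 1
}

# Demo: a command that fails twice before succeeding
tries=0
flaky() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

wait_for 5 flaky && echo "ready after ${tries} tries"
# prints "ready after 3 tries" after two retries
```

In the entrypoint above this would be `wait_for 60 ssh-keyscan -H ${SECOND_SERVICE}` (with the output redirected to known_hosts), giving the connection attempt an upper bound instead of looping forever.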
The last thing is the entrypoint.sh script for the second service:
#!/bin/bash
# ENTRYPOINT FOR SERVICE second-service
## -- A little bit of setup for ssh
# Starting the server
sudo service ssh start
# Generate a key (non-interactively, with an empty passphrase)
sudo ssh-keygen -t rsa -N "" -f /home/developer/.ssh/id_rsa
# Change the owner of the .ssh folder and its content
sudo chown -R developer:developer ~/.ssh
# Add the public key to the authorized keys
cat /home/developer/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# -------------------------------------
# Here we can start doing the stuff
tail -f /dev/null
Maybe this helps someone.
Source: https://stackoverflow.com/questions/63674004/how-can-i-use-a-command-from-another-container-using-docker-compose