Question
I have separate docker-compose files: one with the common infrastructure configuration and one with the application configuration. I want to connect containers defined in both files through the common network that is declared in docker-compose.yml alongside the infrastructure images; the application container needs this to reach the database container. How can I reference that same network from the docker-compose-producer file, and is that possible at all?
My common docker-compose.yml looks like this:
version: '3.3'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
    networks:
      - docker-elk
    environment:
      - ADVERTISED_HOST=localhost
  neo4jdb:
    image: neo4j:latest
    container_name: neo4jdb
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    networks:
      - docker-elk
    volumes:
      - /var/lib/neo4j/import:/var/lib/neo4j/import
      - /var/lib/neo4j/data:/data
      - /var/lib/neo4j/conf:/conf
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - docker-elk
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: kibana:latest
    ports:
      - "5601:5601"
    networks:
      - docker-elk
volumes:
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
My docker-compose-producer file:
version: '3.3'
services:
  producer-demo:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - ARG_CLASS=producer
        - HOST=neo4jdb
    volumes:
      - ./:/workdir
    working_dir: /workdir
    networks:
      - common_docker-elk
networks:
  common_docker-elk:
    external: true
Dockerfile:
FROM java:8
ARG ARG_CLASS
ARG HOST
ARG SPARK_CONFIG
ARG NEO4J_CONFIG
ENV MAIN_CLASS $ARG_CLASS
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV SPARK_MASTER $SPARK_CONFIG
ENV DB_CONFIG neo4j_local
ENV KAFKA_STREAMS_NUMBER 5
ENV KAFKA_EVENTS_NUMBER 10
ENV MESSAGES_BATCH_SIZE 16777216
ENV LINGER_MESSAGES_TIME 5
ENV HOSTNAME bolt://$HOST:7687
VOLUME /workdir
WORKDIR /opt
# Install Scala
RUN \
  cd /root && \
  curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
  tar -xf scala-$SCALA_VERSION.tgz && \
  rm scala-$SCALA_VERSION.tgz && \
  echo >> /root/.bashrc && \
  echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
  curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
  dpkg -i sbt-$SBT_VERSION.deb && \
  rm sbt-$SBT_VERSION.deb
# Install Spark (the archive is extracted into /opt, so the PATH entry points there)
RUN \
  cd /opt && \
  curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
  tar xvfz $SPARK_ARCH && \
  rm $SPARK_ARCH && \
  echo 'export PATH=/opt/$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
CMD /workdir/runDemo.sh "$MAIN_CLASS" "$SPARK_MASTER" "$DB_CONFIG" "$KAFKA_STREAMS_NUMBER" "$KAFKA_EVENTS_NUMBER" "$MESSAGES_BATCH_SIZE" "$LINGER_MESSAGES_TIME"
The Bash script that runs the project:
#!/usr/bin/env bash
if [ "$1" = "consumer" ]
then
  java -cp "jars/spark_consumer.jar" consumer.SparkConsumer $2 $3 $4
elif [ "$1" = "producer" ]
then
  java -cp "jars/kafka_producer.jar" producer.KafkaCheckinsProducer $5 $3 $6 $7
else
  echo "Wrong parameter. It should be consumer or producer, but it is $1"
fi
Answer 1:
It seems you want containers from multiple docker-compose projects to communicate with each other.
Please check https://stackoverflow.com/questions/38088279/communication-between-multiple-docker-compose-projects for the standard approaches.
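In short: when the common stack is brought up, Compose creates the network under a prefixed name, <project>_docker-elk, where the project name defaults to the directory holding the compose file; the second file then joins that network by declaring it external, exactly as the producer file above already does. A minimal sketch, assuming the common docker-compose.yml lives in a directory named common (which yields the common_docker-elk name) and the producer file in a sibling directory named producer:

#!/usr/bin/env bash
# Bring up the common stack first; Compose creates the "common_docker-elk" network.
cd common
docker-compose up -d

# Verify the network exists and note its exact (prefixed) name.
docker network ls --filter name=docker-elk

# Start the producer; its compose file declares common_docker-elk as external,
# so the container joins the existing network instead of creating a new one.
cd ../producer
docker-compose -f docker-compose-producer.yml up --build

Alternatively, the network can be created up front with docker network create and declared external in both compose files, so that neither project owns it.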
Update:
I just noticed that no hostname is defined for neo4jdb in the common docker-compose file.
Please add hostname: neo4jdb to the neo4jdb service definition in docker-compose.yml.
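For clarity, a sketch of where that line would go in the common file (relevant fragment only):

services:
  neo4jdb:
    image: neo4j:latest
    container_name: neo4jdb
    hostname: neo4jdb   # explicit hostname, as suggested above
    ...

Note that on a user-defined bridge network the service name is already resolvable through Compose's built-in DNS; hostname additionally sets the name the container sees for itself.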
Source: https://stackoverflow.com/questions/51101748/connect-docker-compose-files-by-shared-network