I have two services: one with 2 pod replicas of a web application, which depends on another backend service with 2 pod replicas running MySQL containers.
The web pods should start only after MySQL is up and reachable. How can I enforce this dependency?
I can suggest two solutions:
The first is to attach an init container to the web pods that waits until MySQL is up and running. The deployment would look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
      - name: init-wait
        image: alpine
        command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 mysql 3306 && exit 0 || sleep 3; done; exit 1"]
      containers:
      - name: web
        image: web-server
        ports:
        - containerPort: 80
          protocol: TCP
The init container uses netcat to try to open a TCP connection to the mysql service on port 3306 every 3 seconds. Once the connection succeeds, the init container exits and the web server starts normally.
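For this to work, the name mysql must resolve as a Service in the same namespace. A minimal sketch of such a Service, assuming the MySQL pods carry an app: mysql label (that label is an assumption, adjust it to your own deployment):
apiVersion: v1
kind: Service
metadata:
  name: mysql          # the DNS name the init container probes
spec:
  selector:
    app: mysql         # assumed label on the MySQL pods
  ports:
  - port: 3306         # port checked by "nc -zvw1 mysql 3306"
    targetPort: 3306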
The second option is to use Mirantis AppController. It lets you declare dependency objects between the server and database deployments as needed. Check their repo for the full documentation.
Use a readiness probe or an init container (see the other answers).
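A readiness probe on the database pods keeps the Service from routing traffic to them until they actually accept connections. A minimal sketch, assuming a MySQL container listening on 3306 (the image tag and probe timings here are illustrative, not taken from the question):
containers:
- name: mysql
  image: mysql:8
  ports:
  - containerPort: 3306
  readinessProbe:
    tcpSocket:
      port: 3306          # pod is marked Ready only once this TCP port accepts connections
    initialDelaySeconds: 10
    periodSeconds: 5
Note that a readiness probe alone only stops the mysql Service from sending traffic to pods that are not ready; if the web pods must not start at all before the database is reachable, they still need an init container (or retry logic in the application).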
I had the same problem and, as recommended above, using Kubernetes initContainers solved it. Sample code below.
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  # ...
---
apiVersion: apps/v1
kind: Deployment
# ...
spec:
  # ...
  template:
    # ...
    spec:
      # wait for postgres-service to run first
      initContainers:
      - name: init-wait-for-db
        image: alpine
        command: ["/bin/sh", "-c", "for i in $(seq 1 300); do nc -zvw1 postgres-service 5432 && exit 0 || sleep 3; done; exit 1"]
      containers:
      - name: my-django-app
        image: dockerhubuser/my-django-app
        command: ["/bin/sh", "-c", "python /root/django/manage.py migrate && python /root/django/manage.py runserver 0.0.0.0:8000 --noreload"]
        ports:
        - containerPort: 8000
        env:
        # ...