I'm looking to redirect all traffic from http://example.com -> https://example.com, like nearly all websites do.
I've looked at this link with no success:
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:

1. An Ingress backed by the default GCE L7 load balancer, with an NGINX server in your container handling the redirect.
2. The NGINX Ingress Controller.
The following are the steps to a working setup using the former.
nginx.conf: (ellipses represent other optional settings that aren't relevant here)
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    ...
    keepalive_timeout 620s;

    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...

    server {
        listen 80;

        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }

        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }

        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }

        root /usr/src/app/www;
        index index.html index.htm;

        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the X-Forwarded-Proto header (exposed in NGINX as $http_x_forwarded_proto) to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: requests are only redirected when the header value is 'http'; everything else is served as normal. The root, index and location values may differ depending on your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is pulled into the main nginx.conf http block with an include statement (see the sketch below). The comments highlight where other settings you may want to add once everything is working will go: gzip/brotli, security headers, where logs are saved, and so on.
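If you go the conf.d route instead, a minimal sketch (custom.conf is just an example filename): move the server block above into /etc/nginx/conf.d/custom.conf and make sure the main nginx.conf includes it:

http {
    ...
    # Pulls custom.conf (and anything else in conf.d) into the http block
    include /etc/nginx/conf.d/*.conf;
}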
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines matter here. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD runs NGINX in the foreground (daemon off) so the container keeps serving.
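For context, a minimal sketch of a full Dockerfile, assuming the Angular bundle has already been built into dist/ (the base image and paths are assumptions; adjust them to your project):

# Small NGINX base image (assumption: any recent nginx image will do)
FROM nginx:alpine

# Copy the prebuilt app to the path referenced by 'root' in nginx.conf
COPY dist/ /usr/src/app/www/

# Replace the default config with the one above
COPY nginx.conf /etc/nginx/nginx.conf

# Run NGINX in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]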
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
      - name: uber-ctr
        image: gcr.io/uber/beta:v1  # or some other registry
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          httpGet:
            path: /health/liveness
            port: 80
            scheme: HTTP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          httpGet:
            path: /health/readiness
            port: 80
            scheme: HTTP
        ports:
        - containerPort: 80
        imagePullPolicy: Always
NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity the liveness and readiness checks are handled by the NGINX server itself, but you can and should add checks that probe the health of your app (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
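To roll this out, something along these lines should do (the image name is the placeholder used above):

# Build and push the image referenced in the deployment
docker build -t gcr.io/uber/beta:v1 .
docker push gcr.io/uber/beta:v1

# Create the deployment and wait for the rollout to finish
kubectl apply -f deployment.yaml
kubectl rollout status deployment/uber-dp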
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
  - name: default-port
    port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port exposes port 80 on the service; with no targetPort set it defaults to the same value, which matches containerPort 80 on the container.
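Apply it and check that a NodePort has been allocated (the exact node port will differ):

kubectl apply -f service.yaml
kubectl get svc uber-svc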
On GCP, in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP to a static one, or reserve a new one. Take note of the name and address.
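The same can be done from the command line; uber-ip is just an example name:

# Reserve a global static IP (the GCE L7 load balancer needs a global address)
gcloud compute addresses create uber-ip --global

# Note the name (uber-ip) for the ingress annotation and the address for DNS
gcloud compute addresses describe uber-ip --global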
In the hamburger menu: Network Services -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of its name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPv4 address. Save.
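If you prefer the CLI, a rough equivalent; the certificate file names and the zone/cert names (uber-cert, uber-zone) are placeholders:

# Upload an existing certificate and key
gcloud compute ssl-certificates create uber-cert \
    --certificate=fullchain.pem --private-key=privkey.pem

# Create the zone and the two records described above
gcloud dns managed-zones create uber-zone \
    --dns-name="[YOUR DOMAIN]." --description="Primary zone"
gcloud dns record-sets transaction start --zone=uber-zone
gcloud dns record-sets transaction add "[YOUR STATIC IP]" \
    --name="[YOUR DOMAIN]." --ttl=300 --type=A --zone=uber-zone
gcloud dns record-sets transaction add "[YOUR DOMAIN]." \
    --name="www.[YOUR DOMAIN]." --ttl=300 --type=CNAME --zone=uber-zone
gcloud dns record-sets transaction execute --zone=uber-zone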
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: uber-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: uber-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. The annotations connect your app with SSL and force-allow HTTP for the health checks. Combined, the service and ingress configure the GCE L7 load balancer (the global forwarding rule, backend and frontend 'services', SSL certificates, target proxies etc.).
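Apply it and watch for the load balancer to come up; an address should appear on the ingress after a few minutes:

kubectl apply -f ingress.yaml
kubectl describe ingress uber-ingress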
Give everything ~10 minutes to provision. Clear your cache and test your domain with various browsers (Tor, Opera, Safari, IE etc.). Everything will serve over HTTPS.
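You can also verify the redirect from the command line:

# Should return a 301 with a Location: https://[YOUR DOMAIN]/... header
curl -I http://[YOUR DOMAIN]

# Should return a 200 directly
curl -I https://[YOUR DOMAIN]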
What about the NGINX Ingress Controller? I've seen it discussed as the better option because it's cheaper, uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (a GCE L4 load balancer), and you have to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility: it lets you get on with more pressing matters.