http -> https redirect in Google Kubernetes Engine

backend · open · 5 answers · 672 views
南方客 asked 2021-02-07 23:09

I'm looking to redirect all traffic from

http://example.com -> https://example.com, like nearly all websites do.

I've looked at this link with no success:

5 Answers
  • 2021-02-07 23:54

    Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks etc) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:

    • ingress with a google-managed SSL cert and additional NGINX server configuration in front of your app/site
    • the NGINX ingress controller with self-managed/third-party SSL certs

    The following are the steps to a working setup using the former.

    1 The door to your app

    nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)

    user  nginx;
    worker_processes  auto;
    
    events {
        worker_connections  1024;
    }
    
    http {
        ...
    
        keepalive_timeout  620s;
    
        ## Logging ##
        ...
        ## MIME Types ##
        ...
        ## Caching ##
        ...
        ## Security Headers ##
        ...
        ## Compression ##
        ....
    
        server {
            listen 80;
    
            ## HTTP Redirect ##
            if ($http_x_forwarded_proto = "http") {
                return 301 https://[YOUR DOMAIN]$request_uri;
            }
    
            location /health/liveness {
                access_log off;
                default_type text/plain;
                return 200 'Server is LIVE!';
            }
    
            location /health/readiness {
                access_log off;
                default_type text/plain;
                return 200 'Server is READY!';
            }
    
            root /usr/src/app/www;
            index index.html index.htm;
            server_name [YOUR DOMAIN] www.[YOUR DOMAIN];
    
            location / {
                try_files $uri $uri/ /index.html;
            }
        }
    }
    

    NOTE:

    • One serving port only. The global forwarding rule adds the X-Forwarded-Proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: it redirects when the value is 'http' and only continues serving when the traffic arrived over HTTPS.
    • The root, index and server_name values will differ depending on your project (this is an Angular project).
    • keepalive_timeout is set to the value recommended by Google.
    • I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block with an include statement.
    • The comments highlight where other settings can go once everything is working: compression (gzip/brotli), security headers, where logs are saved, and so on.
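    If you go the conf.d route mentioned above, the include in the main nginx.conf http block would look like this (a sketch; the wildcard path is the conventional default, but adjust to taste):

    ```nginx
    http {
        # Pull in any extra config files, e.g. a custom.conf placed in conf.d:
        include /etc/nginx/conf.d/*.conf;
    }
    ```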

    Dockerfile:

    ...
    COPY nginx.conf /etc/nginx/nginx.conf
    CMD ["nginx", "-g", "daemon off;"]
    

    NOTE: only the final two lines matter here. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts NGINX in the foreground ("daemon off;" keeps the process alive so the container doesn't exit).
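    For context, a complete Dockerfile might look like the sketch below. The Node build stage, versions and output path are assumptions for an Angular-style app; the source only shows the final two lines:

    ```dockerfile
    # Build stage (names, versions and paths are assumptions)
    FROM node:14 AS build
    WORKDIR /usr/src/app
    COPY . .
    RUN npm ci && npm run build

    # Serve stage: NGINX listening on port 80 with the modified config
    FROM nginx:stable
    COPY --from=build /usr/src/app/dist /usr/src/app/www
    COPY nginx.conf /etc/nginx/nginx.conf
    CMD ["nginx", "-g", "daemon off;"]
    ```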

    2 Create a deployment manifest and apply/create

    deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: uber-dp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: uber
      template:
        metadata:
          labels:
            app: uber
        spec:
          containers:
            - name: uber-ctr
          image: gcr.io/uber/beta:v1 # or some other registry
              livenessProbe:
                failureThreshold: 3
                initialDelaySeconds: 60
                httpGet:
                  path: /health/liveness
                  port: 80
                  scheme: HTTP
              readinessProbe:
                failureThreshold: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /health/readiness
                  port: 80
                  scheme: HTTP
              ports:
                - containerPort: 80
              imagePullPolicy: Always
    

    NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic at it. For simplicity we're using the same paths for the liveness and readiness probes; these checks are handled by the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
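    The "apply/create" part of this step (and of steps 3 and 6) is plain kubectl, assuming your kubectl context already points at the GKE cluster:

    ```shell
    kubectl apply -f deployment.yaml
    kubectl rollout status deployment/uber-dp   # wait until the rollout completes
    kubectl get pods -l app=uber                # pods should report READY 1/1
    ```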

    3 Create a service manifest and apply/create

    service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: uber-svc
      labels:
        app: uber
    spec:
      ports:
        - name: default-port
          port: 80
      selector:
        app: uber
      sessionAffinity: None
      type: NodePort
    

    NOTE: default-port specifies port 80 on the container.

    4 Get a static IP address

    On GCP in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.
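    The same can be done from the command line (a sketch; 'uber-ip' is a placeholder name):

    ```shell
    # Reserve a global static IP -- the HTTP(S) load balancer requires --global
    gcloud compute addresses create uber-ip --global
    # Print the reserved address so you can use it in your DNS records
    gcloud compute addresses describe uber-ip --global --format="get(address)"
    ```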

    5 Create an SSL cert and default zone

    In the hamburger menu: Network Service -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPV4. Save.
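    If you prefer the CLI, the certificate part of this step can be sketched as follows ('uber-cert', the file paths and the domain are placeholders):

    ```shell
    # Upload a certificate you already have:
    gcloud compute ssl-certificates create uber-cert \
        --certificate=cert.pem --private-key=key.pem
    # Or have Google provision and renew a managed certificate:
    gcloud compute ssl-certificates create uber-cert --domains=example.com
    ```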

    6 Create an ingress manifest and apply/create

    ingress.yaml:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: uber-ingress
      annotations:
        kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
        kubernetes.io/ingress.allow-http: "true"
        ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
    spec:
      backend:
        serviceName: uber-svc
        servicePort: 80
    

    NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. The annotations connect your app with the SSL certificate and force-allow HTTP for the health checks. Combined, the service and ingress configure the GCE L7 load balancer (the global forwarding rule, backend and frontend 'services', SSL certs, target proxies etc).

    7 Make a cup of tea or something

    Everything needs ~10 minutes to configure. Clear cache and test your domain with various browsers (Tor, Opera, Safari, IE etc). Everything will serve over https.
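    A quick way to verify the redirect without a browser (example.com stands in for your domain):

    ```shell
    # Should answer 301 with a Location: https://... header
    curl -I http://example.com
    # Should answer 200 over TLS
    curl -I https://example.com
    ```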

    What about the NGINX Ingress Controller? I've seen discussion of it being better because it's cheaper, uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4), and you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility — namely allowing you to get on with more pressing matters.

  • 2021-02-07 23:54

    For what it's worth, I ended up using a reverse proxy in NGINX.

    1. You need to create secrets (for the TLS certificate and key) and mount them into your containers
    2. You need to create a configmap with your NGINX proxy config, as well as a default nginx.conf that includes this additional config file
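    Those two steps can be sketched with kubectl. The file paths are assumptions; the secret and configmap names match the volumes in the manifests below:

    ```shell
    # TLS cert + key, mounted at /certs/ by the deployment
    kubectl create secret tls nginxsecret --cert=tls.crt --key=tls.key
    # The proxy config that nginx.conf includes
    kubectl create configmap nginx-config --from-file=config-file.conf
    # The main nginx.conf, mounted at /etc/nginx/
    kubectl create configmap default --from-file=nginx.conf
    ```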

    Here is my configuration:

    worker_processes  1;
    
    events {
        worker_connections  1024;
    }
    
    
    http {

        default_type  application/octet-stream;

        # Logging configs
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

        access_log  /var/log/nginx/access.log  main;

        sendfile           on;
        keepalive_timeout  65;

        # Puntdoctor proxy config
        include /path/to/config-file.conf;

        # PubSub allows 10 MB files; allow 11 to give some headroom
        client_max_body_size 11M;
    }
    

    Then, the config.conf

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate      /certs/tls.crt;
        ssl_certificate_key  /certs/tls.key;

        ssl_session_cache    builtin:1000  shared:SSL:10m;
        ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers          ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
            proxy_set_header  X-Forwarded-Host $http_host;

            # Fix the "It appears that your reverse proxy set up is broken" error.
            proxy_pass          http://deployment-name:8080/;
            proxy_read_timeout  90;

            proxy_redirect      http://deployment-name:8080/ https://example.com/;
        }
    }
    
    3. Create the deployment and service:

    Here are the .yaml files

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: puntdoctor-lb
    spec:
      ports:
        - name: https
          port: 443
          targetPort: 443
        - name: http
          port: 80
          targetPort: 80
      selector:
        app: puntdoctor-nginx-deployment
      type: LoadBalancer
      loadBalancerIP: 35.195.214.7
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: puntdoctor-nginx-deployment
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: puntdoctor-nginx-deployment
        spec:
          containers:
            - name: adcelerate-nginx-proxy
              image: nginx:1.13
              volumeMounts:
                - name: certs
                  mountPath: /certs/
                - name: site-config
                  mountPath: /etc/site-config/
                - name: default-config
                  mountPath: /etc/nginx/
              ports:
                - containerPort: 80
                  name: http
                - containerPort: 443
                  name: https
          volumes:
            - name: certs
              secret:
                secretName: nginxsecret
            - name: site-config
              configMap:
                name: nginx-config
            - name: default-config
              configMap:
                name: default

    Hope this helps someone solve this issue, thanks for the other 2 answers, they both gave me valuable insight.

  • 2021-02-07 23:56

    For everyone like me who searches this question about once a month: Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.

    Their comment:

    Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.

    Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
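    With that feature generally available, the redirect can now be configured on the GKE Ingress itself via a FrontendConfig (a sketch; the name is a placeholder):

    ```yaml
    apiVersion: networking.gke.io/v1beta1
    kind: FrontendConfig
    metadata:
      name: http-to-https
    spec:
      redirectToHttps:
        enabled: true
    ```

    The Ingress then references it with the annotation networking.gke.io/v1beta1.FrontendConfig: "http-to-https", removing the need for the application-level redirect described in the other answers.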

  • 2021-02-08 00:02

    GKE uses the GCE L7 load balancer. The rules you referenced in the example are not supported there, and the HTTP-to-HTTPS redirect should be handled at the application level.

    L7 inserts the x-forwarded-proto header, which tells you whether the frontend traffic arrived over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS

    There is also an example in that link for Nginx (copied for convenience):

    # Replace '_' with your hostname.
    server_name _;
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
    
  • 2021-02-08 00:04

    GKE uses its own Ingress controller, which does not support forcing HTTPS.

    That's why you will have to manage the NGINX Ingress Controller yourself.

    See this post on how to do it on GKE.
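    Once that controller is installed, forcing HTTPS is a single annotation on the Ingress (a sketch; host, service and secret names are placeholders):

    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
    spec:
      tls:
        - hosts:
            - example.com
          secretName: example-tls
      rules:
        - host: example.com
          http:
            paths:
              - backend:
                  serviceName: example-svc
                  servicePort: 80
    ```

    Note that ssl-redirect defaults to "true" once a tls section is present, so the annotation is mainly there to make the intent explicit.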

    Hope it helps.
