In one of my HTTP(S) load balancers, I wish to change the backend configuration to increase the timeout from 30s to 60s (we have a few 502s that do not have any logs server-s
After many different attempts I simply deleted the Ingress object and recreated it, and the problem went away. There must be a bug somewhere that leaves artifacts behind when an Ingress is updated.
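For anyone wanting to try the same delete-and-recreate workaround, a minimal sketch with kubectl; the Ingress name and manifest filename are placeholders, and the commands obviously depend on your cluster:

```shell
# Save the current Ingress definition so it can be recreated.
# "my-ingress" and my-ingress.yaml are hypothetical names.
kubectl get ingress my-ingress -o yaml > my-ingress.yaml

# Delete the Ingress; the GCE controller tears down the
# associated load balancer resources.
kubectl delete ingress my-ingress

# Recreate it from the saved manifest; a fresh load balancer is
# provisioned (this can take several minutes to start serving).
kubectl apply -f my-ingress.yaml
```

Note this briefly takes the load balancer (and its IP, unless you use a reserved static IP) out of service, so it is a workaround rather than a fix.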
I'm sure the OP has resolved this by now, but for anyone else pulling their hair out, this might work for you:
There's a bug of sorts in the GCE Load Balancer UI. If you add an empty frontend IP/port combo by accident, it will create a named port in the instance group called port0 with a value of 0. You may not even realize this happened because you won't see the empty frontend mapping in the console.
To fix the problem, edit your instance group and remove port0 from the list of port name mappings.
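The same fix can be applied from the command line with gcloud; the instance group name, zone, and the remaining named-port mapping below are placeholders for whatever your setup actually uses:

```shell
# Inspect the current named ports; look for a stray "port0: 0" entry.
gcloud compute instance-groups get-named-ports my-instance-group \
    --zone=us-central1-a

# Rewrite the named-port list WITHOUT port0. Keep every legitimate
# mapping your services rely on (e.g. the NodePort-backed one).
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=port31234:31234 \
    --zone=us-central1-a
```

Be aware that set-named-ports replaces the whole list, so include all the mappings you want to keep, not just the one you are fixing.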
I faced the same issue and @tmirks's fix didn't work for me.
After experimenting with GCE for a while, I realised that the issue is with the service.
By default all services are type: ClusterIP unless you specify otherwise. Long story short, if your service isn't exposed as type: NodePort, then the GCE load balancer won't route traffic to it.
From the official Kubernetes project:
NodePort is a requirement of the GCE Ingress controller (and cloud controllers in general). "On-prem" controllers like the nginx ingress controllers work with ClusterIP:
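To illustrate the point, here is a minimal Service manifest of the kind the GCE Ingress controller expects; the name, selector, and port numbers are all hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # placeholder name
spec:
  type: NodePort         # required for the GCE Ingress controller
  selector:
    app: my-app          # placeholder label selector
  ports:
    - port: 80           # port the Service exposes in-cluster
      targetPort: 8080   # container port (assumed)
```

With type: NodePort, Kubernetes allocates a port on every node, and the GCE controller can point the load balancer's backend service and health checks at that node port.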