I'm trying to set up an HTTPS L7 load balancer for GKE, but for some reason it is not working. Even the plain HTTP load balancer from the HTTP Load Balancing walkthrough fails. The forwarding rule's IP address is created and I'm able to ping it and telnet to port 80, but a request via curl gives me this error:
<title>502 Server Error</title>
</head><body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.
<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
All the steps went fine, and I created a firewall rule (without any target tags) for ${NODE_PORT}, but it still didn't work.
Has anyone encountered this problem?
I had the same problem with my application: it did not have an endpoint returning a success response, so the health checks were always failing.
It seems that the HTTP/HTTPS load balancer will not send requests to the cluster nodes while the health checks are failing, so my solution was to create an endpoint that always returns 200 OK. As soon as the health checks started passing, the load balancer started working.
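A quick way to confirm the backend would pass the check is to request the health-check path directly on a node (a sketch; NODE_IP and NODE_PORT are placeholders for one of your node IPs and the service's NodePort, and "/" is assumed to be the health-check path):

# Print only the HTTP status code; the health checker needs to see 200 here.
curl -s -o /dev/null -w '%{http_code}\n' "http://${NODE_IP}:${NODE_PORT}/"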
I just walked through the example and (prior to opening up a firewall for $NODE_PORT) saw the same 502 error.
If you look in the cloud console at
https://console.developers.google.com/project/<project>/loadbalancing/http/backendServices/details/web-map-backend-service
you should see that the backend shows 0 out of ${num_nodes_in_cluster} as healthy.
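You can also check backend health from the command line (a sketch; the backend service name is the one from the walkthrough, and depending on your gcloud version the --global flag may or may not be required):

gcloud compute backend-services get-health web-map-backend-service --global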
For your firewall definition, make sure you set the source filter to 130.211.0.0/22 to allow traffic from the load balancing service, and set the allowed protocols and ports to tcp:$NODE_PORT.
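A minimal gcloud sketch of that rule (the rule name is an arbitrary assumption; add --network or --target-tags to match your setup):

gcloud compute firewall-rules create allow-lb-health-checks \
    --source-ranges 130.211.0.0/22 \
    --allow tcp:${NODE_PORT}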
I use GKE, and I just walked through the example and it works fine, but when I route to my own service (a REST API), it does not work.
I found that the biggest difference between my service and the example is that the example serves a root endpoint ("/"), which my service did not.
So I solved the problem this way: I added a root endpoint ("/") to my service that just returns success (an empty response with a 200 status), re-created the ingress, waited several minutes, and then the ingress worked!
I think this problem is caused by the health checker: UNHEALTHY instances do not receive new connections.
Here is a link for health checks: https://cloud.google.com/compute/docs/load-balancing/health-checks
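To see which path and port the health check actually probes, you can inspect the check the load balancer created (a sketch; HTTP/HTTPS load balancers of this kind typically use legacy http-health-checks, and the check name below is a placeholder):

gcloud compute http-health-checks list
gcloud compute http-health-checks describe <health-check-name>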
The issue resolved itself after a few minutes (5-10 minutes) in my case.
If using an ingress, there may be additional information in the events relating to the ingress. To view these:
kubectl describe ingress example
In my case, the load balancer was returning this error because there was no web server running on my instances and instance-groups to handle the network request.
I installed nginx on all the machines and then it started working.
From then on, I made a point of adding nginx to my startup script when creating the VM/instance.
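A sketch of baking that into instance creation (the instance name, zone, and image are assumptions; adjust them to your environment):

gcloud compute instances create web-1 \
    --zone us-central1-a \
    --image-family debian-11 --image-project debian-cloud \
    --metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y nginx'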
If you are using nginx behind your load balancer, it's important that the default_server returns 200 or some other 2xx status. That means that if, for example, the default server has a rewrite rule that returns 301, the health checks will fail.
The solution is to set default_server on your main server:
server {
    # Rewrite calls to www; if this block were the default_server,
    # its 301 would fail the health check.
    listen 443;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    # Marked default_server so health-check requests (which use the IP as Host)
    # land here and receive a 2xx response.
    listen 443 default_server;
    server_name www.example.com;
    ...
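To check what the health checker will see, you can hit the instance by IP so the Host header matches no server_name and the default_server answers (a sketch; <instance-ip> is a placeholder, and -k skips certificate validation since the name will not match):

curl -k -s -o /dev/null -w '%{http_code}\n' https://<instance-ip>/

This should print 200 (or another 2xx) if the default_server is set up as above.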
Source: https://stackoverflow.com/questions/32188284/https-load-balancer-in-google-container-engine