oauth2-proxy authentication calls slow on kubernetes cluster with auth annotations for nginx ingress

Submitted by 丶灬走出姿态 on 2021-02-08 07:21:20

Question


We have secured some of our services on the K8S cluster using the approach described on this page. Concretely, we have:

  nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.${var.hosted_zone}/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

set on the service to be secured, and we have followed this tutorial to run only one oauth2_proxy deployment per cluster. We have two proxy replicas set up, both with an affinity rule that places them on the same node as the nginx ingress (a sketch of such a rule follows after the pod listing below).

$ kubectl get pods -o wide -A | egrep "nginx|oauth"                                                                    
infra-system   wer-exp-nginx-ingress-exp-controller-696f5fbd8c-bm5ld        1/1     Running   0          3h24m   10.76.11.65    ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-controller-696f5fbd8c-ldwb8        1/1     Running   0          3h24m   10.76.14.42    ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-wttss   1/1     Running   0          3h24m   10.76.15.52    ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-z998v   1/1     Running   0          3h24m   10.76.11.213   ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   oauth2-proxy-68bf786866-vcdns                                 2/2     Running   0          14s     10.76.10.106   ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   oauth2-proxy-68bf786866-wx62c                                 2/2     Running   0          14s     10.76.12.107   ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
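
For reference, a minimal sketch of the kind of podAffinity rule we mean on the oauth2-proxy Deployment (the label selector is an assumption; it depends on the labels your ingress controller pods actually carry):

  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: nginx-ingress   # assumed label on the ingress controller pods
          topologyKey: kubernetes.io/hostname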

However, a simple page load of the secured service usually takes around 10 seconds, compared to 2-3 seconds when the auth annotations are not present on the service.

We added a proxy_cache to the auth.domain.com service, which hosts our proxy, by adding

        "nginx.ingress.kubernetes.io/server-snippet": <<EOF
          proxy_cache auth_cache;
          proxy_cache_lock on;
          proxy_ignore_headers Cache-Control;
          proxy_cache_valid any 30m;
          add_header X-Cache-Status $upstream_cache_status;
        EOF

but this didn't improve the latency either. We still see every HTTP request triggering a log line in our proxy. Oddly, only some of the requests take 5 seconds.
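
(For completeness: the auth_cache zone referenced by proxy_cache has to be defined in nginx's http context. A hedged sketch of doing that through the ingress controller ConfigMap's http-snippet key; the ConfigMap name and cache path are assumptions:)

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nginx-ingress-controller   # assumed controller ConfigMap name
    namespace: infra-system
  data:
    http-snippet: |
      proxy_cache_path /tmp/auth_cache levels=1:2 keys_zone=auth_cache:10m max_size=100m;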

We are unsure whether:

  - the proxy forwards each request to the OAuth provider (GitHub), or
  - it caches the authentications

We use cookie authentication, so in theory oauth2_proxy should just decrypt the cookie and return a 200 to the nginx ingress. Since they are both on the same node, it should be fast. But it's not. Any ideas?
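
For reference, ingress-nginx can also cache the result of the auth subrequest itself, keyed on the session cookie; a hedged sketch (the cookie name assumes the oauth2_proxy default, _oauth2_proxy):

  nginx.ingress.kubernetes.io/auth-cache-key: "$cookie__oauth2_proxy"
  nginx.ingress.kubernetes.io/auth-cache-duration: "200 202 5m, 401 30s"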

Edit 1

I have analyzed the situation further. Visiting my auth server at https://oauth2.domain.com/auth in the browser and copying the request as a curl command, I found that:

  1. running 10,000 queries against my oauth server from my local machine (via curl) is very fast
  2. running 100 requests from the nginx ingress with the same curl is slow
  3. replacing the host name in the curl with the cluster IP of the auth service makes the performance increase drastically
  4. setting the annotation to nginx.ingress.kubernetes.io/auth-url: http://172.20.95.17/oauth2/auth (i.e. setting the host == cluster IP) makes the GUI load as expected (fast); see the curl sketch after this list
  5. it doesn't matter if the curl is run on the nginx-ingress or on any other pod (e.g. a test debian), the result is the same
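
A minimal sketch of the comparison from points 2-4 (the cookie value is a placeholder):

  # via the external hostname (slow from inside the cluster)
  curl -s -o /dev/null -w '%{time_total}\n' \
    -H 'Cookie: _oauth2_proxy=<cookie>' https://oauth2.domain.com/oauth2/auth

  # via the ClusterIP of the auth service directly (fast)
  curl -s -o /dev/null -w '%{time_total}\n' \
    -H 'Cookie: _oauth2_proxy=<cookie>' http://172.20.95.17/oauth2/auth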

Edit 2

A better fix I found was to set the annotations to the following

  nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

The auth-url is what the ingress queries with the user's cookie. The cluster-local DNS name of the oauth2 service points to the same service as the external DNS name, but skips the TLS round trip, and, unlike the cluster IP, the DNS name is stable across service re-creation.
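
Put together, a hedged sketch of what a secured service's Ingress ends up looking like (resource names, namespace and host are placeholders):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app        # placeholder
    namespace: default  # placeholder
    annotations:
      nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
      nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
  spec:
    ingressClassName: nginx
    rules:
      - host: app.domain.com   # placeholder
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-app
                  port:
                    number: 80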


Answer 1:


Since it's unlikely that someone will come up with the reason why this happens, I'll post my workaround as the answer.

A fix I found was to set the annotations to the following

  nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

The auth-url is what the ingress queries with the user's cookie. The cluster-local DNS name of the oauth2 service points to the same service as the external DNS name, but skips the TLS round trip, and, unlike the cluster IP, the DNS name is stable across service re-creation.




Answer 2:


In my opinion you observe the increased latency in response time in the case of the
nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
setting because the auth server URL resolves to an external service (in this case the VIP of the load balancer sitting in front of the ingress controller).

In practice this means that the traffic leaves the cluster and comes back in via the external IP of the ingress (so-called hairpin mode), which then routes to the internal ClusterIP service, adding extra hops, instead of going directly to the ClusterIP/service DNS name and staying within the Kubernetes cluster:

nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
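
A quick way to sanity-check this from inside the cluster (a hedged sketch; the pod name and image are arbitrary) is to confirm that the cluster-local name resolves directly to the service's ClusterIP, without any external hop:

  kubectl -n infra-system run dns-test --rm -it --image=busybox --restart=Never \
    -- nslookup oauth2.infra-system.svc.cluster.local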


Source: https://stackoverflow.com/questions/58997958/oauth2-proxy-authentication-calls-slow-on-kubernetes-cluster-with-auth-annotatio
