Question
Sorry for the newbie question; I am new to the k8s world. The current way of deploying is to run the app directly on EC2. The new way I am trying is to deploy the containerized app into a VPC.
In the old way, AWS routes traffic for aaa.bbb.com to vpc-ip:443, an ELB, which further routes it to an ASG on a private subnet at port 443, and the app works fine.
With k8s in the picture, what does the traffic flow look like?
I'm trying to figure out whether I could use multiple ports on the ELB, each with its own DNS name, and route traffic to a certain port on the worker nodes.
i.e.
xxx.yyy.com -> vpc-ip:443/ -> ec2:443/
aaa.bbb.com -> vpc-ip:9000/ -> ec2:9000/
Is this doable with k8s in the same VPC? Any guidance and links to documentation would be of great help.
Answer 1:
In general, you would have an AWS load balancer instance with multiple K8s worker nodes as backend servers, each on a specific port. Once traffic enters the worker nodes, networking inside K8s takes over.
Suppose you have set up two K8s Services of type LoadBalancer, exposed on node ports 38473 and 38474 for your two domains, respectively:
xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> K8s service1 -> K8s Pod1
-> Node2:38473 -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer2 -> Node1:38474 -> K8s service2 -> K8s Pod3
-> Node2:38474 -> K8s service2 -> K8s Pod4
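As a sketch, the first of those Services might look like the manifest below; the Service name, selector label, and container port are illustrative assumptions, not anything from the question:

```yaml
# Hypothetical Service for xxx.yyy.com; names, labels, and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: LoadBalancer      # provisions an AWS load balancer for this Service
  selector:
    app: app1             # assumed label on the backing Pods
  ports:
    - port: 443           # port the load balancer listens on
      targetPort: 8443    # assumed container port on the Pods
      nodePort: 38473     # port opened on every worker node
```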
The simple solution above requires you to create a separate LoadBalancer Service for each domain, which increases your cost because each one is an actual AWS load balancer instance. To reduce cost, you could run an ingress controller in your cluster and write an Ingress config. Then a single actual AWS load balancer is enough to handle your networking:
xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service1 -> K8s Pod1
-> Node2:38473 -> Ingress-service -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service2 -> K8s Pod3
-> Node2:38473 -> Ingress-service -> K8s service2 -> K8s Pod4
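A minimal Ingress sketch for the two domains, assuming the two Services above serve HTTP on port 443 inside the cluster (names and ports are illustrative):

```yaml
# Hypothetical Ingress routing both domains through one load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domains
spec:
  rules:
    - host: xxx.yyy.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 443
    - host: aaa.bbb.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 443
```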
For more information, you can refer here:
- Basic Networking and K8s Services: https://kubernetes.io/docs/concepts/services-networking/service/
- Ingress & ingress controller (Nginx Implementation): https://www.nginx.com/products/nginx/kubernetes-ingress-controller
Answer 2:
It depends on how you set up your K8s Service.
On AWS you can create a Service of type LoadBalancer to expose it to the internet, but it will cost a lot of money because each such Service owns its own ELB. For more, see https://kubernetes.io/docs/concepts/services-networking/service/
Another option is Ingress. It is more complicated if you are not familiar with K8s, but Ingress is the more popular way to expose your cluster to the internet.
This article gives a better picture of how ELB and K8s fit together:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Answer 3:
What you are trying to do isn't the most cost-optimal or standard approach on EKS. A Service of type LoadBalancer in a Kubernetes cluster maps to a Classic Load Balancer in AWS, so this approach spins up a new ELB for every LoadBalancer-type Service you create. There are multiple alternatives; pick whichever aligns best with your use case.
You can use an Application Load Balancer with EKS to handle ingress into your cluster. You would deploy an ALB Ingress Controller, which manages assigning a configured ALB to every Ingress resource you create inside your K8s cluster. ALB integration with EKS is still relatively new, though, and there are certain drawbacks to using an ALB with EKS right now. One is that it doesn't work across namespaces in your cluster: for every Ingress resource in a new namespace, the ALB Ingress Controller spins up a new ALB, which is not very cost-efficient if you have multiple namespaces.
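As a sketch, an Ingress is handed to the ALB Ingress Controller through annotations like the ones below; the host and Service names are assumptions:

```yaml
# Hypothetical Ingress for the ALB Ingress Controller; names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-alb
  annotations:
    kubernetes.io/ingress.class: alb                   # hand this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # public ALB; use "internal" for private subnets
spec:
  rules:
    - host: aaa.bbb.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service    # assumed Service name
                port:
                  number: 443
```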
You can expose your cluster using a single load balancer and route all incoming requests to an internal ingress proxy. nginx is easy to set up and works great with the k8s Ingress resource. You would deploy an nginx ingress controller in your cluster; the controller handles assigning the ELB to Ingress resources. (Bonus: nginx ingress works across namespaces, unlike the ALB.)
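A sketch of the nginx variant; the same controller, and therefore the same ELB, can serve Ingress resources in any namespace (namespace, host, and Service names are assumptions):

```yaml
# Hypothetical Ingress in its own namespace, served by a cluster-wide nginx controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: team-a                     # assumed namespace; nginx ingress spans namespaces
  annotations:
    kubernetes.io/ingress.class: nginx  # hand this Ingress to the nginx controller
spec:
  rules:
    - host: xxx.yyy.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service       # assumed Service name
                port:
                  number: 80
```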
You can also use a Network Load Balancer if you want to connect using a VPC PrivateLink. An example use case would be an API: you can run your cluster workload inside private subnets with an internal-facing NLB, then connect that NLB to the API Gateway service through a VPC PrivateLink.
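A sketch of an internal-facing NLB Service using the in-tree cloud provider annotations (on older Kubernetes versions the internal annotation takes a CIDR such as 0.0.0.0/0 instead of "true"; names and ports are assumptions):

```yaml
# Hypothetical internal NLB Service, e.g. to sit behind a VPC PrivateLink.
apiVersion: v1
kind: Service
metadata:
  name: api-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb        # NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true" # internal-facing, private subnets only
spec:
  type: LoadBalancer
  selector:
    app: api              # assumed Pod label
  ports:
    - port: 443
      targetPort: 8443    # assumed container port
```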
Here's a git repository with some helper code for deploying an ALB with EKS:
https://github.com/pahud/eks-alb-ingress
There are plenty of resources available for the nginx approach.
Source: https://stackoverflow.com/questions/54784460/aws-vpc-k8s-load-balancing