Question
EDIT: The whole point of my setup is to achieve (if possible) the following:
- I have multiple k8s nodes.
- When I contact an IP address from my company's network, it should be routed to one of my containers/pods/services/whatever.
- I should be able to easily set up that IP (e.g. in my service .yml definition).
I'm running a small Kubernetes cluster (built with kubeadm) to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers, so I can reach them directly (without a reverse proxy), as if they were any other physical server. I'm trying to achieve something similar with k8s.
I found out that:
- I have to use a Service
- I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS)
- I should use ExternalIPs
- Ingress resources are some kind of reverse proxy?
My yaml file is :
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information about my network traffic, so I can see whether packets are coming in?
EDIT :
$ kubectl get svc
NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      10.96.0.1      <none>        443/TCP   6d
nginx-service   10.102.64.83   A.B.C.D       80/TCP    23h
Thanks.
Answer 1:
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
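Note that forwarding to a local port below 1024 may require elevated privileges; a higher local port works the same way (8080 below is an arbitrary choice, not something from the original setup):
kubectl port-forward service/nginx-service 8080:80
curl http://localhost:8080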
Answer 2:
A solution that could work (and not only for testing, though it has its shortcomings) is to have your Pod map the host network, with the hostNetwork spec field set to true.
It means that you won't need a Service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your Pod to access the K8s DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:                 # required with apps/v1; must match the template labels
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:          # allow a Pod instance to run on the master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation
Answer 3:
You can just patch an external IP onto the service:
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
E.g.: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
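Keep in mind that an externalIP only works if traffic for that address actually reaches one of your nodes; kube-proxy then forwards it to the service. If the address is not already routed to a node, one common way is to add it to a node's interface yourself; eth0 below is just an assumed interface name:
sudo ip addr add 10.2.8.19/32 dev eth0   # assumes eth0 is the node's interface on that network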
Answer 4:
First of all, run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
backend    NodePort   10.100.44.154   <none>        9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    <none>        3000:30017/TCP   13h
It is clear from the above output that external IPs have not been assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services in the namespace again to check whether the external IPs were assigned:
kubectl get -n namespace services
We get output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
backend    NodePort   10.100.44.154   192.168.0.194   9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    192.168.0.194   3000:30017/TCP   13h
Cheers!!! The Kubernetes external IPs are now assigned.
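If 192.168.0.194 is actually routed to one of the nodes, the services should then answer on the external IP and the service port; a quick check using the ports from the output above:
curl http://192.168.0.194:9400   # backend
curl http://192.168.0.194:3000   # frontend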
Answer 5:
You can try a kube-keepalived-vip configuration to route the traffic, as sketched below: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
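Roughly, going by that project's README, you map a virtual IP to a service through a ConfigMap that the keepalived-vip DaemonSet watches; the ConfigMap name, the VIP 10.4.0.50 and the default/nginx-service target below are only illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
data:
  10.4.0.50: default/nginx-service   # VIP -> namespace/serviceName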
Answer 6:
You can try adding "type: NodePort" in your yaml file for the service; you'll then have a port you can use to reach it from a web browser or from outside the cluster. In my case, it helped; see the sketch below.
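Applied to the service from the question, that would look roughly like this; the nodePort value 30080 is an arbitrary pick within the default 30000-32767 range (omit it to let Kubernetes choose one):
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080   # arbitrary choice within the default NodePort range
The service is then reachable on <any-node-ip>:30080.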
Answer 7:
Just include an additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1
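A quick check that the address was applied (1.1.1.1 is just the placeholder from the command above; the service port depends on the ports your hello-world deployment exposes):
kubectl get svc my-service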
Answer 8:
I don't know if that helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use the LoadBalancer type and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that, the correct external IP showed up for the load balancer.
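Translated to the service from the question, that would be roughly the following, with A.B.C.D again standing in for the company-network address:
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  loadBalancerIP: A.B.C.D
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D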
Source: https://stackoverflow.com/questions/44519980/assign-external-ip-to-a-kubernetes-service