I am hosting my application on GKE. The kubectl version installed on the server is v1.10.11-gke.1,
and the nginx-ingress version is nginx-ingress-0.28.2.
I reproduced the behavior you observed in a test. In my own container logs, for a workload running behind an nginx-ingress controller, only the internal IP address shows up when the nginx-ingress-controller Service YAML is set to:
externalTrafficPolicy: Cluster
Setting the policy to Cluster means that every node can receive requests. Cluster obscures the client source IP: a request may be SNAT'd when it is forwarded to another node that runs the pod, so the pod sees a node's internal IP instead of the client's address.
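For orientation, here is a minimal sketch of where that field sits in the controller's Service manifest; the name, namespace, and selector labels are placeholders and will differ in your deployment:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # placeholder; use your controller's Service name
  namespace: ingress-nginx   # placeholder namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # any node may accept and SNAT the request
  selector:
    app: ingress-nginx             # placeholder; must match your controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80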
However, if you change it to:
externalTrafficPolicy: Local
the client source IP is exposed. Local preserves the client source IP but may cause imbalanced traffic spreading, because only the nodes that run the pods are considered healthy by the network load balancer, and requests are sent only to healthy nodes.
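If you prefer to patch the live Service rather than re-apply the YAML, something along these lines should work (assuming the Service is called ingress-nginx in the ingress-nginx namespace; adjust both to your setup):

kubectl patch svc ingress-nginx -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Confirm the change took effect
kubectl get svc ingress-nginx -n ingress-nginx \
  -o jsonpath='{.spec.externalTrafficPolicy}'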
Some background reading on how to preserve the source IP in your containers, and on the extra hops the source IP takes for Services with Type=NodePort, can help you understand what is happening.
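To verify the change end to end, you can send a request from outside the cluster and check which address the controller logs. A sketch, where <EXTERNAL_IP> is your load balancer's IP and the namespace and label selector are assumptions to replace with your own:

# From a machine outside the cluster
curl -i http://<EXTERNAL_IP>/

# Inspect the controller's access log; with Local you should see your real
# client IP, with Cluster a node-internal address
kubectl logs -n ingress-nginx -l app=ingress-nginx --tail=20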