Question
I have a Kafka cluster running with default settings on my local machine, outside of my minikube setup. One of my web services contains a Kafka producer, and I have deployed that service on minikube.
For the producer to connect to Kafka I am using the IP 10.0.2.2, which I also use to reach Cassandra and Dgraph running outside of minikube; for those two it works fine.
However, the Kafka producer is not working. It does not throw a "Broker may not be available" error or any other error while sending data, but nothing arrives on the consumer side.
When I run this web service outside of Kubernetes, everything works.
Does anyone have an idea what might be wrong here? Below is the Kubernetes YAML file that I am using.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: servicename
  labels:
    app: servicename
    metrics: kamon
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: servicename
        metrics: kamon
    spec:
      containers:
        - image: "image:app"
          imagePullPolicy: IfNotPresent
          name: servicename
          env:
            - name: CIRCUIT_BREAKER_MAX_FAILURES
              value: "10"
            - name: CIRCUIT_BREAKER_RESET_TIMEOUT
              value: 30s
            - name: CIRCUIT_BREAKER_CALL_TIMEOUT
              value: 30s
            - name: CONTACT_POINT_ONE
              value: "10.0.2.2"
            - name: DGRAPH_HOSTS
              value: "10.0.2.2"
            - name: DGRAPH_PORT
              value: "9080"
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: "10.0.2.2:9092"
            - name: KAFKA_PRODUCER_NOTIFICATION_CLIENT_ID
              value: "notificationProducer"
            - name: KAFKA_NOTIFICATION_TOPIC
              value: "notification"
            - name: LAGOM_PERSISTENCE_READ_SIDE_OFFSET_TIMEOUT
              value: 5s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MIN
              value: 3s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MAX
              value: 30s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_RANDOM_FACTOR
              value: "0.2"
            - name: LAGOM_PERSISTENCE_READ_SIDE_GLOBAL_PREPARE_TIMEOUT
              value: 30s
            - name: LAGOM_PERSISTENCE_READ_SIDE_RUN_ON_ROLE
              value: ""
            - name: LAGOM_PERSISTENCE_READ_SIDE_USE_DISPATCHER
              value: lagom.persistence.dispatcher
            - name: AKKA_TIMEOUT
              value: 30s
            - name: NUMBER_OF_DGRAPH_REPOSITORY_ACTORS
              value: "2"
            - name: DGRAPH_ACTOR_TIMEOUT_MILLIS
              value: "20000"
            - name: AKKA_ACTOR_PROVIDER
              value: "cluster"
            - name: AKKA_CLUSTER_SHUTDOWN_AFTER_UNSUCCESSFUL_JOIN_SEED_NODES
              value: 40s
            - name: AKKA_DISCOVERY_METHOD
              value: "kubernetes-api"
            - name: AKKA_IO_DNS_RESOLVER
              value: "async-dns"
            - name: AKKA_IO_DNS_ASYNC_DNS_PROVIDER_OBJECT
              value: "com.lightbend.rp.asyncdns.AsyncDnsProvider"
            - name: AKKA_IO_DNS_ASYNC_DNS_RESOLVE_SRV
              value: "true"
            - name: AKKA_IO_DNS_ASYNC_DNS_RESOLV_CONF
              value: "on"
            - name: AKKA_MANAGEMENT_HTTP_PORT
              value: "10002"
            - name: AKKA_MANAGEMENT_HTTP_BIND_HOSTNAME
              value: "0.0.0.0"
            - name: AKKA_MANAGEMENT_HTTP_BIND_PORT
              value: "10002"
            - name: AKKA_MANAGEMENT_CLUSTER_BOOTSTRAP_CONTACT_POINT_DISCOVERY_REQUIRED_CONTACT_POINT_NR
              value: "1"
            - name: AKKA_REMOTE_NETTY_TCP_PORT
              value: "10001"
            - name: AKKA_REMOTE_NETTY_TCP_BIND_HOSTNAME
              value: "0.0.0.0"
            - name: AKKA_REMOTE_NETTY_TCP_BIND_PORT
              value: "10001"
            - name: LAGOM_CLUSTER_EXIT_JVM_WHEN_SYSTEM_TERMINATED
              value: "on"
            - name: PLAY_SERVER_HTTP_ADDRESS
              value: "0.0.0.0"
            - name: PLAY_SERVER_HTTP_PORT
              value: "9000"
          ports:
            - containerPort: 9000
            - containerPort: 9095
            - containerPort: 10001
            - containerPort: 9092
              name: "akka-remote"
            - containerPort: 10002
              name: "akka-mgmt-http"
---
apiVersion: v1
kind: Service
metadata:
  name: servicename
  labels:
    app: servicename
spec:
  ports:
    - name: "http"
      port: 9000
      nodePort: 31001
      targetPort: 9000
    - name: "akka-remote"
      port: 10001
      protocol: TCP
      targetPort: 10001
    - name: "akka-mgmt-http"
      port: 10002
      protocol: TCP
      targetPort: 10002
  selector:
    app: servicename
  type: NodePort
Answer 1:
"I am already connecting to Cassandra and Dgraph running on the same machine as Kafka."
Well, those services don't advertise their network addresses via ZooKeeper.
"My Kafka cluster is outside of Kubernetes. However, the producer is in Kubernetes."
In order for clients inside the k8s environment to learn Kafka's location, advertised.listeners needs to be set to an external IP or DNS name that all producer/consumer services running in k8s can resolve and reach, because that is the address your clients will actually connect to. For example, PLAINTEXT://10.0.2.2:9092.
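As a minimal sketch, assuming a single PLAINTEXT listener (the listener name and the 0.0.0.0 bind are my assumptions; 10.0.2.2:9092 comes from your config), the broker side of server.properties would look something like:

    # server.properties on the broker (outside minikube)
    # Bind on all interfaces so connections from the minikube VM are accepted.
    listeners=PLAINTEXT://0.0.0.0:9092
    # Address handed back to clients in metadata responses; this is what the
    # producer pod will actually dial, so it must be reachable from inside k8s.
    advertised.listeners=PLAINTEXT://10.0.2.2:9092

Restart the broker after changing this. You can also sanity-check basic reachability from inside the pod first, e.g. kubectl exec -it <pod> -- nc -vz 10.0.2.2 9092, assuming nc is available in your image.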
In other words, if you have not set up the listeners and the broker is only listening on localhost, exposing the Kafka port externally is not enough: you might be able to reach one broker through the bootstrap address, but the broker address returned as part of the protocol's metadata exchange is not guaranteed to match your client's configuration, and that is where the advertised listener address comes into play. Note also that the Kafka producer sends asynchronously, so a wrong advertised address tends to surface as a delivery timeout in the send callback rather than as an immediate exception, which would explain why you see no error at the call site.
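Here is a minimal sketch of how to surface those asynchronous errors on the producer side, using the plain Java client. The topic name and bootstrap address are taken from your env vars; the class name and everything else are illustrative:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class NotificationProducerCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Same address the pod uses via KAFKA_BOOTSTRAP_SERVERS.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.2.2:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Fail fast instead of blocking on metadata for the default 60s.
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send() is mostly asynchronous: a blocked metadata fetch throws
                // TimeoutException here after max.block.ms, while delivery failures
                // (e.g. an unreachable advertised address) arrive in the callback.
                producer.send(new ProducerRecord<>("notification", "test"), (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("Delivered to " + metadata.topic() + "-" + metadata.partition());
                    }
                });
                producer.flush();
            }
        }
    }

Running this from inside the cluster (or logging the callback result in your service) should tell you whether the problem is reaching the bootstrap address or the address the broker advertises back.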
Source: https://stackoverflow.com/questions/52097858/kafka-producer-deployed-on-kubernetes-not-able-to-produce-to-kafka-cluster-runni