I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
What could be the reason for this behavior?
To run Kubernetes locally, I use Minikube v0.32.0 with Kubernetes v1.13.0.
My Kafka configuration:
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
        - name: kafka-data
          emptyDir: {}
      containers:
        - name: server
          image: confluent/kafka:0.10.0.0-cp1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-1:2181
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-1
            - name: KAFKA_BROKER_ID
              value: "1"
          ports:
            - containerPort: 9092
          volumeMounts:
            - mountPath: /var/lib/kafka
              name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
    - name: client
      port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
        - name: kafka-schema-registry
          image: confluent/schema-registry:3.0.0
          env:
            - name: SR_KAFKASTORE_CONNECTION_URL
              value: zookeeper-1:2181
            - name: SR_KAFKASTORE_TOPIC
              value: "_schema_registry"
            - name: SR_LISTENERS
              value: "http://0.0.0.0:8081"
          ports:
            - containerPort: 8081
Zookeeper configuration:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: elevy/zookeeper:v3.4.7
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper-1"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: /zookeeper/data
              name: data
            - mountPath: /zookeeper/wal
              name: wal
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
can happen when the client tries to connect to a broker that expects SSL connections, but the client configuration does not specify:
security.protocol=SSL
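A minimal sketch of what such a client configuration might look like, assuming a Java-style properties file; the truststore path and password below are placeholders, not values from the question's setup:

# client.properties sketch: adjust the truststore details to your environment
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=changeit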
I once fixed this issue by restarting my machine, but when it happened again I didn't want to restart, so I fixed it by setting this property in the broker's server.properties file:
advertised.listeners=PLAINTEXT://localhost:9092
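In a Kubernetes setup like the one in the question, the advertised listener has to resolve from the client's (here, the Schema Registry pod's) point of view, so localhost won't work; a sketch of the equivalent broker settings, assuming the Service is named kafka-1 as in the manifests above:

# server.properties sketch: advertise the in-cluster Service name instead of localhost
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka-1:9092

Clients then receive kafka-1:9092 in metadata responses and can resolve it through the Kubernetes Service.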
Source: https://stackoverflow.com/questions/54254686/timeoutexception-timeout-expired-while-fetching-topic-metadata-kafka