Question
I am trying to use Kafka. All configurations are done properly, but when I try to produce a message from the console I keep getting the following error:
WARN Error while fetching metadata with correlation id 39 :
{4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
kafka version: 2.11-0.9.0.0
Answer 1:
It could be related to the advertised.host.name setting in your server.properties.
What could happen is that your producer is trying to find out who is the leader for a given partition, figures out its advertised.host.name and advertised.port, and tries to connect. If these settings are not configured correctly, it may then think that the leader is unavailable.
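For example, a minimal sketch of the relevant server.properties entries for a broker of this vintage (the hostname and port below are assumptions and must match an address your clients can actually reach):
# server.properties -- hostname and port the broker advertises to clients (example values)
advertised.host.name=your.broker.hostname
advertised.port=9092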
Answer 2:
I tried all the recommendations listed here. What worked for me was to go to server.properties and add:
port = 9092
advertised.host.name = localhost
Leave listeners and advertised.listeners commented out.
Answer 3:
I had Kafka running as a Docker container, and similar messages were flooding the log. KAFKA_ADVERTISED_HOST_NAME was set to 'kafka'.
In my case the reason for the error was the missing /etc/hosts record for 'kafka' in the 'kafka' container itself. So, for example, running ping kafka inside the 'kafka' container would fail with ping: bad address 'kafka'.
In Docker terms, this problem is solved by specifying a hostname for the container.
Options to achieve it:
Options to achieve it:
- docker run --hostname ...
- docker run -it --add-host ...
- hostname in docker-compose
- hostname in AWS ECS task definition
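For example, a rough sketch of the first two options, assuming the wurstmeister/kafka image and the default port (the image and the address are assumptions, not from the original post):
# option 1: give the container the hostname 'kafka'
docker run --hostname kafka -p 9092:9092 wurstmeister/kafka
# option 2: add an /etc/hosts entry for 'kafka' inside the container
docker run -it --add-host kafka:127.0.0.1 -p 9092:9092 wurstmeister/kafka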
Answer 4:
What solved it for me is to set listeners like so:
advertised.listeners = PLAINTEXT://my.public.ip:9092
listeners = PLAINTEXT://0.0.0.0:9092
This makes the Kafka broker listen on all interfaces.
Answer 5:
I'm using kafka_2.12-0.10.2.1:
vi config/server.properties
add the line below:
listeners=PLAINTEXT://localhost:9092
There is no need to change advertised.listeners, since it picks up its value from the standard listeners property. As the comment in server.properties explains: "Hostname and port the broker will advertise to producers and consumers. If not set, it uses the value for 'listeners' if configured. Otherwise, it will use the value returned from java.net.InetAddress.getCanonicalHostName()."
stop the Kafka broker:
bin/kafka-server-stop.sh
restart broker:
bin/kafka-server-start.sh -daemon config/server.properties
and now you should not see any issues.
Answer 6:
I have been seeing this same issue for the last 2 weeks while working with Kafka, and have been reading this Stack Overflow post ever since.
After 2 weeks of analysis, I have deduced that in my case this happens when trying to produce messages to a topic that doesn't exist.
The outcome in my case is that Kafka sends an error message back but creates, at the same time, the topic that did not exist before. So if I try to produce any message again to that topic after this event, the error will not appear anymore, as the topic has been created.
PLEASE NOTE: It could be that my particular Kafka installation was configured to automatically create the topic when it does not exist; that should explain why in my case I see the issue only once for every topic after resetting the topics: your configuration might be different, and in that case you would keep receiving the same error over and over.
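If you want to confirm that behaviour, the broker setting involved is auto.create.topics.enable in server.properties (this is my reading of what happened, not something confirmed in the post above); its default is true:
# server.properties -- when true, requests for an unknown topic create it automatically
auto.create.topics.enable=true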
Regards,
Luca Tampellini
Answer 7:
We tend to get this message when we try to subscribe to a topic that has not been created yet. We generally rely on topics being created a priori in our deployed environments, but we have component tests that run against a dockerized Kafka instance, which starts clean every time.
In that case, we use AdminUtils in our test setup to check if the topic exists and create it if not. See this other Stack Overflow question for more about setting up AdminUtils.
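Outside a JVM test harness, a rough command-line equivalent of that check-then-create step (the topic name, partition/replication counts and ZooKeeper address are placeholders, not from the original answer) would be:
bin/kafka-topics.sh --zookeeper localhost:2181 --list | grep -q '^my-topic$' || \
  bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 --replication-factor 1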
Answer 8:
Another possibility for this warning (in 0.10.2.1) is that you try to poll on a topic that has just been created and the leader for this topic-partition is not yet available; you are in the middle of a leadership election.
Waiting a second between topic creation and polling is a workaround.
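A rough shell sketch of that workaround, assuming the standard kafka-topics.sh tool and placeholder topic/ZooKeeper values; it simply waits until the partition reports a leader before polling:
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 --replication-factor 1
# wait until the topic-partition reports a leader before producing or polling
until bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic | grep -q 'Leader: [0-9]'; do
  sleep 1
done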
Answer 9:
For anyone trying to run Kafka on Kubernetes and running into this error, this is what finally solved it for me:
You have to either:
- Add hostname to the pod spec, so that Kafka can find itself.
or
- If using hostPort, then you need hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet
The reason is that Kafka needs to talk to itself, and it decides to use the 'advertised' listener/hostname to find itself, rather than using localhost. Even if you have a Service that points the advertised host name at the pod, it is not visible from within the pod. I do not really know why that is the case, but at least there is a workaround.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster1
  template:
    metadata:
      labels:
        name: zookeeper-cluster1
        app: zookeeper-cluster1
    spec:
      hostname: zookeeper-cluster1
      containers:
      - name: zookeeper-cluster1
        image: wurstmeister/zookeeper:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  type: NodePort
  selector:
    app: zookeeper-cluster1
  ports:
  - name: zookeeper-cluster1
    protocol: TCP
    port: 2181
    targetPort: 2181
  - name: zookeeper-follower-cluster1
    protocol: TCP
    port: 2888
    targetPort: 2888
  - name: zookeeper-leader-cluster1
    protocol: TCP
    port: 3888
    targetPort: 3888
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      labels:
        name: kafka-cluster
        app: kafka-cluster
    spec:
      hostname: kafka-cluster
      containers:
      - name: kafka-cluster
        image: wurstmeister/kafka:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-cluster:9092
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181
        ports:
        - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  type: NodePort
  selector:
    app: kafka-cluster
  ports:
  - name: kafka-cluster
    protocol: TCP
    port: 9092
    targetPort: 9092
Answer 10:
Adding this since it may help others. A common problem can be a misconfiguration of advertised.host.name. With Docker and docker-compose, setting the name of the service inside KAFKA_ADVERTISED_HOST_NAME won't work unless you set the hostname as well. docker-compose.yml example:
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  hostname: kafka
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_CREATE_TOPICS: "test:1:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
The above, without hostname: kafka, can cause a LEADER_NOT_AVAILABLE error when trying to connect. You can find an example of a working docker-compose configuration here.
Answer 11:
In my case, it was working fine at home, but it was failing in the office the moment I connected to the office network.
So I modified config/server.properties, changing listeners=PLAINTEXT://:9092 to listeners=PLAINTEXT://localhost:9092.
In my case, I was getting the error while describing the consumer group.
Answer 12:
If you are running Kafka on your local machine, try updating $KAFKA_DIR/config/server.properties with the line below:
listeners=PLAINTEXT://localhost:9092
and then restarting kafka.
Answer 13:
I am using docker-compose to build the Kafka container using the wurstmeister/kafka image. Adding the KAFKA_ADVERTISED_PORT: 9092 property to my docker-compose file solved this error for me.
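A minimal sketch of the relevant part of the compose file (the service name and ZooKeeper address are assumptions based on the image mentioned above):
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181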
Answer 14:
When the LEADER_NOT_AVAILABLE error is thrown, just restart the Kafka broker:
/bin/kafka-server-stop.sh
followed by
/bin/kafka-server-start.sh config/server.properties
(Note: ZooKeeper must be running by this time; if you do it the other way around, it won't work.)
Answer 15:
Since I wanted my Kafka broker to connect with remote producers and consumers, I didn't want advertised.listeners to be commented out. In my case (running Kafka on Kubernetes), I found out that my Kafka pod was not assigned any cluster IP. Removing the line clusterIP: None from services.yml made Kubernetes assign an internal IP to the Kafka pod. This resolved my issue of LEADER_NOT_AVAILABLE and also the remote connection of Kafka producers/consumers.
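As a rough illustration (the Service name, labels and ports are assumptions, not the poster's actual manifest), the fix amounts to deleting a single line from the Service spec:
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  # clusterIP: None   <- removing this line lets Kubernetes assign an internal cluster IP
  selector:
    app: kafka
  ports:
  - port: 9092
    targetPort: 9092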
Answer 16:
I added the line below to config/server.properties, and that resolved an issue similar to the one above. Hope this helps; it is pretty well documented in the server.properties file, so try to read and understand it before you modify it.
advertised.listeners=PLAINTEXT://<your_kafka_server_ip>:9092
Answer 17:
For all those struggling with the Kafka SSL setup and seeing this LEADER_NOT_AVAILABLE error: one of the things that might be broken is the keystore and truststore. In the keystore you need to have the private key of the server plus the signed server certificate. In the client truststore, you need to have the intermediate CA certificate so that the client can authenticate the Kafka server. If you use SSL for inter-broker communication, you need this truststore also set in the server.properties of the brokers so they can authenticate each other.
That last piece was what I was mistakenly missing, and it cost me a lot of painful hours finding out what this LEADER_NOT_AVAILABLE error might mean. Hopefully this can help somebody.
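For reference, a hedged sketch of the broker-side pieces described above (paths, passwords and the listener address are placeholders, not from the original answer):
# server.properties -- the keystore holds the broker key + signed certificate, the truststore holds the CA
listeners=SSL://your.broker.hostname:9093
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit
# needed when the brokers also talk to each other over SSL
security.inter.broker.protocol=SSL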
Answer 18:
The issue is resolved after adding the listener setting in the server.properties file located in the config directory: listeners=PLAINTEXT://localhost(or your server):9092. Restart Kafka after this change. Version used: 2.11.
Answer 19:
For me, it happened due to a misconfiguration: the Docker port (9093) did not match the port used in the Kafka command: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TopicName.
I checked my configuration to make the ports match, and now everything is OK.
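In other words (using the same ports mentioned above), the port published by Docker and the one passed to the console producer have to agree:
# the broker is published on 9093 by Docker, so the producer must use 9093 as well
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic TopicName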
Answer 20:
For me, the cause was using a specific ZooKeeper that was not part of the Kafka package. That ZooKeeper was already installed on the machine for other purposes. Apparently Kafka does not work with just any ZooKeeper. Switching to the ZooKeeper that came with Kafka solved it for me. To not conflict with the existing ZooKeeper, I had to modify my configuration to have the ZooKeeper listen on a different port:
[root@host /opt/kafka/config]# grep 2182 *
server.properties:zookeeper.connect=localhost:2182
zookeeper.properties:clientPort=2182
Answer 21:
The advertised listeners, as mentioned in the answers above, could be one of the reasons. The other possible reasons are:
- The topic might not have been created. You can check this using
bin/kafka-topics --list --zookeeper <zookeeper_ip>:<zookeeper_port>
- Check the bootstrap servers that you have given to the producer to fetch the metadata. The bootstrap server may not contain the latest metadata about the topic (for example, when it lost its ZooKeeper claim), so you should add more than one bootstrap server (see the example at the end of this answer).
Also, ensure that you have the advertised listener set to IP:9092 instead of localhost:9092. The latter means that the broker is accessible only through localhost.
When I encountered the error, I remember having used PLAINTEXT://<ip>:<PORT> in the list of bootstrap servers (or broker list), and it worked, strangely.
bin/kafka-console-producer --topic sample --broker-list PLAINTEXT://<IP>:<PORT>
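And, for the point above about more than one bootstrap server, the broker list is simply comma-separated (the addresses are placeholders):
bin/kafka-console-producer --topic sample --broker-list <IP1>:<PORT>,<IP2>:<PORT>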
Answer 22:
For me, I didn't specify a broker ID for the Kafka instance.
It will sometimes get a new ID from ZooKeeper when it restarts in a Docker environment.
If your broker ID is greater than 1000, just specify the environment variable KAFKA_BROKER_ID.
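For instance, pinning the ID in a docker-compose service (assuming the wurstmeister/kafka image, which reads this variable; the original answer does not name the image):
kafka:
  image: wurstmeister/kafka
  environment:
    KAFKA_BROKER_ID: 1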
Use this to see brokers, topics and partitions.
brew install kafkacat
kafkacat -b [kafka_ip]:[kafka_port] -L
Source: https://stackoverflow.com/questions/35788697/leader-not-available-kafka-in-console-producer