Connect to Kafka running in Docker from local machine


Disclaimer: The following uses the confluentinc Docker images, not wurstmeister/kafka. That image has a similar configuration, but I have not tried it; if you are using it, read their Connectivity wiki.

The debezium/kafka docs on this are mentioned here.

I have tried the bitnami Kafka images and got them to work with equivalent settings (a sketch follows).
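As a rough sketch only: bitnami/kafka reads broker settings from KAFKA_CFG_-prefixed variables (plus ALLOW_PLAINTEXT_LISTENER for unauthenticated listeners). The exact values below mirror the confluentinc example later in this answer and are my assumption; check the bitnami README for the precise option names.

  -e ALLOW_PLAINTEXT_LISTENER=yes
  -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  -e KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
  -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092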

At the end of the day, it's all the same Apache Kafka running in a container. You're just dependent on how it is configured.

For supplemental reading and network diagrams, see this blog by @rmoff

That Confluent quickstart document assumes all produce and consume requests will be within the Docker network.

You could fix the problem by running your Kafka client code within its own container, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.

First, add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that maps the new listener name to a Kafka protocol.

-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

Then set up two advertised listeners on different ports (kafka:9092 here refers to the Docker container name). Notice that the listener names match the left-hand keys of the mapping above.

-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092

When running the container, add -p 29092:29092 for the host port mapping
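Putting those pieces together, a minimal docker run sketch looks like the following. The network name confluent, the untagged image, and the single-broker KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR override are my choices, not from the quickstart; adapt them to your own run command or Compose file.

  docker run -d --name kafka --net confluent -p 29092:29092 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
    -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka

KAFKA_LISTENERS binds both listeners inside the container, while KAFKA_ADVERTISED_LISTENERS is what clients are told to reconnect to, which is why the two differ.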

tl;dr (with the above settings)

When running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper

When running an app in the Docker network, use kafka:9092 for bootstrap servers and zookeeper:2181 for Zookeeper
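For example, with the Kafka console tools (a sketch; flag names vary a bit across Kafka versions, and the topic name test is just an example):

  # from the host machine, outside the Docker network
  kafka-console-producer --broker-list localhost:29092 --topic test

  # from a container attached to the same Docker network
  kafka-console-consumer --bootstrap-server kafka:9092 --topic test --from-beginning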

See the example Compose file for the full Confluent stack

When you first connect to a Kafka node, it gives you back the list of all the Kafka nodes and the URLs to connect to. Your application will then try to connect to every broker directly.

The issue is always: what URL will Kafka give you? That is why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.
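You can see what a broker is actually advertising by asking it for its cluster metadata, for example with kafkacat (since renamed kcat); a sketch against the setup above:

  kafkacat -b localhost:29092 -L
  # the "broker" lines in the output show the advertised listeners your client will reconnect to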

Now for your use case, there are several small things to think about:

Let's say you set plaintext://kafka:9092

  • This is OK if you have an application in your docker-compose that uses Kafka. That application will get a URL from Kafka with the hostname kafka, which is resolvable through the Docker network.
  • If you try to connect from your main system, or from another container that is not on the same Docker network, this will fail, as the kafka name cannot be resolved.

==> To fix this, you need a dedicated DNS server, such as a service-discovery one, but that is a lot of trouble for something this small. Alternatively, you can manually map the kafka name to the container IP in each /etc/hosts.
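For example (the printed address is hypothetical; kafka is the container name used above):

  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kafka
  # suppose it prints 172.18.0.3, then add this line to /etc/hosts on your machine:
  # 172.18.0.3  kafka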

If you set plaintext://localhost:9092

  • This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
  • This will fail if you test from an application in a container (on the same Docker network or not), because localhost is the container itself, not the Kafka one.

==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between the two containers (same IP).
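For example, you can join the client container to the Kafka container's network namespace so that localhost refers to the same network stack (a sketch, assuming the broker container is named kafka and the topic test exists):

  docker run --rm -it --network container:kafka confluentinc/cp-kafka \
    kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning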

Last option: set an IP in the name: plaintext://x.y.z.a:9092

This will be OK for everybody... BUT how can you get the x.y.z.a address?

The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to a valid address in the confluent subnet.
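To see what that subnet is before picking an address:

  docker network inspect confluent
  # look at the Subnet under the IPAM Config section (e.g. 172.19.0.0/16) and choose a free address inside it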
