I am using dockerized Kafka and have written a Kafka consumer program. It works perfectly when I run Kafka in Docker and the application on my local machine. But when I run both in docker-compose, the consumer cannot connect to the broker.
Your problem is the networking. In your Kafka config you're setting
KAFKA_ADVERTISED_HOST_NAME: localhost
but this means that any client (including your Python app) will connect to the broker and then be told by the broker to use localhost for any subsequent connections. Since localhost from your client (e.g. your Python container) is not where the broker is, those requests will fail.
You can read more about Kafka listeners in detail here: https://rmoff.net/2018/08/02/kafka-listeners-explained/
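As a rough illustration of the failure mode (a sketch, not taken from your actual setup): even if the containerised client bootstraps against the right hostname, the advertised localhost address is what it uses for everything after that:

from kafka import KafkaConsumer

# Running inside the parse-engine container with the original config:
# the bootstrap connection to kafka:9092 succeeds, but the broker then
# advertises itself as localhost:9092, so all subsequent requests go to
# localhost inside this container, where no broker is listening, and the
# consumer hangs or errors out.
consumer = KafkaConsumer('test', bootstrap_servers='kafka:9092')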
So to fix your issue, you can do one of two things:
1. Change your compose to use the internal hostname for Kafka (KAFKA_ADVERTISED_HOST_NAME: kafka). This means any clients within the Docker network will be able to access it fine, but no external clients (e.g. from your host machine) will be able to:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  parse-engine:
    build: .
    depends_on:
      - "kafka"
    command: python parse-engine.py
    ports:
      - "5000:5000"
Your clients would then access the broker at kafka:9092, so your python app would change to
consumer = KafkaConsumer('test', bootstrap_servers='kafka:9092')
2. Add a new listener to Kafka. This enables it to be accessed both internally and externally to the Docker network. Port 29092 is for access external to the Docker network (e.g. from your host), and 9092 for internal access.
You would still need to change your python program to access Kafka at the correct address. In this case since it's internal to the Docker network, you'd use:
consumer = KafkaConsumer('test', bootstrap_servers='kafka:9092')
Since I'm not familiar with the wurstmeister images, this docker-compose is based on the Confluent images, which I do know:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    # "`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-
    # An important note about accessing Kafka from clients on other machines:
    # -----------------------------------------------------------------------
    #
    # The config used here exposes port 29092 for _external_ connections to the broker
    # i.e. those from _outside_ the docker network. This could be from the host machine
    # running docker, or maybe further afield if you've got a more complicated setup.
    # If the latter is true, you will need to change the value 'localhost' in
    # KAFKA_ADVERTISED_LISTENERS to one that is resolvable to the docker host from those
    # remote clients
    #
    # For connections _internal_ to the docker network, such as from other services
    # and components, use kafka:9092.
    #
    # See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details
    # "`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-
    #
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
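To illustrate the two listeners with kafka-python (a minimal sketch, assuming the compose file above is running and the test topic exists), the only thing that changes between host and container is the bootstrap address:

from kafka import KafkaConsumer

# From the host machine, use the externally advertised listener on port 29092.
host_consumer = KafkaConsumer('test', bootstrap_servers='localhost:29092')

# From another container on the same Docker network, use the internal listener instead:
# container_consumer = KafkaConsumer('test', bootstrap_servers='kafka:9092')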
Disclaimer: I work for Confluent
This line

KAFKA_ADVERTISED_HOST_NAME: localhost

says the broker is advertising itself as being available only on localhost, which means all Kafka clients would get back localhost rather than the broker's real address. That would be fine if your clients were only located on your host - requests always go to localhost, which is forwarded to the container.

But apps in other containers need to point at the Kafka container, so it should say KAFKA_ADVERTISED_HOST_NAME: kafka, where kafka is the name of the Docker Compose service. Clients in other containers would then connect to that address.
That being said, with this line

consumer = KafkaConsumer('test', bootstrap_servers='localhost:9092')

you are pointing the Python container at itself, not the kafka container. It should use kafka:9092 instead.
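If you want the same script to work both on the host and inside a container, one option (just a sketch, not part of the original setup; KAFKA_BOOTSTRAP_SERVERS is a hypothetical variable name) is to read the bootstrap address from an environment variable that you set per environment, e.g. in your compose file:

import os
from kafka import KafkaConsumer

# Hypothetical env var: set it to kafka:9092 in docker-compose;
# it falls back to localhost:9092 when the script runs on the host.
bootstrap = os.environ.get('KAFKA_BOOTSTRAP_SERVERS', 'localhost:9092')
consumer = KafkaConsumer('test', bootstrap_servers=bootstrap)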
In my case I wanted to access the Kafka container from an external Python client running locally (as a producer). Here is the combination of containers and Python code that worked for me (macOS, Docker version 2.4.0):
zookeeper container:
docker run -d \
-p 2181:2181 \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:5.2.3
kafka container:
docker run -d \
-p 29092:29092 \
-p 9092:9092 \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,BROKER://localhost:9092 \
-e KAFKA_INTER_BROKER_LISTENER_NAME=BROKER \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CREATE_TOPICS="test:1:1" \
confluentinc/cp-enterprise-kafka:5.2.3
python client:
from kafka import KafkaProducer
import json
producer = KafkaProducer(bootstrap_servers=['localhost:29092'],
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'),
                         security_protocol='PLAINTEXT')
acc_ini = 523416
print("Sending message")
producer.send('test', {'model_id': '1','acc':str(acc_ini), 'content':'test'})
producer.flush()
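To check that the message actually arrived, a minimal matching consumer sketch (assuming the same localhost:29092 listener and the test topic created above) could look like this:

from kafka import KafkaConsumer
import json

# Read the 'test' topic from the beginning via the externally advertised listener,
# and stop after 10 seconds without new messages.
consumer = KafkaConsumer('test',
                         bootstrap_servers=['localhost:29092'],
                         auto_offset_reset='earliest',
                         value_deserializer=lambda v: json.loads(v.decode('utf-8')),
                         consumer_timeout_ms=10000)

for message in consumer:
    print(message.value)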