Apache Kafka is a distributed event streaming platform used for building real-time data pipelines and streaming applications. It enables the publishing, subscription, storage, and processing of streams of records in a fault-tolerant and scalable way.
Kafka works on a producer-consumer model:
- Producers send data to Kafka topics
- Consumers subscribe to those topics and process the data
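As a concrete illustration of the producer-consumer model, here is a minimal sketch using the kafka-python client (an assumed library; this repo's own scripts may use a different one). The `encode`/`decode` helpers are pure; the other two functions assume a broker is reachable at the given address.

```python
import json

def encode(record):
    """Kafka transports opaque bytes, so records are JSON-encoded before sending."""
    return json.dumps(record).encode("utf-8")

def decode(raw):
    return json.loads(raw.decode("utf-8"))

def produce_one(topic, record, bootstrap="localhost:9092"):
    from kafka import KafkaProducer  # pip install kafka-python
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    producer.send(topic, encode(record))
    producer.flush()   # block until the message is actually delivered
    producer.close()

def consume_all(topic, bootstrap="localhost:9092", timeout_ms=2000):
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap,
        auto_offset_reset="earliest",
        consumer_timeout_ms=timeout_ms,  # stop iterating once no new messages arrive
    )
    records = [decode(msg.value) for msg in consumer]
    consumer.close()
    return records
```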
Kafka uses Zookeeper to manage:
- Broker metadata
- Leader election for partitions
- Configuration synchronization across brokers
While modern Kafka versions support KRaft (Kafka Raft) mode, which eliminates the need for Zookeeper, Zookeeper is still commonly used in development and legacy setups.
This repository demonstrates how to set up and run Kafka locally with three different security configurations. The project is laid out as follows:
```
kafka-trifecta/
├── auth/                      # SASL/PLAIN Authentication
│   ├── Auth-README.md
│   ├── client-ui/
│   │   └── kafka_ui.py
│   ├── config/
│   │   ├── client.properties
│   │   └── kafka_server_jaas.conf
│   ├── docker-compose.yml
│   ├── kafka_auth.py
│   └── requirements.txt
│
├── no-auth/                   # No Authentication
│   ├── No-Auth-README.md
│   ├── client-ui/
│   │   └── kafka_ui.py
│   ├── docker-compose.yml
│   ├── kafka_no_auth.py
│   └── requirements.txt
│
└── sasl_ssl_auth/             # SASL_SSL Authentication with Certificates
    ├── SSL-Auth-README.md
    ├── client-ui/
    │   ├── kafka_streamlit_app.py
    │   └── ui_start_command.txt
    ├── config/
    │   ├── consumer.properties
    │   ├── kafka.env
    │   ├── kafka_server_jaas.conf
    │   ├── producer.properties
    │   └── server.properties
    ├── cert/
    │   ├── ca-cert.pem
    │   ├── kafka.p12
    │   ├── keystore/
    │   │   └── kafka.keystore.jks
    │   ├── script.sh
    │   └── truststore/
    │       ├── ca-key
    │       └── kafka.truststore.jks
    ├── docker-compose.yml
    └── requirements.txt
```
Each of the three security configurations is documented in its own README:
- No Authentication - No-Auth-README.md
- Username/Password Authentication (SASL/PLAIN) - Auth-README.md
- SASL_SSL Authentication with Certificates - SSL-Auth-README.md
Each setup includes:
- Docker Compose files for Kafka + Zookeeper
- FastAPI producer endpoints
- Streamlit UI for interactive Kafka messaging
Prerequisites:

- Docker installed and running
- Docker Compose installed (v2+ recommended)
- Java JDK (for certificate generation)
- Python 3.7+ with pip
- OpenSSL (for certificate generation)
```bash
git clone https://github.com/Manikandan-t/kafka-trifecta.git
cd kafka-trifecta
```

You can choose to set up one of the three configurations:
- No Authentication
- SASL/PLAIN Authentication
- SASL_SSL Authentication with Certificates
Let's go through each setup in detail.
First, the SASL/PLAIN (auth) setup. Start the containers:

```bash
docker compose -f auth/docker-compose.yml up -d
```

This will start:

- Zookeeper on port 2181
- Authenticated Kafka broker on port 9092
Create the topic:

```bash
kafka-topics.sh --bootstrap-server localhost:9092 \
  --topic mock_json --create \
  --partitions 1 --replication-factor 1 \
  --command-config auth/config/client.properties
```

Set up a Python environment and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r auth/requirements.txt
```

Run the services:

```bash
# Start FastAPI Kafka Auth API
python auth/kafka_auth.py

# Start Streamlit UI
streamlit run auth/client-ui/kafka_ui.py
```
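For reference, a SASL/PLAIN `client.properties` typically looks something like the following sketch. The repo's actual file may differ; the `deploy`/`deploy@123` pair is simply the default credential mentioned in the notes below.

```properties
# Hypothetical SASL/PLAIN client settings (no TLS on this listener)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="deploy" \
  password="deploy@123";
```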
The FastAPI service exposes two endpoints:

- `POST /produce`: Send a message to Kafka

  ```
  POST /produce
  Content-Type: application/json

  {
    "topic": "mock_json",
    "message": "{\"referenceId\": \"AUTH-001\", \"documentId\": \"1234567890\"}"
  }
  ```

- `GET /consume`: Read messages from Kafka

  ```
  GET /consume
  ```

  Returns:

  ```json
  {
    "messages": [
      "{\"referenceId\": \"AUTH-001\", \"documentId\": \"1234567890\"}"
    ]
  }
  ```
The Streamlit UI lets you:

- Refresh messages with a button
- Parse and display valid JSON messages
- Deduplicate already shown messages
- Display raw messages if JSON decoding fails
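The parse-and-deduplicate behaviour above can be sketched as a small helper (hypothetical code; the real logic in `kafka_ui.py` may differ):

```python
import json

def prepare_messages(raw_messages, seen):
    """Parse incoming messages, skipping any already shown and
    falling back to the raw text when JSON decoding fails."""
    prepared = []
    for raw in raw_messages:
        if raw in seen:          # deduplicate already-shown messages
            continue
        seen.add(raw)
        try:
            prepared.append({"kind": "json", "value": json.loads(raw)})
        except json.JSONDecodeError:
            prepared.append({"kind": "raw", "value": raw})
    return prepared
```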
Next, the no-auth setup. Start the containers:

```bash
docker compose -f no-auth/docker-compose.yml up -d
```

This will start:

- Zookeeper on port 2181
- Kafka broker on ports 9092 (external), 29092 (Docker), and 9999 (JMX)
Create the topic:

```bash
kafka-topics --create --topic mock_json --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```

Set up a Python environment and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r no-auth/requirements.txt
```

Run the services:

```bash
# Start FastAPI Kafka API
python no-auth/kafka_no_auth.py

# Start Streamlit UI
streamlit run no-auth/client-ui/kafka_ui.py
```
The FastAPI service exposes two endpoints:

- `POST /produce`: Send a message to Kafka

  ```
  POST /produce
  Content-Type: application/json

  {
    "topic": "mock_json",
    "message": "{\"referenceId\": \"REF-111\", \"documentId\": \"5354356788\"}"
  }
  ```

- `GET /consume`: Read messages from Kafka

  ```
  GET /consume
  ```

  Returns:

  ```json
  {
    "messages": [
      "{\"referenceId\": \"REF-111\", \"documentId\": \"5354356788\"}"
    ]
  }
  ```
The Streamlit UI lets you:

- Refresh messages with a button
- Parse and display valid JSON messages
- Deduplicate already shown messages
- Display raw messages if JSON decoding fails
Finally, the SASL_SSL setup. Start by generating the SSL certificates with the provided script:
```bash
cd sasl_ssl_auth/cert
chmod +x script.sh
./script.sh
```

This script will:

- Create a CA certificate and private key
- Generate a keystore with a key pair and a self-signed certificate
- Sign the certificate with the CA
- Create a truststore containing the CA certificate
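The first three steps can be sketched with plain `openssl` commands like the following. These are hypothetical, for illustration only; `script.sh`'s actual flags may differ, and the JKS keystore/truststore steps (done with `keytool`) are omitted here.

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# 1. Create a CA private key and self-signed CA certificate
openssl req -new -x509 -nodes -days 365 -subj "/CN=kafka-local-ca" \
  -keyout ca-key.pem -out ca-cert.pem

# 2. Create the broker key pair and a certificate signing request
openssl req -new -nodes -subj "/CN=example.kafka.com" \
  -keyout kafka-key.pem -out kafka.csr

# 3. Sign the broker certificate with the CA
openssl x509 -req -in kafka.csr -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out kafka-cert.pem

# Check that the broker certificate chains back to the CA
openssl verify -CAfile ca-cert.pem kafka-cert.pem
```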
Create the Docker network and start the containers:

```bash
docker network create kafka
docker compose -f sasl_ssl_auth/docker-compose.yml up -d
```

Check the logs to ensure Kafka is running properly:

```bash
docker logs example.kafka.com
```

You should see messages indicating the broker is ready.

Edit your `/etc/hosts` file and add:

```
172.20.0.2 example.kafka.com
```
Create the topic:

```bash
kafka-topics.sh \
  --create \
  --bootstrap-server example.kafka.com:9092 \
  --command-config sasl_ssl_auth/config/producer.properties \
  --replication-factor 1 \
  --partitions 1 \
  --topic mock_json_topic
```

Set up a Python environment and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r sasl_ssl_auth/requirements.txt
```

Export the connection settings and start the UI:

```bash
export KAFKA_BROKER="example.kafka.com:9092"
export KAFKA_TOPIC="mock_json_topic"
export KAFKA_SASL_USERNAME="user"
export KAFKA_SASL_PASSWORD="bitnami"
export KAFKA_CA_LOCATION="../cert/ca-cert.pem"

streamlit run sasl_ssl_auth/client-ui/kafka_streamlit_app.py
```

The FastAPI endpoints are not used in this configuration. Instead, you interact with Kafka directly through the Streamlit UI or CLI tools.
The Streamlit UI lets you:

- Connect to the Kafka broker with SASL/SSL authentication
- Consume messages from a specified topic
- View message details including timestamp, partition, offset, and value
- Display raw JSON messages or plain text
- Configure connection settings via sidebar
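Those environment variables map onto a client configuration roughly like this. This is a sketch assuming a librdkafka-style client such as confluent-kafka; the app's real settings (and the `group.id` name) are assumptions.

```python
import os

def sasl_ssl_config():
    """Build a librdkafka-style config dict from the environment variables set above."""
    return {
        "bootstrap.servers": os.environ.get("KAFKA_BROKER", "example.kafka.com:9092"),
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": os.environ.get("KAFKA_SASL_USERNAME", "user"),
        "sasl.password": os.environ.get("KAFKA_SASL_PASSWORD", ""),
        # Path to ca-cert.pem so the client trusts the broker's certificate
        "ssl.ca.location": os.environ.get("KAFKA_CA_LOCATION", "ca-cert.pem"),
        "group.id": "kafka-ui",          # hypothetical consumer group name
        "auto.offset.reset": "earliest",
    }
```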
Useful commands for all three setups. List topics:

```bash
# For no-auth configuration
kafka-topics --list --bootstrap-server localhost:9092

# For auth configuration
kafka-topics.sh --list --bootstrap-server localhost:9092 --command-config auth/config/client.properties

# For SASL_SSL configuration
kafka-topics.sh --list --bootstrap-server example.kafka.com:9092 --command-config sasl_ssl_auth/config/consumer.properties
```

Produce messages from the console:

```bash
# For no-auth configuration
kafka-console-producer.sh --topic mock_json --bootstrap-server localhost:9092

# For auth configuration
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mock_json --producer.config auth/config/client.properties

# For SASL_SSL configuration
kafka-console-producer.sh --topic mock_json_topic --bootstrap-server example.kafka.com:9092 --producer.config sasl_ssl_auth/config/producer.properties
```

Consume messages from the console:

```bash
# For no-auth configuration
kafka-console-consumer.sh --topic mock_json --bootstrap-server localhost:9092 --from-beginning

# For auth configuration
kafka-console-consumer.sh --topic mock_json --bootstrap-server localhost:9092 --consumer.config auth/config/client.properties --from-beginning

# For SASL_SSL configuration
kafka-console-consumer.sh --topic mock_json_topic --bootstrap-server example.kafka.com:9092 --consumer.config sasl_ssl_auth/config/consumer.properties --from-beginning
```

To stop and remove all containers:

```bash
# For no-auth configuration
docker compose -f no-auth/docker-compose.yml down

# For auth configuration
docker compose -f auth/docker-compose.yml down

# For SASL_SSL configuration
docker compose -f sasl_ssl_auth/docker-compose.yml down
```
Additional notes:

- Certificate Management: For the SASL_SSL configuration, the `ca-cert.pem` file is required for the Streamlit UI to trust the CA. Ensure this file is accessible when running the UI.
- Docker Network: The SASL_SSL configuration requires a Docker network named `kafka` to be created before starting the containers.
- Host Mapping: The `/etc/hosts` entry is necessary for the SASL_SSL configuration to resolve the broker hostname correctly.
- Security Best Practices:
  - Never use the provided default credentials (`deploy`/`deploy@123`, `user`/`bitnami`) in production
  - Rotate certificates and credentials regularly
  - Use separate certificates for different environments (development, staging, production)
- Troubleshooting Tips:
  - If you encounter connection issues, check the Kafka logs with `docker logs example.kafka.com`
  - Verify that `ssl.ca.location` points to the CA certificate
  - Ensure the required port is open (9092 in all three configurations)
This project was inspired by the need to have a simple, secure way to test and develop with Kafka locally.