A real-world event-driven microservices simulation built with Apache Kafka and Node.js — inspired by how food delivery platforms like Zomato/Swiggy handle order pipelines internally.
Most tutorials show Kafka with a single producer and consumer. This project simulates how a real company's backend actually works — multiple independent services reacting to the same event simultaneously, without knowing about each other.
When a single order is placed:
- 7 independent services react automatically
- 8 Kafka topics carry events through the pipeline
- Each service is a separate consumer group with its own offset
- Services that fail don't affect the rest of the pipeline
```
┌─────────────────────────────────────────────────────────────────┐
│ API Entry Points                                                │
│                                                                 │
│ POST /api/orders              POST /api/riders/location         │
│ (OrderService)                (RiderLocationService)            │
└────────────┬────────────────────────────┬───────────────────────┘
             │                            │
             ▼                            ▼
    ┌─────────────────┐        ┌───────────────────────┐
    │ order-placed    │        │ rider-location-updates│
    │ (3 partitions)  │        │ (3 partitions)        │
    └────────┬────────┘        └──────────┬────────────┘
             │                            │
     ┌───────┼───────┐                    ▼
     ▼       ▼       ▼         ┌─────────────────────┐
  Payment  Fraud  Analytics    │ RiderLocationService│
  Service  Service Service     │ group:rider-location│
     │                         │ -consumers          │
     │ publishes               └─────────────────────┘
     ▼
┌──────────────────┐
│ payment-events   │
│ (3 partitions)   │
└──┬───────────┬───┘
   │           │
   ▼           ▼
Restaurant  RiderAssignment
Service     Service
   │           │
   ▼           ▼
restaurant- rider-
notifications assignments
   │           │
   └──────┬────┘
          ▼
  NotificationService
  (reads 3 topics)
          │
          ▼
  notification-events
```
The rider-location flow runs in parallel with the order pipeline:

```
POST /api/riders/location
          │
          ▼
rider-location-updates
          │
     ┌────┴────┐
     ▼         ▼
RiderLocation Analytics
Service       Service
```
| Topic | Producer | Consumers |
|---|---|---|
| order-placed | OrderService | PaymentService, FraudService, AnalyticsService |
| payment-events | PaymentService | RestaurantService, RiderAssignmentService, NotificationService, AnalyticsService |
| fraud-detection-events | FraudService | — |
| restaurant-notifications | RestaurantService | NotificationService |
| rider-assignments | RiderAssignmentService | NotificationService |
| rider-location-updates | Rider phones (simulated) | RiderLocationService, AnalyticsService |
| notification-events | NotificationService | — |
| analytics-events | AnalyticsService | — |
- Node.js — runtime
- Express — HTTP API layer
- KafkaJS — Kafka client for Node.js
- Apache Kafka (KRaft mode) — message broker
- Docker — runs Kafka locally with no Zookeeper
```
src/
├── config/
│   └── kafka.js              ← Kafka client singleton
├── routes/
│   ├── orderRoutes.js        ← POST /api/orders
│   └── riderRoutes.js        ← POST /api/riders/location
├── services/
│   ├── adminService.js       ← topic management
│   ├── orderService.js       ← producer: publishes to order-placed
│   ├── paymentService.js     ← consumer + producer
│   ├── fraudService.js       ← consumer + producer
│   ├── restaurantService.js  ← consumer + producer
│   ├── riderAssignmentService.js ← consumer + producer
│   ├── riderLocationService.js   ← consumer + producer
│   ├── notificationService.js    ← consumer + producer
│   └── analyticsService.js   ← consumer + producer
└── index.js                  ← boots Kafka + Express
```
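The shared client in `config/kafka.js` might look like the sketch below (the `clientId` value is an assumption; the actual file may differ):

```javascript
// src/config/kafka.js — sketch of the shared KafkaJS client singleton.
// Every producer and consumer in the app is derived from this one instance.
const { Kafka } = require("kafkajs");

const kafka = new Kafka({
  clientId: "zomato-kafka",            // assumed name, not confirmed by the repo
  brokers: [process.env.KAFKA_BROKER || "localhost:9092"],
});

module.exports = kafka;
```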
- Node.js 18+
- Docker
```shell
docker run -d \
  --name kafka \
  -p 9092:9092 \
  -e KAFKA_NODE_ID=1 \
  -e KAFKA_PROCESS_ROLES=broker,controller \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_LOG_DIRS=/var/lib/kafka/data \
  apache/kafka:latest
```

Clone the repo and install dependencies:

```shell
git clone https://github.com/YOUR_USERNAME/zomato-kafka.git
cd zomato-kafka
npm install
```

Create your environment file:

```shell
cp .env.example .env
```

```
KAFKA_BROKER=localhost:9092
PORT=8080
```

Start the server:

```shell
npm run dev
```

On boot, the server will:
- Create all 8 Kafka topics automatically
- Start all 7 consumers
- Start Express on port 8080
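The topic list the admin service creates on boot can be captured as plain data, matching the table above (partition counts beyond the two stated in the diagram are an assumption):

```javascript
// The 8 topics created on boot. The diagram states 3 partitions for
// order-placed and rider-location-updates; 3 elsewhere is an assumption.
const TOPICS = [
  { topic: "order-placed", numPartitions: 3 },
  { topic: "payment-events", numPartitions: 3 },
  { topic: "fraud-detection-events", numPartitions: 3 },
  { topic: "restaurant-notifications", numPartitions: 3 },
  { topic: "rider-assignments", numPartitions: 3 },
  { topic: "rider-location-updates", numPartitions: 3 },
  { topic: "notification-events", numPartitions: 3 },
  { topic: "analytics-events", numPartitions: 3 },
];

// adminService.js would hand this straight to KafkaJS, roughly:
//   await kafka.admin().createTopics({ topics: TOPICS });
module.exports = TOPICS;
```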
POST http://localhost:8080/api/orders

```json
{
  "orderId": "ORD-001",
  "userId": "USER-42",
  "restaurant": "Pizza Palace",
  "items": ["Margherita", "Coke"],
  "amount": 450,
  "location": "Bengaluru"
}
```

POST http://localhost:8080/api/riders/location

```json
{
  "riderId": "RIDER-1",
  "lat": 12.97,
  "lng": 77.59
}
```

GET http://localhost:8080/health

🛒 OrderService → publishes to order-placed
💳 PaymentService → reads order, processes payment, publishes result
🔍 FraudService → reads same order independently, checks for fraud
🍽️ RestaurantService → reads payment success, notifies restaurant
🛵 RiderAssignment → reads payment success, assigns nearest rider
🔔 NotificationService → reads multiple topics, sends push notifications
📊 AnalyticsService → reads everything, tracks all stats silently
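The actual heuristic in `fraudService.js` isn't shown here; a hypothetical version of the rule might be as simple as:

```javascript
// Hypothetical fraud rule (the real check in fraudService.js may differ):
// flag orders that are unusually large or come from an unknown location.
function looksFraudulent(order) {
  return order.amount >= 5000 || order.location === "Unknown";
}

looksFraudulent({ amount: 450, location: "Bengaluru" }); // false
looksFraudulent({ amount: 9999, location: "Unknown" });  // true
```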
Try placing a suspicious order to trigger the fraud detector:
```json
{
  "orderId": "ORD-999",
  "userId": "USER-NEW",
  "restaurant": "Luxury Bites",
  "items": ["Wagyu Steak"],
  "amount": 9999,
  "location": "Unknown"
}
```

Consumer Groups — PaymentService and FraudService both read from order-placed using different group IDs. They each get every message independently, at their own pace.
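The independent-offset idea can be sketched in miniature, with no Kafka at all: a topic is just an append-only log, and each group tracks its own read position (in real Kafka, offsets are per partition and stored by the broker):

```javascript
// Toy model of consumer-group fan-out: one shared log, per-group offsets.
const log = []; // stands in for the order-placed topic

function publish(event) {
  log.push(event);
}

// Each group keeps an independent offset into the same log.
const groups = {
  "payment-consumers": { offset: 0, seen: [] },
  "fraud-consumers": { offset: 0, seen: [] },
};

function poll(groupId) {
  const g = groups[groupId];
  while (g.offset < log.length) {
    g.seen.push(log[g.offset]);
    g.offset += 1; // "committing" the offset
  }
  return g.seen;
}

publish({ orderId: "ORD-001", amount: 450 });
publish({ orderId: "ORD-002", amount: 9999 });

// Both groups receive both events, at their own pace.
poll("payment-consumers");
poll("fraud-consumers");
```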
Consume-Transform-Produce pattern — PaymentService consumes from one topic, processes the data, and produces to another. Each service in the chain does this.
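The pattern in miniature, with plain functions standing in for KafkaJS consumers and producers (topic names match the pipeline; the payment logic is an illustrative assumption):

```javascript
// Consume-transform-produce, stripped to its essentials.
const topics = { "order-placed": [], "payment-events": [] };

function produce(topic, event) {
  topics[topic].push(event);
}

// PaymentService: consume an order, transform it into a payment result,
// produce that result onto the next topic in the chain.
function paymentService(order) {
  produce("payment-events", {
    orderId: order.orderId,
    status: order.amount > 0 ? "PAYMENT_SUCCESS" : "PAYMENT_FAILED",
  });
}

produce("order-placed", { orderId: "ORD-001", amount: 450 });
topics["order-placed"].forEach(paymentService);
// topics["payment-events"] now holds the transformed event
```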
Decoupled services — OrderService has no idea PaymentService exists. It just publishes an event. Adding a new service (e.g. LoyaltyPointsService) requires zero changes to existing code.
Fault isolation — If AnalyticsService crashes, orders keep flowing. Kafka holds the unread messages until Analytics comes back up and resumes from its last offset.
- How Kafka brokers, producers and consumers relate to each other
- Why partitions exist and how consumer groups distribute work across them
- The consume-transform-produce pattern used in real microservice pipelines
- How big companies like Zomato/Swiggy use Kafka as the central nervous system
- KRaft mode — running Kafka without Zookeeper
- Difference between synchronous HTTP calls and asynchronous event-driven architecture