Kafka + .NET Core (Enterprise) Interview Questions & Answers
1)
How do you implement Kafka in .NET Core?
Answer:
In .NET Core, I implement Kafka using the Confluent.Kafka library.
I build:
- A singleton Kafka producer using ProducerBuilder
- A Kafka consumer inside a .NET BackgroundService
- Manual offset commits to ensure reliability
- For production reliability, the Outbox Pattern using EF Core + SQL Server, where events are stored in an Outbox table and a relay worker publishes them to Kafka
2)
What is your Kafka architecture in .NET?
Answer:
My Kafka architecture has:
- Producer API (Web API) → writes DB + Outbox record
- Outbox Relay Worker → reads Outbox → publishes to Kafka
- Consumer Worker Service → reads Kafka topic → processes message
- DLQ topic for poison messages
- Optional: Schema Registry for Avro/Protobuf contracts
This ensures eventual consistency and avoids the dual-write problem.
3)
When do you choose Kafka?
Answer:
I choose Kafka when I need:
- High throughput message processing
- Event-driven microservices
- Decoupling between services
- Reliable async workflows
- Replayable event streams
- Real-time streaming pipelines
Kafka is ideal when we want durable event logs, not just transient messaging.
4)
When should you NOT choose Kafka?
Answer:
Kafka is not ideal when:
- The system is small and simple with low traffic
- Only point-to-point communication is needed
- You need immediate request-response (synchronous) communication
- The team cannot manage Kafka infrastructure
- You only need simple job queues (RabbitMQ may be simpler)
Kafka is powerful but comes with operational complexity.
5)
What is the Outbox Pattern and why did you use it?
Answer:
The Outbox pattern solves the dual-write problem.
If we update SQL Server and publish to Kafka directly, the Kafka publish might fail and the data becomes inconsistent.
So we:
- Save the business data + Outbox message in the same DB transaction
- A background worker publishes the outbox record to Kafka
- Once published, it marks the record as processed
This guarantees eventual consistency.
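The write side of the pattern can be sketched with EF Core like this (a minimal sketch: Order, OutboxMessage, and AppDbContext are illustrative names, not a fixed schema):

```csharp
using System.Text.Json;

// Illustrative outbox row; the relay worker queries rows where ProcessedUtc is null.
public class OutboxMessage
{
    public Guid Id { get; set; }
    public string Topic { get; set; } = default!;
    public string Payload { get; set; } = default!;
    public DateTime CreatedUtc { get; set; }
    public DateTime? ProcessedUtc { get; set; }
}

public async Task PlaceOrderAsync(Order order, AppDbContext db)
{
    // Both rows are saved in the same EF Core unit of work (one DB transaction),
    // so the event is persisted if and only if the business data is.
    db.Orders.Add(order);
    db.OutboxMessages.Add(new OutboxMessage
    {
        Id = Guid.NewGuid(),
        Topic = "orders",
        Payload = JsonSerializer.Serialize(order),
        CreatedUtc = DateTime.UtcNow
    });
    await db.SaveChangesAsync(); // single atomic commit
}
```

Because the outbox row and the business row commit together, there is no window where one exists without the other.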
6)
How do you ensure Kafka Producer is efficient in .NET?
Answer:
I register Kafka producer as a Singleton because:
- Producer is thread-safe
- It maintains internal buffers and TCP connections
- Creating producers repeatedly increases latency and memory usage
- Singleton gives maximum throughput
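A minimal sketch of the singleton registration in Program.cs (broker address and config values are placeholders):

```csharp
using Confluent.Kafka;

// One shared producer for the application's lifetime; it owns the TCP
// connections and internal buffers, so it must not be created per request.
builder.Services.AddSingleton<IProducer<string, string>>(_ =>
{
    var config = new ProducerConfig
    {
        BootstrapServers = "localhost:9092", // placeholder
        Acks = Acks.All,
        EnableIdempotence = true
    };
    return new ProducerBuilder<string, string>(config).Build();
});
```

Controllers and workers then take `IProducer<string, string>` via constructor injection.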
7)
How do you handle Kafka Consumer processing safely?
Answer:
I use:
- .NET BackgroundService
- EnableAutoCommit = false
- Process message fully first
- Then commit offset manually
This ensures at-least-once delivery.
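The loop above can be sketched as a BackgroundService (a minimal sketch; topic, group, and Process are illustrative):

```csharp
using Confluent.Kafka;
using Microsoft.Extensions.Hosting;

public class OrderConsumer : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken) =>
        Task.Run(() =>
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",   // placeholder
                GroupId = "order-processor",
                EnableAutoCommit = false,              // commit manually
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using var consumer = new ConsumerBuilder<string, string>(config).Build();
            consumer.Subscribe("orders");

            while (!stoppingToken.IsCancellationRequested)
            {
                var result = consumer.Consume(stoppingToken);
                Process(result.Message.Value);   // business logic first
                consumer.Commit(result);         // then commit the offset
            }
        }, stoppingToken);

    private void Process(string message) { /* illustrative handler */ }
}
```

Committing only after `Process` succeeds is what gives at-least-once semantics: a crash mid-processing means the message is redelivered.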
8)
How do you handle poison messages in Kafka?
Answer:
I use a Dead Letter Topic (DLT/DLQ).
If a message fails repeatedly:
- Catch exception
- Publish message + error details to DLQ topic
- Commit offset so consumer doesn’t get stuck
- Continue processing next messages
This avoids blocking partitions.
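Inside the consumer loop, the DLQ routing might look like this (a sketch; topic names and the injected `dlqProducer` are assumptions):

```csharp
using System.Text;
using Confluent.Kafka;

try
{
    Process(result.Message.Value);
}
catch (Exception ex)
{
    // Park the failed message with error context; the original partition keeps moving.
    await dlqProducer.ProduceAsync("orders.dlq", new Message<string, string>
    {
        Key = result.Message.Key,
        Value = result.Message.Value,
        Headers = new Headers
        {
            { "error", Encoding.UTF8.GetBytes(ex.Message) },
            { "source-topic", Encoding.UTF8.GetBytes(result.Topic) }
        }
    });
}
consumer.Commit(result); // commit either way; the failure is preserved in the DLQ
```

In practice you would only route to the DLQ after a bounded number of in-process retries.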
9)
How do you secure Kafka communication?
Answer:
Kafka security is done using:
✅ 1. SSL/TLS
Encrypts traffic between producers/consumers and brokers.
✅ 2. SASL Authentication
Kafka supports:
- SASL/PLAIN
- SASL/SCRAM
- SASL/OAUTHBEARER
- SASL/GSSAPI (Kerberos)
✅ 3. ACL Authorization
We configure ACL rules such as:
- Which service can write to which topic
- Which consumer group can read from which topic
In .NET, we configure these using ProducerConfig and ConsumerConfig.
10)
How do you configure SSL/SASL in .NET Kafka?
Answer:
Using Confluent.Kafka configs like:
- SecurityProtocol = SaslSsl
- SaslMechanism = ScramSha256
- SaslUsername
- SaslPassword
- SslCaLocation
This ensures secure connection.
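Put together, a secure client config looks like this (endpoint, credentials, and CA path are placeholders):

```csharp
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "broker.example.com:9093",          // placeholder
    SecurityProtocol = SecurityProtocol.SaslSsl,           // TLS + SASL
    SaslMechanism = SaslMechanism.ScramSha256,
    SaslUsername = Environment.GetEnvironmentVariable("KAFKA_USER"),
    SaslPassword = Environment.GetEnvironmentVariable("KAFKA_PASSWORD"),
    SslCaLocation = "/etc/kafka/ca.crt"                    // placeholder path
};
```

The same properties exist on ConsumerConfig, so consumers are secured identically.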
11)
How do you avoid duplicate messages in Kafka?
Answer:
Duplicates can happen due to retries.
Solutions:
- Use Idempotent producer (EnableIdempotence = true)
- Use unique event IDs
- Consumer should be idempotent (check if already processed)
- Store processed event IDs in DB
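The consumer-side idempotency check can be sketched like this (ProcessedEvents, HandleAsync, and the key-as-event-ID convention are illustrative assumptions):

```csharp
// Skip messages we have already handled, using a DB table of processed event IDs.
var eventId = Guid.Parse(result.Message.Key);

if (await db.ProcessedEvents.AnyAsync(e => e.EventId == eventId, stoppingToken))
{
    consumer.Commit(result);   // duplicate delivery: acknowledge and move on
    return;
}

await HandleAsync(result.Message.Value);                    // real processing
db.ProcessedEvents.Add(new ProcessedEvent { EventId = eventId });
await db.SaveChangesAsync(stoppingToken);                   // record it as done
consumer.Commit(result);
```

Recording the event ID in the same database as the business effect keeps the check and the side effect consistent.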
12)
How do you implement Exactly Once Semantics (EOS)?
Answer:
Kafka EOS requires:
- EnableIdempotence = true
- TransactionalId
- Use transactions:
  - InitTransactions()
  - BeginTransaction()
  - CommitTransaction()
This ensures the produce and offset commit happen atomically.
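A minimal transactional-producer sketch (broker address and TransactionalId are placeholders; the id must be stable per producer instance across restarts):

```csharp
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "localhost:9092",   // placeholder
    EnableIdempotence = true,
    TransactionalId = "order-service-1"    // stable per instance
};

using var producer = new ProducerBuilder<string, string>(config).Build();

producer.InitTransactions(TimeSpan.FromSeconds(10));
producer.BeginTransaction();
try
{
    producer.Produce("orders", new Message<string, string> { Key = "42", Value = "..." });
    // In a consume-transform-produce loop, SendOffsetsToTransaction(...) is
    // called here so offsets commit atomically with the produced messages.
    producer.CommitTransaction();
}
catch
{
    producer.AbortTransaction();   // nothing becomes visible to consumers
    throw;
}
```

Consumers must also run with `IsolationLevel = ReadCommitted` to avoid seeing aborted data.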
13)
What delivery guarantees does Kafka provide?
Answer:
Kafka supports:
- At most once
→ commit first, then process (risk losing messages)
- At least once
→ process first, then commit (may duplicate)
- Exactly once
→ producer transactions + idempotence
Most enterprise apps use at-least-once with idempotent consumer logic.
14)
Why did you use BackgroundService for consumer?
Answer:
Because it:
- Runs continuously in the background
- Integrates with .NET lifecycle
- Supports graceful shutdown via CancellationToken
- Easy to deploy as a worker microservice
15)
How do you ensure graceful shutdown for consumer?
Answer:
I pass the CancellationToken into:
consumer.Consume(stoppingToken);
So when app stops:
- Consumer stops cleanly
- Commits final offsets
- Leaves consumer group properly
- Avoids unnecessary rebalance delays
16)
How do you scale Kafka consumers?
Answer:
Kafka scales via:
- Partitions
- Consumer groups
If a topic has 6 partitions and we run 6 consumer instances in the same group, Kafka assigns one partition per consumer.
Scaling rule:
✅ Consumers in group ≤ partitions
Extra consumers stay idle.
17)
Why partition key is important?
Answer:
Kafka guarantees ordering only within a partition.
So if we need ordering per OrderId, we set:
- Key = OrderId
This ensures all events for that order go to the same partition.
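In code this is just the message key (a sketch; `order` and the topic name are illustrative):

```csharp
using System.Text.Json;
using Confluent.Kafka;

// The partitioner hashes the key, so every event with the same OrderId
// is routed to the same partition and therefore stays ordered.
await producer.ProduceAsync("orders", new Message<string, string>
{
    Key = order.OrderId.ToString(),
    Value = JsonSerializer.Serialize(order)
});
```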
18)
How do you monitor Kafka lag?
Answer:
Lag = latest produced offset minus last committed consumer offset.
We monitor lag using:
- Prometheus + Grafana
- Confluent Control Center
- Kafka UI tools
High lag means the consumer is falling behind.
19)
How do you handle retry in Outbox worker?
Answer:
In outbox worker, I maintain:
- RetryCount
- Max retry limit (e.g., 5)
- If retry exceeds → mark as Failed
- Optionally publish to DLQ
This prevents infinite retry loops.
20)
How do you prevent multiple instances from publishing same Outbox message?
Answer:
In multi-instance deployments, we use:
- Row locking (UPDLOCK, READPAST)
- Or a status update with concurrency token
- Or a “ClaimedBy” column
- Or SQL Server transaction + optimistic concurrency
This ensures only one worker processes a message.
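The row-locking option can be sketched with EF Core and SQL Server lock hints (table and column names are illustrative; the query must run inside an explicit transaction for the locks to be held):

```csharp
using Microsoft.EntityFrameworkCore;

// UPDLOCK: claimed rows stay locked until our transaction commits.
// READPAST: other worker instances skip locked rows instead of blocking,
// so no two workers ever pick up the same outbox message.
await using var tx = await db.Database.BeginTransactionAsync(stoppingToken);

var batch = await db.OutboxMessages
    .FromSqlRaw(@"
        SELECT TOP (20) *
        FROM OutboxMessages WITH (UPDLOCK, READPAST)
        WHERE ProcessedUtc IS NULL
        ORDER BY CreatedUtc")
    .ToListAsync(stoppingToken);

// ... publish each row to Kafka, set ProcessedUtc, SaveChanges, then:
await tx.CommitAsync(stoppingToken);
```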
21)
Why not publish to Kafka directly from Controller?
Answer:
Because if the DB commit succeeds and the Kafka publish fails, the system becomes inconsistent.
Direct publish = dual write risk.
Outbox ensures:
- DB is source of truth
- Kafka publish is retried safely
22)
How do you ensure ordering in Kafka?
Answer:
Kafka ordering is guaranteed:
✅ within the same partition only.
To enforce ordering:
- Use a partition key (OrderId)
- Ensure consumers process sequentially per partition
23)
What is Consumer Rebalance?
Answer:
Rebalance happens when:
- Consumer joins group
- Consumer leaves/crashes
- Partitions are redistributed
During a rebalance, consumers pause processing temporarily.
24)
What is librdkafka?
Answer:
librdkafka is the high-performance C library used by Confluent.Kafka.
The .NET package is a wrapper over librdkafka, giving near-native performance.
25)
What are the main Kafka configs you tune in production?
Answer:
Producer:
- Acks
- EnableIdempotence
- LingerMs
- BatchSize
- CompressionType
Consumer:
- MaxPollIntervalMs
- SessionTimeoutMs
- EnableAutoCommit
- AutoOffsetReset
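As a starting point, those knobs map onto the configs like this (the values are workload-dependent examples, not recommendations):

```csharp
using Confluent.Kafka;

var producerConfig = new ProducerConfig
{
    Acks = Acks.All,                 // wait for all in-sync replicas
    EnableIdempotence = true,        // dedupe broker-side on retries
    LingerMs = 5,                    // small batching delay for throughput
    BatchSize = 64 * 1024,           // bytes per batch
    CompressionType = CompressionType.Lz4
};

var consumerConfig = new ConsumerConfig
{
    MaxPollIntervalMs = 300_000,     // max gap between polls before eviction
    SessionTimeoutMs = 45_000,       // heartbeat window for liveness
    EnableAutoCommit = false,        // commit manually after processing
    AutoOffsetReset = AutoOffsetReset.Earliest
};
```

Raise LingerMs/BatchSize for throughput, lower them for latency; raise MaxPollIntervalMs when per-message processing is slow.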
⭐ Security Questions (Very Important)
26)
How do you secure Kafka topics in production?
Answer:
I secure Kafka using:
- TLS encryption
- SASL authentication
- ACL authorization
- Separate service accounts per microservice
- Restrict topic permissions per producer/consumer group
27)
How do you secure your .NET microservices along with Kafka?
Answer:
I secure the application by:
- Using JWT authentication for APIs
- Role-based authorization
- Secrets stored in Azure Key Vault or AWS Secrets Manager
- Kafka credentials stored as environment variables
- TLS + SASL for Kafka connections
🔥 Final Interview “Story” (You can say this confidently)
If the interviewer asks: “Explain your Kafka implementation end-to-end.”
Say:
In my .NET 8 microservice, the API does not publish directly to Kafka.
Instead, it stores the business event in an Outbox table in SQL Server using EF Core, inside the same transaction as the main business data.
A BackgroundService called OutboxRelayWorker reads pending messages, publishes them to Kafka using a singleton Confluent producer, and marks them as processed.
The consumer runs as another Worker Service using manual commits.
If message processing fails repeatedly, we route the message to a Dead Letter topic to prevent blocking partitions.
This design ensures reliability and scalability, and prevents the dual-write consistency issue.