Integrating Apache Kafka with .NET Core (Producer, Consumer, Outbox, Schema Registry + Best Practices)
Kafka + .NET Core is one of the best combinations for building high-performance, event-driven microservices. Even though Apache Kafka is written mainly in Java and Scala, the .NET ecosystem has a powerful, production-grade Kafka client maintained by Confluent, and it is widely used in enterprise systems.
In this article, we will cover:
✅ Kafka concepts in .NET
✅ Producer & Consumer implementation
✅ BackgroundService consumer approach
✅ Schema Registry (Avro/Protobuf)
✅ Outbox Pattern (reliability)
✅ Best practices (DLQ, Singleton producer, partition key, graceful shutdown)
✅ Interview questions (Senior-level)
⭐ Why Kafka in .NET Core?
Kafka is used when your system needs:
- High throughput messaging
- Event-driven architecture
- Microservice communication
- Loose coupling
- Reliable asynchronous processing
- Real-time streaming
Examples:
- OrderPlaced → PaymentService → InventoryService → EmailService
- Logs & analytics pipelines
- Notification systems
- Real-time dashboards
🛠 The Go-To Kafka Library for .NET: Confluent.Kafka
The industry-standard NuGet package for Kafka in .NET is:
✅ Confluent.Kafka
It is a wrapper around librdkafka, a high-performance C library, so it delivers near-native performance.
Install the package:
dotnet add package Confluent.Kafka
🧱 Kafka Core Components in .NET
| Component | Role in .NET |
| --- | --- |
| Producer | Sends messages using ProducerBuilder |
| Consumer | Reads messages using ConsumerBuilder |
| AdminClient | Creates and deletes topics, reads metadata |
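As a quick illustration of the AdminClient, here is a minimal sketch of creating a topic programmatically (the topic name, partition count, and replication factor are placeholder values for local development):

```csharp
using Confluent.Kafka;
using Confluent.Kafka.Admin;

var adminConfig = new AdminClientConfig { BootstrapServers = "localhost:9092" };

using var admin = new AdminClientBuilder(adminConfig).Build();

// Creates the topic; throws CreateTopicsException if it already exists.
await admin.CreateTopicsAsync(new[]
{
    new TopicSpecification
    {
        Name = "my-topic",      // placeholder topic name
        NumPartitions = 3,
        ReplicationFactor = 1   // 1 is only appropriate for local dev
    }
});
```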
🚀 Kafka Implementation in .NET Core Web API
In real projects, your Web API will usually:
- Produce messages inside Controllers/Services
- Consume messages using BackgroundService
1️⃣ Kafka Producer Example (Sending Messages)
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "localhost:9092"
};

using var producer = new ProducerBuilder<Null, string>(config).Build();

var result = await producer.ProduceAsync(
    "my-topic",
    new Message<Null, string> { Value = "Hello Kafka!" }
);

Console.WriteLine($"Delivered to: {result.TopicPartitionOffset}");
What happens internally?
- The producer batches messages
- Maintains connections
- Sends delivery reports
- Optimizes throughput automatically
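The example above sends keyless messages, which Kafka spreads across partitions. When ordering matters, set a partition key: messages with the same key always land on the same partition, so they are consumed in order. A keyed produce looks like this (the key and topic names are illustrative):

```csharp
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

// Key type is string now, so the key participates in partition selection.
using var producer = new ProducerBuilder<string, string>(config).Build();

// All events keyed "order-100" hash to the same partition,
// so consumers see this order's events in sequence.
await producer.ProduceAsync(
    "orders",
    new Message<string, string>
    {
        Key = "order-100",   // partition key (illustrative)
        Value = "OrderPlaced"
    });
```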
2️⃣ Kafka Consumer Example (Receiving Messages)
In .NET, consumers should run as a Worker / Background Service.
using Confluent.Kafka;
using Microsoft.Extensions.Hosting;

public class KafkaWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var config = new ConsumerConfig
        {
            GroupId = "dotnet-consumer-group",
            BootstrapServers = "localhost:9092",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("my-topic");

        // Consume() blocks, so run the loop on a background thread
        // instead of stalling host startup.
        await Task.Run(() =>
        {
            try
            {
                while (!stoppingToken.IsCancellationRequested)
                {
                    var consumeResult = consumer.Consume(stoppingToken);
                    Console.WriteLine($"Received: {consumeResult.Message.Value}");
                }
            }
            catch (OperationCanceledException)
            {
                // Shutdown was requested — fall through to cleanup.
            }
            finally
            {
                consumer.Close(); // commit offsets and leave the group cleanly
            }
        }, stoppingToken);
    }
}
💡 Best Practices (Real Enterprise Guidelines)
This section is the most important for production.
✅ 1. Register Producer as Singleton (DI Best Practice)
Kafka Producer is:
- Thread-safe
- Expensive to create
- Maintains internal buffers & metadata
So always register it as Singleton.
❌ Wrong:
- Creating producer per request
✅ Correct:
- Create once, reuse forever
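A minimal registration sketch, assuming a typical ASP.NET Core Program.cs (the broker address is a local-dev placeholder):

```csharp
using Confluent.Kafka;

var builder = WebApplication.CreateBuilder(args);

// One producer instance for the whole application lifetime.
builder.Services.AddSingleton<IProducer<Null, string>>(_ =>
{
    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
    return new ProducerBuilder<Null, string>(config).Build();
});

var app = builder.Build();

// Flush any buffered messages before the process exits.
app.Lifetime.ApplicationStopping.Register(() =>
{
    app.Services.GetRequiredService<IProducer<Null, string>>()
        .Flush(TimeSpan.FromSeconds(5));
});

app.Run();
```

Controllers and services then take `IProducer<Null, string>` as a constructor dependency and reuse the same instance.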
✅ 2. Always Use BackgroundService for Consumers
Kafka consumers are long-running, so the best way to host them is:
- IHostedService
- BackgroundService
Benefits:
- Runs with app lifecycle
- Supports graceful shutdown
- Clean integration with .NET Core
✅ 3. Use Schema Registry (Avoid Raw JSON Strings)
Many developers send JSON like:
{"orderId": 100, "amount": 500}
But raw JSON has no contract enforcement.
So in enterprise Kafka, use:
✅ Avro
✅ Protobuf
With Schema Registry, your messages become versioned, validated, and safe.
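With the Confluent.SchemaRegistry.Serdes.Avro package, wiring an Avro serializer into the producer looks roughly like this (OrderPlaced is a hypothetical class generated from an .avsc schema; topic and key are illustrative):

```csharp
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

var schemaRegistryConfig = new SchemaRegistryConfig { Url = "http://localhost:8081" };
var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };

using var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);

// OrderPlaced is a hypothetical Avro-generated class.
using var producer = new ProducerBuilder<string, OrderPlaced>(producerConfig)
    .SetValueSerializer(new AvroSerializer<OrderPlaced>(schemaRegistry))
    .Build();

// The serializer registers/validates the schema against the registry,
// so incompatible payloads fail at produce time instead of at the consumer.
await producer.ProduceAsync(
    "orders",
    new Message<string, OrderPlaced>
    {
        Key = "order-100",
        Value = new OrderPlaced { OrderId = 100, Amount = 500 }
    });
```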
✅ 4. Use Dead Letter Queue (DLQ)
If a message fails processing repeatedly, it becomes a poison pill.
So best practice is:
- Catch exception
- Send message to "topic-failed" or "topic-dlt"
- Continue processing next messages
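The steps above can be sketched as follows (a minimal sketch assuming EnableAutoCommit = false; the DLQ topic name and Process method are hypothetical):

```csharp
using Confluent.Kafka;

public class DlqConsumer
{
    private readonly IConsumer<Ignore, string> _consumer;
    private readonly IProducer<Null, string> _producer; // the shared singleton

    public DlqConsumer(IConsumer<Ignore, string> consumer,
                       IProducer<Null, string> producer)
    {
        _consumer = consumer;
        _producer = producer;
    }

    public async Task ConsumeOneAsync(CancellationToken token)
    {
        var result = _consumer.Consume(token);
        try
        {
            Process(result.Message.Value);
            _consumer.Commit(result); // commit only after success
        }
        catch (Exception)
        {
            // Poison pill: park it on the dead-letter topic and keep going.
            await _producer.ProduceAsync(
                "my-topic-dlt", // illustrative DLQ topic name
                new Message<Null, string> { Value = result.Message.Value });

            _consumer.Commit(result); // skip past the bad message
        }
    }

    private void Process(string payload) { /* hypothetical business logic */ }
}
```

In practice you would usually retry a few times before dead-lettering, and attach failure metadata (exception, original topic/offset) as message headers.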
✅ 5. Use Outbox Pattern (Solve the Dual-Write Problem)
This is a senior-level topic that comes up heavily in interviews.
📦 Outbox Pattern in .NET Core (Reliable Kafka Messaging)
Problem: Dual Write
If you do:
- Save Order to DB
- Send Kafka message
and Kafka fails while the DB write succeeds, the system becomes inconsistent.
✔ Solution: Outbox Pattern
Steps:
- Save Order + OutboxMessage in same transaction
- Background worker reads Outbox table
- Publishes message to Kafka
- Marks Outbox row as processed
🧾 Outbox Table Entity
public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Topic { get; set; }
    public string Payload { get; set; }
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
    public DateTime? ProcessedAt { get; set; }
}
🧠 Atomic Transaction Example
using var transaction = await _context.Database.BeginTransactionAsync();

_context.Orders.Add(newOrder);
_context.OutboxMessages.Add(new OutboxMessage
{
    Topic = "orders",
    Payload = JsonSerializer.Serialize(newOrder)
});

await _context.SaveChangesAsync();
await transaction.CommitAsync();
🔁 Background Relay Worker Example
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        // In a real BackgroundService, resolve a scoped DbContext per
        // iteration via IServiceScopeFactory instead of injecting it directly.
        var pending = await _context.OutboxMessages
            .Where(x => x.ProcessedAt == null)
            .ToListAsync(stoppingToken);

        foreach (var msg in pending)
        {
            await _producer.ProduceAsync(
                msg.Topic,
                new Message<Null, string> { Value = msg.Payload });

            msg.ProcessedAt = DateTime.UtcNow;
        }

        await _context.SaveChangesAsync(stoppingToken);
        await Task.Delay(1000, stoppingToken);
    }
}
🐳 Running Kafka Locally using Docker Compose
Create docker-compose.yml:
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    depends_on:
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
Run:
docker-compose up -d
Stop:
docker-compose down
🧠 Kafka Vocabulary Every .NET Developer Must Know
| Term | Meaning |
| --- | --- |
| Broker | Kafka server |
| Topic | Message category |
| Partition | Topic shard (for scaling) |
| Offset | Message position |
| Consumer Group | Consumers working as a team |
| Rebalance | Kafka redistributes partitions among consumers |
| Lag | How far a consumer is behind the producer |
🎓 Kafka Interview Questions (High Value)
1. Why is Kafka Producer Singleton in .NET?
Because it is thread-safe, expensive to create, and maintains buffers, metadata, and connections.
2. How do you handle long processing tasks in Kafka Consumer?
Do not block the consume loop. Use worker queues/channels and commit offsets safely.
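One common shape for this uses System.Threading.Channels to decouple consuming from processing, a sketch using only the BCL (FetchNextMessage stands in for a real consumer.Consume call):

```csharp
using System.Threading.Channels;

// A bounded channel applies back-pressure: the consume loop pauses
// when processing falls behind, instead of buffering unboundedly.
var channel = Channel.CreateBounded<string>(capacity: 100);
var cts = new CancellationTokenSource();

// Hypothetical stand-in for consumer.Consume(...).
string FetchNextMessage() => "payload";

// Consume loop: hand messages off quickly, never block on processing.
var pump = Task.Run(async () =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        await channel.Writer.WriteAsync(FetchNextMessage(), cts.Token);
    }
});

// Worker loop: slow processing happens off the consume thread.
var worker = Task.Run(async () =>
{
    await foreach (var payload in channel.Reader.ReadAllAsync(cts.Token))
    {
        Console.WriteLine($"Processing {payload}");
        // Commit the corresponding offset only after this succeeds.
    }
});
```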
3. Difference between Produce and ProduceAsync?
- ProduceAsync: returns a Task that completes when the broker acknowledges delivery, easy to await
- Produce: fire-and-forget with an optional delivery callback, faster for bulk messages
4. How do you ensure graceful shutdown?
Use BackgroundService and pass the cancellation token into Consume(token).
5. What is Outbox Pattern?
It solves dual-write issues by storing messages in the DB first, then publishing them asynchronously.
🏁 Final Summary
Kafka + .NET Core is one of the best stacks for:
✅ microservices
✅ event-driven architecture
✅ scalable message processing
✅ real-time streaming
To build it properly, always apply:
- Singleton Producer
- BackgroundService Consumer
- DLQ (Dead Letter Topic)
- Schema Registry (Avro/Protobuf)
- Outbox Pattern