Scale in / Scale out with Kafka Streams and Kubernetes

The abstract:

Apache Kafka’s Streams API lets us process messages from different topics with very low latency. Messages may have different formats and schemas, and may even be serialised in different ways. What happens when an undesirable message enters the flow? When an error occurs, real-time applications can’t always wait for manual recovery and need to handle such failures. Kafka Streams offers a few techniques for this, such as sentinel values or dead letter queues, and in this talk we’ll see how to use them. The talk gives an overview of the different patterns and tools available in the Streams DSL API to deal with corrupted messages. Based on a real-life use case, it also includes valuable lessons from building and running Kafka Streams projects in production. The talk includes live coding and demonstrations.
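
To make the two techniques mentioned in the abstract concrete, here is a minimal sketch of the sentinel-value and dead-letter-queue patterns with the Streams DSL. The topic names (`input-events`, `input-events-dlq`, `valid-events`) and the toy number parsing are placeholders for illustration, not the code shown in the talk.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class CorruptedMessageHandling {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Read the payload as a plain String so a malformed record cannot
        // crash the deserializer itself.
        KStream<String, String> raw = builder.stream("input-events");

        // Sentinel value: parse each record and map failures to null
        // instead of throwing and stopping the stream thread.
        KStream<String, Long> parsed = raw.mapValues(value -> {
            try {
                return Long.parseLong(value); // stand-in for real deserialisation
            } catch (NumberFormatException e) {
                return null;                  // sentinel marking a corrupted record
            }
        });

        // Dead letter queue: corrupted records are routed to a side topic
        // so they can be inspected or replayed later...
        raw.filterNot((key, value) -> isParsable(value))
           .to("input-events-dlq");

        // ...while valid records continue through the topology.
        parsed.filter((key, value) -> value != null)
              .to("valid-events", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "corrupted-message-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }

    // Hypothetical helper used only to decide which records go to the DLQ.
    private static boolean isParsable(String value) {
        try {
            Long.parseLong(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```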

Related Medium blog post:

🎊🎉📝I've just published a new article, Kafka-Streams: a road to autoscaling via #Kubernetes! It shows a method to leverage #kafkastreams JMX with #stackdriver. Check this out 👉: https://t.co/yyBaw0MItU 👈 @XebiaFr @PubSapientFR

— Loïc M. Divad (@LoicMDivad) April 15, 2019
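
The article linked in the tweet builds on Kafka Streams’ JMX metrics, scraped into Stackdriver, to drive autoscaling on Kubernetes. As a rough companion sketch, and not the setup described in the article, the same metrics can also be inspected in-process through `KafkaStreams#metrics()`, which returns the values that are otherwise published over JMX under the `kafka.streams` domain:

```java
import java.util.Map;

import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

public class StreamsMetricsDump {

    /**
     * Print every metric currently registered by a running KafkaStreams
     * instance. External agents typically scrape the same values over JMX
     * rather than calling this API; the snippet only shows where the
     * numbers come from.
     */
    public static void dump(KafkaStreams streams) {
        for (Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
            MetricName name = entry.getKey();
            System.out.printf("%s / %s %s = %s%n",
                    name.group(), name.name(), name.tags(), entry.getValue().metricValue());
        }
    }
}
```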

The slides:

The video (in French 🇫🇷):