Apache Flink’s mission is simple: compute over streams of data, which is why combining it with Apache Kafka makes a lot of sense. Kafka is the solid, reliable, scalable backbone, and Flink is the engine on top! In 10 minutes you’ll learn the basics of running Flink over Kafka: starting with the types of connectors, we’ll explore how to work with various data formats, use pre-defined schemas when appropriate, and store the pipeline output in standard or compacted topics as needed. Finally, we’ll extend the picture by bringing in additional data sources and demonstrating how to join them in streaming mode. If you’re into streaming and want to understand how to define data pipelines with Flink, one of the fastest-growing engines on the market, this session is for you.