Data volumes continue to grow, demanding new, more scalable solutions for low-latency data processing. The default approach to deploying such systems used to be throwing a ton of hardware at the problem. That is no longer necessary: newer technologies achieve a level of efficiency that allows smaller, more manageable clusters to handle extreme workloads. Processing billions of events per second from Kafka can now be done with a modest investment in compute resources. In this session, you will learn how to architect and build high-throughput, low-latency data processing applications that scale linearly, combining streaming data (data-in-motion) with reference data (data-at-rest) and machine learning. We will take you through an end-to-end framework and example application built on the Hazelcast Platform, an open-source engine designed for ultra-fast performance. We will also show how you can use SQL to explore the operational data in the solution, including querying Kafka topics and key-value data in the in-memory data store. Attendees will also get access to the GitHub sample application shown.
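To give a flavor of the SQL capability described above, a Hazelcast SQL mapping over a Kafka topic might look like the following sketch. The topic name, column schema, and broker address are illustrative assumptions, not part of the sample application:

```sql
-- Illustrative sketch only: topic name, columns, and broker address are assumptions.
-- Expose a Kafka topic to Hazelcast SQL as a streaming mapping.
CREATE MAPPING trades (
    id BIGINT,
    symbol VARCHAR,
    price DECIMAL,
    ts TIMESTAMP WITH TIME ZONE
)
TYPE Kafka
OPTIONS (
    'keyFormat' = 'bigint',
    'valueFormat' = 'json-flat',
    'bootstrap.servers' = 'localhost:9092'
);

-- Continuously query the live stream.
SELECT symbol, price FROM trades WHERE price > 100;
```

Key-value data held in the in-memory store can be queried the same way once a mapping over the map is declared, so one SQL interface covers both the Kafka topics and the data-at-rest.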