Large networks consist of a diverse range of equipment spanning private, public, and hybrid clouds, as well as partner networks. A hierarchical network has layers of infrastructure serving access, distribution, or core roles, each managed by organizations that specialize in architecting the right network hardware, software, and features for that layer. The data generated by each component varies in type and form, including logs, events, metrics, and alarms, and its sheer diversity and volume are beyond human scale. Apache Kafka® acts as a critical hub in large networks, empowering AIOps to enhance decision making and improve analysis and insights by contextualizing large volumes of operational data. Kafka solved the hard problem of collecting, processing, storing, and normalizing data at scale, allowing us to focus on building the AIOps pipeline. Our platform connects the dots across relevant operations data and gives operations teams simple, powerful access to insights from within increasingly popular collaboration environments such as Slack and Microsoft Teams. The pipeline must also integrate with automation solutions. This session will cover how large volumes of streaming messages can be received by parallel Kafka consumers and turned into action by network operations teams, dramatically reducing downtime and improving performance.
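To make the "parallel Kafka consumers" idea concrete, here is a minimal, self-contained sketch of the mechanism Kafka uses to fan work out: messages are hashed by key onto topic partitions, and the partitions are divided among the consumers in a group so each consumer processes its own subset in parallel. This is an illustrative simulation only, not the speakers' actual pipeline; the function names (`partition_for`, `assign_partitions`) are hypothetical, and real Kafka clients use a murmur2-based partitioner and pluggable assignors.

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a message key to a partition.

    Stand-in for Kafka's default key-hashing partitioner (which uses
    murmur2); crc32 is used here only to keep the example dependency-free.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions

def assign_partitions(partitions, consumers):
    """Split partitions across the consumers in one group, round-robin.

    Similar in spirit to Kafka's RoundRobinAssignor: each partition is
    owned by exactly one consumer, so the group reads in parallel without
    duplicate processing.
    """
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Example: a 6-partition topic consumed by a group of 3 consumers.
# Each consumer ends up owning 2 partitions and reads them independently.
print(assign_partitions(range(6), ["c0", "c1", "c2"]))
```

The key point for an AIOps pipeline is that adding consumers to the group (up to the partition count) scales read throughput linearly, because rebalancing simply redistributes partition ownership.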