The Government Track At Kafka Summit

September 14th & 15th

Kafka Summit is the premier event for data architects, engineers, DevOps professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

We are pleased to offer a dedicated government track as part of Kafka Summit. We welcome data teams from federal, state, and local governments to join us in a tailored experience to share and learn how other agencies are modernizing to an event-driven paradigm.

*To ensure your registration for the Government Track, please select Public Sector as your industry when completing your registration.

REGISTER NOW

Welcome Address & Breakout Sessions

Join us for the Government Track Welcome Address, hosted by Jason Schick, General Manager, Public Sector, and Will LaForest, Public Sector CTO, followed by government breakout sessions.

Ask the Experts Office Hours

Get your questions answered by Will LaForest, Public Sector CTO at Confluent.

Breakout Sessions & Lightning Talks
Building a Modern, Scalable Cyber Intelligence Platform with Apache Kafka
Jac Noel, Security Solutions Architect | Intel Corp

As cyber threats continuously grow in sophistication and frequency, companies need to adapt quickly to detect, respond to, and protect against them effectively. At Intel, we’ve addressed this need by implementing a modern, scalable Cyber Intelligence Platform (CIP) based on Splunk and Apache Kafka. We believe that CIP positions us for the best defense against cyber threats well into the future. Our CIP ingests tens of terabytes of data each day and transforms it into actionable insights through stream processing, context-smart applications, and advanced analytics techniques. Kafka serves as a massive data pipeline within the platform. It achieves economies of scale by acquiring data once and consuming it many times. It reduces technical debt by eliminating custom point-to-point connections for producing and consuming data. At the same time, it provides the ability to operate on data in-stream, enabling us to reduce Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR). Faster detection and response ultimately lead to better prevention. In our session, we’ll discuss the details described in the IT@Intel white paper of the same title published in November 2020. We’ll share stream processing techniques, such as filtering and enriching data in Kafka to deliver contextually rich data to Splunk and many of our security controls.
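
For a flavor of the in-stream filtering and enrichment the session describes, the sketch below shows a minimal Kafka Streams topology that keeps only high-severity events and tags them with context before handing them to a downstream sink (for example, a Splunk sink connector). The topic names, severity field, and enrichment tags are illustrative assumptions, not Intel's actual implementation.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class SecurityEventEnrichment {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cyber-event-enrichment");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> rawLogs = builder.stream("raw-security-logs"); // hypothetical topic

        rawLogs
            // Filter: keep only events worth indexing downstream.
            .filter((host, event) -> event.contains("\"severity\":\"high\""))
            // Enrich: prepend context fields to each JSON event before it reaches the SIEM.
            .mapValues(event -> event.replaceFirst("\\{",
                "{\"site\":\"example-datacenter\",\"pipeline\":\"cip\","))
            .to("enriched-security-logs"); // e.g. consumed by a Splunk sink connector

        new KafkaStreams(builder.build(), props).start();
    }
}
```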

Improving Veteran Benefit Services Through Efficient Data Streaming
Robert Ezekiel, Chief Technologist | Booz Allen Hamilton

Having information on our Veterans and Veteran services is not the issue. The Department of Veterans Affairs (VA) currently has petabytes of data on Veterans, the services they receive, and the level of benefit to which those services align. The challenge we aim to solve with Confluent is publishing event data in a form consumable by various products - whether in-house applications or VA mobile apps - to rapidly notify a Veteran or service provider about key information concerning a claim or services received. Such information could include a change in a claim's status, a request for additional documentation to support a claim or appeal, what information should be shared when calling into the call center, or what level of benefits is due when checking in at a hospital. This technology will enable the VA to provide accurate, real-time information on a claim, appeal, or rating for our Veterans.

Kafka Migration for Satellite Event Streaming Data
Eric Velte, Chief Technical Officer | ASRC Federal

ASRC Federal created the Mission Operator Assist (MOA) tool to extend human capabilities through AI/ML for NOAA. MOA ingests system log data from on-orbit satellite constellations and applies machine learning to greatly improve real-time situational awareness. MOA uses a collection of tools, including Kafka for multi-subscriber communications, all hosted on AWS cloud services with Kubernetes containers for microservices. Like many traditional on-premises systems, satellite ground station operations are undergoing a renaissance as they become increasingly cloud-enabled. During this session, the audience will learn about the satellite communications chain, along with best practices and lessons learned in creating a Kafka data pipeline for high throughput and scalability while presenting high-quality situational awareness to mission operators. We will discuss our goals centered on establishing event-driven streaming for satellite logs so our machine learning becomes real-time, and on supporting a multi-subscriber approach across various Kafka topics. Listeners will also learn how a multi-subscriber approach using Kafka helped us auto-scale Logstash and other microservices based on the number of messages in the queue.
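
For readers curious what lag-driven scaling can look like in practice, here is a minimal sketch that uses Kafka's AdminClient to compute total consumer-group lag, the kind of signal an autoscaler could use to add or remove Logstash or microservice replicas. The broker address and group name ("logstash-ingest") are hypothetical; this is a sketch of the general pattern, not ASRC Federal's implementation.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the (hypothetical) Logstash consumer group.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("logstash-ingest")
                     .partitionsToOffsetAndMetadata().get();

            // Latest (log-end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                admin.listOffsets(latestSpec).all().get();

            // Total lag = sum over partitions of (log-end offset - committed offset).
            long totalLag = committed.entrySet().stream()
                .mapToLong(e -> latest.get(e.getKey()).offset() - e.getValue().offset())
                .sum();
            System.out.println("Total lag for logstash-ingest: " + totalLag);
            // An autoscaler could add or remove consumer replicas when this crosses a threshold.
        }
    }
}
```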

Kafka Powered Near Real-Time Data Pipelines @ Extreme Scale
Murali Kannan, Senior Manager | AFS

We will present the details of an innovative, customized, and scalable data integration and processing pipeline powered by Kafka, and detail the design principles we implemented to achieve high performance at extreme scale. Our data processing pipeline is equipped with a custom threading model that enables dynamic auto-scaling (horizontal and vertical) in our Kafka-based architecture to provide processing elasticity during surges in data volume. This architecture also provides randomized yet functionally consistent assignment of incoming data to partitions and consumer threads, ensuring an even distribution of workload while maintaining data integrity and FIFO processing. These design principles help us carry out core business operations and provide insights and predictive analysis in near real time. Finally, we will lay out our future vision for this platform, touching on data domains, interoperability, and multi-modal data.
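
One common way to get even load distribution while preserving per-entity FIFO ordering, in the spirit of the design described above, is to key each record by a business identifier so the default partitioner keeps all events for that key on a single partition. The sketch below illustrates the idea; the topic name and keys are hypothetical, and this is not the presenters' actual threading model.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KeyedIngestProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // With the default partitioner, records sharing a key always hash to the same
        // partition, so per-key ordering (FIFO) is preserved while different keys spread
        // evenly across partitions and consumer threads.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("ingest-events", "case-1042", "{\"status\":\"RECEIVED\"}"));
            producer.send(new ProducerRecord<>("ingest-events", "case-1042", "{\"status\":\"VALIDATED\"}"));
            producer.send(new ProducerRecord<>("ingest-events", "case-7731", "{\"status\":\"RECEIVED\"}"));
        }
    }
}
```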

Transformation During a Global Pandemic
Scott Lee, Enterprise Architect | University of California, San Diego

When the University of California, San Diego launched its largest investment in technology in 2018, it planned to future-proof its business processes and systems. Unexpectedly, that investment also prepared the university to handle a global pandemic that changed every norm for the campus. With shelter-in-place orders taking immediate effect, the team needed to quickly set up a robust online learning platform - one with powerful analytics to track student success. And for the times students and staff are on campus, a contact tracing application was essential for their safety. We'd like to offer a conversation with Scott Lee to tell you more about UC San Diego's rapid transformation from a traditional, on-campus institution to one of the leading examples of remote learning, and the critical role data connectivity played in making this possible.

Transforming disparate government stovepipes into a unified, connected, & consumer-specific HR event
Jonathan Wallace, Owner & Chief Architect | Wallace Squared

This session explains the numerous challenges of developing an outbound Kafka pipeline from an Oracle PeopleSoft HR / Payroll application to the end consumer. It will briefly describe the overall flow, the technologies explored and discarded, and why. It will also cover the information dissemination approach and the interfaces provided to consumers.

Securing the Message Bus with Kafka Streams
Paul Otto, Principal Engineer | Raft LLC

Organizations need to protect Personally Identifiable Information (PII). As Event Streaming Architecture (ESA) becomes ubiquitous in the enterprise, the prevalence of PII within data streams will only increase, and data architects must be cognizant of how their data pipelines can allow for potential leaks. In highly distributed systems, zero-trust networking has become an industry best practice. We can do the same with Kafka by introducing message-level security. A DevSecOps engineer with some Kafka experience can leverage Kafka Streams to protect PII by enforcing role-based access control using Open Policy Agent. Rather than implementing a REST API to handle message-level security, Kafka Streams can filter, or even transform, outgoing messages to redact PII while leveraging the native capabilities of Kafka. In our presentation, we will provide a live demonstration in which two consumers subscribe to the same Kafka topic but receive different messages based on the rules specified in Open Policy Agent. At the conclusion of the presentation, we will share a GitHub repository so that attendees can enjoy a sandbox environment for hands-on experimentation with message-level security.
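
As a rough illustration of the filter-and-redact pattern the talk describes, the sketch below is a minimal Kafka Streams topology that drops messages a consumer is not allowed to see and masks SSN-like fields in the rest. The policy check and redaction logic here are hard-coded stand-ins; in the presenters' design those decisions would come from Open Policy Agent, and the topic names are assumptions.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class PiiRedactionTopology {
    // Stand-in for a policy decision; in the talk's design this would be a call
    // to Open Policy Agent rather than a hard-coded rule.
    static boolean allowedForConsumer(String value) {
        return !value.contains("\"restricted\":true");
    }

    // Stand-in redaction: mask an SSN-like field before forwarding the message.
    static String redact(String value) {
        return value.replaceAll("\\d{3}-\\d{2}-\\d{4}", "***-**-****");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pii-redaction-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("raw-events");       // hypothetical topics
        raw.filter((key, value) -> allowedForConsumer(value))             // drop disallowed messages
           .mapValues(PiiRedactionTopology::redact)                       // redact PII in the rest
           .to("redacted-events");

        new KafkaStreams(builder.build(), props).start();
    }
}
```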

Kafka for Connected Vehicle Research
Samir Tabriz, Software Engineer | Leidos, in support of FHWA research

Connected and Automated Vehicle education (CAVe)-in-a-box is an educational tool developed under the FHWA Workforce Development project. CAVe-in-a-box is an interconnected set of intelligent transportation system (ITS) equipment developed to serve as a training and educational resource for the emerging ITS workforce. CAVe-in-a-box uses Kafka for real-time data streaming, data processing, visualization, and data collection. In this talk, we explore how Kafka is being used in cutting-edge connected and automated vehicle research.

Government Track Sponsored Talks
Driving a Digital Thread Program in Manufacturing with Apache Kafka
Anu Mishra, Director of Application Engineering | Mercury Systems

Forward-looking manufacturing companies have recognized the value of digital threads that bring together design and product information across the product life cycle, connecting the dots as information flows from design to manufacturing and on to services. Creating a reliable, scalable infrastructure to support digital thread programs can be a significant challenge, given the wide variety of legacy systems involved.

At Mercury Systems, we are using Kafka and Confluent to drive our digital thread program and put in place a product lifecycle management process for Industry 4.0. With the substantial year-on-year growth we were seeing, we needed a cloud-ready solution that goes beyond a basic API-based integration layer built on MuleSoft or similar technology. If you're wondering why Kafka makes sense for a digital thread, join us to learn how a real-time event streaming platform enables core strategies around ML/AI, microservices, model-based systems engineering, and continuous improvement.

Safer Commutes & Streaming Data
George Padavik, IT Manager | Ohio Department of Transportation

The Ohio Department of Transportation has adopted Confluent as the event-driven enabler of DriveOhio, a modern Intelligent Transportation System. DriveOhio digitally links sensors, cameras, speed monitoring equipment, and smart highway assets in real time to dynamically adjust the surface road network and maximize safety and efficiency for travelers. Over the past 24 months, the team has increased the number and types of devices within the DriveOhio environment while also working with its vendors to adopt Kafka and better participate in data sharing.

App Modernization - Mainframe to Mainstream
Craig Molina, Chief Technology Officer | Trilogy Innovations, Inc.

Trilogy is helping a government organization modernize its application from a mainframe to the cloud, using Confluent as a core component. More than 30 years of legacy data and code are being converted to a combination of custom-written microservices and COTS products, with Confluent as the pipeline that processes and moves data quickly. Confluent is used to process real-time information and convert legacy data, providing the government agency with greater functionality, reliability, and connectivity.