Kafka Summit Hackathon sponsored by Confluent

We've seen Apache Kafka® used on premises, but what about in the cloud? Confluent Cloud, fully managed Apache Kafka as a service, enables developers to make the most of Kafka without the operational burden of maintaining clusters.

This year's challenge theme is Cloud-Native DevOps for Apache Kafka. The goal of this challenge is to create an event streaming application with Confluent Cloud.

First Place Winner

Netacea Lumberer

This project aids in generating pseudo-fake data and streaming it to multiple sinks (including Kafka).

There was a need inside the business to start benchmarking downstream workflows behind Kafka, the processing of data within Kafka, and, more loosely, the processes put in front of Kafka, such as Lambda functions triggered by AWS S3 buckets… read more
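For illustration, here is a minimal sketch of this kind of generator-to-Kafka pipeline, using the confluent-kafka Python client. The topic name, connection settings, and record shape are placeholders, not taken from Netacea's code:

```python
import json
import random
import time

from confluent_kafka import Producer  # pip install confluent-kafka

# Hypothetical Confluent Cloud connection settings; fill in your own cluster details.
producer = Producer({
    "bootstrap.servers": "<BOOTSTRAP_SERVER>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
})

def fake_event() -> dict:
    """Generate one pseudo-fake record for exercising downstream consumers."""
    return {
        "user_id": random.randint(1, 10_000),
        "action": random.choice(["view", "click", "purchase"]),
        "ts": time.time(),
    }

# Stream a batch of synthetic events to a (hypothetical) benchmark topic.
for _ in range(1_000):
    producer.produce("benchmark-events", value=json.dumps(fake_event()).encode("utf-8"))

producer.flush()  # block until all queued messages are delivered
```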

Second Place Winner

Nuage Learning

Federated Learning is an alternative to data centralization when training a machine learning model on data held by multiple owners. It consists of distributing the computation among the data owners, who therefore only have to share local updates to the model's weights without revealing individual data samples. Thus, Federated Learning can be leveraged to address data privacy issues that, in some contexts, would otherwise prevent learning from multi-centre data… read more.
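To make the mechanism concrete, here is a minimal federated-averaging sketch in Python/NumPy: each data owner runs a local gradient step on its private shard and shares only the updated weights, which the server averages. This is a generic illustration of the technique, not Nuage Learning's implementation:

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One local gradient step for linear regression on a client's private data.
    Only the updated weights leave the client; the samples (X, y) never do."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: weight each client's model by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three data owners, each holding a private shard of synthetic data.
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    shards.append((X, y))

w = np.zeros(2)
for _ in range(100):
    updates = [local_update(w, X, y) for X, y in shards]
    w = federated_average(updates, [len(y) for _, y in shards])

print(w)  # approaches true_w without any client sharing its raw samples
```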

Third Place Winner

Metadata_re: Dataflow in Motion

There are many challenges in building and maintaining data flows over a large variety of data sources at scale. Kafka has greatly simplified distributed computing in cloud environments for high volume and velocity. However, developers, users, and data engineers still need to think about many different parts of the data flow, such as error routes, distinct results, even partitioning, and connector configuration for masking, encoding, etc.… read more.
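As a taste of the connector-configuration work the project addresses, here is a hedged sketch that registers a sink connector with Kafka Connect's built-in MaskField transform via the Connect REST API. The connector name, topic, masked fields, and endpoint are hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical connector registration; topic, file path, and fields are placeholders.
connector = {
    "name": "masked-file-sink",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "topics": "user-events",
        "file": "/tmp/user-events.txt",
        # Assumes schemaless JSON records, so MaskField sees a map of fields.
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false",
        # Built-in single-message transform that masks sensitive fields.
        "transforms": "mask",
        "transforms.mask.type": "org.apache.kafka.connect.transforms.MaskField$Value",
        "transforms.mask.fields": "ssn,credit_card",
    },
}

req = urllib.request.Request(
    "http://localhost:8083/connectors",  # Connect worker REST endpoint
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```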

Fourth Place Winner

Real-Time Healthcare Monitoring for Dialysis Patients

The project sets up real-time monitoring of patient vital statistics and dialysis device parameters. It intercepts real-time data feeds, correlates the data, applies rules to process it, and detects any anomalies during the dialysis treatment. It feeds the data to a central healthcare repository (EHR/EMR) using the HL7/FHIR industry standards for analysis and provides a unified view to all stakeholders. It also generates real-time alerts to the patient's care team, enabling them to give the patient timely attention.
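A minimal sketch of the intercept-and-alert loop, assuming the confluent-kafka Python client; the topic names, vitals schema, and hypotension rule below are hypothetical stand-ins, not the project's actual rules:

```python
import json

from confluent_kafka import Consumer, Producer  # pip install confluent-kafka

# Hypothetical broker and topics; illustrative only.
BROKER = "<BOOTSTRAP_SERVER>"
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "dialysis-monitor",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": BROKER})
consumer.subscribe(["patient-vitals"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        vitals = json.loads(msg.value())
        # Example rule: flag hypotension, a common complication during dialysis.
        if vitals.get("systolic_bp", 120) < 90:
            alert = {"patient_id": vitals["patient_id"], "rule": "hypotension", **vitals}
            producer.produce("care-team-alerts", value=json.dumps(alert).encode("utf-8"))
            producer.poll(0)  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```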

A promise to the developers of this hackathon: All the code you create during this hackathon is yours, and we will take no ownership of it. You are encouraged to do whatever you want with your project after this event.