Polar stations occupy a unique and strategic position on Earth for satellite missions, but getting data out of network-isolated scientific stations poses a series of unique challenges. The only way to ensure trusted data transit is through secured geostationary satellites 36,000 kilometers above our heads. Sending and deduplicating high-value information across an unconventional worldwide network with limited bandwidth and high latency raises interesting challenges.

We first tried to solve this by leveraging dynamic routing with BGP, the protocol that rules the internet. However, BGP and Kafka alone could not solve the whole problem, especially keeping in mind the financial impact of secured satellite telecommunications for ensuring data transit. As a global Kafka stretch cluster was clearly out of the question, we needed to be more creative and answer specific questions. For example: what do you need to think through when sending sensitive information from an isolated scientific polar station to a central hub on the other side of the globe? How should this architecture be constructed while making it resilient, fast … and cost-sensitive?

We came up with an innovative design based on Kafka, using several components of the Kafka ecosystem. In this talk we'll show you the solution we built and discuss the refinements we had to make in configuring Kafka. This is a story about networking, high availability… and Kafka.
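To give a flavor of the kind of Kafka tuning the abstract alludes to, here is a minimal sketch of producer settings one might start from on a high-latency, bandwidth-constrained geostationary link. The specific values are illustrative assumptions, not the configuration presented in the talk:

```properties
# Sketch: producer settings for a metered, ~600 ms round-trip satellite link.
# All values are illustrative assumptions, not the talk's actual configuration.
compression.type=zstd                      # shrink payloads on a bandwidth-limited link
linger.ms=500                              # batch aggressively; the link latency dominates anyway
batch.size=262144                          # larger batches amortize per-request overhead
acks=all                                   # durability matters more than raw throughput here
enable.idempotence=true                    # safe retries without broker-side duplicates
delivery.timeout.ms=300000                 # tolerate long link degradations before failing a send
request.timeout.ms=60000                   # allow for GEO round trip plus broker processing
```

Enabling idempotence is one standard Kafka mechanism relevant to the deduplication concern mentioned above; it prevents retries over a flaky link from producing duplicate records per partition.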