Yet another Kubernetes platform?

5 min read · Sep 5, 2019


Why we decided to build a new DevOps tool.

Containers and Kubernetes usage on the rise

With container usage in production on the rise (up to 38% in late 2018), Kubernetes is increasingly becoming the first-choice orchestrator, even considered a Cloud OS by some (though it is really a framework), fueling a very lively ecosystem along the way with countless meetups, talks, and conferences like KubeCon + CloudNativeCon.


While delivering an invaluable service to DevOps teams by facilitating the deployment of microservices and containers to production environments at cluster scale, Kubernetes is widely acknowledged to have a very steep learning curve — an experience that gets even harsher in multi-cloud environments.

Accelerating the Kubernetes learning curve, to empower DevOps

Prone to sharing their frustration on social media, developers have even created memes to illustrate the difficulties of their journey toward a Kubernetes (Kube or K8s to its friends) implementation, some going as far as creating dedicated websites like the infamous “Kubernetes failure stories” to share their poor experiences.


That is why we decided to build a new DevOps tool: to empower teams and help them accelerate the Kubernetes learning curve and time to value through infrastructure automation.

Infrastructure-as-Code is the very first benefit we provide: our solution gives DevOps teams easy access to the value of documented, repeatable infrastructures.

We industrialize Kubernetes deployments by managing the lifecycle of teams' infrastructure stacks with Terraform, so that they can focus on building and running their apps!

Enabling multi-cloud

With a whopping 63% of IaaS users now deploying to multiple clouds, multi-vendor public cloud is the standard, often pushed by DevOps teams looking for the best solutions for their projects.

HashiCorp co-founder Mitchell Hashimoto states it clearly:


“…enabling your teams to use best-of-breed services. So at any given moment, certain clouds have better AI services or better data processing. And you want your teams that are working on those problems to be able to use the best tools possible.

And saying, ‘You can only use this one cloud platform,’ is just so restrictive. And so the multi-cloud will find you.”

There are two major obstacles on the way, though…

Gaining new skills for each additional cloud, and overcoming cloud vendor lock-in.

Indeed, leveraging multi-cloud does have an impact on resources: Kubernetes deployments require a different implementation for each public cloud, with renewed learning time and difficulties for each one.

Cloud vendor lock-in is a reality fueled by many technical details, be it on the data side (moving databases around is always a risky business), in the application field (reconfiguration of services, lack of standard interfaces and open APIs), or, of course, in the infrastructure itself.

Deploy Kubernetes in production

Moreover, deploying Kubernetes clusters on AWS (even managed ones like EKS) does not automatically make you skilled enough to do the same on Microsoft Azure or GCP. How do you get the most out of each cloud's specificities when there is no time to build new skills, your needs are immediate, and your projects have to be delivered now?

We believe that developers and teams should be able to choose their cloud provider based on their project's needs and the added value each provider delivers for that specific need. Therefore, we decided to tackle this issue with a new DevOps tool that helps them launch production-ready Infrastructure-as-Code and deploy managed Kubernetes clusters such as EKS, AKS, or GKE, along with their worker nodes, using Terraform.
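For illustration, here is a minimal sketch of what such a Terraform deployment can look like for a managed EKS cluster and its worker nodes (all resource names, variables, and IAM roles below are hypothetical, not taken from our actual product configuration):

```hcl
# Illustrative sketch only — names and variables are hypothetical.
provider "aws" {
  region = "eu-west-1"
}

# A managed control plane (EKS); the IAM role and subnets are
# assumed to exist and are passed in as variables.
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Worker nodes as a managed node group attached to the cluster above.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "demo-workers"
  node_role_arn   = var.worker_role_arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 3
  }
}
```

Expressing the same intent on Azure or Google Cloud means switching to entirely different resources (`azurerm_kubernetes_cluster`, `google_container_cluster`) with their own conventions — which is exactly where the provider-specific learning time creeps in.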

Even more at stake

We have now reached a situation where data centers emit as much CO2 as the air travel industry. Kubernetes adoption can help minimize that impact. Cheryl Hung (@oicheryl), Director of Ecosystem at the Cloud Native Computing Foundation (Linux Foundation), recently gave a great talk about computing resource optimization and climate change.

Data centers' climate impact

Done right, Kubernetes improves computing efficiency and thus reduces energy consumption. Spotify improving its CPU utilization by 2 to 3x, or the city of Montreal cutting hundreds of VMs down to 8 machines through containers and better orchestration, shows what is at stake beyond the business value of deploying Kubernetes.

This definitely is a topic we intend to look deeper into!

The future of cloud is full of promise.

V1 launch is planned for December 2019.

We do not know how this is going to end. We came here to tell you how it’s going to begin ;) We’re building a Skiff that will help you navigate swiftly through the Clouds in the coming years.

CloudSkiff is an Infrastructure-as-Code tool that enables Terraform automation and collaboration for growing teams.
Join our Beta here