Growing with Kubernetes: Why We Chose to Migrate our Apps to EKS

When launching Thyme Care, we deployed our MVP to AWS Elastic Beanstalk. While it helped us hit the ground running, we quickly outgrew this solution. Just three months in, we found ourselves facing slow deployments, jumping through hoops to debug errors, and lacking the ability to implement more advanced configurations. Our plans for the future required a more robust platform – one that was highly observable, general enough for any number of workloads, and able to support our scale well into the future.

Why Kubernetes?

After careful consideration, we decided to migrate our applications to Kubernetes. There are a few main reasons we believe it will support our growth:

Kubernetes abstracts over complex deployments. Kubernetes is a collection of abstractions, which together mitigate the complexity of building a distributed system. Kubernetes provides us with a common language to describe all kinds of workloads, from stateful applications to cronjobs. Many of our requirements, such as rolling upgrades and self-healing deployments, are handled out-of-the-box. Others, such as autoscaling under load, are easily configurable.
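As a sketch of how little configuration those out-of-the-box behaviors require, here is a minimal Deployment manifest (the application name and image are placeholders) that gets us rolling upgrades and self-healing replicas:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical application name
spec:
  replicas: 3                # Kubernetes keeps three healthy copies running
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # upgrade one pod at a time, keeping the rest serving
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example-registry/example-app:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
```

If a pod crashes or its node disappears, the controller notices the replica count has dropped below three and starts a replacement automatically.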

Scalability and resiliency. Kubernetes orchestrates workloads on top of a highly available cluster, which makes it easy to add and remove capacity without ever thinking about physical servers. At Thyme Care, we wanted to find a single platform that would support all our needs: internal apps, CI jobs, data pipelines, and future unknowns. We knew that almost none of these deployments would experience constant demand. Our internal apps would be used most heavily during daytime hours. Our data pipelines could balloon in size at any time. We needed to react to these changes, and Kubernetes lets us do so quickly and without manual intervention.
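For workloads with uneven demand like these, one common way to configure autoscaling is a HorizontalPodAutoscaler. This sketch (the target deployment name is hypothetical) scales on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app        # hypothetical deployment to scale
  minReplicas: 2             # keep a small baseline overnight
  maxReplicas: 10            # burst capacity for daytime peaks
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The autoscaler adjusts the replica count between the bounds as demand rises and falls, with no one paging in a change.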

We can also be confident knowing that if at any time one of the nodes in our cluster experiences a failure, our workloads will be rescheduled onto another node with minimal disruption. While we’re not particularly worried today about building for high availability or multi-region, we know that Kubernetes will support us when we get there.

Tight cloud provider integration where we want it. When using a managed Kubernetes offering like Elastic Kubernetes Service (EKS) on AWS, it becomes easy to integrate applications with the cloud provider. If we want to grant our app permission to an S3 bucket, the pod can assume an IAM role without us managing credentials or key rotation. If we want to provision storage or load balancers, with a bit of configuration, that’s easy to do as well.
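On EKS, that IAM integration works through IAM Roles for Service Accounts (IRSA), where the grant boils down to a single annotation. A sketch, with a placeholder account ID and role name:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-app
  annotations:
    # Pods running under this service account assume the role below via IRSA,
    # with no static credentials to distribute or rotate.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/example-s3-reader
```

Any pod that references this service account picks up temporary AWS credentials scoped to that role.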

…and a large open-source ecosystem where we don’t. Many of the off-the-shelf applications that we plan to deploy have official or community-supported Helm charts, which makes installation a simple process. Kubernetes also has an incredibly active community with a growing landscape of projects. Thanks to this, we know it will be easy to find resources that answer the questions we’ll have in the future.
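Installing from a chart is typically a couple of commands. As an illustrative sketch (using the Bitnami repository and its Redis chart as the example; your chart and release names will differ):

```shell
# Register a chart repository and refresh its index.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a Redis release into its own namespace; configuration is overridden
# with --set flags or a values file rather than by hand-editing manifests.
helm install cache bitnami/redis --namespace cache --create-namespace
```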

Why now?

Incrementally adoptable. It’s no secret that Kubernetes, having been built by Google, solves problems for companies whose scale dwarfs our own. We found that it’s not necessary to run at this scale to make Kubernetes useful. Many of the needs that we have for our applications today are handled with the most basic deployment. As these needs grow, so will our usage of Kubernetes — as we implement application autoscaling, assign workloads different priorities, and improve our logging and monitoring.

Low adoption cost. When we rolled out Kubernetes, we had only one application that needed a new home immediately. Its needs were tiny and it was already containerized, making adoption easier. We were able to stand up a basic cluster on EKS using the community Terraform module. This avoided the need to individually create each component of a typical EKS architecture. In the future, if the need arises, we can manage each of these components individually – but we haven’t had to, yet.

Challenges and mitigations

Accessibility. Kubernetes is a powerful tool, but it also has a steep learning curve. At Thyme Care, we’re mitigating this in two ways. First, we’re automating common tasks to ensure that Kubernetes knowledge is not a prerequisite of our development workflow. Second, we’re investing in training and documentation around Kubernetes to provide a clear path for those who want to learn more about each part of our infrastructure.

Too much of a good thing. Kubernetes hides the complexity of a scalable and highly available cluster behind a wall of abstractions. Most of the time, teams can lean on these abstractions to simplify how they think about deployments. When things go wrong, a good understanding of what is happening under the hood goes a long way towards troubleshooting. The same is true when building applications for Kubernetes. While deployments may be simpler, the challenge of building an application capable of running as a distributed system still exists.

Tips for implementation

Adopt incrementally, adopt early. Migrating tens or hundreds of applications to a new platform is more difficult than migrating one. Choose a platform that can grow with your team and find strategies to start small.

Get comfortable with containerization first. Our adoption path was easier because our team already had familiarity with containers. We used Docker both for local development and for our deployments on Elastic Beanstalk. Introducing containers can be a huge win for developer productivity as they ensure the portability of your application across environments. It can also require lots of new infrastructure, including CI pipelines to build images and container registries to store them. If you are already deploying to a service that supports Docker, consider containerizing your application first. This will help avoid the adoption of too many new tools at once.
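That first step can be as small as a Dockerfile. A sketch for a hypothetical Python web service (the filenames and port are assumptions about your app):

```dockerfile
# Minimal image for a hypothetical Python web service.
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Assumes an app.py that serves on port 8080.
EXPOSE 8080
CMD ["python", "app.py"]
```

The same image then runs unchanged on a laptop, in CI, on Elastic Beanstalk, or in a Kubernetes pod.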

Embrace infrastructure-as-code. A key benefit of infrastructure-as-code is the ability to reuse existing solutions to problems. We already mentioned leveraging the community Terraform module for EKS to quickly deploy our Kubernetes cluster. Terraform is just one option for infrastructure-as-code, but we like it for its multi-provider support and its large library of community supported modules for the AWS ecosystem.
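To give a sense of how much the community module condenses, here is a sketch of a cluster definition. Input names follow recent versions of the terraform-aws-modules/eks/aws module and may differ in yours; the cluster name and VPC wiring are placeholders:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "example-cluster"     # placeholder name
  cluster_version = "1.30"

  vpc_id     = module.vpc.vpc_id          # assumes a VPC module defined elsewhere
  subnet_ids = module.vpc.private_subnets

  # A managed node group with modest autoscaling bounds.
  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.large"]
      min_size       = 2
      max_size       = 5
      desired_size   = 2
    }
  }
}
```

Behind this one block, the module provisions the control plane, node groups, security groups, and the IAM roles they require.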


We’ve been using Kubernetes at Thyme Care for six months. Our greatest challenge early on was finding ways to manage its complexity. As we deploy new services and increase our usage of the platform, we’re beginning to see the returns on our investment. It’s worked well for us, but it may not work well for everyone. Each team’s experience is different. We share ours with the hope that it can help inform your own decisions around infrastructure.

In the next part, we’ll talk about how we’re managing the complexity of Kubernetes by automating our deploy process.
