Using Kubernetes and passing the CKA exam are two completely different things. Here's what I learned after weeks of study and finally passing one of the hardest exams I've ever taken.
How load testing counter-intuitively showed that giving Kubernetes pods more CPU let us run fewer replicas, and how we implemented environment-specific sizing using advanced Helm templating.
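As a rough illustration of environment-specific sizing, a Helm chart can keep per-environment resource settings in separate values files and render them into the Deployment template. All file and value names below are assumptions for the sketch, not the post's actual chart:

```yaml
# values-dev.yaml (hypothetical): small pods, more replicas
replicaCount: 6
resources:
  requests:
    cpu: "500m"

# values-prod.yaml (hypothetical): bigger pods, fewer replicas
# replicaCount: 3
# resources:
#   requests:
#     cpu: "2"

# templates/deployment.yaml excerpt — renders whichever values file
# is passed with `helm install -f values-<env>.yaml`:
#   replicas: {{ .Values.replicaCount }}
#   resources: {{- toYaml .Values.resources | nindent 12 }}
```

The `toYaml`/`nindent` pattern is standard Helm templating; the "advanced" part in practice is usually layering defaults and per-environment overrides.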
How we replaced external CI automation with native Atlantis workflows for faster, more robust documentation generation using custom Docker images
Recently, I tackled a significant challenge at work: optimizing the performance of a Kubernetes microservice running on Google Cloud Platform (GCP).
To automate a key workflow, syncing our master branch configuration to our dev-* environments, I created a robust Google Cloud Build pipeline.
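A minimal sketch of what such a Cloud Build step could look like, assuming the sync is a fast-forward merge of `master` into a dev branch (the branch name and step layout here are illustrative, not the post's actual pipeline):

```yaml
# cloudbuild.yaml (illustrative sketch)
steps:
  - name: gcr.io/cloud-builders/git
    entrypoint: bash
    args:
      - -c
      - |
        git fetch origin master
        git checkout dev-example          # one of the dev-* branches (assumed name)
        git merge --ff-only origin/master # fail loudly if history diverged
        git push origin dev-example
```

In a real setup this would typically be wired to a Cloud Build trigger and loop over every `dev-*` branch rather than a single hard-coded one.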
Running event-driven microservices on GKE using KEDA is fantastic for cost efficiency and scalability.
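For concreteness, a KEDA `ScaledObject` that scales a worker Deployment on Pub/Sub backlog might look roughly like this; every name and threshold below is an assumption for illustration, not the post's actual config:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler            # hypothetical
spec:
  scaleTargetRef:
    name: worker                 # hypothetical Deployment
  minReplicaCount: 0             # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: gcp-pubsub
      metadata:
        subscriptionName: work-queue-sub   # hypothetical subscription
        value: "100"                       # target backlog per replica
```

Scale-to-zero on idle queues is where most of the cost savings come from.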
The goal was simple: for every open pull request, automatically rebase its feature branch onto `master` and force-push the result.
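The per-branch step can be sketched as a small helper that builds the git commands for one PR; this is an illustrative Python sketch, not the pipeline's actual code, and it assumes branch names are fetched separately (e.g. via the GitHub API or the `gh` CLI):

```python
import subprocess

def rebase_commands(branch: str, base: str = "master") -> list[list[str]]:
    """Build the git commands that rebase `branch` onto `base` and force-push it."""
    return [
        ["git", "fetch", "origin", base],
        ["git", "checkout", branch],
        ["git", "rebase", f"origin/{base}"],
        # --force-with-lease refuses to overwrite commits we haven't fetched,
        # which makes the force-push much safer than a bare --force.
        ["git", "push", "--force-with-lease", "origin", branch],
    ]

if __name__ == "__main__":
    for cmd in rebase_commands("feature/login"):   # hypothetical branch name
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run the step
```

Using `--force-with-lease` instead of `--force` is the key safety detail when automating force-pushes against branches humans also push to.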
When you have K8s microservices, you want them to be fast, efficient, and cost-effective. But how do you know exactly where your code is spending its time or consuming resources? That's where a profiler comes in.
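To make the idea concrete, here is a minimal sketch using Python's built-in `cProfile` to find a hot function; the workload is a toy stand-in, and the post's actual service and profiler may well be different:

```python
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    """A deliberately hot loop to give the profiler something to find."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Report the five most expensive functions by cumulative time;
# slow_sum should dominate the listing.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

The same principle applies to a production service: a continuous profiler attributes CPU time and allocations to specific functions, so you optimize the code that is actually expensive.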
How I automated terraform-docs for our platform infrastructure repo using GCP Cloud Build and Docker
In this post, I'll document how I approached organizing Terraform code for managing Auth0 tenants independently of the broader platform infrastructure.