How load testing counter-intuitively proved that giving Kubernetes pods more CPU let us run fewer replicas, and how we implemented environment-specific sizing with advanced Helm templating.
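To give a flavour of the approach, here is a minimal sketch of environment-specific sizing in Helm; the `envSizes` structure and the environment names are hypothetical, not the exact values from the post:

```yaml
# values.yaml (hypothetical layout): one sizing block per environment
envSizes:
  dev:
    replicas: 2
    cpu: 500m
    memory: 512Mi
  prod:
    replicas: 4
    cpu: "2"
    memory: 2Gi
```

```yaml
# templates/deployment.yaml (excerpt): look up the block for the active environment
{{- $size := index .Values.envSizes (.Values.environment | default "dev") }}
spec:
  replicas: {{ $size.replicas }}
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: {{ $size.cpu | quote }}
              memory: {{ $size.memory | quote }}
```

Keeping every environment's sizes side by side in one values file makes the trade-off (bigger pods, fewer replicas) visible at a glance.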
How we replaced external CI automation with native Atlantis workflows for faster, more robust documentation generation using custom Docker images.
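As an illustration of what a native workflow for this can look like (a sketch, assuming custom workflows are enabled server-side and terraform-docs is baked into the custom Atlantis image; the module path is a placeholder):

```yaml
# atlantis.yaml (sketch): regenerate docs as part of the plan stage
version: 3
projects:
  - dir: modules/network        # hypothetical module path
    workflow: docs
workflows:
  docs:
    plan:
      steps:
        - init
        - plan
        # terraform-docs is on the PATH because the custom Docker image bundles it
        - run: terraform-docs markdown table --output-file README.md .
```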
Recently, I tackled a significant challenge at work: optimizing the performance of a Kubernetes microservice running on Google Cloud Platform (GCP).
To automate a key workflow, syncing our `master` branch configuration to our `dev-*` environments, I created a robust Google Cloud Build pipeline.
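A trimmed-down sketch of that kind of pipeline, assuming the `dev-*` environments are long-lived branches and the build has push credentials configured:

```yaml
# cloudbuild.yaml (sketch): merge master into every dev-* branch
steps:
  - name: gcr.io/cloud-builders/git
    entrypoint: bash
    args:
      - -c
      - |
        git fetch origin '+refs/heads/*:refs/remotes/origin/*'
        # $$ escapes the dollar sign so Cloud Build does not treat it as a substitution
        for branch in $$(git branch -r | grep 'origin/dev-' | sed 's|origin/||'); do
          git checkout "$$branch"
          git merge --no-edit origin/master
          git push origin "$$branch"
        done
```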
Running event-driven microservices on GKE using KEDA is fantastic for cost efficiency and scalability.
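For context, this is roughly what a KEDA `ScaledObject` looks like; the Deployment and Pub/Sub subscription names are placeholders, and authentication (e.g. Workload Identity) is omitted:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: event-worker-scaler
spec:
  scaleTargetRef:
    name: event-worker        # the Deployment to scale
  minReplicaCount: 0          # scale to zero when there is no work
  maxReplicaCount: 20
  triggers:
    - type: gcp-pubsub
      metadata:
        subscriptionName: work-items
        mode: SubscriptionSize
        value: "100"          # target backlog per replica
```

Scaling to zero between bursts of events is where most of the cost savings come from.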
The goal was simple: for every open pull request, automatically rebase its feature branch onto `master` and force-push the result.
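Conceptually, the job boils down to something like the following (a sketch using the GitHub CLI; the real pipeline's tooling may differ):

```sh
# Rebase every open PR branch onto master and force-push (sketch)
git fetch origin master
for branch in $(gh pr list --state open --json headRefName --jq '.[].headRefName'); do
  git checkout "$branch"
  # Abort and move on if the rebase hits conflicts a human needs to resolve
  git rebase origin/master || { git rebase --abort; continue; }
  git push --force-with-lease origin "$branch"
done
```

`--force-with-lease` is the safer variant of force-pushing: it refuses to overwrite commits someone else pushed to the branch in the meantime.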
When you have K8s microservices, you want them to be fast, efficient, and cost-effective. But how do you know exactly where your code is spending its time or consuming resources? That's where a profiler comes in.
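On GCP, one low-friction option is the Cloud Profiler agent; starting it in a Go service takes only a few lines (a sketch, with a placeholder service name):

```go
package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	// Continuously samples CPU and heap profiles and ships them to Cloud Profiler.
	if err := profiler.Start(profiler.Config{
		Service:        "checkout-service", // placeholder name
		ServiceVersion: "1.0.0",
	}); err != nil {
		log.Fatalf("failed to start profiler: %v", err)
	}
	// ... start the HTTP server / main loop of the service ...
}
```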
How I automated `terraform-docs` for our platform infrastructure repo using GCP Cloud Build and Docker.
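The core of such a pipeline can be as small as one step running the official terraform-docs image (a sketch; the module path and the commit step are assumptions):

```yaml
# cloudbuild.yaml (sketch): regenerate README tables for a module
steps:
  - name: quay.io/terraform-docs/terraform-docs:0.17.0
    args: ["markdown", "table", "--output-file", "README.md", "./modules/network"]
  # Committing the result back assumes a full git clone and push credentials
  - name: gcr.io/cloud-builders/git
    entrypoint: bash
    args:
      - -c
      - |
        git add -A
        git diff --cached --quiet || git commit -m "docs: regenerate terraform-docs"
        # git push origin HEAD  # requires auth, e.g. an SSH key from Secret Manager
```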
In this post, I'll document how I organized Terraform code to manage Auth0 tenants independently of the broader platform infrastructure.
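One way to picture that separation (the names are illustrative, not necessarily the exact layout from the post):

```text
terraform/
├── platform/            # existing platform infrastructure, its own state
└── auth0/
    ├── modules/
    │   └── tenant/      # shared tenant config: clients, connections, actions
    ├── dev/             # one root module and state file per tenant
    │   └── main.tf
    └── prod/
        └── main.tf
```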
Recently, I had to refactor our Terraform code and migrate the Terraform state file to a new GCP bucket. Fortunately, this was a straightforward migration, and I didn't have to recreate any resources; a quick state transfer was all that was required. In this post, I'll walk through the step-by-step process I followed to ensure a smooth and safe migration of our Terraform state.
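For reference, the happy path of such a migration is only a few commands (a sketch; the bucket and prefix names are placeholders):

```sh
# 1. Take a local backup of the current state before touching anything
terraform state pull > backup.tfstate

# 2. Update the backend block to point at the new bucket, e.g.
#    backend "gcs" {
#      bucket = "new-tf-state-bucket"   # placeholder
#      prefix = "platform"
#    }

# 3. Re-initialize; Terraform offers to copy the existing state across
terraform init -migrate-state

# 4. Confirm nothing drifted: the plan should show no changes
terraform plan
```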