Kubernetes
Disclaimer: My good friend Claude wrote this post. I only created a hack and told him about it.
Ever needed to connect to AWS services like RDS or DocumentDB from your local machine, but they’re locked away in private subnets? Instead of doing something reasonable like setting up a VPN, here’s a solution that involves using your production Kubernetes cluster as an impromptu bastion host. What could possibly go wrong?
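One common way to pull this off (not necessarily the exact approach described later) is a throwaway relay pod plus `kubectl port-forward`. A minimal sketch, assuming the cluster has network access to the database; the endpoint, pod name, ports, and credentials are placeholders:

```sh
# Throwaway relay pod: socat listens inside the cluster and forwards TCP
# to the private RDS endpoint. Hostname and port below are placeholders.
kubectl run rds-tunnel --image=alpine/socat --restart=Never -- \
  tcp-listen:5432,fork,reuseaddr \
  tcp-connect:mydb.cluster-xxxxxxxx.eu-west-1.rds.amazonaws.com:5432

# Once the pod is Running, forward a local port to it...
kubectl port-forward pod/rds-tunnel 5432:5432

# ...and connect as if the database were local.
psql -h 127.0.0.1 -p 5432 -U myuser mydb
```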
The Problem
At my current $job, we recently migrated application deployments from Terraform to ArgoCD 🙏. That migration came with a challenge: how do we pass Terraform outputs into Kubernetes manifests?
For example, our AWS Managed Prometheus endpoint lives in Terraform state, but our apps deployed via ArgoCD need that URL. Sure, we could use External Secrets Operator (and we do!), but it adds an extra layer of indirection when you just want to see what values are being injected into pods.
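That said, since ESO ultimately materializes a plain Kubernetes Secret, peeking at what actually landed in the cluster only takes a couple of kubectl calls. The namespace, Secret name, key, and env var below are hypothetical:

```sh
# Decode the Secret that External Secrets Operator synced from the value Terraform published.
kubectl -n monitoring get secret amp-endpoint -o jsonpath='{.data.url}' | base64 -d

# Or check what a running pod actually sees.
kubectl -n monitoring exec deploy/my-app -- printenv PROMETHEUS_URL
```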
IRSA (IAM Roles for Service Accounts) uses OpenID Connect to authenticate Kubernetes service accounts against AWS IAM, so pods can assume roles and reach AWS resources without long-lived credentials stored in the cluster.
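On the Kubernetes side the wiring is mostly an annotation on the service account. Here's a minimal sketch, assuming the OIDC provider and pod identity webhook are already in place; the names, namespace, and role ARN are placeholders:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-reader
  namespace: monitoring
  annotations:
    # IAM role the pods should assume; account ID and role name are placeholders.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/prometheus-reader
EOF
```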
To test IRSA in a self-hosted Kubernetes cluster, I used a spare Raspberry Pi 4 running Talos (a Kubernetes operating system). This could easily have been done with Docker on my Mac, but I wanted a realistic test so I could later migrate the config to my homelab cluster.
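A quick way to confirm the setup is working on a cluster like this is to check what the pod identity webhook injects into a pod; the namespace and workload name below are placeholders:

```sh
# The webhook should add the role ARN and a projected web identity token to matching pods.
kubectl -n monitoring exec deploy/my-app -- printenv AWS_ROLE_ARN AWS_WEB_IDENTITY_TOKEN_FILE

# The token itself is a normal projected ServiceAccount token, mounted at this path.
kubectl -n monitoring exec deploy/my-app -- \
  cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token
```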