Replicating AWS IRSA Workflow for my Homelab with Talos and a Raspberry Pi
Dumping access keys for OIDC! π
IRSA leverages OpenID Connect to authenticate Kubernetes service accounts to AWS, letting pods obtain temporary credentials instead of long-lived access keys and simplifying credential management in the cluster.
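Under the hood, a mutating webhook mounts a projected service account token into the pod and the AWS SDK exchanges that token for temporary credentials via STS. Roughly the manual equivalent, as a sketch (the token path and role ARN below are illustrative):
# Manual equivalent of what the AWS SDK does with IRSA (illustrative path and ARN)
TOKEN=$(cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::111111111111:role/pitower-test \
  --role-session-name irsa-test \
  --web-identity-token "$TOKEN" \
  --no-cli-pager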
To test AWS IRSA (IAM Roles for Service Accounts) in a self-hosted Kubernetes cluster, I used a spare Raspberry Pi 4 and Talos (a Kubernetes operating system). This could easily have been done with Docker on my Mac, but I wanted a real test so I can migrate the config over to my homelab cluster.
Word of warning: I only ran through these steps once, so there are likely some errors. I’ll update this post as I refine the process.
You can find all the referenced files here.
Booting Talos on Raspberry Pi
- Download and Prepare the Talos Image: Talos provides a straightforward installation process, and the image is built via the Talos Image Factory (https://factory.talos.dev/). Here’s how to prepare the SD card on macOS.
diskutil list
# Identify the external drive, in my case /dev/disk2
diskutil unmountDisk /dev/disk2
curl -LO https://factory.talos.dev/image/ee21ef4a5ef808a9b7484cc0dda0f25075021691c8c09a276591eedb638ea1f9/v1.7.4/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
sudo dd if=metal-arm64.raw of=/dev/disk2 conv=fsync bs=4M
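The rest of the walkthrough drives the node with talosctl, which isn't installed above. On macOS it can be pulled in via Homebrew (a sketch, assuming you use brew):
# Install the Talos CLI (assumes Homebrew on macOS)
brew install siderolabs/tap/talosctl
talosctl version --client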
- Boot and Configure Talos:
Insert the SD card and boot the Raspberry Pi. After assigning it a static IP via a DHCP reservation (mine is 192.168.0.191), it's ready for configuration.
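Before generating any config, a quick way to confirm the node has booted into maintenance mode and is waiting for configuration is to check that the Talos API port is reachable (a rough check, assuming the IP above):
# The Talos API (apid) listens on port 50000 while the node waits for configuration
nc -vz 192.168.0.191 50000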
Creating the Talos Cluster
- Generate Cluster Configuration:
Create a controlplane.patch file to set custom kube-apiserver settings, using a GitHub repo as the OIDC provider server.
controlplane.patch
cluster:
  apiServer:
    extraArgs:
      service-account-issuer: https://raw.githubusercontent.com/<github_org>/<repo>/<branch>/<path>
      service-account-jwks-uri: https://<node_ip>:6443/openid/v1/jwks
  allowSchedulingOnControlPlanes: true
Create a machine.patch file to set a pet name for the server.
machine.patch
machine:
  network:
    hostname: master-01
- Generate and Apply Machine Config:
talosctl gen config sitower https://192.168.0.191:6443 --config-patch-control-plane @./controlplane.patch --output ./clusterconfig
cd clusterconfig
talosctl machineconfig patch ./controlplane.yaml --patch @../machine.patch --output ./master-01.yaml
talosctl apply-config --insecure --nodes 192.168.0.191 --file ./master-01.yaml
talosctl --talosconfig ./talosconfig config endpoint 192.168.0.191
talosctl --talosconfig ./talosconfig bootstrap --nodes 192.168.0.191
talosctl --talosconfig ./talosconfig kubeconfig --nodes 192.168.0.191
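Before reaching for kubectl, talosctl health can confirm etcd and the control plane components came up (a sketch, using the same talosconfig and node as above):
# Wait for etcd, the kubelet and the control plane components to report healthy
talosctl --talosconfig ./talosconfig health --nodes 192.168.0.191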
Verify the setup:
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64b67fc8fd-j6hnb 1/1 Running 0 88s
kube-system coredns-64b67fc8fd-nj8hw 1/1 Running 0 88s
kube-system kube-apiserver-master-01 1/1 Running 0 9s
kube-system kube-controller-manager-master-01 1/1 Running 2 (2m2s ago) 32s
kube-system kube-flannel-rqrn2 1/1 Running 0 87s
kube-system kube-proxy-4llwx 1/1 Running 0 87s
kube-system kube-scheduler-master-01 1/1 Running 2 (2m4s ago) 25s
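As an extra sanity check, you can confirm the issuer override actually lands in issued tokens (a sketch; requires kubectl 1.24+ for the create token subcommand):
# Request a token for the default service account with the STS audience
kubectl create token default --audience sts.amazonaws.com
# Decode the middle (payload) segment of the printed JWT: "iss" should be the GitHub
# raw URL from controlplane.patch and "aud" should contain sts.amazonaws.com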
Setting Up OIDC Provider
- Export and Modify OIDC Configuration: Export the OIDC configuration from the cluster and store it in your GitHub repository.
kubectl get --raw /.well-known/openid-configuration | jq > .well-known/openid-configuration
kubectl get --raw /openid/v1/jwks | jq > .well-known/jwks
Replace the issuer and jwks_uri fields appropriately in the openid-configuration file.
{
  "issuer": "https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main",
  "jwks_uri": "https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main/.well-known/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}
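Since AWS STS fetches both documents over the public internet when validating tokens, it's worth checking they're reachable once pushed (the URLs below match my repo; adjust for yours):
# Both must be publicly reachable for AssumeRoleWithWebIdentity to work
curl -s https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main/.well-known/openid-configuration | jq .issuer
curl -s https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main/.well-known/jwks | jq '.keys[].kid'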
- Deploy Cert-Manager and AWS Pod Identity Webhook:
Deploy cert-manager so the webhook chart can use the cainjector for its TLS certificates.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml
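The webhook's certificates won't be issued until cert-manager is actually ready, so it's worth waiting for the rollout first (a sketch, using the deployment names from the official manifest):
# Wait for the cert-manager components (including the cainjector) to become ready
kubectl -n cert-manager rollout status deployment/cert-manager
kubectl -n cert-manager rollout status deployment/cert-manager-cainjector
kubectl -n cert-manager rollout status deployment/cert-manager-webhook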
Create the following files:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml

helmCharts:
  - name: amazon-eks-pod-identity-webhook
    repo: https://jkroepke.github.io/helm-charts
    version: 2.1.3
    releaseName: aws-identity-webhook
    namespace: aws-identity-webhook
    valuesFile: values.yaml
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: aws-identity-webhook
values.yaml
image:
  tag: v0.5.4

config:
  annotationPrefix: eks.amazonaws.com
  defaultAwsRegion: ""
  stsRegionalEndpoint: false

pki:
  certManager:
    enabled: true

securityContext:
  runAsNonRoot: true
  runAsUser: 65534
  runAsGroup: 65534
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
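The apply step isn't shown above; with the three files in one directory, something like this renders the chart and installs it (a sketch, assuming a standalone kustomize binary, since the helmCharts field needs --enable-helm):
# Render the Helm chart through kustomize and apply it
kustomize build --enable-helm . | kubectl apply -f -
# Confirm the webhook is running
kubectl -n aws-identity-webhook get pods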
Creating IAM Roles with Terraform
- Define OIDC Provider and IAM Role:
data "tls_certificate" "kubernetes_oidc_staging" {
  url = "https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main"
}

resource "aws_iam_openid_connect_provider" "kubernetes_oidc_staging" {
  url             = "https://raw.githubusercontent.com/swibrow/aws-pod-identity-webhook/main"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.kubernetes_oidc_staging.certificates[0].sha1_fingerprint]
}

resource "aws_iam_role" "pitower_test" {
  name = "pitower-test"

  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Federated" : [
            "${aws_iam_openid_connect_provider.kubernetes_oidc_staging.arn}"
          ]
        },
        "Action" : "sts:AssumeRoleWithWebIdentity",
        "Condition" : {
          "StringEquals" : {
            "${aws_iam_openid_connect_provider.kubernetes_oidc_staging.url}:sub" : "system:serviceaccount:test:pitower-test",
            "${aws_iam_openid_connect_provider.kubernetes_oidc_staging.url}:aud" : "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "pitower_test" {
  role       = aws_iam_role.pitower_test.name
  policy_arn = aws_iam_policy.pitower_test.arn
}

resource "aws_iam_policy" "pitower_test" {
  name = "list-buckets"

  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "s3:ListAllMyBuckets",
        "Effect" : "Allow",
        "Resource" : "*"
      }
    ]
  })
}
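Apply the Terraform as usual and check that the provider was registered (a sketch; assumes AWS credentials are configured locally):
terraform init
terraform apply
# The new OIDC provider should show up with the raw.githubusercontent.com URL
aws iam list-open-id-connect-providers --no-cli-pager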
Testing the Setup
- Deploy AWS CLI Test Pod:
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pitower-test
  namespace: test
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1111111111111:role/pitower-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pitower-test
  namespace: test
spec:
  selector:
    matchLabels:
      app: pitower-test
  template:
    metadata:
      labels:
        app: pitower-test
    spec:
      serviceAccountName: pitower-test
      containers:
        - name: pitower-test
          image: amazon/aws-cli
          command: ["aws", "s3api", "list-buckets", "--no-cli-pager"]
          securityContext:
            runAsNonRoot: true
            runAsUser: 65534
            runAsGroup: 65534
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - name: aws
              mountPath: /.aws
              readOnly: false
      volumes:
        - name: aws
          emptyDir: {}
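Apply the manifest (the filename test.yaml is my own choice here) and check that the webhook actually mutated the pod before looking at the logs:
kubectl apply -f test.yaml
# The webhook should have injected the web identity token and role ARN
kubectl -n test get pod -l app=pitower-test \
  -o jsonpath='{.items[0].spec.containers[0].env[*].name}'
# Expect AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE in the output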
Confirm successful authentication and access:
kubectl get pods -n test
kubectl logs pitower-test-<pod-id> -n test
The logs should show your S3 buckets, indicating that the IRSA setup is working correctly.
π
pitower-test-58d6d5f8bf-mdpg5
[pitower-test-58d6d5f8bf-mdpg5] {
[pitower-test-58d6d5f8bf-mdpg5] "Buckets": [
[pitower-test-58d6d5f8bf-mdpg5] {
[pitower-test-58d6d5f8bf-mdpg5] "Name": "wibrow.net",
[pitower-test-58d6d5f8bf-mdpg5] "CreationDate": "2023-02-28T06:07:00+00:00"
[pitower-test-58d6d5f8bf-mdpg5] }
[pitower-test-58d6d5f8bf-mdpg5] ],
[pitower-test-58d6d5f8bf-mdpg5] "Owner": {
[pitower-test-58d6d5f8bf-mdpg5] "DisplayName": "sam.wibrow",
[pitower-test-58d6d5f8bf-mdpg5] "ID": "a21d7bb1a598ed1f67ebcea6370c14dbdc39060a1e718d6be16e011faaf22f7e"
[pitower-test-58d6d5f8bf-mdpg5] }
[pitower-test-58d6d5f8bf-mdpg5] }
Conclusion
Now I can authenticate any pod running in my homelab to AWS using the IRSA method.
I will refine the setup before adding support to my public homelab cluster, pitower.
Feel free to reach out if you have any questions or need help with the setup: LinkedIn, any of the main CNCF/Kubernetes Slack channels, or the Home Operations Discord. You won't find another Wibrow kicking around.