
Google Cloud offers a clever way of allowing Google Kubernetes Engine (GKE) workloads to securely authenticate to Google APIs with minimal credential exposure. I will illustrate this method using a tool called kaniko.
kaniko is an open source tool that allows you to build and push container images from Kubernetes pods when a Docker daemon is not easily accessible and you have no root access to the underlying machine. kaniko executes the build commands entirely in userspace and has no dependency on the Docker daemon, which makes it a popular tool in continuous integration (CI) pipelines.
Suppose you want to access some Google Cloud services from your GKE workload, such as a secret from Secret Manager, or in our case here: build and push a container image to Google’s Artifact Registry (GAR). Access to these services requires authorization through a Google service account (GSA) governed by Cloud IAM. This is different from a Kubernetes service account (KSA), which provides an identity for pods and is governed by Kubernetes Role-Based Access Control (RBAC). So how do you grant your GKE workloads access to these Google Cloud services in a secure manner?
The first option is to leverage the IAM service account used by the node pool(s). By default, this is the Compute Engine default service account. The downside to this method is that the service account's permissions are shared by all workloads on those nodes, violating the principle of least privilege. Because of this, it is recommended that you use a custom service account with the least-privileged roles and opt for a more granular approach when providing access to your workloads.
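If you do go this route, a rough sketch of setting it up could look like the following (gke-node-sa, custom-sa-pool, ${CLUSTER_NAME}, and the single logging role are illustrative placeholders, and you may need to pass your usual --zone or --region flags):
# Create a dedicated, minimally privileged service account for the nodes
gcloud iam service-accounts create gke-node-sa \
  --display-name "GKE node service account"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --role roles/logging.logWriter \
  --member "serviceAccount:gke-node-sa@${PROJECT_ID}.iam.gserviceaccount.com"
# Create the node pool with that service account instead of the Compute Engine default
gcloud container node-pools create custom-sa-pool \
  --cluster ${CLUSTER_NAME} \
  --service-account gke-node-sa@${PROJECT_ID}.iam.gserviceaccount.com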
The second, more secure option is the tried, tested, and true method: generate a key for a Google SA with the permissions you need and mount it in your pod as a Kubernetes secret. The pod manifest to build and push an image to GAR would look something like the following:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-k8s-secret
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.9.1
    args: ["--dockerfile=Dockerfile",
           "--context=gs://${GCS_BUCKET}/path/to/context.tar.gz",
           "--destination=${LOCATION}-docker.pkg.dev/${PROJECT_ID}/${REPO_NAME}/${IMAGE}:${TAG}",
           "--cache=true"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
The environment variable GOOGLE_APPLICATION_CREDENTIALS contains the path to a Google Cloud credentials JSON file that is mounted under /secret inside the pod. It is through this service account key that the Kubernetes pod is able to access the build context files and push the image to GAR.
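For completeness, creating that key and secret might look like the following sketch (kaniko-gsa is a placeholder for a GSA you have already granted the Artifact Registry writer and Storage object viewer roles):
# Export a key for the GSA; this is the file the pod will read
gcloud iam service-accounts keys create kaniko-secret.json \
  --iam-account kaniko-gsa@${PROJECT_ID}.iam.gserviceaccount.com
# Store it as a Kubernetes secret so it can be mounted into the pod
kubectl create secret generic kaniko-secret \
  --from-file=kaniko-secret.json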
The downside to this method is that you have live, non-expiring keys floating around, with a constant risk of them being leaked, stolen, or accidentally committed to a public code repository.
The third option uses Workload Identity to provide the link between the Google SA and the Kubernetes SA. This grants the KSA the ability to act as the GSA when interacting with Google Cloud services and resources. This method still provides the granular access from IAM without requiring any service account keys to be generated, thus closing the gap.
You will need to enable Workload Identity on your GKE cluster and configure the GKE metadata server on your node pool(s). You will also need a GSA (I called mine kaniko-wi-gsa).
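On an existing cluster, that setup might look like the following sketch (${CLUSTER_NAME} and ${NODEPOOL_NAME} are placeholders, and you may need to pass your usual --zone or --region flags):
gcloud container clusters update ${CLUSTER_NAME} \
  --workload-pool=${PROJECT_ID}.svc.id.goog
gcloud container node-pools update ${NODEPOOL_NAME} \
  --cluster ${CLUSTER_NAME} \
  --workload-metadata=GKE_METADATA
gcloud iam service-accounts create kaniko-wi-gsa
Next, assign the GSA the roles it needs: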
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--role roles/artifactregistry.writer \
--member "serviceAccount:kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--role roles/storage.objectViewer \
--member "serviceAccount:kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com"
On the Kubernetes side, create a KSA (I called mine kaniko-wi-ksa).
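Assuming the default namespace (which is what the default/ in the member below refers to), creating it is a one-liner:
kubectl create serviceaccount kaniko-wi-ksa
Then grant the KSA the following binding, which allows it to impersonate the GSA that has the permissions to access the Google Cloud services you need: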
gcloud iam service-accounts add-iam-policy-binding kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/kaniko-wi-ksa]"
The last thing you need to do is annotate your KSA with the full email of your GSA:
kubectl annotate serviceaccount kaniko-wi-ksa \
iam.gke.io/gcp-service-account=kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com
Here is the pod manifest for the same image build job, but using Workload Identity instead:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-wi
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.9.1
    args: ["--dockerfile=Dockerfile",
           "--context=gs://${GCS_BUCKET}/path/to/context.tar.gz",
           "--destination=${LOCATION}-docker.pkg.dev/${PROJECT_ID}/${REPO_NAME}/${IMAGE}:${TAG}",
           "--cache=true"]
  restartPolicy: Never
  serviceAccountName: kaniko-wi-ksa
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"
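To try it out, save the manifest and watch the build run (kaniko-wi.yaml is just an example file name):
kubectl apply -f kaniko-wi.yaml
kubectl logs -f kaniko-wi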
Although using Workload Identity requires a little more initial setup, you no longer need to generate or rotate any service account keys.
Sometimes you may want to push your images to a central artifact registry located in a Google Cloud project that is different from the one your GKE cluster is in. Can you still use Workload Identity in this case?
Absolutely! Your GSA and the necessary IAM bindings are created in the external Google Cloud project, but you still reference the Workload Identity pool and KSA of the project your GKE workload is running in.
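For example, assuming the registry lives in ${REGISTRY_PROJECT_ID} and the cluster in ${CLUSTER_PROJECT_ID} (both placeholder variables), the bindings might look like this:
# The GSA and its Artifact Registry permission live in the registry's project
gcloud projects add-iam-policy-binding ${REGISTRY_PROJECT_ID} \
  --role roles/artifactregistry.writer \
  --member "serviceAccount:kaniko-wi-gsa@${REGISTRY_PROJECT_ID}.iam.gserviceaccount.com"
# The Workload Identity binding still references the cluster project's pool and KSA
gcloud iam service-accounts add-iam-policy-binding kaniko-wi-gsa@${REGISTRY_PROJECT_ID}.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[default/kaniko-wi-ksa]"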
Using kaniko as an example, we illustrated how Workload Identity enables more secure access when authenticating to Google APIs. Follow recommended security practices to harden your GKE cluster, and stop relying on node service accounts or exporting service account keys as Kubernetes secrets.
For more information on Workload Identity for GKE and how we can help, contact us.