# GKE Guide

This guide covers everything specific to running k8s-multitenant on Google Kubernetes Engine (GKE).
## Prerequisites

### 1. NetworkPolicy enforcement
#### GKE Standard

Enable NetworkPolicy at cluster creation time if you can. It can also be enabled on an existing Standard cluster (`gcloud container clusters update --update-addons=NetworkPolicy=ENABLED`, then `--enable-network-policy`), but that recreates the node pools.

```bash
gcloud container clusters create my-cluster \
  --enable-network-policy \
  --region us-central1
```
Verify it is enabled:

```bash
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="value(networkConfig.enableNetworkPolicy)"
# Expected: True
```
#### GKE Autopilot

NetworkPolicy is enforced by default on Autopilot clusters. No extra configuration needed.
### 2. RBAC — Google Groups as Kubernetes subjects

Enable Google Groups for RBAC so Google Workspace group email addresses can be used as Kubernetes subjects.
**Step 1:** Create a parent security group in Google Workspace named `gke-security-groups@example.com` (GKE requires the `gke-security-groups` local part; substitute your own domain).

**Step 2:** Add your team groups as members of that parent group.

**Step 3:** Enable at cluster creation:

```bash
gcloud container clusters create my-cluster \
  --security-group="gke-security-groups@example.com" \
  --region us-central1
```
Or update an existing cluster:

```bash
gcloud container clusters update my-cluster \
  --security-group="gke-security-groups@example.com" \
  --region us-central1
```
**Step 4:** Use the Google Group email as the subject name:

```yaml
rbac:
  subjects:
    - kind: Group
      name: team-alpha@example.com
      apiGroup: rbac.authorization.k8s.io
```
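Behind the scenes, a Group subject like this lands in ordinary Kubernetes RBAC objects. As an illustration (the `tenant-admin` Role name, binding name, and group email here are assumptions, not necessarily what the chart renders), the resulting binding looks like:

```yaml
# Illustrative only: the Role name "tenant-admin" and the binding name
# are assumptions, not necessarily what k8s-multitenant generates.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admins
  namespace: team-alpha
subjects:
  - kind: Group
    name: team-alpha@example.com # must be a member of the gke-security-groups parent
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-admin
  apiGroup: rbac.authorization.k8s.io
```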
## Finding your vpcCidr

GKE VPC-native clusters use alias IP ranges for pods. Set `networkPolicy.vpcCidr` to the node/pod IP range:

```bash
# List subnets and their secondary IP ranges
gcloud compute networks subnets list \
  --filter="region:us-central1" \
  --format="table(name,ipCidrRange,secondaryIpRanges)"
```
For most GKE clusters, 10.0.0.0/8 covers both node and pod CIDRs. This allows:
- GKE internal load balancer health check traffic
- Egress to Cloud SQL (private IP), Memorystore (Redis), Pub/Sub private service connect endpoints
GKE uses 130.211.0.0/22 and 35.191.0.0/16 for load balancer health checks. If you set a strict vpcCidr that doesn't cover those ranges, health checks will fail. Either use 10.0.0.0/8 as the CIDR, or add a dedicated ingress rule for those ranges.
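If you do go with a strict `vpcCidr`, one way to keep health checks working is a dedicated ingress policy for Google's health check ranges. A sketch as a standalone manifest (the policy name and target namespace are illustrative; the chart itself may not expose a hook for this):

```yaml
# Illustrative: admit GKE load balancer health check ranges into team-alpha.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gclb-health-checks
  namespace: team-alpha
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 130.211.0.0/22
        - ipBlock:
            cidr: 35.191.0.0/16
```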
## Example values

```yaml
global:
  labels:
    managed-by: platform-team
    cloud: gcp

tools:
  create: true
  namespace: k8s-tools

tenants:
  - name: team-alpha
    labels:
      environment: production
      cost-center: eng-platform
    rbac:
      subjects:
        # Google Group email (must be under gke-security-groups parent)
        - kind: Group
          name: team-alpha@example.com
          apiGroup: rbac.authorization.k8s.io
    networkPolicy:
      enabled: true
  - name: team-beta
    labels:
      environment: production
      cost-center: eng-product
    rbac:
      subjects:
        - kind: Group
          name: team-beta@example.com
          apiGroup: rbac.authorization.k8s.io
    networkPolicy:
      enabled: true

rbac:
  create: true
  serviceAccountName: default

resourceQuota:
  enabled: true
  default:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
    services: "5"

networkPolicy:
  enabled: true
  vpcCidr: "10.0.0.0/8" # GCP VPC range
  allowInternetEgress: false
```
The full example is at `examples/gke/values.yaml`.
## Workload Identity

For pod-level access to GCP services (Cloud Storage, Pub/Sub, BigQuery), use GKE Workload Identity:

```bash
# Enable on the cluster
gcloud container clusters update my-cluster \
  --workload-pool=my-project.svc.id.goog \
  --region us-central1
```
Annotate the pod's Kubernetes ServiceAccount to impersonate a GCP Service Account:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: team-alpha
  annotations:
    iam.gke.io/gcp-service-account: team-alpha-sa@my-project.iam.gserviceaccount.com
```
Then grant the GCP binding:

```bash
gcloud iam service-accounts add-iam-policy-binding \
  team-alpha-sa@my-project.iam.gserviceaccount.com \
  --member="serviceAccount:my-project.svc.id.goog[team-alpha/my-app]" \
  --role="roles/iam.workloadIdentityUser"
```
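To confirm the impersonation works end to end, a throwaway pod running as the annotated ServiceAccount should report the GCP Service Account as its active identity. A sketch (the pod name and image choice are illustrative):

```yaml
# Illustrative verification pod: "gcloud auth list" should show the
# GCP Service Account from the iam.gke.io/gcp-service-account annotation.
apiVersion: v1
kind: Pod
metadata:
  name: wi-test
  namespace: team-alpha
spec:
  serviceAccountName: my-app
  restartPolicy: Never
  containers:
    - name: test
      image: google/cloud-sdk:slim
      command: ["gcloud", "auth", "list"]
```

Check the result with `kubectl logs -n team-alpha wi-test`.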
## Troubleshooting on GKE

### NetworkPolicy has no effect (Standard cluster)
```bash
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="value(networkConfig.enableNetworkPolicy)"
```
If the output is False, enable enforcement on the existing cluster with `gcloud container clusters update my-cluster --update-addons=NetworkPolicy=ENABLED` followed by `gcloud container clusters update my-cluster --enable-network-policy` (this recreates the node pools).
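A quick functional check is also possible: apply a deny-all ingress policy in a scratch namespace and see whether traffic between pods there is actually blocked. If traffic still flows, enforcement is inactive (the manifest below is illustrative):

```yaml
# Illustrative: if pods in this namespace still accept traffic after applying
# this policy, NetworkPolicy enforcement is not active on the cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress-test
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```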
### Google Group binding not working

```bash
# Confirm security group is set
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="value(authenticatorGroupsConfig)"
```

Ensure the team group is a member of the `gke-security-groups` parent group, not just a standalone group.
### Pods can't reach Cloud SQL

Verify Cloud SQL is configured with a private IP in your VPC and that `networkPolicy.vpcCidr` covers the Cloud SQL private IP range:

```bash
gcloud sql instances describe my-instance --format="value(ipAddresses)"
```
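When `vpcCidr` is kept narrow on purpose, an explicit egress rule toward the instance's private IP is an alternative to widening it. A sketch (the IP, port, and policy name are illustrative; 5432 assumes PostgreSQL):

```yaml
# Illustrative: allow egress from team-alpha pods to a Cloud SQL private IP.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cloudsql-egress
  namespace: team-alpha
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.0.5/32 # Cloud SQL private IP (illustrative)
      ports:
        - protocol: TCP
          port: 5432
```

Note that any policy selecting pods for `Egress` blocks all egress not explicitly allowed, including DNS, so combine this with the chart's existing egress rules rather than applying it in isolation.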