
# EKS Guide

This guide covers everything specific to running k8s-multitenant on Amazon EKS.


## Prerequisites

### 1. NetworkPolicy enforcement

NetworkPolicy objects are only enforced if your CNI plugin supports them. On EKS you have two options:

| Option | How to enable | Notes |
| --- | --- | --- |
| VPC CNI network policy (recommended) | `aws eks create-addon --addon-name vpc-cni`, then set `ENABLE_NETWORK_POLICY=true` on the `aws-node` DaemonSet | EKS 1.25+, VPC CNI add-on v1.14+ |
| Calico | Install via Helm: `helm install calico projectcalico/tigera-operator` | Works on any EKS version; does not use AWS networking for policy |

If neither is installed, NetworkPolicy resources are created but have no effect.
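With the VPC CNI add-on, the flag can also be set declaratively through the add-on's configuration values rather than by editing the DaemonSet by hand. A sketch, assuming a cluster named `my-cluster` (substitute your own):

```bash
# Enable network policy enforcement via the VPC CNI add-on's
# configuration values (requires add-on v1.14+). Cluster name is an example.
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}'
```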

### 2. RBAC — mapping IAM roles to Kubernetes groups

To use `rbac.subjects` with `kind: Group`, the IAM identity must be mapped to that group name in Kubernetes.

**Option A — `aws-auth` ConfigMap (EKS < 1.29):**

```yaml
# kubectl edit configmap aws-auth -n kube-system
mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/team-alpha-role
    username: team-alpha-user
    groups:
      - team-alpha-admins
```

**Option B — EKS access entries (EKS 1.29+, recommended):**

```bash
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/team-alpha-role \
  --kubernetes-groups team-alpha-admins
```
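To verify that the mapping took effect, you can read the entry back. A sketch, reusing the example cluster name and role ARN from above:

```bash
# Show the Kubernetes groups attached to the access entry.
aws eks describe-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/team-alpha-role \
  --query 'accessEntry.kubernetesGroups'
```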

## Finding your `vpcCidr`

```bash
# Get the VPC CIDR of your EKS cluster's VPC
aws ec2 describe-vpcs \
  --filters "Name=tag:kubernetes.io/cluster/my-cluster,Values=owned" \
  --query 'Vpcs[0].CidrBlock' \
  --output text
# Example output: 10.0.0.0/16
```

Set this value in `networkPolicy.vpcCidr`. This allows:

  • ALB (Application Load Balancer) health-check traffic into pods
  • Egress to RDS, ElastiCache, MSK, and other VPC-internal endpoints

## Example values

```yaml
global:
  labels:
    managed-by: platform-team
    cloud: aws

tools:
  create: true
  namespace: k8s-tools

tenants:
  - name: team-alpha
    labels:
      environment: production
      cost-center: eng-platform
    annotations:
      eks.amazonaws.com/cluster-name: my-eks-cluster
    rbac:
      subjects:
        - kind: Group
          name: team-alpha-admins # matches the group in aws-auth / access entry
          apiGroup: rbac.authorization.k8s.io
    networkPolicy:
      enabled: true

rbac:
  create: true
  serviceAccountName: default

resourceQuota:
  enabled: true
  default:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
    services: "5"

networkPolicy:
  enabled: true
  vpcCidr: "10.0.0.0/16" # your EKS VPC CIDR
  allowInternetEgress: false
```

The full example is at `examples/eks/values.yaml` in the repository.


## IRSA (IAM Roles for Service Accounts)

If tenant workloads need to access AWS services (S3, DynamoDB, SQS, etc.), annotate the namespace and use IRSA:

```yaml
tenants:
  - name: team-alpha
    annotations:
      # Optional — used by tooling that needs to know the cluster name
      eks.amazonaws.com/cluster-name: my-cluster
```

Then annotate the pod's ServiceAccount:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: team-alpha
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/team-alpha-s3-role
```
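If you prefer not to manage the IAM role and the annotation separately, `eksctl` can create both in one step. A sketch, assuming the example names above and an existing OIDC provider for the cluster:

```bash
# Creates an IAM role with an OIDC trust policy scoped to
# team-alpha/my-app, attaches the policy, and annotates the
# ServiceAccount automatically. All names are examples.
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace team-alpha \
  --name my-app \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```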

## Troubleshooting on EKS

**NetworkPolicy has no effect.** Check that VPC CNI network policy is enabled. The `aws-node` DaemonSet may run more than one container, so select the `aws-node` container explicitly to keep the output valid JSON for `jq`:

```bash
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="aws-node")].env}' \
  | jq '.[] | select(.name=="ENABLE_NETWORK_POLICY")'
```

**Pods can't reach RDS.** Verify that `networkPolicy.vpcCidr` matches your VPC CIDR, and that port 5432 (PostgreSQL) or 3306 (MySQL) is listed in the egress rules.
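A quick way to test egress from inside the tenant namespace is a throwaway pod. The database hostname below is a placeholder; substitute your real RDS endpoint:

```bash
# Exits successfully if a TCP connection to the endpoint can be opened.
kubectl run netcheck -n team-alpha --rm -it --restart=Never \
  --image=busybox:1.36 -- \
  nc -zv -w 3 my-db.example.us-east-1.rds.amazonaws.com 5432
```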

**Group binding not working.** Confirm the IAM role is mapped to the correct group name in `aws-auth` or via access entries:

```bash
kubectl get configmap aws-auth -n kube-system -o yaml
```
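You can also ask the API server directly whether the group grants the expected permissions, without assuming the IAM role. A sketch; the username, verb, and resource are examples:

```bash
# Impersonate an arbitrary user in the tenant group and check access.
kubectl auth can-i list pods -n team-alpha \
  --as test-user --as-group team-alpha-admins
```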