
CKA Certification Study Guide: Everything You Need to Pass in 2026

Complete study guide for the Certified Kubernetes Administrator (CKA) exam. Covers exam domains, essential kubectl commands, practice scenarios, and study strategies.

January 22, 2026 · 16 min read · By CloudaQube Team
[Image: CKA certification badge with Kubernetes cluster diagram]

Introduction: Why CKA Matters for Your Career

Kubernetes has become the standard for container orchestration across every major cloud provider and most enterprise organizations. As adoption has grown, so has the demand for professionals who can administer, troubleshoot, and maintain Kubernetes clusters. The Certified Kubernetes Administrator (CKA) certification, offered by the Cloud Native Computing Foundation (CNCF), is the industry-recognized credential that validates these skills.

According to the 2025 CNCF Annual Survey, 84% of organizations use Kubernetes in production, and certified Kubernetes professionals command salaries 20-30% higher than their non-certified peers. The CKA is not just a line on a resume. It is a hands-on, performance-based exam that proves you can solve real problems in live Kubernetes clusters.

This guide covers everything you need to pass the CKA exam in 2026: the exam format, a breakdown of every domain with key concepts and commands, a four-week study plan, and battle-tested exam day strategies.

Exam Format and Logistics

Before you start studying, understand exactly what you are preparing for:

  • Exam version: Kubernetes 1.30+ (updated regularly)
  • Duration: 2 hours
  • Format: Performance-based (hands-on tasks in live clusters)
  • Number of tasks: 15-20
  • Passing score: 66%
  • Proctoring: Online via PSI Bridge
  • Retake policy: One free retake included
  • Cost: $395 USD (includes one retake)
  • Validity: 2 years

Key Exam Details

  • The exam is open-book: you may access the official Kubernetes documentation at kubernetes.io/docs during the exam.
  • You work in real Kubernetes clusters, not multiple choice questions.
  • You are given a browser-based terminal environment and must switch between multiple cluster contexts.
  • A notepad is provided within the exam interface for scratch notes.

PSI Bridge Environment

The exam runs in a remote desktop environment through your browser. You will need:

  • A stable internet connection (minimum 1 Mbps)
  • A webcam and microphone
  • A clean, quiet room with no second monitors
  • A government-issued photo ID

Environment Requirements

The proctor will ask you to show your workspace via webcam before the exam begins. Remove all papers, books, phones, and extra monitors from your desk. Only the exam interface tab is permitted in your browser, plus additional tabs for the Kubernetes documentation.

Exam Domain Breakdown

The CKA exam covers five domains, each weighted differently. Allocate your study time proportionally:

  • Cluster Architecture, Installation & Configuration: 25%
  • Workloads & Scheduling: 15%
  • Services & Networking: 20%
  • Storage: 10%
  • Troubleshooting: 30%

Troubleshooting is the largest domain at 30%. This reflects the reality that most of a Kubernetes administrator's time is spent diagnosing and fixing problems, not building clusters from scratch.

Domain 1: Cluster Architecture, Installation & Configuration (25%)

This domain tests your understanding of how Kubernetes clusters are constructed and configured.

Control Plane Components

Know the role of every control plane component:

  • kube-apiserver: The front door to the cluster. All communication goes through the API server.
  • etcd: The key-value store that holds all cluster state. Backup and restore etcd is a common exam task.
  • kube-scheduler: Assigns pods to nodes based on resource requirements and constraints.
  • kube-controller-manager: Runs controllers (ReplicaSet, Deployment, Node, etc.) that maintain desired state.
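On a kubeadm cluster, these components run as static pods, so you can inspect their health directly. A quick inspection sketch (pod names include your control plane hostname, so they will differ on your cluster):

```shell
# Control plane components are static pods defined on the control plane node
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# Confirm each component pod is Running
kubectl get pods -n kube-system -o wide

# If the API server itself is down, kubectl will not respond; inspect the
# containers on the node directly with the container runtime CLI
crictl ps -a | grep kube-apiserver
```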

etcd Backup and Restore

This is a frequently tested task. Practice it until you can do it from memory:

# Backup etcd
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the backup
ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup.db --write-out=table

# Restore etcd from backup
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored

Cluster Upgrades with kubeadm

Upgrading a cluster is another common exam scenario:

# Step 1: Upgrade the control plane node
# Check available versions
apt-cache madison kubeadm

# Upgrade kubeadm
apt-get update && apt-get install -y kubeadm=1.30.0-1.1
kubeadm upgrade plan
kubeadm upgrade apply v1.30.0

# Upgrade kubelet and kubectl
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl daemon-reload
systemctl restart kubelet

# Step 2: Upgrade worker nodes (repeat for each)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
# SSH to the worker node and upgrade kubeadm, kubelet, kubectl
kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet
# Back on the control plane:
kubectl uncordon node-1
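After each node is upgraded and uncordoned, verify the versions actually changed before moving to the next task:

```shell
# Every node should report the new kubelet version and a Ready status
kubectl get nodes

# Confirm the client, server, and kubeadm versions match the target
kubectl version
kubeadm version
```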

RBAC Configuration

Role-Based Access Control is essential for securing clusters:

# Create a Role that allows read access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Bind the role to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: development
subjects:
- kind: User
  name: developer-jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
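The same Role and RoleBinding can be created imperatively, which is much faster under exam time pressure, and kubectl auth can-i lets you verify the result immediately:

```shell
# Imperative equivalents of the YAML above
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n development
kubectl create rolebinding read-pods-binding --role=pod-reader \
  --user=developer-jane -n development

# Verify the binding by impersonating the user
kubectl auth can-i list pods --as=developer-jane -n development    # yes
kubectl auth can-i delete pods --as=developer-jane -n development  # no
```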

Domain 2: Workloads & Scheduling (15%)

Deployments and Rolling Updates

# Create a deployment
kubectl create deployment nginx --image=nginx:1.25 --replicas=3

# Update the image (triggers a rolling update)
kubectl set image deployment/nginx nginx=nginx:1.26

# Check rollout status
kubectl rollout status deployment/nginx

# View rollout history
kubectl rollout history deployment/nginx

# Rollback to previous revision
kubectl rollout undo deployment/nginx

# Rollback to a specific revision
kubectl rollout undo deployment/nginx --to-revision=2

Resource Requests and Limits

Understanding how the scheduler uses resource requests and how limits enforce resource boundaries is critical:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.26
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"

Resource Management Rules

  • Requests determine scheduling: the scheduler places pods on nodes with enough allocatable resources.
  • Limits enforce maximums: containers exceeding memory limits are OOM-killed; those exceeding CPU limits are throttled.
  • Always set requests. Set limits for memory (to prevent OOM). CPU limits are debated but are generally recommended for multi-tenant clusters.
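To confirm what a running pod actually requested and whether a container was OOM-killed, a few checks help (the jsonpath below assumes a single-container pod like resource-demo above):

```shell
# Show the requests and limits the scheduler saw
kubectl describe pod resource-demo | grep -E -A3 "Limits|Requests"

# Was the last container exit an OOM kill? Prints OOMKilled if so
kubectl get pod resource-demo \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# How much of each node's allocatable capacity is already requested
kubectl describe node <node-name> | grep -A8 "Allocated resources"
```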

LimitRanges and ResourceQuotas

# Enforce default resource requests/limits per container
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: development
spec:
  limits:
  - default:
      memory: "256Mi"
      cpu: "500m"
    defaultRequest:
      memory: "128Mi"
      cpu: "250m"
    type: Container
---
# Cap total resource consumption per namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"

Node Affinity and Taints/Tolerations

# Taint a node (prevent scheduling unless tolerated)
kubectl taint nodes node-1 gpu=true:NoSchedule

# Label a node for affinity rules
kubectl label nodes node-2 disktype=ssd
# Pod with node affinity and toleration
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nvidia/cuda:12.0-base

Domain 3: Services & Networking (20%)

Service Types

# ClusterIP (default) - Internal only
kubectl expose deployment nginx --port=80 --target-port=80 --type=ClusterIP

# NodePort - Exposes on each node's IP
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

# LoadBalancer - Provisions a cloud load balancer
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer

Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
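Once the Ingress is applied, verify it picked up an address and test the routing rules without DNS by supplying the Host header directly (203.0.113.10 below is a placeholder for your ingress controller's address):

```shell
# Check the ingress received an address and lists the expected rules
kubectl get ingress app-ingress
kubectl describe ingress app-ingress

# Test host-based routing without DNS: point curl at the ingress IP
# and set the Host header (203.0.113.10 is a placeholder address)
curl -H "Host: app.example.com" http://203.0.113.10/api
curl -H "Host: app.example.com" http://203.0.113.10/
```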

Network Policies

Network policies control traffic flow between pods. This is a high-value exam topic:

# Deny all ingress traffic to pods in the 'secure' namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: secure
spec:
  podSelector: {}
  policyTypes:
  - Ingress

---
# Allow traffic only from pods with label app=frontend to pods with label app=api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: secure
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
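A quick way to confirm the policy behaves as intended is to test from pods with and without the allowed label. This sketch assumes a Service named api-service fronts the app=api pods; the short -T timeout keeps the expected failure fast:

```shell
# From a pod labeled app=frontend, traffic to the api pods should succeed
kubectl run np-test --image=busybox:1.36 --labels=app=frontend --rm -it \
  --restart=Never -n secure -- wget -qO- -T 2 http://api-service:8080

# From an unlabeled pod, the same request should time out
kubectl run np-test --image=busybox:1.36 --rm -it \
  --restart=Never -n secure -- wget -qO- -T 2 http://api-service:8080
```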

CoreDNS

Know how DNS resolution works within the cluster:

# Test DNS resolution from within a pod
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local

# Service DNS format: <service-name>.<namespace>.svc.cluster.local
# Pod DNS format: <pod-ip-dashed>.<namespace>.pod.cluster.local

Domain 4: Storage (10%)

Persistent Volumes and Claims

# PersistentVolume (provisioned by admin)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv-data

---
# PersistentVolumeClaim (requested by user)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

---
# Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: app
    image: nginx:1.26
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: pvc-data

StorageClasses

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com  # CSI driver; in-tree kubernetes.io/aws-ebs was removed in Kubernetes 1.27
parameters:
  type: gp3
  iops: "3000"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

Storage Access Modes

  • ReadWriteOnce (RWO): Volume can be mounted read-write by a single node
  • ReadOnlyMany (ROX): Volume can be mounted read-only by many nodes
  • ReadWriteMany (RWX): Volume can be mounted read-write by many nodes (requires NFS or similar)
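Binding problems are the typical storage task on the exam. A short triage sequence, assuming the PV/PVC names from the manifests above:

```shell
# Check whether the PVC bound to a PV
kubectl get pv,pvc

# A Pending PVC usually means no PV matches its size, access mode,
# or storageClassName; the events at the bottom explain which
kubectl describe pvc pvc-data

# Note: with volumeBindingMode: WaitForFirstConsumer, a PVC stays Pending
# until a pod that uses it is scheduled; that is expected, not an error
```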

Domain 5: Troubleshooting (30%)

This is the most heavily weighted domain. The exam will present broken clusters, failing pods, and misconfigured resources for you to diagnose and fix.

Systematic Debugging Approach

Always follow a top-down approach:

# 1. Check cluster health
kubectl get nodes
kubectl get componentstatuses  # Deprecated but may still appear
kubectl cluster-info

# 2. Check the namespace and workloads
kubectl get all -n <namespace>

# 3. Describe the failing resource for events and conditions
kubectl describe pod <pod-name> -n <namespace>

# 4. Check pod logs
kubectl logs <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous  # Previous container (after crash)
kubectl logs <pod-name> -n <namespace> -c <container>  # Specific container

# 5. Execute into the pod for interactive debugging
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

Common Pod Issues and Fixes

  • ImagePullBackOff: wrong image name or tag, or private registry auth. Fix: verify the image exists and check imagePullSecrets.
  • CrashLoopBackOff: application crash, wrong command/args, or missing config. Fix: check logs with kubectl logs --previous.
  • Pending: insufficient resources or no matching nodes. Fix: check events, node resources, and taints.
  • ContainerCreating: volume mount issues or a missing secret/configmap. Fix: check events and verify referenced resources exist.
  • OOMKilled: container exceeds its memory limit. Fix: increase the memory limit or optimize the application.
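A few one-liners to surface problem pods quickly across the whole cluster:

```shell
# All pods that are neither Running nor Succeeded
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

# Recent events sorted by time (often the fastest route to a root cause)
kubectl get events -A --sort-by=.lastTimestamp | tail -20

# Restart counts reveal crash loops at a glance
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
```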

Debugging Node Issues

# Check node status and conditions
kubectl describe node <node-name>

# Look for these conditions:
#   Ready: True (healthy)
#   MemoryPressure: False (OK)
#   DiskPressure: False (OK)
#   PIDPressure: False (OK)

# SSH to the node and check kubelet
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -50   # Last 50 log lines
journalctl -u kubelet -f                      # Follow live (-f never exits, so do not pipe it to tail)

# Check kubelet configuration
cat /var/lib/kubelet/config.yaml

# Restart kubelet if needed
systemctl restart kubelet

Debugging Networking Issues

# Verify service endpoints
kubectl get endpoints <service-name>

# Test connectivity from within the cluster
kubectl run test-pod --image=busybox:1.36 --rm -it --restart=Never -- \
  wget -qO- http://<service-name>.<namespace>.svc.cluster.local

# Check if kube-proxy is running
kubectl get pods -n kube-system -l k8s-app=kube-proxy

# Verify network policies are not blocking traffic
kubectl get networkpolicies -n <namespace>

Essential kubectl Commands Cheat Sheet

These commands cover the most common operations you will need during the exam:

# CONTEXT AND CONFIGURATION
kubectl config get-contexts              # List all contexts
kubectl config use-context <context>      # Switch context
kubectl config set-context --current --namespace=<ns>  # Set default namespace

# RESOURCE CREATION (imperative - faster for exams)
kubectl run nginx --image=nginx:1.26                         # Create a pod
kubectl create deployment app --image=app:v1 --replicas=3    # Create deployment
kubectl expose deployment app --port=80 --type=NodePort      # Create service
kubectl create configmap my-config --from-literal=key=value  # Create configmap
kubectl create secret generic my-secret --from-literal=pw=s3cret  # Create secret
kubectl create serviceaccount my-sa                          # Create SA

# DRY RUN (generate YAML without creating)
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl create deployment app --image=app --dry-run=client -o yaml > deploy.yaml

# RESOURCE MANAGEMENT
kubectl get pods -A                      # All namespaces
kubectl get pods -o wide                 # Extra details (node, IP)
kubectl get pods --sort-by=.metadata.creationTimestamp  # Sort by creation time
kubectl get pods -l app=nginx            # Filter by label
kubectl top pods                         # Pod resource usage (requires metrics-server)
kubectl top nodes                        # Node resource usage (requires metrics-server)

# EDITING AND PATCHING
kubectl edit deployment nginx            # Edit in-place
kubectl scale deployment nginx --replicas=5  # Scale
kubectl label pod nginx env=prod         # Add label
kubectl annotate pod nginx description="web server"  # Add annotation

# LOGS AND DEBUGGING
kubectl logs pod-name -f                 # Follow logs
kubectl logs pod-name --tail=100         # Last 100 lines
kubectl exec -it pod-name -- /bin/bash   # Shell into pod
kubectl cp pod-name:/path/file ./file    # Copy from pod
kubectl port-forward pod-name 8080:80    # Port forwarding

Speed Tips for the Exam

  • Set up aliases at the start of the exam: alias k=kubectl, export do="--dry-run=client -o yaml"
  • Use kubectl explain <resource> to quickly look up field specifications instead of searching documentation.
  • Generate YAML with --dry-run=client -o yaml and modify it, rather than writing YAML from scratch.
  • Use kubectl run and kubectl create for speed; only use YAML when you need fields those commands do not support.
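The setup from the first tip takes about ten seconds at the start of the exam and pays off on every task. One possible version (the now variable and the comments are illustrative):

```shell
# Paste at the start of the exam session
alias k=kubectl
export do="--dry-run=client -o yaml"   # usage: k run test --image=nginx $do > pod.yaml
export now="--force --grace-period=0"  # usage: k delete pod test $now

# Tab completion for the k alias (bash)
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
```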

4-Week Study Plan

Week 1: Foundations (Cluster Architecture + Workloads)

  • Days 1-2: Set up a practice cluster (minikube, kind, or a cloud lab). Review cluster architecture and control plane components.
  • Days 3-4: Practice etcd backup/restore and cluster upgrades with kubeadm.
  • Days 5-7: Deployments, rolling updates, rollbacks, resource requests/limits, LimitRanges, and ResourceQuotas.

Week 2: Networking + Storage

  • Days 1-3: Services (ClusterIP, NodePort, LoadBalancer), Ingress resources, and CoreDNS troubleshooting.
  • Days 4-5: Network Policies. Practice writing policies that allow/deny traffic between namespaces and pods.
  • Days 6-7: PersistentVolumes, PersistentVolumeClaims, StorageClasses, and volume expansion.

Week 3: Troubleshooting + Security

  • Days 1-3: Pod debugging: ImagePullBackOff, CrashLoopBackOff, Pending, OOMKilled. Practice with intentionally broken manifests.
  • Days 4-5: Node troubleshooting: kubelet issues, certificate problems, resource pressure.
  • Days 6-7: RBAC (Roles, ClusterRoles, Bindings), ServiceAccounts, and SecurityContexts.

Week 4: Practice Exams and Review

  • Days 1-2: Take a full-length practice exam (killer.sh, included with your exam registration).
  • Days 3-4: Review weak areas identified in the practice exam. Redo failed tasks.
  • Days 5-6: Take a second practice exam. Focus on speed and efficiency.
  • Day 7: Light review, rest, and prepare your exam environment.

How Much Study Time?

Most candidates who pass report 40-80 hours of total study time. If you already work with Kubernetes daily, aim for the lower end. If Kubernetes is new to you, allow 80+ hours and consider starting with the CKAD (which focuses on application development rather than cluster administration). If you need to build a stronger foundation with containers first, our guide on Docker vs Kubernetes covers the basics and recommended learning path.

Practice Resources and Labs

  • killer.sh: Two free practice exam sessions included with your CKA registration. The most realistic simulation of the actual exam.
  • Kubernetes documentation: The only resource you can use during the exam, so get comfortable navigating it.
  • kubectl explain: Built into kubectl, provides field-level documentation for any resource type.
  • kind (Kubernetes in Docker): Free, runs locally, great for quick cluster setup and teardown.
  • CloudaQube Labs: AI-generated hands-on labs that let you practice CKA scenarios in real cloud environments without managing your own infrastructure.

Exam Day Tips

  1. Set up aliases immediately: alias k=kubectl and export do="--dry-run=client -o yaml" save significant time over two hours.
  2. Read the question carefully: Identify which cluster context you need, which namespace, and what the exact deliverable is.
  3. Switch context first: Every task specifies a context. The first thing you do for each task is kubectl config use-context <context>.
  4. Do not get stuck: If a task is taking more than 8-10 minutes, flag it and move on. Return to it if time permits.
  5. Use imperative commands: kubectl create and kubectl run are much faster than writing YAML from scratch. Use --dry-run=client -o yaml to generate base YAML, then modify.
  6. Verify your work: After completing each task, run a quick kubectl get or kubectl describe to confirm the resource exists and is in the expected state.
  7. Manage your time: With 15-20 tasks in 120 minutes, you have roughly 6-8 minutes per task. Simple tasks should take 2-3 minutes, leaving more time for complex ones.
  8. Use the notepad: Jot down task numbers you skipped or want to double-check.

Common Exam Mistakes

  • Forgetting to switch cluster contexts between tasks (your answer ends up in the wrong cluster)
  • Creating resources in the wrong namespace
  • Not reading the full question (missing requirements hidden in the middle of the text)
  • Spending too long on low-weight questions while skipping high-weight ones

Conclusion

The CKA certification is one of the most valuable credentials in cloud computing today. Unlike multiple-choice exams, it proves you can actually operate Kubernetes clusters under pressure. The exam is challenging, but it is entirely passable with structured preparation and hands-on practice.

Focus your study time on the areas with the highest weight (troubleshooting at 30% and cluster architecture at 25%), practice with realistic exam simulations, and build speed with imperative kubectl commands. The four-week study plan in this guide will get you from foundational knowledge to exam-ready.

The single most important factor in passing the CKA is practice. Reading documentation and watching videos will only take you so far. You need to build muscle memory by working in real Kubernetes clusters, breaking things, and fixing them. For more on why hands-on labs are the most effective way to learn cloud skills, see our deep dive into the science of learning by doing.

Want to practice this hands-on?

CloudaQube generates complete labs from a simple description. Try it free.

Get Started Free