CKAD Exam Study Guide: Certified Kubernetes Application Developer

Hi! In this blog post, I will share my Certified Kubernetes Application Developer (CKAD) preparation journey that led to successfully passing the exam.

In this preparation guide, I will cover CKAD Exam Details, the syllabus, practice exams you can take before the actual exam, recommended online courses, and provide some helpful tips.

Finally, if you’re looking to save on your CKAD exam cost, I have a fantastic CKAD Exam coupon that offers a 20% discount on the exam voucher.

Let’s embark on this CKAD certification journey together!

What is the Certified Kubernetes Application Developer (CKAD) exam?

The Certified Kubernetes Application Developer (CKAD) certification is one of the most sought-after credentials in today’s industry. It is geared towards engineers who design, build, deploy, and manage applications on Kubernetes.

The CKAD program validates the skills, knowledge, and competence required to perform the role of a Kubernetes application developer.


Certified Kubernetes Application Developer (CKAD) Exam Preparation Guide

In this section of the CKAD Exam Study Guide, we will explore a comprehensive collection of resources and official Kubernetes documentation pages that will assist you in your exam preparation.

CKAD Exam Prerequisites

The CKAD (Certified Kubernetes Application Developer) exam does not have any specific prerequisites for candidates to fulfill before taking the exam. However, having some experience in software development will be beneficial.

CKAD Exam Details

  • Exam Duration: 2 hours
  • Pass Percentage: 66%
  • CKAD Exam Kubernetes Version: Kubernetes 1.28
  • CKAD Validity: 3 Years
  • Exam Cost: $395 (GET 20% OFF using coupon TECK20)

The CKAD exam is an open-book exam, i.e. you can use the following websites while you are taking the exam (Resources Allowed):

  • Kubernetes Documentation
  • Kubernetes GitHub
  • Kubernetes Blog

CKAD Exam User Interface

The online, proctored exam is delivered through PSI’s Proctoring Platform “Bridge” using PSI’s Secure Browser. It’s important to familiarize yourself with the system and testing environment requirements.

Read more about the system and testing environment requirements.

The remote desktop is configured with all the tools and software needed to complete the tasks. This includes:

  • Terminal Emulator
  • Firefox browser to access “Resources Allowed”
  • Virtual Keyboard

CKAD Exam Syllabus

In this section of the CKAD Exam Study Guide, we will introduce the CKAD syllabus, which outlines the different domains and competencies you’ll need to master in order to pass the exam. Let’s walk through the CKAD_Curriculum_v1.28.

Application Design and Build (20%)
  1. Define, build, and modify container images
  2. Understand Jobs and CronJobs
  3. Understand multi-container Pod design patterns (e.g. sidecar, init, and others)
  4. Utilize persistent and ephemeral volumes

Application Environment, Configuration, and Security (25%)
  1. Discover and use resources that extend Kubernetes (CRD)
  2. Understand authentication, authorization, and admission control
  3. Understanding and defining resource requirements, limits, and quotas
  4. Understand ConfigMaps
  5. Create & consume Secrets
  6. Understand ServiceAccounts
  7. Understand SecurityContexts

Services & Networking (20%)
  1. Demonstrate basic understanding of NetworkPolicies
  2. Provide and troubleshoot access to applications via services
  3. Use Ingress rules to expose applications

Application Deployment (20%)
  1. Use Kubernetes primitives to implement common deployment strategies (e.g. blue/green or canary)
  2. Understand Deployments and how to perform rolling updates
  3. Use the Helm package manager to deploy existing packages

Application Observability and Maintenance (15%)
  1. Understand API deprecations
  2. Implement probes and health checks
  3. Use provided tools to monitor Kubernetes applications
  4. Utilize container logs
  5. Debugging in Kubernetes
CKAD Exam Syllabus

We will look at each section in detail below.

CKAD Preparation Course

Investing in a CKAD course will help you understand all the concepts for the CKAD exam in an easier manner. If you are a beginner and have no experience working on Kubernetes environments, I strongly suggest you invest in a good guided CKAD course of your choice.

I recommend going for the CKAD preparation course by Mumshad. His course has a lot of quizzes and the quality is top-notch.

CKAD Practice Exams

To practice for the CKAD exam, you can try the mock exams. They will help you build confidence and let you practice many exam scenarios in advance.

Personally, I think that this course is the only thing necessary to pass the exam.

CKAD Exam Practice Labs

Practice Labs are online, self-paced, hands-on labs that give you the opportunity to practice and prepare for the CKAD exam.

These labs provide a real-world environment where you can apply the concepts and techniques learned in the CKAD course and improve your skills in using Kubernetes to develop, deploy, and manage applications. The Practice Labs are a great way to reinforce your learning and increase your confidence in taking the CKAD exam.

  • Killercoda: an interactive learning platform with browser-based scenario environments for the CKAD (Certified Kubernetes Application Developer) exam.
  • Play with Kubernetes (PWK): an online learning platform to practice and test your Kubernetes skills. PWK provides hands-on experience through real-world examples.

CKAD Exam Study Resources

Here, I will be discussing official Kubernetes resources that can be used to prepare for each topic of the CKAD exam. You can use these documentation pages during the exam for reference.

Application Design and Build [ 20% ]

This section of the Kubernetes CKAD curriculum will account for 20% of the questions in the actual exam.

Application Design and Build (20%)
  1. Define, build, and modify container images
  2. Understand Jobs and CronJobs
  3. Understand multi-container Pod design patterns (e.g. sidecar, init, and others)
  4. Utilize persistent and ephemeral volumes

Define, Build and Modify Container Images

Docker is the most popular container runtime and container solution, but there are other runtimes such as runc, CRI-O, containerd, etc.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A container runtime, which relies on the host kernel, is required to run a container.

A Docker image consists of layers, and each image layer is its own image. An image layer is a change on an image – every command (FROM, RUN, COPY, etc.) in your Dockerfile (aka Containerfile in the OCI spec) causes a change, thus creating a new layer.

It is recommended to reduce the number of image layers where possible, e.g. replace multiple RUN commands with “command chaining”: apt update && apt upgrade -y.

# run container with port, see `docker run --help`
docker run -d -p 8080:80 httpd # visit localhost:8080

# run container with mounted volume
docker run -d -p 8080:80 -v ~/html:/usr/local/apache2/htdocs httpd

# run container with environment variable
docker run -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret mongo

# inspect container, see `docker container inspect --help` or `docker inspect --help`
docker inspect $CONTAINER_NAME_OR_ID | less # press Q key to quit from less
docker container inspect $CONTAINER_NAME_OR_ID

# format inspect output to view container network information
docker inspect --format="{{.NetworkSettings.IPAddress}}" $CONTAINER_NAME_OR_ID

# format inspect output to view container state information
docker inspect --format="{{.State.Pid}}" $CONTAINER_NAME_OR_ID

# view container logs, see `docker logs --help`
docker logs $CONTAINER_NAME_OR_ID

# remove all unused data (including dangling images)
docker system prune

# remove all unused data (including unused images, dangling or not, and volumes)
docker system prune --all --volumes

# manage images, see `docker image --help`
docker image ls # or `docker images`
docker image inspect $IMAGE_ID

docker image rm $IMAGE_ID
# see `docker --help` for complete resources

Pods are the basic objects where your images/code run.

Reference: Pod Concepts
Task: Configure Pods and Containers
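As a quick refresher, a minimal Pod manifest looks like this (the Pod name, label, and image are illustrative):

```yaml
# minimal Pod sketch - names and image are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp
spec:
  containers:
  - name: mycontainer
    image: nginx
    ports:
    - containerPort: 80
```

You can generate a similar manifest imperatively with `kubectl run mypod --image=nginx --port=80 --dry-run=client -o yaml`.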

Understand Jobs and CronJobs

Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate – a Completed status. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.

CronJob creates Jobs on a repeating schedule. It runs a job periodically on a given schedule, written in Cron format. This isn’t very different from the Linux/Unix crontab (cron table).

# view resource types you can create in kubernetes
kubectl create -h

# create a job `myjob` that runs `date` command, see `kubectl create job -h`
kubectl create job myjob --image=busybox -- date

# generate a job manifest
kubectl create job myjob --image=busybox --dry-run=client -o yaml -- date

# list jobs
kubectl get jobs
# list jobs and pods
kubectl get jobs,pods
# view the manifest of an existing job `myjob`
kubectl get jobs myjob -o yaml
# view details of a job `myjob`
kubectl describe job myjob
# view the job spec
kubectl explain job.spec | less

# create cronjob
kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date

Reference: create Jobs and CronJobs
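For reference, here is a sketch of a manifest roughly equivalent to the imperative cronjob command above, trimmed to the essential fields:

```yaml
# CronJob sketch - runs `date` in a busybox container every minute
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-job
spec:
  schedule: "*/1 * * * *" # cron format: minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure # Job Pods may not use the default `Always`
```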

Understand Multi-Container Pod design patterns

Multi-Container Pod design patterns refer to a method of organizing containers within a single Pod in a way that achieves a specific goal or solves a particular problem.

Prefer single-container Pods whenever possible! However, some special scenarios require a multi-container Pod pattern:

  • To initialise the primary container (Init Container)
  • Running a batch job within a Pod to process a large amount of data (Batch processing pattern)
  • To enhance the primary container, e.g. for logging, monitoring, etc. (Sidecar Container)
  • To prevent direct access to the primary container, e.g. via a proxy (Ambassador Container)
  • To match the traffic/data pattern of other applications in the cluster (Adapter Container)

Each design pattern has its own benefits and use cases, and it’s important to choose the right pattern for the task at hand.
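As an illustrative sketch of the sidecar pattern (all names are made up), a logging sidecar can share a volume with the primary container:

```yaml
# sidecar sketch: the log-shipper container tails logs written by the app
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper # sidecar
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {} # ephemeral volume shared by both containers
```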

# view logs of pod `mypod`
kubectl logs mypod

# view logs of specific container `mypod-container-1` in pod `mypod`
kubectl logs mypod -c mypod-container-1

Official Reference: Multicontainer pod patterns

Utilize Persistent and Ephemeral Volumes

The CKAD Exam requires a solid understanding of both persistent and ephemeral volumes in Kubernetes. Persistent volumes provide durable storage for important data, such as databases, while ephemeral volumes offer temporary storage for non-critical data like logs.

PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes, with a lifecycle independent of any individual Pod that uses the PV.

PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Claims can request specific size and access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod).

Here are links to official Kubernetes documentation and examples for persistent and ephemeral volumes:

Persistent Volumes (PV) & Persistent Volume Claims (PVC)
Examples of Persistent Volumes and Claims
Ephemeral Volumes
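A sketch of the typical PVC workflow – claim storage, then mount the claim in a Pod (names and sizes are illustrative):

```yaml
# PVC sketch: request 1Gi of storage...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# ...and mount the claim into a Pod
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
```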

Application Environment, Configuration, and Security [ 25 % ]

This section of the Kubernetes CKAD curriculum will account for 25 % of the questions in the actual exam.

Application Environment, Configuration, and Security (25%)
  1. Discover and use resources that extend Kubernetes (CRD)
  2. Understand authentication, authorization, and admission control
  3. Understanding and defining resource requirements, limits, and quotas
  4. Understand ConfigMaps
  5. Create & consume Secrets
  6. Understand ServiceAccounts
  7. Understand SecurityContexts

Discover and use resources that extend Kubernetes (CRD)

Resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind; for example, the Pods resource contains a collection of Pod objects.

Custom Resource is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation. Many core Kubernetes functions are now built using custom resources, making Kubernetes more modular.

Although we will only focus on one of them, there are two ways to add custom resources to your cluster:

  • CRDs allow user-defined resources to be added to the cluster. They are simple and can be created without any programming. In practice, CRDs are usually managed by Operators.
  • API Aggregation requires programming, but allows more control over API behaviors, like how data is stored and conversion between API versions.

Here is the official documentation for Custom Resource Definitions in Kubernetes: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

And here is an example of creating a custom resource definition in YAML format:

# CRD example "resourcedefinition.yaml"
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com # must match `<plural>.<group>` spec fields below
spec:
  group: stable.example.com # REST API: /apis/<group>/<version>
  versions: # list of supported versions
    - name: v1
      served: true # enabled/disabled this version, controls deprecations
      storage: true # one and only one version must be storage version.
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  scope: Namespaced # or Cluster
  names:
    plural: crontabs # REST API: /apis/<group>/<version>/<plural>
    singular: crontab # used for display and as alias on CLI
    kind: CronTab # CamelCased singular type for resource manifests.
    shortNames:
    - ct # allow `crontab|ct` to match this resource on CLI
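Once the CRD above is applied, objects of the new kind can be created like any other resource; for example:

```yaml
# custom object of the CronTab kind defined by the CRD above
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
```

List it with `kubectl get crontabs` (or `kubectl get ct`, thanks to the shortName).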

Understand authentication, authorization and admission control

Authentication, authorization and admission control in Kubernetes play a critical role in ensuring the security of a cluster and its resources.

  • Authentication refers to the process of verifying the identity of a user, application or system trying to access the cluster. In Kubernetes, authentication can be achieved through various methods such as client certificates, bearer tokens, and authentication proxies.
  • Authorization refers to the process of determining whether a user, application or system is allowed to perform a specific action in the cluster. Kubernetes provides several authorization modules, including Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Webhook.
  • Admission control refers to the process of controlling access to the cluster resources by validating and mutating incoming API requests before they are persisted in the cluster. Kubernetes provides several admission controllers, including NamespaceLifecycle, LimitRanger, and ResourceQuota, to control the access and enforce policies.
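As a small RBAC sketch (the names are illustrative), a Role granting read access to Pods in a namespace looks like this:

```yaml
# Role sketch: read-only access to Pods in the `default` namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```

You can check effective permissions with `kubectl auth can-i list pods --as=<user>`.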

Understanding and defining resource requirements, limits and quotas

For the CKAD exam, it is important to understand the concept of resource requirements, limits, and quotas. Resource requirements and limits define the minimum and maximum amount of compute resources (e.g. CPU, memory) that a pod can consume. Quotas are used to limit the total amount of resources that can be consumed by a namespace.

You can define resource requirements and limits in the pod specification file, and apply a resource quota to a namespace using the kubectl apply command. The following example shows how to define resource requirements and limits for a pod:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: nginx
      resources:
        requests:
          memory: "128Mi"
          cpu: "500m"
        limits:
          memory: "512Mi"
          cpu: "1"

And the following example shows how to apply a resource quota to a namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota
spec:
  hard:
    cpu: "2"
    memory: 4Gi

# view the container resources object within the pod spec
kubectl explain pod.spec.containers.resources

# pod resource update is forbidden, but you can generate YAML, see `kubectl set -h`
kubectl set resources pod --help

# generate YAML for pod `mypod` that requests 0.2 CPU and 128Mi memory
kubectl set resources pod mypod --requests=cpu=200m,memory=128Mi --dry-run=client -oyaml|less

# generate YAML for requests 0.2 CPU, 128Mi memory, and limits 0.5 CPU, 256Mi memory
kubectl set resources pod mypod --requests=cpu=200m,memory=128Mi --limits=cpu=500m,memory=256Mi --dry-run=client -oyaml|less

For more information on resource requirements, limits and quotas, see the official Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Understand ConfigMaps

ConfigMaps are used to decouple configuration data from application code. The configuration data may be variables, files or command-line args.

  • ConfigMaps should be created before creating an application that relies on it
  • A ConfigMap created from a directory includes all the files in that directory and the default behaviour is to use the filenames as keys

# create configmap `mycm` from file or directory, see `kubectl create cm -h`
kubectl create configmap mycm --from-file=path/to/file/or/directory

# create configmap from file with specified key
kubectl create configmap mycm --from-file=key=path/to/file

# create configmap from a variables file (file contains KEY=VALUE on each line)
kubectl create configmap mycm --from-env-file=path/to/file.env

# create configmap from literal values
kubectl create configmap mycm --from-literal=KEY1=value1 --from-literal=KEY2=value2

# display details of configmap `mycm`
kubectl describe cm mycm
kubectl get cm mycm -o yaml

# use configmap `mycm` in deployment `web`, see `kubectl set env -h`
kubectl set env deploy web --from=configmap/mycm

# use specific keys from configmap with multiple env-vars, see `kubectl set env deploy -h`
kubectl set env deploy web --keys=KEY1,KEY2 --from=configmap/mycm

# remove env-var KEY1 from deployment web
kubectl set env deploy web KEY1-

To learn more about ConfigMaps and how to use them in the CKAD exam, you can refer to the official Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/configmap/

Here’s an example of a ConfigMap defined in YAML format:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  key1: value1
  key2: value2
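To consume the ConfigMap above, a Pod can load all of its keys as environment variables (the Pod and container names are illustrative):

```yaml
# Pod sketch: imports key1/key2 from `example-config` as env-vars
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: example-config
```

ConfigMap keys can also be mounted as files via a volume instead of environment variables.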

Kubernetes Secrets

Secrets are similar to ConfigMaps but specifically intended to hold sensitive data such as passwords, auth tokens, etc. By default, Kubernetes Secrets are not encrypted but base64 encoded.
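To see what “base64 encoded, not encrypted” means in practice, here is a small shell sketch (no cluster needed) of the encoding applied to Secret values:

```shell
# base64 is an encoding, not encryption: anyone who can read the
# Secret object can trivially decode the value
encoded=$(printf '%s' 'shush' | base64)
echo "$encoded"   # c2h1c2g=

decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # shush
```

This is why access to Secrets should be restricted with RBAC and, ideally, encryption at rest enabled for etcd.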

# secret `myscrt` as file for tls keys, see `kubectl create secret tls -h`
kubectl create secret tls myscrt --cert=path/to/file.crt --key=path/to/file.key

# secret as file for ssh private key, see `kubectl create secret generic -h`
kubectl create secret generic myscrt --from-file=ssh-private-key=path/to/id_rsa

# secret as env-var for passwords, ADMIN_PWD=shush
kubectl create secret generic myscrt --from-literal=ADMIN_PWD=shush

# secrets as image registry creds, `docker-registry` works for other registry types
kubectl create secret docker-registry myscrt --docker-username=dev --docker-password=shush --docker-email=dev@ckad.io --docker-server=localhost:3333

# view details of the secret, shows base64 encoded value
kubectl describe secret myscrt
kubectl get secret myscrt -o yaml

# view the base64 encoded contents of secret `myscrt`
kubectl get secret myscrt -o jsonpath='{.data}'

# for secret with nested data, '{"game":{".config":"yI6eyJkb2NrZXIua"}}'
kubectl get secret myscrt -o jsonpath='{.data.game.\.config}'

Official Reference: Secrets

Understand ServiceAccounts

A Kubernetes service account provides an identity for the processes that run in a Pod.
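A Pod opts into an identity via `spec.serviceAccountName` (the account name here is illustrative):

```yaml
# Pod sketch: runs as the `my-app-sa` service account instead of `default`
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  serviceAccountName: my-app-sa
  containers:
  - name: app
    image: nginx
```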

Imperative commands for service account:

# create a service account imperatively, see `kubectl create serviceaccount -h`
kubectl create serviceaccount $SERVICE_ACCOUNT_NAME

# assign service account to a deployment
kubectl set serviceaccount deploy $DEPLOYMENT_NAME $SERVICE_ACCOUNT_NAME

# create a role that allows users to perform get, watch and list on pods, see `kubectl create role -h`
kubectl create role $ROLE_NAME --verb=get --verb=list --verb=watch --resource=pods

# grant permissions in a Role to a user within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --role=$ROLE_NAME --user=$USER --namespace=$NAMESPACE

# grant permissions in a ClusterRole to a user within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --user=$USER --namespace=$NAMESPACE

# grant permissions in a ClusterRole to a user across the entire cluster
kubectl create clusterrolebinding $CLUSTERROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --user=$USER

# grant permissions in a ClusterRole to an application-specific service account within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --serviceaccount=$NAMESPACE:$SERVICE_ACCOUNT_NAME --namespace=$NAMESPACE

# grant permissions in a ClusterRole to the "default" service account within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --serviceaccount=$NAMESPACE:default --namespace=$NAMESPACE

Understand SecurityContexts

A security context defines privilege and access control settings for a Pod or container. It can be defined at the Pod level, at the container level, or both.

To create a SecurityContext, you need to specify the security settings in the Pod definition file (also known as a manifest file). Here is an example of how to define a SecurityContext in the spec section of a Pod definition file:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_BIND_SERVICE

In this example, we are setting the user ID to 1000 and the group ID to 3000, which the container will run as. We also disallow privilege escalation and add the NET_BIND_SERVICE capability.

Official Reference:  security context

Services & Networking [ 20 % ]

This section of the Kubernetes CKAD curriculum will account for 20 % of the questions in the actual exam.

Services & Networking (20%)
  1. Demonstrate basic understanding of NetworkPolicies
  2. Provide and troubleshoot access to applications via services
  3. Use Ingress rules to expose applications

Demonstrate Basic understanding of Network Policies

A network policy in Kubernetes is a set of rules that control the flow of traffic within a cluster. The policies are implemented using the NetworkPolicy resource, which defines which pods can communicate with each other.

Here’s an example of a basic network policy that allows ingress traffic only from Pods in the same namespace – an empty podSelector in the from clause matches all Pods in the policy’s own namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}

And here’s an example of a network policy that allows traffic to Pods labelled app: myapp only from frontend Pods of the same application:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-pods
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
          tier: frontend

Note that there is no imperative command to create a NetworkPolicy – you must write a manifest. Ingress objects, by contrast, can be created imperatively:

kubectl create ingress <ingress-name> --rule="<host>/<path>=<service-name>:<port>" -n <namespace-name>

Reference : https://kubernetes.io/docs/concepts/services-networking/network-policies/

Provide and troubleshoot access to applications via services

Kubernetes offers a powerful way to manage and access your applications via services. Services provide a stable endpoint for accessing your applications, allowing you to access them consistently, even when the underlying pods and nodes change.

Service provides access to applications running on a set of Pods. A Deployment creates and destroys Pods dynamically, so you cannot rely on Pod IP. This is where Services come in, to provide access and load balancing to the Pods.

Like a Deployment, a Service targets Pods by selector, but it exists independently of any Deployment – it is not deleted when a Deployment is deleted and can provide access to Pods in different Deployments.

Service Types

  • ClusterIP: this is a service inside a cluster responsible for routing traffic between apps running in the cluster – no external access
  • NodePort: as the name implies, a specific port is opened on each Worker Node‘s IP to allow external access to the cluster at $NodeIP:$NodePort – useful for testing purposes
  • LoadBalancer: Exposes the Service using a cloud provider (not for CKAD)
  • ExternalName: Uses DNS records (not for CKAD)
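As a sketch, a NodePort Service exposing Pods on every node (the names and ports are illustrative):

```yaml
# NodePort Service sketch: external traffic at <NodeIP>:30080
# reaches Pods labelled app=myapp on their port 8080
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80         # ClusterIP port inside the cluster
    targetPort: 8080 # container port on the Pods
    nodePort: 30080  # must fall in the default 30000-32767 range
```

`kubectl expose deployment myapp --type=NodePort --port=80 --target-port=8080` generates a similar Service (with a randomly assigned nodePort).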

Use Ingress rules to expose applications

Kubernetes Ingress is a powerful resource that allows you to expose your applications to the outside world. It works by allowing incoming HTTP/HTTPS traffic to be routed to the correct service within your cluster. Here are the steps and resources you need to create and manage Ingress rules in your cluster:

  • Create an Ingress resource: Ingress resources are defined in YAML files that you can apply to your cluster using the kubectl apply command. Here is an example Ingress resource definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              name: http

  • Create a Service resource: The Ingress resource references a Service resource that represents the backend application that you want to expose. Here is an example Service resource definition:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080

  • Deploy your application: To deploy your application, you will need to create a Deployment resource that creates replicas of your application containers. Here is an example Deployment resource definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080

  • Verify your Ingress rules: Once you have applied your Ingress, Service, and Deployment resources, you can verify that your Ingress rules are working as expected. Use the kubectl get ingress command to check the status of your Ingress resource and make sure that it has been assigned an IP address. You can also use the curl command to test the Ingress from the outside world.

Reference: Ingress Overview and Ingress Resource Definition Reference

Application Deployment [ 20 % ]

This section of the Kubernetes CKAD curriculum will account for 20 % of the questions in the actual exam.

The Application Deployment section of the CKAD syllabus covers 20% of the exam and requires that you understand key concepts and practices related to deploying applications in Kubernetes.

Application Deployment (20%)
  1. Use Kubernetes primitives to implement common deployment strategies (e.g. blue/green or canary)
  2. Understand Deployments and how to perform rolling updates
  3. Use the Helm package manager to deploy existing packages

Use Kubernetes primitives to implement common deployment strategies (e.g. blue/ green or canary)

Blue/green deployment is an update strategy used to accomplish zero-downtime deployments. The current application version is marked blue and the new version is marked green. In Kubernetes, blue/green deployments can be easily implemented with Services.

Canary deployment is an update strategy where updates are deployed to a subset of users/servers (the canary application) for testing prior to full deployment. This is a scenario where Labels are required to distinguish deployments by release or configuration.
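A sketch of the label mechanics behind a canary (all names and images are illustrative): the Service selects only the shared app label, so traffic is split roughly by replica count between the stable and canary Deployments.

```yaml
# Service: selects only `app: myapp`, so it balances across both tracks
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
# stable Deployment: 3 replicas labelled track=stable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp, track: stable }
  template:
    metadata:
      labels: { app: myapp, track: stable }
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
---
# canary Deployment: 1 replica labelled track=canary (~25% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp, track: canary }
  template:
    metadata:
      labels: { app: myapp, track: canary }
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
```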

Reference Kubernetes documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ https://kubernetes.io/docs/concepts/services-networking/service/

Understand Deployments and how to perform rolling updates

For the Certified Kubernetes Application Developer (CKAD) exam, it is important to have a strong understanding of Deployments and how to perform rolling updates in Kubernetes.

# view the update strategy field under deployment spec
kubectl explain deployment.spec.strategy

# view update strategy field recursively
kubectl explain deployment.spec.strategy --recursive

# edit the image of deployment `myapp` by setting directly, see `kubectl set -h`
kubectl set image deployment myapp nginx=nginx:1.24

# edit the environment variable of deployment `myapp` by setting directly
kubectl set env deployment myapp dept=MAN
# show recent update history - entries added when fields under `deploy.spec.template` change
kubectl rollout history deployment myapp

# show update events
kubectl describe deployment myapp
# view rolling update options
kubectl get deploy myapp -o yaml

# view all deployments history, see `kubectl rollout -h`
kubectl rollout history deployment
# view `myapp` deployment history
kubectl rollout history deployment myapp

# view specific change revision/log for `myapp` deployment (note this shows fields that affect rollout)
kubectl rollout history deployment myapp --revision=n

# revert `myapp` deployment to previous version/revision, see `kubectl rollout undo -h`
kubectl rollout undo deployment myapp --to-revision=n

For more information, you can check the official Kubernetes documentation on Deployments.

For more information on rolling updates, see the official Kubernetes documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment

Use the Helm package manager to deploy existing packages

Helm is a package manager for Kubernetes, used for managing and deploying applications on a Kubernetes cluster. To use Helm for deploying existing packages, you need to perform the following steps:

  1. Install Helm: You can install Helm by following the instructions on the official Helm website (https://helm.sh/docs/intro/install/).
  2. Add a chart repository: Helm 3 has no helm init step (that was Helm 2’s Tiller setup); instead, add a repository with helm repo add <repo-name> <repo-url> and refresh it with helm repo update.
  3. Search for a Package: You can search the added repositories using the following command: helm search repo <keyword>.
  4. Install a Package: Once you have found the chart you want to install, you can install it using the following command: helm install <release-name> <repo-name>/<chart-name>.
  5. Upgrade a Package: You can upgrade an existing release by using the following command: helm upgrade <release-name> <repo-name>/<chart-name>.

Note: In these commands, replace <repo-name> and <chart-name> with the repository and chart you want to install or upgrade, and replace <release-name> with the name of the release.

You can find more information on how to use Helm for deploying packages in the official Helm documentation (https://helm.sh/docs/).
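The steps above can be sketched end to end. The bitnami repository and its nginx chart are used here purely as an example; substitute whatever chart the task asks for:

```shell
# Add a chart repository (Helm 3; no `helm init` needed)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search the configured repositories for a chart
helm search repo nginx

# Install the chart as a release named `my-web`
helm install my-web bitnami/nginx

# Upgrade the release later, optionally overriding values
helm upgrade my-web bitnami/nginx --set replicaCount=2

# List releases, and clean up when done
helm list
helm uninstall my-web
```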

Application Observability and Maintenance [15%]

This section of the Kubernetes CKAD curriculum accounts for 15% of the questions in the actual exam.

Topic: Application Observability and Maintenance
Concepts:
  1. Understand API deprecations
  2. Implement probes and health checks
  3. Use provided tools to monitor Kubernetes applications
  4. Utilize container logs
  5. Debugging in Kubernetes
Weightage: 15%

Understand API Deprecations

API deprecation in Kubernetes refers to the process of marking an API version as outdated and encouraging users to adopt a newer version. This process helps to ensure that the Kubernetes API evolves in a backwards-compatible manner.

Here is an example of an API deprecation in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.0
        ports:
        - containerPort: 80

In the above example, the apiVersion field specifies the version of the Kubernetes API that the Deployment uses. apps/v1 is the current stable version and supersedes the older apps/v1beta1 and apps/v1beta2 versions, which were deprecated and then removed in Kubernetes 1.16. If a newer version is introduced in the future, apps/v1 would in turn become deprecated and eventually be removed.
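To discover which API versions your cluster actually serves, and which group/version a resource belongs to, these read-only commands are useful (the exact output depends on your cluster's version):

```shell
# List every API version served by the cluster, e.g. apps/v1, batch/v1
kubectl api-versions

# List resources with their API group, version, and short names
kubectl api-resources

# Show the preferred apiVersion for a given resource kind
kubectl explain deployment
```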

Official Reference: API deprecation Guide

Implement probes and health checks

Implementing Probes and Health Checks is an important part of ensuring the reliability and availability of your applications in a Kubernetes environment. These features allow you to monitor the status of your applications and take appropriate actions in case of any issues.

In Kubernetes, you can implement health checks using two types of probes:

  1. Liveness probes: These probes determine if the application is running and responsive. If the liveness probe fails, the container is restarted.
  2. Readiness probes: These probes determine if the application is ready to accept traffic. If the readiness probe fails, the container is not included in the load balancer pool.

You can configure probes in your application deployment manifests using the following fields:

  1. livenessProbe
  2. readinessProbe

Here is an example of how to configure a liveness probe in a deployment manifest using HTTP GET requests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0.0
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: 80

This YAML configuration file is used to create a Deployment in Kubernetes. The file specifies the deployment of an application named “my-app” with 3 replicas.

The Deployment uses the label selector to identify the pods that belong to the deployment, with the label “app: my-app”.

The pods run a container named “my-app” that is built from the image “my-app:1.0.0”. The container listens on port 80 and has a liveness probe defined, which performs an HTTP GET request to the “/health” endpoint on port 80 to determine if the container is still running and healthy.
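The manifest above configures only a liveness probe. A readiness probe uses the same fields; the container spec below is an illustrative sketch that combines both probe types (the /ready endpoint and the timing values are assumptions for the example):

```yaml
      containers:
      - name: my-app
        image: my-app:1.0.0
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5   # wait before the first probe runs
          periodSeconds: 10        # probe every 10 seconds
        readinessProbe:
          httpGet:
            path: /ready           # hypothetical readiness endpoint
            port: 80
          periodSeconds: 5
```

Until the readiness probe succeeds, the pod is simply withheld from Service endpoints; only a failing liveness probe causes a restart.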

For more information on probes and health checks in Kubernetes, see the official Kubernetes documentation at https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

Use provided tools to monitor Kubernetes applications

Monitoring the performance and health of applications running on a Kubernetes cluster is critical for ensuring a smooth and stable user experience. There are several tools provided by Kubernetes that you can use to monitor your applications. Here’s a look at some of the most commonly used tools, with links to the official Kubernetes documentation for more information:

  1. kubectl top: This command allows you to view the resource usage of your applications, such as CPU and memory utilization. For more information, see the official documentation here: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top
  2. kubectl logs: This command allows you to view the logs generated by your applications, which can be useful for troubleshooting and debugging. For more information, see the official documentation here: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
  3. Kubernetes Dashboard: The Kubernetes Dashboard is a web-based UI that provides an overview of your applications and allows you to manage and monitor them. For more information, see the official documentation here: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  4. Prometheus: Prometheus is an open-source monitoring solution that is widely used for monitoring Kubernetes applications. It allows you to monitor key metrics such as resource usage, request latencies, and error rates. For more information, see the official documentation here: https://prometheus.io/docs/prometheus/latest/getting_started/
  5. Grafana: Grafana is a popular open-source data visualization and analytics platform that can be used with Prometheus to visualize your monitoring data. For more information, see the official documentation here: https://grafana.com/docs/grafana/latest/getting-started/

These tools will help you to monitor the performance and health of your applications, and detect any issues early on, allowing you to take proactive measures to prevent downtime and ensure a smooth user experience.
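As a quick sketch, kubectl top (which requires the metrics-server add-on to be installed in the cluster) supports sorting and filtering the usage data:

```shell
# Node-level CPU and memory usage
kubectl top node

# Pod usage across all namespaces, sorted by memory consumption
kubectl top pod -A --sort-by=memory

# Break usage down per container within each pod
kubectl top pod --containers
```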

Utilize container logs

Container logs in Kubernetes refer to the standard output and error streams produced by a container running in a pod. These logs can provide valuable information about the state and behavior of the container and its applications, which can be used for debugging, troubleshooting, and performance analysis.

The logs are stored as text files on the nodes where the containers are running and can be accessed through the Kubernetes API or using command-line tools such as kubectl logs.

To utilize container logs in Kubernetes, you can follow these steps:

  1. Retrieve logs for a specific container using the kubectl logs command:
kubectl logs [pod-name] -c [container-name]
  2. Stream logs using the -f (follow) flag:
kubectl logs -f [pod-name] -c [container-name]
  3. Retrieve logs for all containers in a pod:
kubectl logs [pod-name] --all-containers

Reference: Retrieving Logs and Configure Centralized Logging

Debugging in Kubernetes

Debugging in Kubernetes can be a complex task due to the distributed and dynamic nature of the system. However, there are several tools and strategies that can help make the process easier. Here are some of the most common methods for debugging in Kubernetes:

  1. Logs: Kubernetes provides logs for each component of the system, including nodes, controllers, and individual pods. To access logs, you can use the kubectl logs command. For example, to retrieve the logs for a pod named “my-pod”, run kubectl logs my-pod.
  2. Describing objects: The kubectl describe command provides detailed information about a Kubernetes object, including its current state, events, and configuration. For example, to describe a pod named “my-pod”, run kubectl describe pod my-pod.
  3. Debug Containers: Debug containers are special containers that run in the same pod as the application and provide a shell environment for debugging purposes. Debug containers can be used to inspect the file system, environment variables, and logs of the application.
  4. Executing commands in a pod: The kubectl exec command allows you to run a command in a running pod. For example, to run a ls command in a pod named “my-pod”, run kubectl exec my-pod -- ls.
  5. Resource utilization monitoring: Kubernetes provides resource utilization metrics for nodes, pods, and containers, including CPU, memory, and network usage. These metrics can be used to identify performance bottlenecks and resource constraints.
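The debug-container approach from item 3 can be sketched with kubectl debug, which requires a reasonably recent cluster (ephemeral containers reached GA in Kubernetes 1.25). The pod name, container name, and busybox image below are illustrative:

```shell
# Attach an ephemeral busybox container to a running pod and open a shell,
# sharing the process namespace of the `my-app` container
kubectl debug -it my-pod --image=busybox --target=my-app -- sh

# Or make a debuggable copy of the pod, leaving the original untouched
kubectl debug my-pod -it --image=busybox --copy-to=my-pod-debug -- sh
```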

Commands:

kubectl describe deployment <deployment-name> 

kubectl describe pod <pod-name>

kubectl logs deployment/<deployment-name>

kubectl logs <pod-name>

kubectl logs deployment/<deployment-name> --tail=10

kubectl logs deployment/<deployment-name> --tail=10 -f

kubectl top node

kubectl top pod

For more information on these and other debugging techniques, refer to the official Kubernetes documentation: https://kubernetes.io/docs/tasks/debug-application-cluster/

Also, the Kubernetes Troubleshooting Guide: https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/ provides a comprehensive list of common problems and how to resolve them.

Official Reference: Debug Running Pods

Kubernetes Object’s Shortcut

Use the following Kubernetes object shortcuts to save time:

Object                  Shortcut
pods                    po
deployments             deploy
services                svc
serviceaccounts         sa
nodes                   no
configmaps              cm
namespaces              ns
ingresses               ing
persistentvolumes       pv
persistentvolumeclaims  pvc
replicasets             rs

Top 5 Tips for CKAD Exam

Practice, Practice, Practice…

This exam is hands-on in nature, emphasizing the importance of proficiency with the Kubernetes command line interface (kubectl).

It is essential to cultivate a high level of comfort and familiarity with kubectl, practicing the art of typing commands swiftly and accurately.

As mentioned earlier, please ensure that you review the practice exams provided in Mumshad Mannambeth’s Udemy course. It is highly recommended to complete the two killer.sh hands-on sessions and aim for outstanding scores in order to thoroughly prepare yourself before attempting the actual exam.

Time Management

Since you will be executing the kubectl command many times, setting up aliases can save you valuable seconds with each entry. For instance, assigning the alias ‘k’ to ‘kubectl’ can grant you an additional minute or two by the end of the exam.

alias k=kubectl
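Beyond the k alias, many candidates also export reusable flag bundles. The variable names $do and $now below are a common community convention, not an official exam requirement:

```shell
# Short alias for kubectl (takes effect in interactive shells)
alias k=kubectl

# Reusable dry-run flags, e.g.: k run nginx --image=nginx $do > pod.yaml
export do="--dry-run=client -o yaml"

# Fast pod deletion, e.g.: k delete pod nginx $now
export now="--force --grace-period=0"
```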

In the exam, you have the privilege to access and consult the Kubernetes documentation pages for obtaining crucial information. This unique aspect sets the Kubernetes certification exam apart from others, as it assesses your capability to effectively utilize the documentation rather than relying solely on memorization.

To excel in the exam, it is essential to become well-acquainted with the documentation’s structure and practice efficient searching techniques. Please be aware that using bookmarks is not allowed during the exam, so it is advised to refrain from attempting to do so.

During the exam, managing your time efficiently is crucial. With approximately 15 to 20 questions of varying difficulty levels, it’s essential to make strategic decisions regarding time allocation. Don’t get trapped on a single challenging question and exhaust all your time.

Do not begin your exam from Question 1! Each question has a Task Weight, and you should aim to complete the higher-weighted questions first.

Remember that achieving a perfect score is not necessary to pass the exam. A minimum score of 66% or above is sufficient.

Review Completed Tasks

After each question, it is crucial to review your work meticulously to ensure accuracy. Avoid the risk of spending 10-15 minutes on a question and unintentionally overlooking potential errors.

For example, if you have created a pod, it is highly recommended to check its status before moving on to another task. This verification step ensures that the pod has been created and started.

kubectl get pod <podName>

Stress Management

You have 2 hours to complete the exam.
PLEASE DON’T panic, because:

  • First: if this is your first attempt, you still have one free retake left.
  • Second: you only need 66% to crack the exam 🙂

Configuration Management during the Exam

As mentioned previously, the exam environment consists of several clusters, each with its own dedicated set of nodes. It is essential to emphasize the significance of switching contexts correctly between these clusters before attempting any tasks in the exam.

One common mistake individuals make is performing actions on the wrong cluster. To avoid this, ensure that you carefully switch the context to the intended cluster before executing any commands or tasks. Paying close attention to this detail will help maintain accuracy throughout the exam and prevent errors caused by working on the wrong cluster.

At the start of each task, you’ll be provided with the command that ensures you are on the correct cluster, for example:

kubectl config use-context k8s
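It is also worth verifying the active context yourself before and after switching; these two commands are read-only and safe to run at any point:

```shell
# Show which context is currently active
kubectl config current-context

# List all available contexts (the active one is starred)
kubectl config get-contexts
```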

An example of a command to SSH into a master node:

ssh mk8s-master-0 

Use elevated privileges on the master node:

sudo -i

CKAD Exam Sample Question

CKAD EXAM QUESTION 

# Create a Pod named nginx-ckad in the existing "teckbootcamps-namespace" namespace.

# Specify a single container using nginx:stable image.

# Specify a resource request of 400m cpu and 1Gi of memory for the Pod’s container.

CKAD Exam Sample Solution

kubectl config use-context ckad-k8s

controlplane $ kubectl run nginx-ckad --image=nginx:stable --namespace=teckbootcamps-namespace  --dry-run=client -o yaml > solution-ckad.yaml

vi solution-ckad.yaml

Edit solution-ckad.yaml to add the resource requests (cpu & memory):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-ckad
  name: nginx-ckad
  namespace: teckbootcamps-namespace
spec:
  containers:
  - image: nginx:stable
    name: nginx-ckad
    resources:
      requests:
        cpu: 400m
        memory: 1Gi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
controlplane $ k apply -f solution-ckad.yaml 
pod/nginx-ckad created

controlplane $ k get pods -n teckbootcamps-namespace 
NAME         READY   STATUS    RESTARTS   AGE
nginx-ckad   1/1     Running   0          8s

Conclusion

Congratulations on completing our comprehensive CKAD exam study guide. By following the roadmap we’ve provided and mastering the essential concepts, you’re well on your way to becoming a Certified Kubernetes Application Developer. Remember to practice regularly, explore additional resources, and stay up to date with the latest Kubernetes developments. Best of luck in your CKAD exam journey!

Do check out the CKA & CKS certification guides as well.

Leave a Reply
You May Also Like