Kubernetes CKA hands-on challenge #3 Advanced Scheduling

Kim Wuestkamp · codeburst · Jan 16, 2020

CKS Exam Series | CKA Exam Series | CKAD Exam Series

#####################################

THIS CHALLENGE IS NO LONGER UPDATED HERE AND HAS MOVED TO:

https://killercoda.com/killer-shell-cka

######################################

Content

Overview and Tips

  1. Multi Container Issue
  2. Scheduler Playground
  3. Advanced Scheduling
  4. Node Management
  5. Manage Certificates
  6. Pod Priority
  7. RBAC
  8. >>> MORE <<<

Rules!

  1. be fast, avoid creating yaml manually from scratch
  2. use only kubernetes.io/docs for help.
  3. check my solution after you've done yours. You probably have a better one!

Scenario Setup

You will start a two-node cluster on your machine, one master and one worker. For this you need to install VirtualBox and Vagrant, then:

git clone git@github.com:wuestkamp/cka-example-environments.git
cd cka-example-environments/cluster1
./up.sh
vagrant ssh cluster1-master1
vagrant@cluster1-master1:~$ sudo -i
root@cluster1-master1:~# kubectl get node

You should be connected as root@cluster1-master1. You can also connect to the worker node as root, like ssh root@cluster1-worker1.

If you want to destroy the environment again, run ./down.sh. You should destroy the environment after usage so no more resources are used!

Today's Task: Advanced Scheduling

  1. How many pods of the coredns deployment are running, and on which nodes?
  2. Why are the coredns pods running on the nodes they are?
  3. Show the config of coredns, the actual Corefile, not the k8s yaml.
  4. Create a deployment of image httpd:2.4.41-alpine with 3 replicas which can run on all nodes. How many are running on each node?
  5. Now change the pods to only run on master nodes.

Solution

The following commands will be executed as root@cluster1-master1:

alias k=kubectl

1.

k get pod --all-namespaces | grep coredns # two pods
k get pod --all-namespaces -o wide | grep coredns # both on master

Next we check how these pods are created:

k -n kube-system get all | grep coredns # deployment!

So we have two pods because of the number of replicas defined in the deployment.
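To confirm that the replica count really comes from the deployment spec, here is a quick check (not strictly needed for the task):

k -n kube-system get deploy coredns -o jsonpath='{.spec.replicas}' # prints 2 in this cluster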

2.

But why are both running on the master node, and only the master?

k -n kube-system get pod coredns-5644d7b6d9-4hp5t -o yaml | grep tolerations -A 20

Here we see the pods have a toleration to run on master, but this only means that they can run on a master node. There is no nodeSelector or anything else set that forces them to run on a master node.
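That toleration matches the NoSchedule taint which kubeadm places on master nodes. A quick way to see it, assuming the default kubeadm taint:

k describe node cluster1-master1 | grep Taints # should show node-role.kubernetes.io/master:NoSchedule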

The reason here is simply that during cluster creation the coredns deployment was created before the worker node was ready. You can delete the pods:

k -n kube-system delete pod coredns-5644d7b6d9-4hp5t coredns-5644d7b6d9-fswhw

which should result in one pod on the master and one on the worker node.
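To verify the new spread once the ReplicaSet has recreated the pods (the pod name hashes will differ from mine):

k -n kube-system get pod -o wide | grep coredns # now one pod on each node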

3.

CoreDNS uses a ConfigMap to manage its config:

k -n kube-system get cm coredns -o yaml
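If you only want the Corefile itself rather than the whole ConfigMap yaml, jsonpath helps, assuming the usual kubeadm data key Corefile:

k -n kube-system get cm coredns -o jsonpath='{.data.Corefile}'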

4.

k create deploy deploy --image=httpd:2.4.41-alpine -o yaml --dry-run=client > deploy.yaml
vim deploy.yaml

Add the master toleration:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: deploy
  name: deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd
        resources: {}
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
status: {}

Then run:

k -f deploy.yaml create
k get pod -l app=deploy -o wide

In my case: 2 on the master and 1 on the worker node. If no pod gets scheduled on the worker, you could set .spec.template.spec.nodeName to cluster1-worker1 to see that a pod can still run on a worker.
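A minimal sketch of that nodeName tweak in the pod template, only for this quick test, since it bypasses the scheduler entirely:

spec:
  template:
    spec:
      nodeName: cluster1-worker1 # bypasses the scheduler and pins the pods to this node
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd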

5.

Alter the yaml to:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: deploy
  name: deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd
        resources: {}
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      nodeSelector:
        node-role.kubernetes.io/master: ""
status: {}

Apply the change, for example with k apply -f deploy.yaml, and now all 3 pods are running on the master node.
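The nodeSelector only works because kubeadm labels master nodes with node-role.kubernetes.io/master. A quick check, assuming default kubeadm labels:

k get node -l node-role.kubernetes.io/master -o name # should list cluster1-master1
k get pod -l app=deploy -o wide # all three pods on the master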

Clean up

Run ./down.sh.

All CKA challenges

More challenges on

https://killer.sh
