Manual Scheduling
• We can decide which node a pod should be scheduled on and run on
• The pod definition yaml supports a nodeName field under spec where we can specify the node name
• We cannot modify the nodeName of an already running pod
• If we want to place an already created pod on a (different) node, we need to use the Binding object, since nodeName cannot be edited after creation (see the sketch below)
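A minimal sketch of both approaches (the pod name nginx and node name node02 are assumptions, not from these notes):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node02        # set at creation time; bypasses the scheduler
  containers:
  - name: nginx
    image: nginx
For a pod that already exists, a Binding object like the one below is sent (as JSON) in a POST request to the pod's binding API:
apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node02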
Taints and Tolerations
• Decide which pods can be placed on which nodes
• They enforce a restriction on which pods a node will accept; they are a scheduling constraint, not a security mechanism
• Taints are set on Nodes and Tolerations are set on Pods
• Only a tolerant pod can be scheduled onto a tainted node
• A toleration does not guarantee placement: a tolerant pod can still be scheduled onto any non-tainted node
• 3 taint effects can be set on a node when it is tainted
○ NoSchedule - no new pod will be scheduled unless it tolerates the taint, but existing running pods continue to run
○ PreferNoSchedule - the scheduler tries to avoid placing non-tolerant pods on the node, but there is no guarantee
○ NoExecute - once the taint is applied, running pods that cannot tolerate it are evicted, and new intolerant pods are not scheduled
• By default the master node has a taint, so the scheduler will not place any pod on the master node
spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
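• The toleration above matches a taint applied to a node like this (the node name is an assumption):
○ kubectl taint nodes node01 app=blue:NoSchedule
○ kubectl describe node node01 | grep Taint   # verify the taint on the node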
Node Selector
• Constrains a pod so that it is scheduled onto a desired node
• The node is labelled first, and that label is then referenced in the pod's yaml file
○ kubectl label nodes node01 size=large
• Its limitation is that we cannot express OR or NOT conditions in the nodeSelector section (node affinity is needed for that)
spec:
  nodeSelector:
    size: large
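A complete pod sketch using the label set above (the pod name and image are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor
  nodeSelector:
    size: large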
Node Affinity
• Ensures pods are hosted on particular nodes
• We can define complex conditions/rules about which nodes a pod should run on
• 3 types of affinity can be defined
○ requiredDuringSchedulingIgnoredDuringExecution
○ preferredDuringSchedulingIgnoredDuringExecution
○ requiredDuringSchedulingRequiredDuringExecution (planned, not yet available)
• Different operators exist, such as In, NotIn, Exists, DoesNotExist, Gt and Lt
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue
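• For the affinity rule above to match, the node needs the corresponding label (the node name is an assumption):
○ kubectl label nodes node01 color=blue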
Resource Requests
• There are 3 resources that are consumed by a pod
○ CPU
○ Memory
○ Disk
• We can define two entries for each container
○ requests - the minimum amount of resources requested for the container; used by the scheduler to pick a node
○ limits - the maximum amount of resources the container is allowed to use
• Default requests/limits for containers in a namespace can be defined using a LimitRange resource (see the sketch after the example below), for example
○ CPU - 1 vCPU
○ Memory - 512 Mi
spec:
  containers:
  - name: publisher
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2
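A LimitRange sketch that would set the defaults listed above (the object name and the defaultRequest values are assumptions):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resource-limits   # hypothetical name
spec:
  limits:
  - type: Container
    default:            # default limits applied when a container defines none
      cpu: 1
      memory: 512Mi
    defaultRequest:     # default requests (values are an assumption)
      cpu: "0.5"
      memory: 256Mi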
Daemon Set
• Ensures one copy of the pod is always running on each node in the cluster
• Usecases:
○ Monitoring agent
○ Logging
• kube-proxy is one example of DaemonSet
• The definition of a DaemonSet is almost the same as a ReplicaSet; only the kind changes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
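• To create and verify the DaemonSet above (the file name is an assumption):
○ kubectl create -f daemon-set-definition.yaml
○ kubectl get daemonsets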
Static POD
• The kubelet can create pods on its own node without any intervention from the master node (kube-apiserver)
• The pod definition yaml files are placed in a designated directory on the node, typically /etc/kubernetes/manifests
• The kubelet on the node periodically checks the files in this directory and creates/recreates/deletes the pods accordingly
• Only pods can be created this way; we cannot create a ReplicaSet, Deployment etc by placing their yaml in this directory
• We can specify the manifests directory
○ while starting the kubelet, using the --pod-manifest-path argument, OR
○ by passing --config=kubeconfig.yaml to the kubelet, where this config file has a 'staticPodPath' key with the location of the manifests directory
• Usecases
○ To deploy the control-plane components (etcd, controller-manager, apiserver etc) as static pods; kubeadm places their manifests, e.g. etcd.yaml, in the manifests directory
• To find the configured manifests path, inspect the kubelet process for the --pod-manifest-path or --config option:
○ ps -aux | grep kubelet
○ ps auxw | grep kubelet
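A sketch of the relevant kubelet config entry (the file path is an assumption; check the --config flag of the kubelet process):
# e.g. /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests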
Multiple Schedulers
• We can run additional schedulers on the master node alongside the default scheduler
• We can also specify what pod placement algorithm a custom scheduler should follow
• While creating a pod we can specify which scheduler should place it, using schedulerName in the pod spec
spec:
  containers:
  - image: nginx
    name: nginx-app
  schedulerName: my-custom-scheduler
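• To verify which scheduler picked up the pod, check the events; the SOURCE column shows the scheduler name:
○ kubectl get events -o wide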