Monday, 18 April 2022

CKA Kubernetes ( K8S ) Security

 Preparing for the CKA (Certified Kubernetes Administrator) exam requires lots and lots of practice. Fortunately, there are many online playgrounds to keep practicing on, and plenty of both free and paid courseware is available. In addition to that, we get two attempts to clear the exam.




kube-apiserver security

• It is the center of k8s and we need to secure it
• Who can access?
○ Authentication using password, token or 3rd party authentication
○ Service account for third party services
• What can they do?
○ Authorization using - RBAC (Role Based Access Control), ABAC (Attribute Based Access Control)
• Communication with other components like the etcd cluster, kube-controller-manager, kube-scheduler, etc. is secured using TLS certificates
• Communication between pod in the cluster can be restricted using NetworkPolicy

Authentication

• Different Users
○ Administrator
○ Developer
○ End-User
○ Third-party apps
• Two types of accounts we need to take care of
○ User - Human
○ Service Account - other process or services
• User accounts are managed through the kube-apiserver
○ different authentication methods supported by kube-apiserver are
§ static password file
§ static token file
§ certificate
§ third-party identity services like LDAP, Kerberos
○ For static password file authentication, we start the kube-apiserver with the argument '--basic-auth-file=/var/user-credentials.csv' (note: basic auth is deprecated and was removed in k8s 1.19)

TLS Authentication

• Uses asymmetric keys to securely exchange a symmetric key, which then encrypts the information passed between client and server
• A certificate is issued by a Certificate Authority (CA) to confirm that the server presenting the certificate is who it claims to be
• PKI - Public Key Infrastructure
• Naming convention for keys
○ Public Key - *.crt, *.pem
○ Private Key - *.key, *-key.pem

TLS in Kubernetes

• 3 types of Certificates
○ Server Certificate
○ Client Certificate
○ Root Certificate (Signing Authority Cert)
• In K8S, we generate client and server key for components like
○ k8s-apiserver
○ etcd
○ kubelet
○ kube-proxy
○ controller-manager
○ scheduler
○ kubectl
○ ca-authority
• K8S requires at least one Certificate Authority per cluster; we can have more than one CA, but at least one must exist

Generating Certificates in K8S

• There are many tools to create private key and certificate like
○ openssl
○ easyrsa
○ cfssl
• Commands
○ openssl genrsa -out ca.key 2048   => For creating private key
○ openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr   => For creating a CSR or Certificate Signing Request file
○ openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt => For creating the signed certificate; for the CA we use its own private key for signing (self-signed)
○ openssl x509 -in apiserver.crt -text -noout => for viewing the certificate details
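Putting the commands above together, here is a runnable sketch that creates a CA and then signs a component certificate with it (file names and CN values are illustrative, not mandated by Kubernetes):

```shell
# 1. CA: private key, CSR, and a self-signed certificate
openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
openssl x509 -req -in ca.csr -signkey ca.key -days 365 -out ca.crt

# 2. Component (e.g. apiserver): private key and CSR
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr

# 3. Sign the component CSR with the CA's key and certificate
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 365 -out apiserver.crt

# 4. Confirm the certificate chains back to the CA
openssl verify -CAfile ca.crt apiserver.crt   # prints: apiserver.crt: OK
```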

Certificate API

• It's a k8s-managed API that takes care of signing CSRs (Certificate Signing Requests)
• The controller-manager is responsible for managing and approving the CSR requests

kubeconfig

• It contains 3 sections
○ Clusters - dev, prod, test, etc.
○ Contexts - which user account is used for which cluster
○ Users - the user accounts with which we have access
• $HOME/.kube/config holds a Config object in k8s, defined as a YAML file
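A minimal sketch of that Config object, with illustrative cluster/user names and certificate paths:

```yaml
apiVersion: v1
kind: Config
current-context: dev-user@dev-cluster     # the context kubectl uses by default
clusters:
- name: dev-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://dev-cluster:6443
contexts:
- name: dev-user@dev-cluster              # links a user to a cluster
  context:
    cluster: dev-cluster
    user: dev-user
users:
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user.key
```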

API Groups

• There are multiple different APIs K8S provides
○ /version API
○ /log API
○ /healthz API
○ /api  -> core groups like namespaces, node, pods, configmaps, secret etc
○ /apis -> named groups are more organized based on apps, extensions, networking, storage, certificate, authentication etc
○ /metrics
• We can use curl https://master-node:6443 -k --key= --cert= --cacert= (passing the client key, client certificate and CA certificate)
• Or we can start the 'kubectl proxy' and we can access with http://127.0.0.1:8001 without any certificate, key etc

Authorization

• Different authorization modes can be set via the --authorization-mode attribute while starting the kube-apiserver. We can define multiple authorization modes, and the checks happen in a chain
○ Node Authorizer - Node can access the kube-api server based on the certificate configuration
○ ABAC (Attribute Based Access Control) - A policy file needs to be created in a JSON format which tells which user or user-group has what access etc and then we need to restart the kube-api server
○ RBAC (Role Based Access Control) - Here we first define the role and then map user or user-group to the defined role. Will be easy to manage
○ Webhook - kube-apiserver checks with third party tool like open-policy-agent which manages the authorization level and will respond back
○ AlwaysAllow
○ AlwaysDeny
• We can configure in a way like a particular user has access to a certain namespace only

RBAC

• Role is a kubernetes object and we can define using a yaml definition
• RoleBinding is another object which links the user with Role
• Command to check whether the user can access or not
○ kubectl auth can-i create deployments
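A sketch of a Role and the RoleBinding that links a user to it (the names, namespace and rules are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]                 # "" means the core API group
  resources: ["pods"]
  verbs: ["list", "get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
  namespace: default
subjects:
- kind: User                      # the user being granted the role
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                      # the role being granted
  name: developer
  apiGroup: rbac.authorization.k8s.io
```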

Cluster Role and RoleBinding

• Role and RoleBinding are namespaced resources, and we cannot use them for allowing/restricting access to cluster-scoped resources like nodes
• For a node resource, we need to use ClusterRole and ClusterRoleBinding
• Kubernetes resources can be in one of the below scope
○ Namespace scoped
○ Cluster scoped
• Example of Cluster scoped resources
○ nodes
○ PV (Persistent Volume)
○ clusterroles
○ clusterrolebinding
○ certificatesigningrequest
○ namespaces
○ ..
• A ClusterRole grants access across the whole cluster, i.e. all namespaces
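The cluster-scoped counterpart follows the same shape as Role/RoleBinding; a sketch granting read access to nodes (names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader               # no namespace: cluster-scoped
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-binding
subjects:
- kind: User
  name: ops-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```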

Security Contexts

• We can specify the user ID the container runs as, or the Linux capabilities it gets
• We can set the security context at the pod level or at the container level; however, capabilities can be set only at the container level

securityContext:
  runAsUser: 1000
  capabilities:
    add: ["MAC_ADMIN"]
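In a full pod definition, the two levels look like this (a sketch; names and IDs are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-pod
spec:
  securityContext:          # pod level: applies to all containers
    runAsUser: 1000
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:        # container level: overrides the pod level
      runAsUser: 1001
      capabilities:         # capabilities work only at container level
        add: ["MAC_ADMIN"]
```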

Network Policy

• By default kubernetes allows all traffic ('Allow All'), which means any pod can reach any other pod without restriction
• NetworkPolicy is another namespaced object in kubernetes
• We can link NetworkPolicy to one or more pods using the label selector
• There are many implementation for networking in k8s
○ flannel
○ Calico
○ Romana
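A sketch of a NetworkPolicy that only lets pods labelled role=api reach pods labelled role=db on port 3306 (labels and port are illustrative; note that not every network plugin enforces NetworkPolicy — Calico does, flannel does not):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:              # the pods this policy applies to
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:          # who is allowed in
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 3306
```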

CKA Kubernetes ( K8S ) Cluster Maintenance

In the CKA exam we can definitely expect questions on cluster maintenance, and they carry high scores. If we practice these well, we can save time and score high on them.

OS Upgrades

• When a node in the cluster is brought down, the pods running on it become inaccessible
• The pods become accessible again if the node comes back within 5 minutes (the default pod-eviction-timeout); after that, pods that belong to a ReplicaSet are recreated on other nodes
• The safe way to do this is
○ kubectl drain node01 -> This will move the pods to a different node and mark the node as cordon which means no new scheduling of pod can happen in this node
○ kubectl uncordon node01 -> after the OS upgrade, the node should be uncordon so scheduling of pod can happen in this node
○ kubectl cordon node01 -> Like without draining the pod, we can simply mark as cordon so no new scheduling happens here

Kubernetes release

• The versioning is v1.13.0 -> major.minor.bug_fixes
• https://github.com/kubernetes/kubernetes
• Only the latest 3 minor versions of k8s are supported; older releases become unsupported
• When the master node is upgraded, the control plane components are briefly unavailable, but workloads on the worker nodes continue to run
• Upgrade Strategy to worker node
○ All at once - will have a down time
○ Rolling updates 
○ Bring new nodes with new version and remove old nodes

Backup and Restore

• Can take backup in three different ways
○ Resource Configuration
§ kubectl get all -A -o yaml > all-resources.yaml
§ There are tools like ARK / Velero
○ ETCD 
§ the storage directory can be backed up as it is
§ ETCD comes with its built-in snapshot tool
§ export ETCDCTL_API=3
§ etcdctl snapshot save snapshot.db
§ service kube-apiserver stop
§ etcdctl snapshot restore snapshot.db
§ systemctl daemon-reload
§ service etcd restart
§ service kube-apiserver start

CKA Kubernetes ( K8S ) Application Lifecycle Management

Rolling updates and rollback

• Rollout is the process of updating all the pods/replicas for a defined deployment
• Revision is a change happened in the deployment
• Deployment Strategy
○ Recreate - destroys all the existing pods and then creates pods of the new version (causes downtime)
○ Rolling Update - each pod is deleted and recreated with the new version in sequence (the default; no downtime)
• Under the hood, the deployment actually creates another ReplicaSet with the new version of the image and will bring down the pods in older ReplicaSet
• For reverting the new changes, we need to execute 'kubectl rollout undo deployment/my-deployment'. Now it actually switches back to old ReplicaSet

Command and Argument

• Difference between CMD and ENTRYPOINT in docker is
○ CMD will be executed as soon as the container starts, and we can override the whole command by passing arguments to 'docker run'
○ ENTRYPOINT - if given, the final command that will be executed will be the concatenation of ENTRYPOINT and CMD
• We can override both of these properties in docker run command
• We can override these in kubernetes container by specifying the 'command' and 'args' fields

spec:
  containers:
   -  image: ubuntu
      command: ["sleep"]
      args: ["1000"]
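For comparison, the Docker side of this mapping looks like the following (an illustrative sketch: 'command' replaces ENTRYPOINT, 'args' replaces CMD):

```dockerfile
FROM ubuntu
ENTRYPOINT ["sleep"]    # overridden by the pod's 'command' field
CMD ["5"]               # overridden by the pod's 'args' field
```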
     

Environment Variables

• We can define using the 'env' property
• We can get the env var value from ConfigMap and Secret too

spec:
  containers:
   -  image: ubuntu
      command: ["sleep"]
      args: ["1000"]
      env:
      - name: APP_COLOR
        value: BLUE

ConfigMaps

• We can create using imperative and declarative way
• 2 steps
○ First we need to create the ConfigMap
○ Refer it in the POD

apiVersion: v1
kind: ConfigMap
metadata:
   name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod

apiVersion: v1
kind: Pod
metadata:
   name: webapp
spec:
   containers:
   - name: webapp
     image: webapp:v1
     envFrom:
     - configMapRef:
         name: app-config

Secret

• Secret are used to store sensitive information
• Can be created in both imperative and declarative ways
• In declarative way we should encode the value in base64 format
• The ConfigMap and Secret can be mounted as volumes as well; in that case each property is accessed as a file, e.g. /opt/app-secret-volume/DB_Host
• The value in the file should be encoded like echo -n 'root' | base64
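The encoding round-trip can be checked locally (the value 'root' is illustrative):

```shell
# Encode a value for the 'data' section of a declarative Secret
echo -n 'root' | base64              # prints: cm9vdA==

# Decode to verify what Kubernetes will hand to the container
echo -n 'cm9vdA==' | base64 --decode # prints: root
```

The -n flag matters: without it, echo appends a newline and the encoded value changes.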

apiVersion: v1
kind: Secret
metadata:
    name: app-secret
data:
   DB_PASSWORD: hjuvhw=

apiVersion: v1
kind: Pod
metadata:
   name: webapp
spec:
   containers:
   - name: webapp
     image: webapp:v1
     envFrom:
     - secretRef:
         name: app-secret
     volumeMounts:
     - name: app-secret-volume
       mountPath: /opt/app-secret-volume
   volumes:
   - name: app-secret-volume
     secret:
       secretName: app-secret

Multicontainer pod

• All the containers in a pod share the same
○ network - containers can reach each other via localhost
○ storage
○ lifecycle
• InitContainers
○ The InitContainers are special containers which will be executed before the actual container starts
○ We can define multiple InitContainers and all of them will run in a sequence 
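A sketch of a pod with an init container that waits for a service before the main container starts (names and the lookup target are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:          # run to completion, in order, before 'containers'
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
  - name: myapp
    image: webapp:v1
```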

CKA Kubernetes ( K8S ) Logging and Monitoring

Monitor Cluster

• How many nodes are there
• How many are healthy
• CPU, Memory, disk utilization
• pod metrics
• Varieties of tools are available for monitoring
○ Metric server
○ Prometheus
○ Elastic Stack
○ DataDog
○ dynatrace
• There is a component called cAdvisor inside kubelet which periodically collects pod/container metrics and exposes them to monitoring services

Monitoring POD

• We can see the stdout logs of a container using the kubectl logs command

CKA Kubernetes ( K8S ) Scheduling

Manual Scheduling

• We can decide on in which node the pod should be scheduled or should be running
• The pod.yaml contains nodeName under spec where we can specify the node name
• We cannot modify the nodeName of a running pod
• If we want to assign an already created pod to a node, we need to create a Binding object and POST it to the pod's binding API
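A sketch of manual scheduling via nodeName (node and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node02      # set at creation time; bypasses the scheduler
  containers:
  - name: nginx
    image: nginx
```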

Taints and Tolerations

• Decide which pods can be placed on which nodes
• It is to enforce restrictions on which pods can be scheduled on which nodes; it is not a security mechanism
• A tolerant pod can still be placed on any non-tainted node; only tolerant pods can be placed on a tainted node
• Taints are set on Nodes and Tolerations are set on Pods
○ kubectl taint nodes node01 app=blue:NoSchedule -> command to taint a node
• 3 taint effects can be set when tainting a node
○ NoSchedule - no new pods will be scheduled, but existing running pods continue to run
○ PreferNoSchedule - the scheduler tries to avoid the node, but there is no guarantee
○ NoExecute - once the taint is applied, running pods that cannot tolerate it are evicted
• By default the master node has a taint, so the scheduler will not place any pod on the master node

spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

Node selector

• Setting a constraint on the pod so it runs on a desired node
• Nodes are labelled, and those labels are then used in the pod's yaml file
○ kubectl label nodes node01 size=large
• Limitation: we cannot express OR or NOT conditions in the nodeSelector section

spec:
  nodeSelector:
    size: large

Node Affinity

• Ensure pods are hosted in particular node
• We can define complex conditions/rules on in which node the pod should run
• 3 types of affinity can be defined
○ requiredDuringSchedulingIgnoredDuringExecution
○ preferredDuringSchedulingIgnoredDuringExecution
○ requiredDuringSchedulingRequiredDuringExecution (planned; not available yet)
• Different operators exist, like In, NotIn, Exists, DoesNotExist, Gt, Lt

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue

Resource Requests

• There are 3 resources that will be utilized by a POD
○ CPU
○ Memory
○ Disk
• We can define two entities for each container
○ requests - the minimum guaranteed amount
○ limits - the maximum the container can use
• Default limits for containers in kubernetes can be defined using the LimitRange resource, e.g.
○ CPU - 1vCPU
○ Memory - 512 Mi

spec:
   containers:
   - name: publisher
     resources:
       requests:
         memory: "1Gi"
         cpu: 1
       limits:
         memory: "2Gi"
         cpu: 2
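The per-namespace defaults mentioned above can be sketched with a LimitRange like this (the name and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range
spec:
  limits:
  - default:            # applied as 'limits' when a container sets none
      cpu: "1"
      memory: 512Mi
    defaultRequest:     # applied as 'requests' when a container sets none
      cpu: 500m
      memory: 256Mi
    type: Container
```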

Daemon Set

• Ensures one copy of the pod is always present in each node in the cluster
• Usecases:
○ Monitoring agent
○ Logging
• kube-proxy is one example of DaemonSet
• The definition of a DaemonSet is the same as a ReplicaSet, except for the kind

apiVersion: apps/v1
kind: DaemonSet
metadata:
   name: monitoring-daemon
spec:
   selector:
     matchLabels:
        app: monitoring-agent
   template:
     metadata:
        labels:
           app: monitoring-agent
     spec:
        containers:
        - name: monitoring-agent
          image: monitoring-agent

Static POD

• Ability to create a pod directly on the worker node without any intervention from the master node (kube-apiserver)
• All the pod definition yaml should be kept in /etc/kubernetes/manifests
• kubelet in the worker node periodically checks the files in this directory and create/recreate/delete a pod
• Only pods can be created this way; we cannot create ReplicaSets, Deployments, etc. by placing their yaml in the above directory
• We can mention the manifest directory
○ while starting the kubelet, using the --pod-manifest-path argument OR
○ we can pass --config=kubeconfig.yaml to the kubelet, and this kubeconfig.yaml will have a key 'staticPodPath' with the location of the manifest directory
• Usecases
○ To deploy the control-plane components like etcd.yaml, controller-manager, apiserver etc
• ps -aux | grep kubelet -> to find how the kubelet was started and which config/manifest path it uses
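The kubelet config file passed via --config is a KubeletConfiguration object; a minimal sketch of the relevant key (the path is the conventional default, used here as an illustration):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests   # kubelet watches this directory
```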

Multiple Scheduler

• We can run additional scheduler in the master node
• Also we can specify what the pod placement algorithm the scheduler should follow
• While creating the pod we can specify which scheduler should be used via the schedulerName field

spec:
   containers:
   - image: nginx
     name: nginx-app
   schedulerName: my-custom-scheduler