Monday 18 April 2022

CKA Kubernetes ( K8S ) Networking

The preparation for the CKA (Certified Kubernetes Administrator) exam requires a lot of hands-on practice. Fortunately, there are plenty of online playgrounds to practice on, and both free and paid courseware are widely available. In addition, we get two attempts to clear the exam.




Switching and Routing

• Switching enables communication between hosts within the same network
• Commands to inspect and configure an interface
○ ip link
○ ip addr add 192.168.1.10/24 dev eth0
• A router connects two networks together
○ Command to view the routing table
§ route
○ Commands to add routes
§ ip route add 192.168.2.0/24 via 192.168.1.1
§ ip route add default via 192.168.1.1
• To forward traffic from one interface (e.g. eth0) to another (e.g. eth1), IP forwarding must be enabled in the kernel
○ cat /proc/sys/net/ipv4/ip_forward (1 means enabled)
○ to persist the setting across reboots, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf
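As a minimal sketch of the forwarding check described above (the sysctl commands are shown commented out because they require root):

```shell
# Check whether the kernel currently forwards IPv4 packets (1 = enabled, 0 = disabled)
cat /proc/sys/net/ipv4/ip_forward

# Enable it for the current boot (requires root):
#   sysctl -w net.ipv4.ip_forward=1
# Persist it across reboots by adding this line to /etc/sysctl.conf:
#   net.ipv4.ip_forward = 1
# then reload with: sysctl -p
```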

DNS

• /etc/hosts
• Each node can keep its own hostname-to-IP mappings in /etc/hosts, but this soon becomes cumbersome, which is why we use a DNS server
• The location of the DNS server is defined in /etc/resolv.conf
• If a hostname is defined both in the local /etc/hosts and in DNS, the lookup order is decided by /etc/nsswitch.conf
• Our nameserver can forward to a public nameserver, e.g. 8.8.8.8, which is hosted by Google
• The 'search' property in /etc/resolv.conf makes the resolver intelligently append the search domains to the user-provided name
• CoreDNS is one open-source implementation of a DNS server
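To make the resolv.conf/nsswitch.conf interplay concrete, a sketch of the two files (all addresses and domains are example values):

```
# /etc/resolv.conf
nameserver 192.168.1.100
nameserver 8.8.8.8
search mycompany.com prod.mycompany.com
# with the search line, "ping web" also tries web.mycompany.com,
# then web.prod.mycompany.com

# /etc/nsswitch.conf (hosts line)
hosts: files dns
# "files" (/etc/hosts) is consulted before DNS
```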

Network Namespaces

• Network namespaces are used by container runtimes like Docker to create network isolation
• Command to create a namespace
○ ip netns add red
• To list all the interfaces on the host
○ ip link
• To list all the interfaces visible inside a network namespace
○ ip netns exec red ip link
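As a sketch of how two namespaces can be wired together with a virtual ethernet (veth) pair. This requires root, and the names and IPs (red, blue, veth-red, veth-blue, 192.168.15.0/24) are example values:

```shell
# Illustrative only - requires root
ip netns add red
ip netns add blue

# Create a veth pair and move one end into each namespace
ip link add veth-red type veth peer name veth-blue
ip link set veth-red netns red
ip link set veth-blue netns blue

# Assign IPs and bring the links up
ip netns exec red  ip addr add 192.168.15.1/24 dev veth-red
ip netns exec blue ip addr add 192.168.15.2/24 dev veth-blue
ip netns exec red  ip link set veth-red up
ip netns exec blue ip link set veth-blue up

# The namespaces can now reach each other
ip netns exec red ping -c1 192.168.15.2
```

With more than two namespaces, a virtual bridge (e.g. via `ip link add v-net-0 type bridge`) replaces the direct veth pair, which is essentially what Docker's bridge network does.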

Docker networking

• There are different networking types when we run a Docker container
○ none - the container is not attached to any network; it cannot reach anything and nothing can reach it
○ host - the container shares the host's network stack and uses the host's IP as its own
○ bridge - an internal private network is created and the container gets an IP on it
• Commands
○ docker run --network <type> nginx
○ docker network ls
§ lists the networks
○ docker inspect <container_id>
§ under NetworkSettings we can see which network namespace and settings the container is using

Kubernetes networking

• The CNI (Container Network Interface) is a plugin standard used by k8s to establish network connectivity 
• As per K8S requirements, the CNI plugin should assign each pod an IP address, and every pod should be able to reach every other pod
• There are many flavors of CNI plugin
○ bridge
○ flannel
○ weave-net 
○ ipvlan
○ ...
• The kubelet on each node is pointed to the CNI configuration (e.g. via --cni-conf-dir=/etc/cni/net.d) when it is brought up
• 'ps aux | grep kubelet' will show the path to the configuration
• IPAM - IP Address Management
○ It is the plugin implementer's responsibility to manage the IP range, avoid assigning duplicate IPs to pods, etc.
○ Two built-in types
§ dhcp
§ host-local 
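As an illustration, a hypothetical CNI configuration file (say, /etc/cni/net.d/10-bridge.conf) for the bridge plugin with host-local IPAM; the network name, bridge name, and subnet are example values:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```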

Service Networking

• In general we access pods through a service instead of accessing the pods directly
• When a service is created, it is accessible cluster-wide by default
○ ClusterIP - accessible only within the cluster
○ NodePort - additionally accessible from outside via each node's IP on the allocated port 
• kube-proxy watches the kube-apiserver for changes and acts on any new service creation
• A service is a cluster-wide concept; there is really no process running that listens on the service IP. It is just a virtual object
• kube-proxy creates forwarding rules for each service IP, which is allocated from the configured range 
• Three ways of configuring the forwarding rules (the --proxy-mode parameter needs to be set while bringing up kube-proxy)
○ userspace
○ iptables
○ ipvs
• The service IP range is set while bringing up the kube-apiserver using the --service-cluster-ip-range parameter
• We can check the resulting rules in the node's iptables
○ iptables -L -t nat | grep db-service
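To make the ClusterIP/NodePort distinction concrete, a hypothetical NodePort service (the name, labels, and ports are example values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: NodePort
  selector:
    app: db            # forwards to pods labeled app=db
  ports:
  - port: 3306         # the ClusterIP port inside the cluster
    targetPort: 3306   # the container port on the pod
    nodePort: 30036    # exposed on every node; default range 30000-32767
```

Dropping `type: NodePort` and the `nodePort` field yields a plain ClusterIP service reachable only inside the cluster.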

DNS in Kubernetes

• DNS runs as a service and pod (CoreDNS) in k8s under the kube-system namespace
• The DNS nameserver IP is written into the pod's /etc/resolv.conf by the kubelet when the pod is started
• Pods do not get DNS records by default, but this can be enabled in the CoreDNS configuration (the Corefile) used by the kube-dns deployment
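As a sketch of what the kubelet writes into a pod, and the name forms that then resolve (the ClusterIP, namespaces, and service name are example values):

```
# /etc/resolv.conf inside a pod in namespace "default"
nameserver 10.96.0.10            # ClusterIP of the kube-dns service
search default.svc.cluster.local svc.cluster.local cluster.local

# A service "web-service" in namespace "apps" is then reachable as any of:
#   web-service.apps
#   web-service.apps.svc
#   web-service.apps.svc.cluster.local
```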

Ingress

• Ingress takes care of
○ Load balancing
○ Authentication / SSL termination
○ URL-based routing configuration
• It acts like a layer 7 load balancer
• Ingress Controller - there are many implementations, and by default we will not have one running in k8s
○ Nginx
○ HAProxy
○ Contour
○ Traefik

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
  namespace: ingress-space
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      serviceAccountName: ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --default-backend-service=app-space/default-http-backend
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
• Ingress Resource - the set of routing rules passed on to the ingress controller so it can route the traffic appropriately

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: ingress-pay
  namespace: critical-space
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: pay-service
            port:
              number: 8282
        path: /pay(/|$)(.*)
        pathType: ImplementationSpecific

CKA Kubernetes ( K8S ) Storage




Docker Storage Driver

• When we run a docker container, docker adds a writable layer called the 'Container Layer' on top of the read-only image layers, to store content like logs and temp files created by the app, or to modify existing files. The files in the container layer are lost when the container is removed
• Commands
○ docker volume create data_volume
§ this creates a directory under /var/lib/docker/volumes/data_volume
○ docker run -v data_volume:/var/lib/mysql mysql
§ mounts the volume at /var/lib/mysql inside the container (volume mounting)
○ docker run -v data_volume2:/var/lib/mysql mysql
§ if data_volume2 does not exist yet, docker creates /var/lib/docker/volumes/data_volume2 automatically
○ docker run -v /data/mysql:/var/lib/mysql mysql
§ a local host folder is mounted instead (bind mounting)
○ docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
§ the newer, more explicit --mount syntax for the same bind mount
• Docker uses a storage driver for
○ creating the writable layer, maintaining the files in it, and cleaning it up when the container is removed
○ there are many drivers like AUFS, ZFS, BTRFS etc.
○ docker itself chooses the best driver based on the host operating system, but we can override it

Docker Volume Driver

• The default volume driver is 'local', which uses the host OS filesystem
• There are many other drivers like
○ Azure File Storage
○ DigitalOcean Block Storage
○ gce-docker
○ convoy etc.

Container Interfaces

• Container Runtime Interface (CRI)
○ Used to abstract container runtimes like docker, rkt, cri-o etc.
○ If support for a new container runtime is introduced, it simply has to follow the CRI specification and can be implemented without touching the k8s code
• Container Network Interface (CNI)
○ Used to abstract the networking implementation that supports communication between nodes, pods etc.
○ Some examples are flannel, weaveworks, cilium
• Container Storage Interface (CSI)
○ Used to abstract the underlying storage via drivers like Portworx, Amazon EBS, Dell EMC, GlusterFS
○ CSI is not a Kubernetes-specific standard; it is a universal standard, so any storage vendor that implements the CSI contract can be plugged in

Volumes and Mounts

• When we create a POD, we can define the list of volumes under spec and the volumeMounts under each container
• There are multiple volume types, like 'hostPath', which uses a directory on the node the pod is currently running on; apart from that there are many other providers

PersistentVolume

• It allows an administrator to define a set of storage options as persistent volumes, and a POD can claim one of them. The advantage is that each pod definition no longer has to maintain all the storage configuration within itself

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /pv/log

PersistentVolumeClaim

• A PVC is another k8s object created by the user, with a definition of the required storage size, access mode etc.
• Once the PVC is created by the user, kubernetes binds the PVC to a matching PV
• PVC and PV are bound 1-to-1, meaning only one claim can be bound to a PV. Even if there is free space left in the PV, it cannot accommodate an additional PVC 

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  volumeName: pv-log

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webapp
  name: webapp
spec:
  containers:
  - image: kodekloud/event-simulator
    name: pod
    resources: {}
    volumeMounts:
    - mountPath: /log
      name: log-pvc
  volumes:
  - name: log-volume
    hostPath:
     path: /var/log/webapp
  - name: log-pvc
    persistentVolumeClaim:
      claimName: claim-log-1
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Storage Class

• Manually creating a PV for storage that has already been provisioned (e.g. an AWS EBS or GCE disk) is called static provisioning; a StorageClass with a provisioner automates this as dynamic provisioning

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  local-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/vol1
  storageClassName: local-storage


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: local-storage

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: nginx
    resources: {}
    volumeMounts:
      - mountPath: "/var/www/html"
        name: volume-html
  volumes:
    - name: volume-html
      persistentVolumeClaim:
        claimName: local-pvc
  dnsPolicy: ClusterFirst
  restartPolicy: Always

CKA Kubernetes ( K8S ) Security





kube-apiserver security

• It is the center of k8s and we need to secure it
• Who can access it?
○ Authentication using passwords, tokens or third-party authentication
○ Service accounts for third-party services
• What can they do?
○ Authorization using RBAC (Role Based Access Control) or ABAC (Attribute Based Access Control)
• Communication with other components like the etcd cluster, kube-controller-manager, kube-scheduler etc. is secured using TLS certificates
• Communication between pods in the cluster can be restricted using NetworkPolicy

Authentication

• Different users
○ Administrator
○ Developer
○ End user
○ Third-party apps
• Two types of accounts we need to take care of
○ User - humans
○ Service Account - other processes or services
• User accounts are managed through the kube-apiserver
○ different authentication methods supported by kube-apiserver are
§ static password file
§ static token file
§ certificates
§ third-party identity services like LDAP, Kerberos
○ For a static password file, we start the kube-apiserver with the argument '--basic-auth-file=/var/user-credentials.csv' (deprecated, and removed in recent versions)
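For illustration, the static password file is a plain CSV of password, username, and user ID, with an optional group column (all values below are made up; this mechanism is deprecated and should not be used in production):

```
# /var/user-credentials.csv
password123,user1,u0001,group1
password456,user2,u0002,group2
```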

TLS Authentication

• TLS uses symmetric and asymmetric keys for passing information between client and server
• A certificate is issued by a Certificate Authority (CA) to confirm that the server presenting it is who it claims to be
• PKI - Public Key Infrastructure
• Naming convention for keys
○ Public key / certificate - *.crt, *.pem
○ Private key - *.key, *-key.pem

TLS in Kubernetes

• 3 types of Certificates
○ Server Certificate
○ Client Certificate
○ Root Certificate (Signing Authority Cert)
• In K8S, we generate client and server key for components like
○ k8s-apiserver
○ etcd
○ kubelet
○ kube-proxy
○ controller-manager
○ scheduler
○ kubectl
○ ca-authority
• K8S mandates at least one Certificate Authority per cluster; we can have more than one CA, but at least one must exist

Generating Certificates in K8S

• There are many tools to create private key and certificate like
○ openssl
○ easyrsa
○ cfssl
• Commands
○ openssl genrsa -out ca.key 2048   => for creating a private key
○ openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr   => for creating a CSR (Certificate Signing Request) file
○ openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt => for creating the signed certificate; for the CA we use its own private key to sign the certificate (self-signed)
○ openssl x509 -in apiserver.crt -text -noout => for viewing the certificate details
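Putting the commands above together, a sketch of the full flow: self-sign the CA, then use the CA to sign a client certificate (the CN/O values and file names are examples; run in a scratch directory):

```shell
# 1. CA: private key, CSR, and self-signed certificate
openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
openssl x509 -req -in ca.csr -signkey ca.key -days 365 -out ca.crt

# 2. A client certificate (e.g. for an admin user) is signed by the CA instead
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kube-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out admin.crt

# 3. Inspect the result
openssl x509 -in admin.crt -text -noout | grep Subject
```

The group is carried in the O (organization) field; system:masters here is the built-in cluster-admin group.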

Certificate API

• It's a k8s-managed API that takes care of signing CSRs (Certificate Signing Requests)
• The controller manager is responsible for managing the CSR requests, approving and signing them etc.
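As an illustration, a hypothetical CertificateSigningRequest object (the name and expiry are example values; the request field carries the base64-encoded CSR file):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: new-admin
spec:
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
  request: <base64-encoded CSR>   # e.g. output of: cat new-admin.csr | base64 | tr -d "\n"
```

It is then listed with `kubectl get csr` and approved with `kubectl certificate approve new-admin`.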

kubeconfig

• It contains three sections
○ Clusters - dev, prod, test etc.
○ Contexts - which user account is used for which cluster etc.
○ Users - the user accounts we have access to
• The default kubeconfig at $HOME/.kube/config is a YAML file of kind Config
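A minimal kubeconfig sketch tying the three sections together (cluster name, server address, user name, and file paths are example values):

```yaml
apiVersion: v1
kind: Config
current-context: dev-user@dev-cluster
clusters:
- name: dev-cluster
  cluster:
    server: https://dev-master:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
contexts:
- name: dev-user@dev-cluster   # context = user + cluster (+ optional namespace)
  context:
    cluster: dev-cluster
    user: dev-user
    namespace: default
users:
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/dev-user.crt
    client-key: /etc/kubernetes/pki/dev-user.key
```

The active context can be switched with `kubectl config use-context <name>`.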

API Groups

• There are multiple API groups K8S provides
○ /version API
○ /logs API
○ /healthz API
○ /api  -> the core group: namespaces, nodes, pods, configmaps, secrets etc.
○ /apis -> named groups, organized by purpose: apps, extensions, networking, storage, certificates, authentication etc.
○ /metrics
• We can call the API directly, e.g. curl https://master-node:6443 -k --key <client.key> --cert <client.crt> --cacert <ca.crt>
• Or we can start 'kubectl proxy' and access the API at http://127.0.0.1:8001 without passing any certificate or key

Authorization

• Different authorization modes can be set via the --authorization-mode attribute while starting the kube-apiserver. Multiple modes can be defined, and the checks happen in a chain
○ Node authorizer - nodes access the kube-apiserver based on their certificate configuration
○ ABAC (Attribute Based Access Control) - a policy file in JSON format defines which user or user-group has what access; the kube-apiserver must be restarted after every change
○ RBAC (Role Based Access Control) - here we first define a role and then map users or user-groups to the defined role. Much easier to manage
○ Webhook - kube-apiserver checks with a third-party tool like Open Policy Agent, which manages the authorization decisions and responds back
○ AlwaysAllow
○ AlwaysDeny
• We can configure it such that a particular user has access to a certain namespace only

RBAC

• Role is a Kubernetes object which we define using a YAML definition
• RoleBinding is another object which links a user with a Role
• Command to check whether a user can perform an action
○ kubectl auth can-i create deployments
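As an illustration, a hypothetical Role and RoleBinding pair (the role name, user name, and namespace are example values):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
  namespace: default
subjects:
- kind: User
  name: dev-user             # the user being granted the role
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

An admin can then verify the grant with `kubectl auth can-i create pods --as dev-user`.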

Cluster Role and RoleBinding

• Role and RoleBinding are namespaced resources, so we cannot use them for allowing/restricting access to cluster-scoped resources like nodes
• For node resources, we need to use a ClusterRole (with a ClusterRoleBinding)
• Kubernetes resources have one of two scopes
○ Namespace scoped
○ Cluster scoped
• Examples of cluster-scoped resources
○ nodes
○ PV (PersistentVolume)
○ clusterroles
○ clusterrolebindings
○ certificatesigningrequests
○ namespaces
○ ..
• When a ClusterRole grants access to namespaced resources, the access applies across all namespaces

Security Contexts

• We can specify which user ID the container runs as, or add Linux capabilities
• These settings can be made at the POD level or at the container level (container-level settings take precedence). However, capabilities can be set only at the container level

securityContext:
  runAsUser: 1000
  capabilities:
    add: ["MAC_ADMIN"]
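A full pod sketch showing both levels (the pod name, image, and IDs are example values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsUser: 1000            # pod level: default for all containers
  containers:
  - name: app
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1001          # container level overrides the pod level
      capabilities:            # capabilities: container level only
        add: ["MAC_ADMIN"]
```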

Network Policy

• By default kubernetes allows all traffic ('All Allow'), which means any pod can reach any other pod without restriction
• NetworkPolicy is another namespaced kubernetes object
• We can link a NetworkPolicy to one or more pods using a label selector
• There are many networking implementations for k8s (note that not all of them enforce NetworkPolicy)
○ flannel (does not enforce NetworkPolicy)
○ Calico
○ Romana
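As an illustration of selecting pods by label, a hypothetical policy that only lets api pods reach db pods on the MySQL port (labels, name, and port are example values):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db                 # the policy applies to pods labeled role=db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api            # only pods labeled role=api may connect
    ports:
    - protocol: TCP
      port: 3306
```

Once any policy selects a pod, all traffic to it that is not explicitly allowed is denied.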