Switching and Routing
• Switching enables communication between hosts within the network
• Commands to list interfaces and assign an IP address
○ ip link
○ ip addr add 192.168.1.10/24 dev eth0
• A router connects two networks together
○ Command to view the routing table
§ route (or the newer: ip route)
○ Commands to add a route
§ ip route add 192.168.2.0/24 via 192.168.1.1
§ ip route add default via 192.168.1.1
• To forward traffic from one interface (e.g. eth0) to another (eth1), IP forwarding must be enabled
○ cat /proc/sys/net/ipv4/ip_forward shows the current value (1 = enabled)
○ to make the setting persistent, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf
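The forwarding steps above can be sketched as shell commands (run as root; persisting via /etc/sysctl.conf assumes a sysctl.conf-based distro):

```shell
# Check whether IPv4 forwarding is currently enabled (0 = off, 1 = on)
cat /proc/sys/net/ipv4/ip_forward

# Enable it for the running system (takes effect immediately, lost on reboot)
sysctl -w net.ipv4.ip_forward=1

# Persist across reboots: add the key to /etc/sysctl.conf and reload
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
```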
DNS
• /etc/hosts
• Each node can have its own hostname-to-IP mappings in /etc/hosts, but maintaining these on every node soon becomes cumbersome; that is why we use a central DNS server
• The location of the DNS server is defined in /etc/resolv.conf
• If a hostname is defined both in the local /etc/hosts and in DNS, which source wins is decided by the order in /etc/nsswitch.conf
• Our nameserver can in turn point to a public nameserver, e.g. 8.8.8.8, which is hosted by Google
• The 'search' property in /etc/resolv.conf makes the resolver append the listed domains to short names provided by the user
• CoreDNS is one open-source implementation of a DNS server
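A minimal /etc/resolv.conf tying these pieces together might look like this (the domain names are placeholders for illustration):

```
# Forward queries to Google's public nameserver
nameserver 8.8.8.8
# With this search line, 'ping web' is retried as 'ping web.mycompany.com'
search mycompany.com prod.mycompany.com
```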
Network Namespaces
• Namespaces are used by containers like docker to create a network isolation
• Command to create namespaces
○ ip netns add red
• To list all the interfaces on the host
○ ip link
• To list all the interfaces visible inside a network namespace
○ ip netns exec red ip link
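As a sketch, two namespaces can be wired together with a veth pair like this (run as root; the names and addresses are made up for illustration):

```shell
# Create two isolated network namespaces
ip netns add red
ip netns add blue

# Create a virtual ethernet (veth) cable with two ends
ip link add veth-red type veth peer name veth-blue

# Attach one end to each namespace
ip link set veth-red netns red
ip link set veth-blue netns blue

# Assign IPs and bring the interfaces up
ip netns exec red  ip addr add 192.168.15.1/24 dev veth-red
ip netns exec blue ip addr add 192.168.15.2/24 dev veth-blue
ip netns exec red  ip link set veth-red up
ip netns exec blue ip link set veth-blue up

# Now red can reach blue
ip netns exec red ping -c1 192.168.15.2
```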
Docker networking
• There are different networking types when we run a Docker container
○ none - the container has no network; it cannot reach anything and nothing can reach it
○ host - the container shares the host's network stack and uses the host's IP as its own
○ bridge - an internal private network is created and the container gets an IP on it
• Command
○ docker run --network <type> nginx
○ docker network ls
§ lists down the network
○ docker inspect <container_id>
§ Under NetworkSettings we can see which network the container is attached to and the IP address assigned to it
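A quick sketch of running and inspecting a bridged container (the container name 'web' is an example):

```shell
# Run an nginx container on the default bridge network
docker run -d --name web --network bridge nginx

# List the networks docker has created (bridge, host, none by default)
docker network ls

# Show just the container's IP on the bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' web
```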
Kubernetes networking
• The CNI (Container Network Interface) is a plugin standard used by k8s to establish network connectivity for pods
• As per the k8s requirements, the CNI plugin should assign each pod an IP address, and every pod should be reachable from every other pod without NAT
• There are many flavors of CNI plugin
○ bridge
○ flannel
○ weave-net
○ ipvlan
○ ...
• kubelet is configured with the path to the CNI configuration and plugin binaries, and invokes the plugin when it brings up a pod
• 'ps aux | grep kubelet' will show the path to the configuration file (--cni-conf-dir) and the plugin binaries (--cni-bin-dir)
• IPAM - IP Address Management
○ It is the plugin implementer's responsibility to manage the IP range, avoid assigning duplicate IPs to pods, etc.
○ Two types
§ dhcp
§ host-local
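As an illustration, a CNI configuration file (typically placed under /etc/cni/net.d/) combining the bridge plugin with host-local IPAM might look like this; the name and subnet are example values:

```
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```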
Service Networking
• In general we use a service to access pods rather than accessing a pod directly
• When a service is created it is accessible cluster wide by default
○ ClusterIP - accessible within the cluster
○ NodePort - additionally accessible from outside the cluster via <NodeIP>:<port>
• kube-proxy watches the kube-apiserver for changes and acts whenever a new service is created
• A service is a cluster-wide concept; there is no actual process listening on the service IP. It is just a virtual object
• kube-proxy creates forwarding rules, and the service gets an IP within the configured range
• Three ways of configuring the forward routing rules (--proxy-mode param needs to be set while bringing up the kube-proxy)
○ userspace
○ iptables
○ ipvs
• The service IP range is set while bringing up the kube-apiserver using the --service-cluster-ip-range parameter
• We can check the iptables in the node
○ iptables -L -t nat | grep db-service
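Putting the two checks together as a sketch (run as root on a node; 'db-service' is the example service name from above):

```shell
# Find the configured service IP range on the API server node
ps aux | grep kube-apiserver | grep -o 'service-cluster-ip-range=[^ ]*'

# List the NAT rules kube-proxy programmed for a service
iptables -L -t nat | grep db-service
```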
DNS in Kubernetes
• DNS runs as a service and pod in k8s under the kube-system namespace
• The DNS nameserver IP is written into the pod's /etc/resolv.conf by the kubelet when the pod is started
• Pods don't get a DNS record by default, but this can be enabled in the CoreDNS configuration (the Corefile) of the kube-dns pod
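From inside a pod, a service can then be resolved at increasing levels of qualification ('db-service' in namespace 'default' is an example):

```shell
# All of these resolve to the same service cluster IP from inside a pod,
# thanks to the search entries kubelet writes into /etc/resolv.conf
nslookup db-service
nslookup db-service.default
nslookup db-service.default.svc
nslookup db-service.default.svc.cluster.local
```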
Ingress
• Ingress takes care of
○ Loadbalancing
○ SSL termination and authentication
○ URL based routing configuration
• It acts like a layer 7 load balancer
• Ingress Controller - there are many implementations, and by default none is running in a k8s cluster
○ NGINX
○ HAProxy
○ Contour
○ Traefik
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
  namespace: ingress-space
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      serviceAccountName: ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --default-backend-service=app-space/default-http-backend
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
• Ingress Resource - the set of configuration rules passed to the ingress controller to route traffic appropriately
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: ingress-pay
  namespace: critical-space
spec:
  rules:
    - http:
        paths:
          - path: /pay(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: pay-service
                port:
                  number: 8282