IT 3300 : Virtualization

Kubernetes Install, Nodes, and Pods

What is it?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Everything we do today can also be followed as an online tutorial here

Review the part about containers here

What can it do?

Remember, it works with our containers and provides:

  • load balancing
  • storage orchestration: can automatically mount local or cloud or other storage
  • automated rollouts and rollbacks
  • automatic bin packing: you specify the RAM and CPU each container needs, and k8s fits containers onto your nodes to maximize resource usage
  • self-healing: restarts services, containers, etc automatically
  • more...

Case study

Review a case study at the following link: https://kubernetes.io/case-studies/

  • What led to the organization moving to K8S?
  • What benefits did they find?

Overview

To work with Kubernetes, you use Kubernetes API objects to describe your cluster’s desired state: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, kubectl.
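
For a feel of what "desired state" looks like, here is a minimal sketch of a Deployment manifest asking for two replicas of the bootcamp container we will run later (the metadata names and labels are placeholders):

    # deployment.yaml -- hypothetical example of a desired state
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: bootcamp-deployment
    spec:
      replicas: 2                  # how many identical pods we want
      selector:
        matchLabels:
          app: bootcamp
      template:
        metadata:
          labels:
            app: bootcamp
        spec:
          containers:
          - name: bootcamp
            image: docker.io/jocatalin/kubernetes-bootcamp:v1
            ports:
            - containerPort: 8080

You would hand this to the cluster with kubectl apply -f deployment.yaml; Kubernetes then works continuously to make the actual state match the desired state.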

Components

  • Pods (container or multiple containers)
  • Services (way to access a pod)
  • Volumes (storage)
  • Namespaces (multiple virtual clusters on one physical cluster; divide cluster resources between multiple users)
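
Each of these shows up as an object type you can list with kubectl once a cluster is running (a sketch; assumes kubectl is pointed at your cluster and the default namespace):

    kubectl get pods               # running groups of containers
    kubectl get services           # ways to access pods
    kubectl get persistentvolumes  # cluster-wide storage objects
    kubectl get namespaces         # virtual clusters within the cluster

(Pod volumes themselves live inside a pod's spec; PersistentVolumes are the standalone storage objects.)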

Kubernetes Clusters

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.

  • master node coordinates the cluster
  • other nodes run applications

Clusters

Each node has a kubelet (an agent) which manages the node and communicates with the Kubernetes (K8s) master. Each node will also have Docker (or another container runtime) installed.

MicroK8s

MicroK8s is built to run on any Linux. It’s lightweight and deploys all Kubernetes services natively on Ubuntu (i.e. no virtual machines required) while packing the entire set of libraries and binaries needed. It’s suited for laptops, workstations, CI pipelines, IoT devices, and small edge clouds because of its small footprint.

kubectl is a command line interface for running commands against Kubernetes clusters.

Install MicroK8s

  • Follow the instructions here

  • Essentially this:

      sudo apt update
      sudo snap install microk8s --classic 
      sudo usermod -a -G microk8s $USER
      sudo chown -f -R $USER ~/.kube
      # log out, then log back in
      sudo snap alias microk8s.kubectl kubectl
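
After logging back in, it is worth checking that MicroK8s is up before moving on (assuming the snap install above succeeded):

    microk8s status --wait-ready
    kubectl get nodes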
    

Cluster Microk8s

Essentially, you will have 3 separate installs of k8s, one on each node; now we cluster them together.
Edit the /etc/hosts file on each node so that they can resolve each other's hostnames. Mine looks like this:

    127.0.0.1 localhost
    144.38.193.220 microk8s-node1
    144.38.193.221 microk8s-node2
    144.38.193.222 microk8s-node3

On the first node:

    microk8s.kubectl config view --raw > $HOME/.kube/config
    microk8s add-node --token-ttl 3600

It will then give you a command that you can copy/paste into the other nodes.
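
The join command it prints will look roughly like this (the address and token below are placeholders; copy the exact command your first node generates):

    microk8s join 144.38.193.220:25000/<token-generated-by-add-node>

Run it on microk8s-node2 and microk8s-node3.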

Wait a few minutes, then try kubectl get nodes. Hopefully all three are listed.

Commands

  • kubectl get nodes

Nodes

  • Where the pods run
  • can be physical or virtual
  • managed by master
  • Run a kubelet (an agent that handles communication from the master and manages the pods and containers on the machine)
  • Also run a container runtime (like docker)

Now what?

We want to put containerized applications on the cluster. Create a k8s configuration file describing what you want to run. The master will schedule the containers onto nodes. These containers are monitored; if one goes down, the cluster self-heals by bringing it back up.

How?

What is a pod?

A group of one or more (Docker) containers, along with their volumes, network, and other information about how to run the containers. Containers should only be scheduled together in a single Pod if they are tightly coupled and need to share resources such as disk.

Create our first pod

    kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

Hmmm??

This command:

  • searched for a suitable node where an instance of the application could be run
  • scheduled the application to run on that Node

Check that it is running:

      kubectl get pods
    

More About Pods

Group of one or more containers that:

  • share storage/volumes
  • share network
  • have a shared specification for how to run the containers.
  • Containers within a pod share an IP address and port space, and can find each other via localhost.
    • So they must coordinate usage of ports

Pod scheduling

When a node dies:

  • Pods on that node are lost (though a controller may bring the cluster back to the desired state by creating identical pods on a different node)

More about pods

  • Containers in different pods have distinct IP addresses
  • If a node dies, the pods scheduled to that node are scheduled for deletion, after a timeout period. A given pod (as defined by a UID) is not “rescheduled” to a new node; instead, it can be replaced by an identical pod, with even the same name if desired, but with a new UID

Pod Networking

Pods that are running inside Kubernetes are running on a private, isolated network. By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network. If you run kubectl proxy, it will connect your local machine to the k8s cluster. Then you could do a wget http://localhost:8001/version.
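
A quick sketch of trying that out (assumes kubectl proxy's default port of 8001):

    kubectl proxy &                  # open a proxy between this machine and the cluster API
    wget -qO- http://localhost:8001/version
    # individual pods can also be reached through the API server's proxy path:
    #   http://localhost:8001/api/v1/namespaces/default/pods/<pod-name>/proxy/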

Pod Examples
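
Here is a minimal sketch of a Pod manifest with two tightly coupled containers that share a volume and the pod's network (the names, images, and paths are hypothetical):

    # pod.yaml -- hypothetical two-container pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}               # scratch volume shared by both containers
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      - name: content-writer
        image: busybox
        command: ["sh", "-c", "echo hello from the sidecar > /data/index.html && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data

Both containers get the same IP address, so they could also talk to each other over localhost; here the sidecar writes a file into the shared volume and the web container serves it.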


More Pod Commands

  • kubectl get pods
  • kubectl describe pods
  • kubectl logs $POD_NAME (no need to name the container when the pod has only one)
  • kubectl exec -ti $POD_NAME -- bash (get a shell inside the pod's container)
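
These commands expect $POD_NAME to hold a pod's name; one way to grab it (a sketch that takes the first pod in the default namespace):

    export POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl logs $POD_NAME
    kubectl exec -ti $POD_NAME -- bash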

Delete pods

  • kubectl delete pod $PODNAME
    • Since our cluster wants one instance of the pod running, if we delete the pod, another will start (this assumes the pod is managed by a controller such as a Deployment; a bare pod would not be recreated)
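
A quick way to watch this happen (a sketch; $POD_NAME is the name of the pod you want to delete):

    kubectl get pods          # note the current pod's name
    kubectl delete pod $POD_NAME
    kubectl get pods          # a replacement pod shows up with a new name and UID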