
Kubernetes Architecture and Deployment Basics Guide

EBS Integrator
Jul 30, 2021

“What is Kubernetes, and how do I use it in 2021?” seems to be the king-question boggling a lot of people nowadays. Here’s our take!

We’ve spent a few weeks following our DevOps team, serving them coffee, asking them questions, and doing our best Dee-Dee impression.

Dee Dee Kubernetes Meme

Suffice it to say, they banned our editor from their corner of the office for the near future.

However, we’ve gleaned enough to provide you, our dear reader, with all the essentials!

If you’ve been following along, we recently published a piece discussing why you need to learn to use Kubernetes! We followed that by explaining the basics of Docker and containers in general!

Some time ago we told you what software architecture is, and what microservices are, among other things. So, today we won’t go into any of those topics; we’ll simply assume you know all that already!

With all of that said, let’s get cracking!

Why do we need orchestration?

So, let’s paint a picture:

You’ve had this AwesomeProduct™ or AwesomeService™ idea for ages and you finally got around to developing it! Great! You and your friend designed the website and even wrote it fully in Python.

You went ahead and bought all the hardware, set up your server and all the necessities, to then, finally, deploy it in Docker! And guess what – it goes swimmingly!

Docker interaction with Hardware and OS

You’re getting visitors, and your AwesomeProduct™ is surpassing your wildest dreams! So much so that it suddenly crashes.

Uh-oh. Apparently your host has reached the limit of its capacity; it crashed a few times, we have down-time – unacceptable, we simply can’t support so many users.

No worries, let’s just do all that work again: we’ll set up a new host with new hardware, and deploying it with Docker is going to be quite easy.

Two servers with docker

Oh, and of course, when people visit our webapp, we want their traffic balanced across the two servers… Well, we’ll have to learn how to set up a load balancer.

Two servers with docker and loadbalancer

Now, even if one of our hosts goes down, the other one’s there to pick up the slack!

But guess what, who knew your AwesomeProduct™ would be such a hit! Let’s scale up!

Let’s get more servers!

A lot of servers all set up with docker and individual load balancers

Ouch! We had to do all that work manually: deploy all the Docker containers, each a small part of our website. We also had to figure out how to set up a few more load balancers that communicate with and support each other.

It’s all right though. After weeks of work, we finally did it! Everything is going great and the users are happy. Until we release a new version of our product. Now we have to update every single one of those containers to the new version, manually…

Orchestration to the rescue!

Orchestration tools such as Kubernetes are here to help us! Why Kube? Well, because we think it’s the best one out there, but here are a few alternatives to consider: Docker Swarm, Apache Mesos, and HashiCorp Nomad!

So, let’s see how Kubernetes can help us:

First, we’ll have to introduce something called a kube-master. This is the head honcho of our whole architecture. For now, all we need to know is that the master takes care of all our requests and commands.

We’ll downscale to a few servers for now.

API server kubernetes

Now, to make it all work, we need to create something known as Worker-Nodes. To do this we need to install two components alongside Docker (or any other container runtime): Kubelet and Kube-Proxy.

Kubelet and Kube-proxy

And there we have our worker nodes set up – but how do we talk to this whole architecture?

Kubernetes Worker Nodes

KubeCTL, or the Kube command-line tool, is your way of telling our Master exactly what you want it to do: deploy apps, inspect or manage your cluster resources, check logs – anything, really!
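For a quick taste, here are a few everyday KubeCTL commands (the resource names are illustrative):

kubectl get pods    # list the pods in the current namespace
kubectl describe node my-node    # inspect a node’s state and resources
kubectl logs my-pod    # read the logs of a pod’s container
kubectl apply -f manifest.yaml    # create or update resources from a manifest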

Alright, before we go any further, let’s get our Kubernetes terminology straight!

Kubernetes terminology explained

It all begins with our head honcho, the Kube-master.

A Kube-master, or Master Node, is the machine that runs the “control plane”; it takes care of all the global decisions, like deploying new containers (pods, but more on those later) on schedule or upon need.

Kubernetes Control Plane

Control Plane or Master Node

A control plane is composed of multiple components, key among which are:

Kube-apiserver

This is the first and key component of the control plane: the “front-end” of your whole system. It handles the commands and requests we input via KubeCTL, much like the Docker daemon does for Docker commands. Think of it as the communication tower between the various K8S components.

etcd

etcd is a consistent, distributed key-value store developed by CoreOS. Think of it as your warehouse of all the cluster information needed for K8S operation. If any of your components ever need some data – like the number of active nodes, the current schedule, or the state of your API or cluster – they ask the apiserver, which in turn requests that data from etcd.
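You’ll rarely touch etcd directly – the apiserver does that for you – but to make it concrete, peeking inside looks roughly like this (a sketch assuming etcdctl v3 and access to the cluster’s etcd endpoint and certificates):

ETCDCTL_API=3 etcdctl get /registry/pods/default --prefix --keys-only    # list the keys K8S keeps for pods in the default namespace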

kube-scheduler

Arguably the brains behind the operation. If kube-apiserver is the leader, kube-scheduler is the planner, the thinker. It watches the status of nodes and the resources they consume or free up. Based on this information, it decides where new pods should be placed. This ensures maximum efficiency for your servers’ resources.

The component picks up any and all user instructions or configurations, such as a command or a YAML file, via the apiserver; its placement decisions are then acted upon by the controller component.
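The scheduler bases those placement decisions largely on the resources your containers declare. As a minimal sketch (the values are made up), a fragment of a container spec can request resources like so, and the scheduler will only place the pod on a node with that much capacity still unreserved:

resources:
  requests:
    cpu: "250m"    # a quarter of a CPU core
    memory: "128Mi"
  limits:
    cpu: "500m"    # hard ceiling for this container
    memory: "256Mi"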

kube-controller-manager

If kube-scheduler is our thinker, kube-controller-manager is our labourer. In truth, this component is a collection of smaller components: Node controller, Replication controller, DaemonSet controller, Endpoint controller, Job controller and Service account and token controller. What do they all do?

Node controller – An alert system set up solely to monitor if and when nodes go down or become unreachable.

Replication controller – Our construction drone, building pods throughout the whole cluster, raising the ones that went down or creating new ones.

DaemonSet controller – Ensures that a copy of a given pod runs on every node (or on a selected set of nodes).

Endpoint controller – Populates Endpoint objects, i.e. the resources that connect Services to Pods.

Job controller – Automation guru for one-off tasks: it watches Job objects and creates the pods that run them to completion, retrying on failure (see the sketch after this list).

Service account and token controller – Creates default accounts or API access tokens for default or new namespaces.
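To make the Job controller concrete, here is a minimal, hypothetical Job manifest – the controller spawns a pod for it and retries until the task completes:

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-report    # hypothetical name
spec:
  backoffLimit: 3    # retry up to 3 times on failure
  template:
    spec:
      restartPolicy: Never    # Jobs run to completion, not forever
      containers:
      - name: report
        image: awesomeproduct/report:latest    # hypothetical image
        command: ["python", "generate_report.py"]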

KubeCTL

As we’ve mentioned, KubeCTL is our command prompt, allowing us to run commands against our clusters. It is a single binary that is easy to install (with curl, for instance) via this quick command (this one fetches the Windows build; swap the path for your OS):

curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe

Make sure to add the binary to your PATH; afterwards, to test that everything is in working order, run:


kubectl version --client    # check that the client is installed and up to date

kubectl get nodes    # list all nodes in the cluster – locally this is usually the single node your Docker Desktop (or other runtime) provides

And much like with the Docker engine, creating your containers (pods) is as easy as:


kubectl run awesomeproduct --image=awesomeproduct/awsprdct:grt --port=80

At this point, you’ve achieved the same thing as with Docker, just a layer above – but it gets better from here!

Worker Nodes and Pods

For our Master to do anything it needs its minions, its worker-nodes.

Worker-nodes are, in the most basic terms, the machines that run all the containers (mini-applications) we have, alongside everything those containers need to work.

Besides our container runtime like Docker (which Kubernetes talks to through the CRI, the Container Runtime Interface), and our hardware, we have two key components:

Kubelet

A Kubelet is a go-between for our container runtime and the rest of the cluster. It makes sure the node can create and maintain the containers within its pods. A Kubelet is present on every node and ensures that containers are deployed there according to both the container’s own specification and the instructions the user sets via KubeCTL.

Think of it as the node-local master of our container system.

Pods

A pod is a logical collection of containers that make up your application. It is the smallest unit a Kubernetes cluster controls; anything below that is taken care of by our CRI in question.

You can have multiple containers within a single pod, and containers running inside the same pod share network and storage space. Generally speaking, though, in most cases a pod will contain a single container.
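As a hedged illustration (the sidecar name and image are made up), a two-container pod could look like this – both containers reach each other via localhost and share any volumes mounted into the pod:

apiVersion: v1
kind: Pod
metadata:
  name: awsprdct-with-sidecar
spec:
  containers:
  - name: awsprdct
    image: awesomeproduct/awsprdct:amazing
    ports:
    - containerPort: 80
  - name: log-shipper    # hypothetical sidecar container
    image: awesomeproduct/logship:latest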

Kubelet and pods

Kube-Proxy

A kube-proxy acts as a network proxy and a load balancer for workloads running on the worker nodes. User and client requests that are coming through an external load balancer are redirected to containers running inside the pod through these proxies.

So, in essence think of it as a load balancer between containers. With multiple Worker-nodes you have load balancers upon load balancers… balanception!

DiCaprio Kubernetes

Even DiCaprio loves Kubernetes!

Back to AwesomeProduct™ / K8S Deployment

So, we’ve gotten most of the important terms out of the way; let’s get back to our web-app and your great product.

Now, as we’ve mentioned, it’s quite easy to simply create and run your containers as pods via KubeCTL, but Kubernetes is all about automation, baby!

And for automation we need to give it a set of instructions; for this we use a YAML file.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesome-product
  labels:
    app: awsprdct
spec:
  replicas: 4
  selector:
    matchLabels:
      app: awsprdct
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: awsprdct
    spec:
      containers:
      - name: awsprdct
        image: awesomeproduct/awsprdct:amazing
        imagePullPolicy: Always
        ports:
        - containerPort: 80

And much like a Docker image, this is our blueprint (a manifest, if you will) for the whole deployment. How many replicas do you want? How many of them are deployed and kept up at the same time? You can even control how new versions roll out, or scale up and down on demand.

Everything is done via this .yaml file.

After creating it, it’s as simple as:


kubectl apply -f awesomeproductdeploy.yaml

That’s it – we have created 4 pods, all running on our network. And our control plane makes absolutely sure that they are always running: exactly 4, no more, no less.

If one of them goes down, or even gets deleted from the command line by a user, our master-node will re-create it as fast as possible.

Now, if you want to change the number of pods or containers in production, it’s as easy as:


kubectl edit deployment awesome-product

And then edit the required field – or you can make changes to your deployment state via direct kubectl commands as well.
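For instance (the numbers are illustrative):

kubectl scale deployment awesome-product --replicas=6    # pin a fixed count
kubectl autoscale deployment awesome-product --min=2 --max=10 --cpu-percent=80    # or let K8S scale on CPU load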

Kubernetes Architecture

In our case, the master-node looks at its minions (worker-nodes) and evenly distributes the pods among them. In case one of our servers experiences any kind of problem, be it hardware or load related, it will reassign the pods accordingly.

“Look at me world and tremble” with K8S

Now it’s all good and great, but currently all our pods are listening on an internal port only – that is, they’re not accessible to users yet.

What we need to do is expose our pods to the internet, via a Kube-service.

If you’re using a cloud service, create a Service of type LoadBalancer; otherwise, the Service type you’ll be looking for is NodePort.

And to do that, we’ll create another .yaml file!


apiVersion: v1
kind: Service
metadata:
  name: awesome-product-balancer
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: awsprdct
  sessionAffinity: None

Loadbalancer .yaml


kind: Service
apiVersion: v1
metadata:
  name: awesome-product
  labels:
    app: awsprdct
spec:
  type: NodePort
  selector:
    app: awsprdct
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 3000

NodePort .yaml

Any pods labeled awsprdct will be load-balanced among themselves.

Best part – the selector! This little line is amazing because, in most cases, your database containers for instance (which should live in separate pods) will not need to scale as much as your actual application. By setting up dedicated load balancers for different parts of your app, you can fine-tune resources down to the tiniest amount and maximize efficiency!

Yet again it’s as simple as:


kubectl apply -f awsprdct-service.yaml
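Once applied, you can watch the Service come up and grab its address (the external IP depends on your cloud provider):

kubectl get service awesome-product-balancer    # wait for EXTERNAL-IP to be assigned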

Updates make the world go round

So, we have our users visiting our webapp and everything is great – so much so that we release AwesomeProduct 2.0! And because we’re using a container orchestration tool, migrating your containers is as easy as:


kubectl edit deployment awesome-product

And replacing the container image with our new one:
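For instance, assuming we tagged the new release 2.0 (a hypothetical tag), the relevant line in the manifest becomes:

        image: awesomeproduct/awsprdct:2.0    # was :amazing

The same change can be done as a one-liner, if you prefer:

kubectl set image deployment/awesome-product awsprdct=awesomeproduct/awsprdct:2.0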

At this point, the Kube-master will start terminating the old containers while replacing them with new pods running the correct image.
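You can watch the rollout happen, and roll back if the new version misbehaves:

kubectl rollout status deployment/awesome-product    # watch the update progress
kubectl rollout undo deployment/awesome-product    # return to the previous version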

So, when you need to update your website, it’s done in literal minutes, simply by updating your .yaml deployment manifest. This is among the many beauties of container orchestration!

Conclusion

Sadly, we don’t have time to go into the various amazing things K8S can do, but know this: Kubernetes makes deploying and updating your containers a breeze.

It helps you prevent down-time (and thus financial and reputational loss), it helps keep everything secure, and if you’re using cloud-hosting services with Kube integration, setting up your own website – with load balancing and containers in mind – literally turns weeks of work into a few days.

We hope this article shed some light on why Kubernetes is awesome.

Kube Explain meme

Now imagine that you went ahead and contacted a bespoke software development firm, and they did all that work on AwesomeProduct™ for you.

Before you go, what’s your favourite way of dealing with micro-service architectures?

Stay classy tech and business nerds!