
Hands On Kubernetes

Welcome to the world of Kubernetes, where container orchestration becomes seamless and efficient. In this blog post, we will delve into the ins and outs of Kubernetes, exploring its key features, benefits, and its role in modern application development. Kubernetes, often referred to as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a robust infrastructure for running and coordinating containers across a cluster of hosts, simplifying the management of complex distributed systems.

Kubernetes offers a plethora of powerful features that make it a go-to choice for managing containerized applications. Some notable features include:

a) Scalability and High Availability: Kubernetes allows you to scale your applications effortlessly by dynamically adjusting the number of containers based on the workload. It also ensures high availability by automatically distributing containers across multiple nodes, minimizing downtime.

b) Service Discovery and Load Balancing: With Kubernetes, services are given unique DNS names and can be easily discovered by other services within the cluster. Load balancing is seamlessly handled, distributing incoming traffic across the available containers.

c) Self-Healing: Kubernetes continuously monitors the health of containers and automatically restarts failed containers or replaces them if they become unresponsive. This ensures that your applications are always up and running.

To embark on your Kubernetes journey, you need to set up a Kubernetes cluster. This involves configuring a master node to manage the cluster and adding worker nodes that run the containers. There are various tools and platforms available to simplify this process, such as Minikube, kubeadm, or cloud providers like Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS).
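If you just want to experiment locally first, a single-node cluster is enough. A minimal sketch, assuming Minikube and kubectl are already installed:

    # Start a local single-node cluster and confirm it is reachable
    minikube start
    kubectl get nodes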

Once your Kubernetes cluster is up and running, you can start deploying and managing applications. Kubernetes provides powerful abstractions called Pods, Services, and Deployments. Pods are the smallest unit in Kubernetes, representing one or more containers that are deployed together on a single host. Services provide a stable endpoint for accessing a group of pods, and Deployments enable declarative updates and rollbacks of applications.
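To make these abstractions concrete, here is a minimal Deployment sketch; the name, image, and replica count are illustrative values rather than anything prescribed by this post:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web            # illustrative name
    spec:
      replicas: 3                # desired number of Pod replicas
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web       # label a Service can use to find these Pods
        spec:
          containers:
          - name: web
            image: nginx:1.25    # illustrative image
            ports:
            - containerPort: 80

Applying this manifest with kubectl apply -f deployment.yaml creates three identical Pods, and Kubernetes keeps that count steady through failures and rollouts.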

Kubernetes has revolutionized the way we deploy and manage containerized applications, providing a scalable and resilient infrastructure for modern development. By automating the orchestration of containers, Kubernetes empowers developers and operators to focus on building and scaling applications without worrying about the underlying infrastructure. Whether you are a developer or an IT professional, mastering Kubernetes will enhance your skills and simplify the deployment and management of your applications.

Highlights: Hands On Kubernetes

Container-based applications

Kubernetes is an open-source orchestrator for containerized applications. Google developed it based on its experience deploying scalable, reliable container systems via application-oriented APIs.

Kubernetes was open-sourced in 2014 and has since grown into one of the world’s largest and most popular open-source projects. Most public clouds now offer its API as the foundation for building cloud-native applications. Cloud-native developers can use it at any scale, from a cluster of Raspberry Pis to a data center full of the latest machines, and it can be used to build and deploy reliable distributed systems.

Reliable and scalable distributed system

You may wonder what we mean by “reliable, scalable distributed systems” as more and more services are delivered via APIs over the network. Many of these APIs are delivered by distributed systems, in which the various components are spread across multiple machines and coordinated through a network. Because we increasingly rely on these systems (for example, to find directions to the nearest hospital), it is important that they are highly reliable.

A failure in one part of the system should not bring down the whole: these systems must maintain availability even during software rollouts and maintenance procedures. And because ever more people are online and using these services, they must be highly scalable, keeping up with growing usage without a redesign of the distributed system that implements them. Ideally, the capacity of your application is increased (and decreased) automatically to maximize efficiency.


Cloud Platform

Google Cloud Platform has a ready-made managed service, Google Kubernetes Engine (formerly Google Container Engine), enabling the deployment of containerized environments with Kubernetes. The following post illustrates hands-on Kubernetes with Pods and Labels. Pods and Labels are the main differentiators between Kubernetes and container schedulers such as Docker Swarm. A group of one or more containers is called a Pod, and the containers in a Pod act together. Labels are key-value pairs assigned to Pods so that they can be targeted and organized into groups.

There are many reasons people come to use containers and container APIs like Kubernetes, but we believe they can all be traced back to one of these benefits:

  1. Development velocity
  2. Scaling (of both software and teams)
  3. Abstracting your infrastructure
  4. Efficiency
  5. Cloud-native ecosystem

You may find the following information helpful before you proceed. 

  1. Kubernetes Security Best Practice
  2. OpenShift Networking
  3. Kubernetes Network Namespace
  4. Neutron Network 
  5. Service Chaining



Hands on Kubernetes

Key Hands on Kubernetes discussion points:

  • An introduction to Kubernetes with a hands-on approach.

  • Discussion of Kubernetes cluster creation.

  • Kubernetes networking and the components used.

  • Details on container creation.

  • A final note on Kubernetes services and labels.

Back to basics with the Kubernetes Networking Model

The Kubernetes networking model natively supports multi-host cluster networking. The work unit in Kubernetes is called a pod. A pod includes one or more containers, which are consistently scheduled and run “together” on the same node. This connectivity allows individual service instances to be separated into distinct containers. Pods can communicate with each other by default, regardless of which host they are deployed on.
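As a sketch of how containers within one Pod relate, the two containers below share the Pod’s network namespace and reach each other on localhost; the names and images are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      containers:
      - name: web
        image: nginx              # serves on port 80
        ports:
        - containerPort: 80
      - name: probe-sidecar
        image: busybox            # polls the web container over localhost
        command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]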

How does Kubernetes work?

At its core, Kubernetes relies on a master-worker architecture to manage and control containerized applications. The master node acts as the brain of the cluster, overseeing and coordinating the entire system. It keeps track of all the resources and defines the cluster’s desired state.

The worker nodes, on the other hand, are responsible for running the actual containerized applications. They receive instructions from the master node and maintain the desired state. If a worker node fails, Kubernetes automatically redistributes the workload to other available nodes, ensuring high availability and fault tolerance.

Key Features and Benefits of Kubernetes:

1. Scalability: Kubernetes allows organizations to effortlessly scale their applications by automatically adjusting the number of containers based on resource demand. This ensures optimal utilization of resources and enhances performance.

2. Fault Tolerance: Kubernetes provides built-in mechanisms for handling failures and ensuring high availability. By automatically restarting failed containers or redistributing workloads, Kubernetes minimizes the impact of failures on the overall system.

3. Service Discovery and Load Balancing: Kubernetes simplifies service discovery by providing a built-in DNS service. It also offers load-balancing capabilities, ensuring traffic is evenly distributed across containers, enhancing performance and reliability (a lookup sketch follows this list).

4. Self-Healing: Kubernetes continuously monitors the state of containers and automatically restarts or replaces them if they fail. This self-healing capability reduces downtime and improves the overall reliability of applications.

5. Infrastructure Agnostic: Kubernetes is designed to be infrastructure agnostic, meaning it can run on any cloud provider or on-premises infrastructure. This flexibility allows organizations to avoid vendor lock-in and choose the deployment environment that best suits their needs.
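As a concrete illustration of the service discovery mentioned in point 3, a Service is typically reachable inside the cluster at a predictable DNS name of the form <service>.<namespace>.svc.cluster.local, assuming the standard cluster DNS add-on is running; the service name below is illustrative:

    # Run from any Pod in the cluster
    nslookup my-service.default.svc.cluster.local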

Kubernetes Cluster Creation

The first step for Kubernetes basics and deploying a containerized environment is to create a Container Cluster. This is the mothership of the application environment. The Cluster acts as the foundation for all application services. It is where you place instance nodes, Pods, and replication controllers. By default, the Cluster is placed on a Default Network.

The default container networking construct has a single firewall. Automatic routes are installed so that each host can communicate internally. Cross-communication is permitted by default without explicit configuration. Any inbound traffic sourced externally to the Cluster must be specified with service mappings and ingress rules. By default, it will be denied. 

Container Clusters are created through the command-line tool gcloud or the Cloud Platform console. The following example displays the creation of a cluster from the local command line. You must fill out a few details, including the Cluster name, Machine type, and number of nodes.
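A minimal sketch of the command-line route, assuming the gcloud SDK is installed and a project is configured; the cluster name, zone, machine type, and node count are illustrative values:

    # Create a three-node cluster (values are examples)
    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --machine-type n1-standard-1 \
        --num-nodes 3

    # Fetch credentials so kubectl can talk to the new cluster
    gcloud container clusters get-credentials my-cluster --zone us-central1-a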

The scale you want to achieve determines how many nodes you deploy. Google currently has a 60-day free trial with $300 worth of credits.


Once the Cluster is created, you can view the nodes assigned to the Cluster. For example, the extract below shows that we have three nodes with the status of Ready.
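The listing looks roughly like the following; node names and versions are illustrative:

    $ kubectl get nodes
    NAME                                  STATUS   ROLES    AGE   VERSION
    gke-my-cluster-default-pool-1a2b-x1   Ready    <none>   3m    v1.27.3-gke.100
    gke-my-cluster-default-pool-1a2b-x2   Ready    <none>   3m    v1.27.3-gke.100
    gke-my-cluster-default-pool-1a2b-x3   Ready    <none>   3m    v1.27.3-gke.100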


Kubernetes Networking 101

Hands-on Kubernetes: Kubernetes basics and Kubernetes cluster nodes

Nodes are the building blocks within a cluster. Each node runs a Docker runtime and hosts a kubelet agent. The Docker runtime is what builds and runs the Docker containers. The type and number of node instances are selected during cluster creation.

Select the node instance based on the scale you would like to achieve. After creation, you can increase or decrease the size of your Cluster with corresponding nodes. If you increase instances, new instances are created with the same configuration as existing ones. When reducing the size of a cluster, the replication controller reschedules the Pods onto the remaining instances.  

Once created, issue the following CLI commands to view the Cluster, nodes, and other properties. The earlier extract showed a small cluster of machine type “n1-standard-1” with three nodes; if unspecified, these are the defaults. Once the Cluster is created, the kubectl command-line tool creates and manages resources.
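A sketch of those commands, assuming the gcloud SDK and kubectl are configured for the cluster; the node name is a placeholder:

    # List clusters in the current project
    gcloud container clusters list

    # Inspect nodes and their properties
    kubectl get nodes -o wide
    kubectl describe node <node-name>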


Hands-on Kubernetes: Container creation

Once the Cluster is created, we can continue to create containers. Containers are isolated units sealing individual application entities. We have the option to create single-container or multi-container Pods: single-container Pods run one container, while multi-container Pods run more than one container per Pod.

A replication controller monitors Pod activity and ensures the correct number of Pod replicas. It constantly monitors and dynamically resizes. Even within a single container Pod design, a replication controller is recommended.

When creating a Pod, the Pod’s name is applied to the replication controller. The following example displays the creation of a container from a Docker image. We then SSH to a node and view the running instances with the docker ps command.
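A sketch of that workflow; the exact behaviour of kubectl run depends on its version (older releases created a replication controller or Deployment to manage replicas, while recent ones create a single bare Pod, where a Deployment manifest would be used instead), and the node name is a placeholder:

    # Create containers from a Docker image (legacy kubectl behaviour shown;
    # the --replicas flag has been removed from recent kubectl versions)
    kubectl run my-nginx --image=nginx --replicas=3 --port=80

    # SSH to one of the cluster nodes and list its running containers
    gcloud compute ssh <node-name>
    docker ps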


A container’s filesystem lives only as long as the container is active. You may want container files to survive a restart or crash; for example, with MySQL, you may want the data files to be persistent. For this purpose, you mount persistent disks to the container.

Persistent disks exist independently of your instance, and data remains intact regardless of the instance state. They enable the application to preserve the state during activities such as restarting and shutting down.
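A minimal sketch of mounting a GCE persistent disk into a Pod; it assumes the disk was created beforehand (for example, with gcloud compute disks create), and all names are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql   # data survives container restarts
      volumes:
      - name: mysql-data
        gcePersistentDisk:
          pdName: my-data-disk        # pre-created disk (illustrative name)
          fsType: ext4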

Hands-on Kubernetes: Service and labels

Services provide an abstraction layer that lets application layers interact with Pods and containers. Services map ports on a node to ports on one or more Pods, and they provide a load-balancing style function across Pods by identifying Pods with labels.

With a service, you tell it which Pods to proxy to by identifying each Pod with a label key-value pair. This is conceptually similar to an internal load balancer.

The critical values in the service configuration file are the ports field, the selector, and the labels. The port field is the port exposed on the cluster node, and the target port is the port exposed on the Pod. The selector is the label key-value pair that identifies which Pods to target.

All Pods with this label are targeted. For example, a service named my-app resolves to TCP port 9376 on any Pod with the app=example label, and the service can be accessed through port 8765 on any of the nodes’ IP addresses.
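A sketch of a service file matching those numbers; the manifest is illustrative rather than taken from the original post:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: example        # targets any Pod carrying this label
      ports:
      - protocol: TCP
        port: 8765          # port exposed on the service
        targetPort: 9376    # port exposed on the Pod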


For service abstraction to work, the Pods we create must match the label and port configuration; if the correct labels are not assigned, nothing works. A flag also specifies a load-balancing operation, which uses a single IP address to distribute traffic to all nodes.

The type: LoadBalancer field creates an external IP on which the Pods accept traffic. External traffic hits the public IP address and is forwarded to a port: the port is the service port exposed on the cluster IP, and the target port is the port targeted on the Pods. Ingress rules permit inbound connections from external destinations to the Cluster; an Ingress is a collection of such rules.
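A sketch of the external variant, reusing the illustrative names from above:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-external
    spec:
      type: LoadBalancer    # provisions an external IP on supporting clouds
      selector:
        app: example
      ports:
      - port: 80            # port on the external IP
        targetPort: 9376    # port targeted on the Pods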

Kubernetes has revolutionized the way organizations deploy and manage containerized applications. Its ability to automate and streamline container orchestration has made it the go-to solution for modern application development. By leveraging Kubernetes, organizations can achieve greater scalability, fault tolerance, and agility in their operations. As the containerization trend continues to grow, Kubernetes is poised to play an even more significant role in the future of software development and deployment.

 

Summary: Hands On Kubernetes

Kubernetes, also known as K8s, is a powerful container orchestration platform that has revolutionized modern applications’ deployment and management. Behind the scenes, Kubernetes consists of several key components, each playing a crucial role in its functioning. This blog post delved into these components, unraveling their purpose and interplay within the Kubernetes ecosystem.

Master Node

The Master Node serves as the brain of the Kubernetes cluster and is responsible for managing and coordinating all activities. It comprises several components, including the API server, controller manager, and etcd. The API server acts as the central hub for communication, while the controller manager ensures the desired state and performs actions accordingly. Etcd, a distributed key-value store, maintains the cluster’s configuration and state.

Worker Node

Worker Nodes are the workhorses of the Kubernetes cluster and are responsible for running applications packaged in containers. Each worker node hosts multiple pods, which encapsulate one or more containers. Key components found on worker nodes include the kubelet, kube-proxy, and container runtime. The kubelet interacts with the API server, ensuring that containers are up and running as intended. Kube-proxy facilitates network communication between pods and external resources. The container runtime, such as Docker or containerd, handles the execution and management of containers.

Scheduler

The Scheduler component is pivotal in determining where and when pods are scheduled to run across the worker nodes. It considers various factors such as resource availability, affinity, anti-affinity rules, and user-defined requirements. By intelligently distributing workloads, the scheduler optimizes resource utilization and maintains high availability.

Controllers

Controllers are responsible for maintaining the system’s desired state and performing necessary actions to achieve it. Kubernetes offers a wide range of controllers, including the Replication Controller, ReplicaSet, Deployment, StatefulSet, and DaemonSet. These controllers ensure scalability, fault tolerance, and self-healing capabilities within the cluster.

Networking

Networking in Kubernetes is a complex subject, with multiple components working together to provide seamless communication between pods and external services. Key elements include the Container Network Interface (CNI), kube-proxy, and Ingress controllers. The CNI plugin enables container-to-container communication, while kube-proxy handles network routing and load balancing. Ingress controllers provide an entry point for external traffic and perform request routing based on defined rules.
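As a sketch of such rules, the minimal Ingress below routes all traffic for one host to a backing service; the host and service names are illustrative, and an Ingress controller is assumed to be installed:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8765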

Conclusion:

In conclusion, understanding the various components of Kubernetes is essential for harnessing its full potential. The Master Node, Worker Node, Scheduler, Controllers, and Networking components work harmoniously to create a resilient, scalable, and highly available environment for containerized applications. By comprehending how these components interact, developers and administrators can optimize their Kubernetes deployments and unlock the true power of container orchestration.