Hands on – Kubernetes Basics
Google Cloud Platform has a ready-made service, Google Container Engine, enabling the deployment of containerized environments with Kubernetes. The following post illustrates Kubernetes basics with Pods and Labels. Pods and Labels act as the main differentiators between Kubernetes and other container schedulers such as Docker Swarm. A Pod is a group of one or more containers that act together as a unit. Labels are key-value pairs assigned to Pods for specific targeting, organising Pods into groups. For an introduction to containerized environments, kindly visit my post on container-based virtualization and Kubernetes – Container Scheduler.
Kubernetes Cluster Creation
The first step in deploying a containerized environment is to create a Container Cluster. This is the mother ship of the application environment: the cluster acts as the foundation for all application services, and it is here you place instance nodes, Pods and replication controllers. By default, the cluster is placed on a default network. The default network has a single firewall, and automatic routes are installed so that each host can communicate internally. Cross communication is permitted by default without explicit configuration. Any inbound traffic sourced externally to the cluster must be explicitly permitted with service mappings and ingress rules; by default, it is denied.
Container Clusters are created through the gcloud command-line tool or the Cloud Platform console. The following diagrams display the creation of a cluster on the Cloud Platform and the local command line. You need to fill out a few details, including the cluster name, machine type and number of nodes. The scale at which you can build is determined by how many nodes you can deploy. Google currently has a 60-day free trial with $300 worth of credits.
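As a sketch, the command-line route looks roughly like this. The cluster name and zone below are placeholder values, and the commands assume an authenticated gcloud installation with an active GCP project, so they only run against a real account:

```shell
# Create a three-node cluster of n1-standard-1 machines
# (example name and zone; substitute your own).
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --machine-type n1-standard-1 \
    --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```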
Once the cluster is created you can view the nodes assigned to the cluster. The extract below shows that we have three nodes with the status of Ready.
Kubernetes Cluster Nodes
Nodes are the building blocks within a cluster. Each node runs a Docker runtime and hosts a Kubelet agent. The Docker runtime is what builds and runs the Docker containers. The node instance type and the number of nodes are selected during cluster creation. Select the node instance type based on the scale you would like to achieve. It’s possible to increase and decrease the size of your cluster, with corresponding nodes, after creation. If you are increasing the number of instances, any new instances are created with the same configuration as the existing ones. When decreasing the size of a cluster, the replication controller reschedules the Pods onto the remaining instances.
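Resizing can be sketched with the gcloud resize command (the cluster name and zone are the same placeholder values as before, and an active project is assumed):

```shell
# Grow the cluster to five nodes; new instances are created
# with the same configuration as the existing ones.
gcloud container clusters resize my-cluster --size 5 --zone us-central1-a

# Shrink back to three nodes; Pods on the removed instances
# are rescheduled onto the remaining ones by their controllers.
gcloud container clusters resize my-cluster --size 3 --zone us-central1-a
```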
Once created, issue the following CLI commands to view the cluster, nodes and other properties. From the screenshot above you can see that this is a small cluster of machine type “n1-standard-1” with 3 nodes. If unspecified, these are the defaults. Once the cluster is created, the kubectl command is used to create and manage resources.
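The inspection commands along these lines would be (again assuming the placeholder cluster name and a configured kubectl context):

```shell
# Describe the cluster and its properties.
gcloud container clusters describe my-cluster --zone us-central1-a

# List the nodes registered with the cluster and their status.
kubectl get nodes

# Show detailed information for a single node
# (substitute a real node name from the previous command).
kubectl describe node <node-name>
```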
Once the cluster is created, we can continue to create containers. Containers are isolated units sealing individual application entities. We have the option to create single-container Pods or multi-container Pods. Single-container Pods have one container, and multi-container Pods have more than one container per Pod. A replication controller monitors Pod activity and ensures the correct number of Pod replicas is running. It constantly monitors and dynamically resizes. Even with a single-container Pod design, it’s recommended to have a replication controller. When creating a Pod this way, the name of the Pod is applied to the replication controller.
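A minimal sketch of this workflow, using nginx as a stand-in image (with the kubectl versions of this era, `kubectl run` created a replication controller named after the Pod):

```shell
# Create a replication controller "myapp" maintaining two Pod replicas.
kubectl run myapp --image=nginx --replicas=2 --port=80

# Confirm the controller and its Pods are running.
kubectl get replicationcontrollers
kubectl get pods
```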
The following are the flags available to create a Pod. It is recommended to use a replication controller to create multi-container Pods. But if you wish to have a multi-container Pod without a replication controller, use the “create” command. The creation is called from a configuration file in YAML or JSON format.
- --image=IMAGE : the Docker container image to use for this container.
- --port=PORT : the port to expose on the container.
- --replicas=NUM : the number of replicated Pods to create. By default, one is created.
- --labels=KEY=VALUE : the labels to attach.
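For the multi-container case, a configuration file along these lines could be passed to `kubectl create -f pod.yaml`. The Pod name, labels and images here are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod        # illustrative name
  labels:
    app: myapp           # label used later for targeting
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```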
The following example displays the creation of a container from a Docker image. We then SSH to the hosting node and view the running instances with the docker ps command.
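A rough transcript of that inspection step (node names are placeholders, and the commands presuppose the running cluster from earlier):

```shell
# Find which node each Pod landed on.
kubectl get pods -o wide

# SSH to the node hosting the Pod.
gcloud compute ssh <node-name> --zone us-central1-a

# On the node, list the Docker containers it is running.
sudo docker ps
```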
A container’s filesystem lives only as long as the container is active. You may want container files to survive a restart or crash. For example, if you run a MySQL database, you may want its data files to persist. For this purpose, you mount persistent disks into the container. Persistent disks exist independently of your instance, and the data remains intact regardless of instance state. They enable the application to preserve state during activities such as restart and shutdown.
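As an illustration, a persistent disk created beforehand with `gcloud compute disks create mysql-disk --zone us-central1-a` (disk and Pod names are placeholders) could be mounted into a Pod with a gcePersistentDisk volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
    - name: mysql
      image: mysql
      volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql   # data here survives container restarts
  volumes:
    - name: mysql-storage
      gcePersistentDisk:
        pdName: mysql-disk            # the pre-created persistent disk
        fsType: ext4
```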
Services and Labels
To interact with Pods and containers, we use services, an abstraction layer providing connectivity between application layers. Services map ports on a node to ports on one or more Pods. They provide a load-balancing style function across a set of Pods by identifying Pods with labels. With a service, you tell it which Pods to proxy by identifying them with a label key-value pair. It’s conceptually similar to an internal load balancer. In the service configuration file, the important values are the ports field, the selector and the label. The port field is the port exposed on the cluster node, and the target port field is the port exposed on the Pod. The selector is the label key-value pair highlighting which Pods to target; all Pods with this label are targeted. For example, a service named myapp may target TCP port 9376 on any Pod with the app=example label, while the service itself is accessed through port 8765 exposed on any of the nodes’ IP addresses.
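Using the example values above (myapp, 8765, 9376 and app=example), the service configuration file would look roughly like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
    - port: 8765        # port exposed on the cluster
      targetPort: 9376  # port exposed on the Pod
  selector:
    app: example        # targets all Pods carrying this label
```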
For service abstraction to work, the Pods we create must match the label and port configuration of the service. If the correct labels are not assigned, nothing works. There is also a flag used to specify a load-balancing operation. This uses a single IP address to spray traffic across all nodes. The type: LoadBalancer flag creates an external IP on which the Pods accept traffic. External traffic hits a public IP address and is forwarded to a port. The port is the service port to expose on the cluster IP, and the target port is the port to target on the Pods. To permit inbound connections from external destinations to reach the cluster, ingress rules are used. An ingress is a collection of rules that allow inbound traffic to reach cluster services.
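A sketch of an externally reachable variant of the same service, reusing the illustrative ports and label from above with a hypothetical name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-public
spec:
  type: LoadBalancer    # provisions a single external IP for the service
  ports:
    - port: 80          # service port exposed on the external/cluster IP
      targetPort: 9376  # port to target on the Pods
  selector:
    app: example
```

After creation, `kubectl get services` shows the assigned external IP once the cloud load balancer has been provisioned.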