
OpenShift Security | Best Practices

Securing containerized environments is considerably different from securing a traditional monolithic application because of the inherent nature of the microservices architecture. We went from one component to many, and there is a clear difference in the attack surface and entry points to consider, so there is a lot to think about when it comes to OpenShift security best practices. In the past, the application stack had very few components, maybe just a cache, a web server, and a database. A network service simply allowed a source to reach the application; the sole purpose of the network was to provide endpoint reachability. As a result, the monolithic application had few entry points, for example, ports 80 and 443. Not every monolithic component was exposed to external access or required to accept requests directly, and we designed our networks around these facts.

 

Central Security Architecture

Therefore, we often see security enforcement in a fixed, central place in the network infrastructure. This could be, for example, a central security stack consisting of several security appliances, often referred to as a kludge of devices. As a result, the individual components within the application need not worry about carrying out any security checks because it happens centrally for them. With the common microservices architecture, on the other hand, the internal components are specifically designed to operate and accept requests independently, which brings huge benefits for scaling and deployment pipelines. However, each component may now have its own entry points and accept external connections. Therefore, each needs to be concerned with security individually and cannot rely on a central security stack to do this for it.

 

The Different Container Attack Vectors 

These changes have considerable consequences for security and for how you approach your OpenShift security best practices. The security principles still apply, and we are still concerned with reducing the blast radius, least privilege, and so on, but they need to be applied from a different perspective and to multiple new components in a layered approach. Security is never done in isolation. As the number of entry points to the system increases, the attack surface broadens, leading to several container attack vectors that are not seen with monolithic applications: attacks on the host, the images, the supply chain, and the container runtime. Not to mention, there is also a considerable increase in the rate of change for these types of environments; there is an old joke that a secure application is an application stack with no changes.

So when you make a change, you are potentially opening the door to a bad actor. Today's applications change considerably, sometimes several times per day in an agile stack. Unit tests, security tests, and other safety checks can reduce mistakes, but no matter how much preparation you do, whenever there is a change, there is a chance of a breach. So we have environmental changes that affect security, as well as some alarming defaults in how containers run, such as running as root with an excessive set of capabilities and privileges.

 

Challenges with Containers

Containers running as root

So, as you know, containers run as root by default, share the kernel of the host OS, and the container processes are visible from the host. This in itself is a considerable security risk when a container compromise occurs. If a security vulnerability in the container runtime allows a container escape and the application runs as root inside the container, the attacker can become root on the underlying host. And if a bad actor gains access to the host with the right privileges, they can compromise all of the host's containers.
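One Kubernetes-level mitigation is to declare in the pod spec that the container must not run as root. The snippet below is a minimal sketch, not a prescription; the image name and UID are placeholders, and with runAsNonRoot set, the kubelet refuses to start the container if the image would run as UID 0.

```yaml
# Minimal sketch: refuse to run the container as root.
# The image name and UID 1001 are placeholder values.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  securityContext:
    runAsNonRoot: true   # kubelet rejects the container if it would start as UID 0
    runAsUser: 1001      # explicit non-root UID (OpenShift can also assign one per project)
  containers:
    - name: myapp
      image: registry.example.com/myapp:latest
      ports:
        - containerPort: 8080   # listen on an unprivileged port
```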

 

Risky Configuration

Containers often run with far more privileges and capabilities than they need to do their job. As a result, we need to consider what privileges a container has and whether it runs with any capabilities it does not need. Some of the capabilities a container may be granted by default fall under risky configurations and should be avoided. In particular, keep an eye on CAP_SYS_ADMIN: this flag grants access to an extensive range of privileged activities. The container has isolation boundaries by default with namespaces and control groups (when configured correctly). However, granting a container excessive capabilities weakens the isolation between the container, the host, and the other containers on the same host, essentially dissolving the container's ring fence.
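As a sketch of the opposite approach, the container securityContext below drops all capabilities rather than inheriting the runtime defaults. The field names are standard Kubernetes; the re-added NET_BIND_SERVICE capability is only an example of selectively restoring what a workload genuinely needs.

```yaml
# Sketch of a least-capability container securityContext.
# The specific capability choices are illustrative, not a recommendation for every workload.
securityContext:
  allowPrivilegeEscalation: false   # block setuid-style escalation inside the container
  capabilities:
    drop:
      - ALL                         # start from zero capabilities...
    add:
      - NET_BIND_SERVICE            # ...and re-add only what is strictly required (example)
  # Never add SYS_ADMIN; it grants an extensive range of privileged operations.
```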

Starting OpenShift Security 

OpenShift overcomes many of the default security risks that come with running containers, and it does much of this out of the box. If you are looking for further information on securing an OpenShift cluster, check out my Pluralsight course on Securing an OpenShift Cluster, or this short YouTube video on OpenShift Security Best Practices.

OpenShift Container Platform (formerly known as OpenShift Enterprise), or OCP, is Red Hat's on-premises private platform-as-a-service (PaaS) offering. OpenShift is based on the Origin open-source project and is a Kubernetes distribution. Because the foundation of the OpenShift Container Platform is Kubernetes, it shares much of the same networking technology along with some enhancements. However, as you know, Kubernetes is a complex beast and can be lacking on its own when you try to secure clusters. OpenShift does a good job of taking Kubernetes and wrapping it in a layer of security, such as Security Context Constraints (SCCs), which give your cluster a good security baseline.

 


OpenShift Security: Security Context Constraints

When your application is deployed to OpenShift, the default security model enforces that it runs under a Unix user ID unique to the project you deploy it to, and images are prevented from running as the Unix root user. Containers are not allowed to run as root by default, which is a big win for security. SCCs also let you set different restrictions and security configurations for pods. So, instead of allowing your image to run as root, which is a considerable security risk, run it as an arbitrary user: specify an unprivileged USER in the image, set the appropriate permissions on files and directories, and configure your application to listen on unprivileged ports.
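To see where that per-project user ID comes from, you can look at the annotations OpenShift places on the project's namespace. The excerpt below is illustrative (the project name and numeric ranges are placeholders); the openshift.io/sa.scc.uid-range annotation defines the UID block that the restricted SCC assigns container users from.

```yaml
# Excerpt of a project namespace as OpenShift annotates it (values are illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: my-project
  annotations:
    openshift.io/sa.scc.uid-range: "1000620000/10000"           # UID range allocated to this project
    openshift.io/sa.scc.supplemental-groups: "1000620000/10000" # matching group ID range
    openshift.io/sa.scc.mcs: "s0:c25,c10"                       # SELinux MCS labels for the project
```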

 

SCC Default Access

Security context constraints let you drop privileges by default, which is important and still the best practice. Red Hat OpenShift SCCs ensure that, by default, no privileged containers run on OpenShift worker nodes, and access to the host network and host process IDs is denied by default. Another big win for security. Users with the required permissions can adjust the default SCC policies to be more permissive. When considering SCCs, think of the SCC admission controller as restricting pod access, similar to how RBAC restricts user access. SCCs are cluster-level resources that define what resources can be accessed by pods and provide an additional level of control over their behavior.
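Because SCC admission ties into RBAC, letting a workload use a different SCC is itself an RBAC decision. The sketch below assumes a hypothetical service account named my-app-sa in a project called my-project; it grants that service account the "use" verb on a specific SCC (here the built-in nonroot SCC) in the security.openshift.io API group.

```yaml
# Sketch: allow one service account to be admitted under the "nonroot" SCC.
# "my-app-sa" and "my-project" are placeholder names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-nonroot-scc
  namespace: my-project
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["nonroot"]   # which SCC may be used
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-sa-use-nonroot
  namespace: my-project
subjects:
  - kind: ServiceAccount
    name: my-app-sa
    namespace: my-project
roleRef:
  kind: Role
  name: use-nonroot-scc
  apiGroup: rbac.authorization.k8s.io
```

The oc CLI offers a shortcut for the same result: oc adm policy add-scc-to-user nonroot -z my-app-sa.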

 

Restricted Security Context Constraints (SCCs)

There are a few SCCs available by default, and you may have heard of the restricted SCC. By default, all pods, except those for builds and deployments, use a default service account assigned the restricted SCC, which doesn't allow privileged containers, that is, containers running as the root user or listening on privileged ports (ports below 1024).
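A quick way to confirm which SCC a pod was actually admitted under is to check the openshift.io/scc annotation that OpenShift writes onto the pod. The excerpt below is illustrative; the pod name is a placeholder.

```yaml
# Pod metadata excerpt (illustrative): OpenShift records the SCC used at admission time.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-1-abcde             # placeholder pod name
  annotations:
    openshift.io/scc: restricted  # the SCC this pod was validated against
```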

SCCs can be used to manage the following (an illustrative SCC showing these fields follows the list):

 

  • Privileged mode: this setting allows or denies a container running in privileged mode. As you know, privileged mode bypasses restrictions such as control groups, Linux capabilities, and secure computing (seccomp) profiles.
  • Privilege escalation: this setting enables or disables privilege escalation inside the container (all the privilege-escalation flags).
  • Linux capabilities: this setting allows the addition or removal of certain Linux capabilities.
  • Seccomp profile: this setting controls which secure computing profiles may be used by a pod.
  • Read-only root filesystem: this setting makes the root filesystem read-only.
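To show how those settings map onto an SCC object, here is an illustrative SCC (not one of the built-ins) that locks each of them down. The field names are real SCC fields; the name and the exact values are examples only.

```yaml
# Illustrative SCC locking down the settings listed above (not a built-in SCC).
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-locked-down
allowPrivilegedContainer: false      # privileged mode
allowPrivilegeEscalation: false      # privilege escalation
allowedCapabilities: []              # no extra Linux capabilities may be added
requiredDropCapabilities: ["ALL"]    # force containers to drop all capabilities
seccompProfiles: ["runtime/default"] # permitted secure computing profiles (illustrative value)
readOnlyRootFilesystem: true         # root filesystem is read-only
runAsUser:
  type: MustRunAsRange               # UID must come from the project's assigned range
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
groups: []
```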

 

The goal is to assign the fewest capabilities a pod needs to function fully. This least-privileged model ensures that pods can't perform tasks on the system that aren't related to their application's proper function. The default value for the privileged option is false; setting it to true is the same as giving the pod the capabilities of the root user on the system. Although doing so shouldn't be common practice, privileged pods can be useful under certain circumstances.
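For those rare cases, the request looks like the sketch below; admission rejects it unless the pod's service account is allowed to use an SCC, such as the built-in privileged SCC, that permits it. The names and image are placeholders.

```yaml
# Sketch of a privileged container request (placeholder names and image).
# Only admitted if the service account is granted an SCC that allows privileged containers.
apiVersion: v1
kind: Pod
metadata:
  name: node-agent
spec:
  serviceAccountName: node-agent-sa   # must be bound to a permissive SCC (e.g. privileged)
  containers:
    - name: agent
      image: registry.example.com/node-agent:latest
      securityContext:
        privileged: true              # effectively root on the node; avoid unless required
```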

 
