The zero trust framework for networking and security is here for a very good reason. Bad actors come in many forms, ranging from opportunistic and targeted attackers to state-level adversaries, and all are well prepared to find ways to penetrate a hybrid network.
As a result, there is now a compelling reason to implement the zero trust model for networking and security. The software-defined perimeter (SDP) is heavily promoted as a replacement for the virtual private network (VPN), and in some cases firewalls, for its ease of use and end user experience. It also provides a solid security framework, using dynamic one-to-one tunnels: per application, per user. This offers segmentation at the micro level, providing a secure enclave for entities requesting network resources. These enclaves are known as micro-perimeters.
For a general overview of zero trust concepts and projects such as SDP, you can check out my course on ZERO TRUST NETWORKING: THE BIG PICTURE.
Authentication and Authorization
So when it comes to creating a zero trust network, what are the ways to authenticate and authorize?
Well, firstly, trust is the main element within a zero trust network. Therefore, mechanisms that tie authentication and authorization to trust at the device, user, and application level are a must for zero trust environments.
When something presents itself to a zero trust network, it is required to go through a number of security stages before access is granted. Essentially, the entire network is dark, meaning that resources drop all incoming traffic by default, providing an extremely secure posture. Building on this simple premise, a more secure, robust, and dynamic network of geographically dispersed services and clients can be created.
Before we go any further, it’s important to understand the difference between authentication and authorization. Upon examination of an end host in the zero trust world, we have a device and a user, which together form an agent.
The device and the user are each authenticated before the agent is formed: the device first, then the user. After these steps, authorization is performed against the agent. Authentication means confirming an identity, while authorization means granting access to the system.
Generally, with most zero trust vendors, the agent is only formed once valid device and user authentication have been carried out. The authentication methods used to validate the device and the user can also be separate. A device that needs to identify itself to the network can be authenticated with X.509 certificates, while a user can be authenticated by other means, such as a lookup against an LDAP server, if the zero trust solution has that as an integration point. The authentication methods for devices and users don't have to be tightly coupled, which provides flexibility.
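The ordering described above — authenticate the device, then the user, form the agent, and only then authorize — can be sketched in a few lines of Python. Everything here is a hypothetical placeholder standing in for whatever the zero trust solution actually integrates with (X.509 validation for devices, LDAP for users); it is a conceptual sketch, not a vendor API.

```python
# Sketch of agent formation in a zero trust network.
# All check functions are hypothetical stand-ins, not a real API.

def authenticate_device(device):
    # placeholder: in practice, validate the device's X.509 certificate
    return device.get("cert_valid", False)

def authenticate_user(user):
    # placeholder: in practice, check the user against e.g. an LDAP server
    return user.get("ldap_ok", False)

def form_agent(device, user):
    # the agent only exists once BOTH authentications succeed,
    # device first and then user
    if not authenticate_device(device):
        raise PermissionError("device authentication failed")
    if not authenticate_user(user):
        raise PermissionError("user authentication failed")
    return {"device": device, "user": user}

def authorize(agent, resource):
    # authorization is evaluated against the combined agent,
    # never against the device or the user alone
    return resource in agent["device"].get("allowed", [])

agent = form_agent({"cert_valid": True, "allowed": ["crm"]},
                   {"ldap_ok": True})
print(authorize(agent, "crm"))   # True
```

The key design point the sketch captures is that authorization never sees a bare device or user: it only ever operates on the agent that authentication produced.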
IP addresses are used for connectivity, not for authentication, and they don't have any fields in which to implement authentication; that must be handled higher up the stack. So we need something else to define identity, and that is the role of certificates.
X.509 is a digital certificate standard that allows identity to be verified through a chain of trust and is commonly used to secure device authentication. X.509 certificates can carry a wealth of information within their standard fields, which can fulfill the requirement to carry very specific metadata. To provide identity and also bootstrap encrypted communications, X.509 certificates use two cryptographic keys: a mathematically related pair consisting of a public key and a private key. The most common are RSA (Rivest–Shamir–Adleman) key pairs.
The private key is secret and held by the owner of the certificate, while the public key, as the name suggests, is not secret and is freely distributed. The public key can encrypt data that only the private key can decrypt, and vice versa. Without the correct private key, it is not possible to decrypt data that was encrypted using the public key.
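As a toy illustration of that key relationship, here is RSA arithmetic with tiny primes. This is purely for intuition about how the two keys relate mathematically; real key pairs are 2048 bits or more and are always generated by a vetted crypto library, never hand-rolled like this.

```python
# Toy RSA with tiny primes -- shows the public/private key
# relationship only. Never hand-roll RSA in production.
p, q = 61, 53
n = p * q                     # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent (coprime with phi)
d = pow(e, -1, phi)           # private exponent: e*d ≡ 1 (mod phi)

m = 42                        # message, encoded as an integer < n
c = pow(m, e, n)              # encrypt with the PUBLIC key
assert pow(c, d, n) == m      # only the PRIVATE key recovers it
```

Decryption works because m^(e·d) ≡ m (mod n); possession of the private exponent d is, in essence, what "holding the private key" means.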
Private Key Storage
Before we discuss the public key, let us examine how we secure the private key. If bad actors get their hands on the private key, it's lights out for device authentication.
Once the device presents a signed certificate, one way to secure the private key would be to configure access rights on the key. However, if a compromise occurs, we are left in the undesirable world of elevated access exposing the unprotected key. By far the best way to secure and store device private keys is to use a cryptoprocessor such as a trusted platform module (TPM).
The cryptoprocessor is essentially a chip that is embedded in the device. The private keys are bound to the hardware without ever being exposed to the system's operating system, which is far more vulnerable to compromise than the actual hardware. The TPM binds the private key to the hardware, creating very robust device authentication.
Public Key Infrastructure (PKI)
How do we ensure that we have the right public key? This is the role of the public key infrastructure (PKI). There are many types of PKIs, with certificate authorities (CAs) being the most popular. In cryptography, a certificate authority is an entity that issues digital certificates.
A certificate is just a pointless piece of paper unless it is somehow trusted. This is done by digitally signing the certificate to endorse its validity. It is the responsibility of the certificate authority to ensure all details of the certificate are correct before signing it. PKI is a framework that defines a set of roles and responsibilities used to securely distribute and validate public keys in an untrusted network. For this, a PKI leverages a registration authority (RA).
You may be wondering what the difference is between an RA and a CA. The RA interacts with subscribers to provide CA services, and it is subsumed in the CA, which takes total responsibility for all actions of the RA. The registration authority is responsible for accepting requests for digital certificates and authenticating the entity making the request. This binds the identity to the public key embedded in the certificate, which is cryptographically signed by the trusted third party.
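To make "cryptographically signed by the trusted third party" concrete, here is a toy sketch using tiny-prime RSA arithmetic: the CA hashes the certificate contents and raises that hash to its private exponent, and any relying party can check the result using only the CA's public key. All the numbers and the certificate string are illustrative; real CAs use large keys and padded signature schemes such as RSA-PSS, never raw arithmetic like this.

```python
import hashlib

# Toy CA signature over certificate contents -- tiny primes and a
# raw hash, for illustration only; not a real signature scheme.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # CA public exponent (published)
d = pow(e, -1, phi)            # CA private exponent (kept secret)

def digest(cert_body: bytes) -> int:
    # hash of the certificate contents, reduced into the toy modulus
    return int.from_bytes(hashlib.sha256(cert_body).digest(), "big") % n

# hypothetical certificate body: an identity bound to a public key
cert = b"subject=device42;pubkey=<device public key>"
signature = pow(digest(cert), d, n)   # CA signs with its PRIVATE key

# any relying party verifies with only the CA's PUBLIC key (e, n)
assert pow(signature, e, n) == digest(cert)
```

This is the chain of trust in miniature: you don't need to trust the certificate itself, only the CA whose public key verifies the signature over it.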
However, certificate authorities are not bulletproof against attack. Back in 2011, DigiNotar suffered a security breach. The bad actor took complete control of all eight of its certificate-issuing servers and issued rogue certificates, some of which may never have been identified. It is estimated that over 300,000 users had their private data exposed by rogue certificates. DigiNotar's certificates were immediately blacklisted by browsers, but the incident highlights the risks of relying on a third party.
While public key infrastructure backing X.509 certificates is used at large on the public internet, that approach is not recommended for zero trust networks. At the end of the day, you are still relying on a third party for a critically important task. For a zero trust approach to networking and security, you should look to implement a private PKI system.
If you are not looking for a fully automated process, you could also implement a time-based one-time password (TOTP). This allows for human control over the signing of certificates. Keep in mind that a great deal of trust must be placed on whoever is responsible for this step.
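For intuition, here is a sketch of how a TOTP is computed, following RFC 6238 (which layers the time-step counter on top of RFC 4226 HOTP), using only the Python standard library. The SHA-1 hash, 30-second step, and 6-digit length are the common defaults, not requirements.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))   # -> 94287082
```

Both the signer and the requester derive the same short-lived code from a shared secret and the current time, which is what makes it usable as a human-controlled gate on certificate signing.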