Network Based Security in a Cloudy World

I recently needed to authenticate to a service which required that I not only have a secret, but also that I come from a certain host. This became a real problem for me because I wasn’t sure where I was from, or where I would be. In fact, I could be in multiple places at any one time, and I didn’t want to be quarantined in a single location.

In more concrete terms, I was authenticating to the service using a Kerberos host principal, and the service required that the request come from the host named in that principal. In other words, when the service received my request it would do a reverse DNS lookup on the source IP address to verify that the request came from the host in my principal.

Kerberos usually doesn’t mind where you make requests from. If you have the secret, in the form of a username and password or a keytab, the service usually lets you in. However, in certain low-trust, high-stakes environments the extra layer of security afforded by a reverse DNS lookup is warranted.
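For example, obtaining a ticket with a keytab looks something like this (the keytab path, principal, and realm below are hypothetical):

$ kinit -kt /etc/krb5.keytab host/app01.example.com@EXAMPLE.COM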

This was a problem for me because I was running my application in the cloud, where multiple processes on multiple servers, each with a different IP address, would make the request. Furthermore, none of those servers had a public IP address; instead they were hiding behind a router which was using Network Address Translation (NAT).

Luckily for me, all the applications running on the servers behind this router were owned by my team. That is not always the case in cloud environments, where users can share resources, including servers, with other teams and sometimes other companies. Because the network was ours alone, I was able to add a DNS PTR record for the router’s IP address, allowing reverse DNS lookups of that IP to resolve to the host named in my Kerberos host principal. Now when the secure service did a reverse DNS lookup on my request, it was satisfied that the request came from a trusted machine (or router, in this case).
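You can check that the PTR record resolves as expected with a reverse lookup. A quick sketch, using a hypothetical router IP and hostname:

$ dig -x 203.0.113.10 +short
app-gateway.example.com.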

You might not always be so lucky as to have a private network which only you can deploy applications to. In that case you might find that your private network allows you to map a public IP address to a single node on the private network. You could then route outgoing (egress) traffic from your application through that node, as in the sketch below. As long as only you can deploy applications with that configuration, you avoid granting every machine in that network access to the secure service.
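On a Linux egress node, one way to achieve this is source NAT. A minimal sketch, assuming eth0 is the node’s public-facing interface and 203.0.113.10 is its mapped public IP (both hypothetical):

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10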

OpenShift build integrity during install and upgrade

How Red Hat deliver OpenShift Container Platform 4

Red Hat have made improvements in the way they deliver the installer, client binaries, and container images for OpenShift Container Platform 4. As you may know, the installer and client binaries are delivered to you via mirror.openshift.com while the container images are delivered from Quay.io, both Red Hat sites. You may also find these images mirrored on one of the Red Hat container registries such as registry.access.redhat.com or registry.redhat.io. Because these components can be obtained from a variety of locations, it’s important to know how to verify them as official to help keep your infrastructure secure.

Verifying the integrity of the installer and/or client binaries

All components associated with OpenShift Container Platform 4 are verified using SHA256 hashes, which are in turn signed with the Red Hat GPG release key. Starting with mirror.openshift.com, where you obtain the installer and client Golang binaries, you’ll notice a sha256sum.txt document that contains a hash of each of the installer and client binaries hosted there, along with its signature, sha256sum.txt.sig. If you obtain the installer and client binaries from access.redhat.com, you’ll still need to get the sha256sum.txt.sig from mirror.openshift.com. On a RHEL host, the key used to sign the sha256sum.txt is located on disk at ‘/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release’.

This is probably the best option for verifying the signature, assuming you trust the channel from which you obtained the RHEL bits. For example, if you provisioned RHEL in AWS EC2 using an official AMI, the channel is probably trustworthy. You can also obtain the official signing key from the Red Hat Product Security section of the customer portal.

Just be aware that if an attacker has already compromised your machine, the integrity of the official key could be compromised in transit from that page. You could also rely on your web of trust to verify the authenticity of the official Red Hat signing key.

Once you have a copy of the official Red Hat signing key and sha256sum.txt.sig locally, you can use the following commands to create a temporary keyring and verify the signature is valid.

$ gpg --no-default-keyring --keyring ./temp.keyring --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpg: key FD431D51: public key "Red Hat, Inc. (release key 2) <security@redhat.com>" imported
gpg: key 2FA658E0: public key "Red Hat, Inc. (auxiliary key) <security@redhat.com>" imported
gpg: Total number processed: 2
gpg:               imported: 2 (RSA: 1)
$ gpg --no-default-keyring --keyring ./temp.keyring --output sha256sum.txt sha256sum.txt.sig
gpg: Signature made Fri 22 Nov 2019 09:25:33 EST using RSA key ID FD431D51
gpg: Good signature from "Red Hat, Inc. (release key 2) <security@redhat.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 567E 347A D004 4ADE 55BA  8A5F 199E 2F91 FD43 1D51

Note that /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release contains two public keys: the official ‘release key 2’, and an older ‘auxiliary key’. At the time of writing, the OpenShift Container Platform 4 binaries are verified with the ‘release key 2’ key, not the ‘auxiliary key’.

This command writes a verified sha256sum.txt locally. You can use the SHA256 hashes in this file to verify the integrity of the installer and/or client binary. To do that, download the installer and/or client binaries, calculate a sha256sum locally, and check that it matches the hash listed in the verified sha256sum.txt file.
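For example, with the binaries downloaded into the same directory as the verified sha256sum.txt, you can let sha256sum do the comparison (the tarball name in the output below is illustrative):

$ sha256sum --check --ignore-missing sha256sum.txt
openshift-client-linux-4.2.9.tar.gz: OK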

Verifying the integrity of the container images

There are two methods to provision the infrastructure used by OpenShift Container Platform 4: Installer Provisioned Infrastructure (IPI) and User Provisioned Infrastructure (UPI). These two methods are explained in the installation section of the documentation. We’ll focus on the IPI method because it’s fully automated, and therefore less transparent than UPI. We’ll also use Amazon Web Services (AWS) as an example in this article, but the same concepts apply to the other cloud providers and OpenStack environments that are supported for IPI installations.

Establishing a Chain of Trust

The bootstrap machine is provisioned using a reference from the installer binary. “The installation creates an encrypted AMI for the bootstrap and control-plane machines. The encrypted AMI is copied from the RHCOS AMI uploaded by Red Hat to each availability zone. The encryption uses the default EBS key for your target account and region (aws kms describe-key --key-id alias/aws/ebs)”, from the upstream installer documentation. The bootstrap machine includes a reference to a release image by digest.

Referencing an image by digest maintains the integrity of the image contents because the digest is calculated from the image contents; it’s a form of content-addressable storage. In fact, if two registries implement the Docker Registry v2 schema and both host the same container image, the digest will be the same. That makes referring to images by digest an easy way for Red Hat to mirror container registries for resiliency, and also for customers to mirror OCP images themselves for offline installations or performance reasons.
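You can see the content addressing in action by resolving a release tag to its digest. A quick sketch with skopeo and jq (the digest shown is the 4.2.9 release image digest that appears again in the upgrade section below):

$ skopeo inspect docker://quay.io/openshift-release-dev/ocp-release:4.2.9 | jq -r '.Digest'
sha256:f28cbabd1227352fe704a00df796a4511880174042dece96233036a10ac61639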

The installer and release image always refer to images by digest. This makes it harder for an attacker to compromise an image at rest on the registry or in transit to the bootstrap node. If an attacker were to change the target image in any way, its digest would calculate to a different value from the one referred to by the release image. That is why the container-tools library used by OCP recalculates the digest locally after each pull.

Imagine a scenario where an attacker sets up a fake registry and intercepts connections from the installer to the official quay.io registry. They could redirect the installer to download their malicious images, but those images could not match the digests referred to by the release image. If we didn’t verify the digest of each image locally, we would be vulnerable to this attack. Because OpenShift Container Platform 4 uses the container-tools library, it throws an error if the digest of the pulled container image doesn’t match the reference from the release image.
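You can trigger the same client-side check by pulling by digest yourself. A minimal sketch; if a registry served different content for this reference, the pull would fail with a digest mismatch rather than silently accepting the image:

$ podman pull quay.io/openshift-release-dev/ocp-release@sha256:f28cbabd1227352fe704a00df796a4511880174042dece96233036a10ac61639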

Verifying the integrity of an upgrade

During an upgrade, OpenShift 4 downloads the target release image and the corresponding signature from an atomic signature store also hosted on mirror.openshift.com. It then checks that the image digest was signed with the official ‘release key 2’. You can verify this manually using a command similar to that used in the install section:

$ wget https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/sha256=f28cbabd1227352fe704a00df796a4511880174042dece96233036a10ac61639/signature-1
$ gpg --no-default-keyring --keyring /tmp/keyring --output signature signature-1 
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
gpg: Signature made Thu 28 Nov 2019 04:41:20 AEST
gpg:                using RSA key 199E2F91FD431D51
gpg: Good signature from "Red Hat, Inc. (release key 2) <security@redhat.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 567E 347A D004 4ADE 55BA  8A5F 199E 2F91 FD43 1D51


$ cat signature
{"critical": {"image": {"docker-manifest-digest": "sha256:f28cbabd1227352fe704a00df796a4511880174042dece96233036a10ac61639"}, "type": "atomic container signature", "identity": {"docker-reference": "quay.io/openshift-release-dev/ocp-release:4.2.9"}}}

The code that was added to OpenShift performs the manual steps described above automatically for each upgrade. The docker-manifest-digest value in the signed payload is compared against the digest of the target release image, and if there is a mismatch the verification fails. You can see messages related to signature verification failure in the web console. For example, on a 4.2 OpenShift cluster, trying to upgrade to a CI build (which is not signed with the release key) will fail and an error message will appear:

$ oc adm upgrade --to-image registry.svc.ci.openshift.org/ocp/release@sha256:0944792c44adaaf4cc3866d9459fcbcc7c3ba3809bb0c926ac4f1bef241c9133 --allow-explicit-upgrade
Updating to release image registry.svc.ci.openshift.org/ocp/release@sha256:0944792c44adaaf4cc3866d9459fcbcc7c3ba3809bb0c926ac4f1bef241c9133
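Besides the web console, the failure is also recorded on the ClusterVersion object. A quick sketch for inspecting it from the CLI (the exact condition and message wording vary by release):

$ oc describe clusterversion version

Look for a condition in the output whose message explains that the release image could not be verified against its signature.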

Verifying the integrity of Operator Lifecycle Manager operators

While the release image refers to all the content that makes up a default installation of OpenShift Container Platform 4, it is possible to extend the functionality of OpenShift Container Platform 4 after installation using the Operator Lifecycle Manager. 

There are three different catalogs of content (community, certified, and Red Hat), organized by origin. The Red Hat catalog contains only images from the Red Hat container registry. In order to support mirroring of images for disconnected installations of OpenShift 4, operators need to be referenced by digest. At the time of writing, not all of the Red Hat operators referred to by the OLM are using a digest yet. However, since they are shipped from the Red Hat registry, we can use signatures to verify their content.
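As an illustration, the container tools consult /etc/containers/policy.json to decide which registries require a valid signature. A minimal sketch of a policy requiring Red Hat’s GPG signature for registry.redhat.io; the default entry and key path are illustrative, and a matching sigstore entry under /etc/containers/registries.d is also needed:

{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "registry.redhat.io": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ]
    }
  }
}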

Conclusion

Red Hat ensures the integrity of OpenShift Container Platform 4 installation and upgrade by using signatures on the installer binaries and the release image. Once a release image is downloaded and verified, it refers to the rest of the images which make up the default OpenShift Container Platform 4 installation by their digests. Pulling images by digest and verifying that the content matches that digest makes it harder for an attacker to compromise the integrity of the images. The Operator Lifecycle Manager also refers to most images by digest, or uses images from registry.redhat.io where the image signatures can be verified.

Using OpenShift Container Platform ‘s2i’ with a TCP Docker daemon

I am using a Docker TCP socket to connect to a remote Docker daemon. I run Docker on a remote host so that I can access my containers from both of my development machines. Of course, the Docker TCP socket should be secured with TLS authentication in order to protect it from malicious use; you can read how to do that in the Docker documentation on protecting the daemon socket.
Sometimes I want to make use of Docker images which we ship in the Red Hat Docker registry, but which are designed to be used in OpenShift. They are not designed to be used ‘standalone’, but with the ‘s2i’ tool, which is part of the OpenShift Container Platform.
If you’ve tried to use ‘s2i’ with a Docker TCP socket, you might have found that you get the error message “Unable to connect to Docker daemon. Please set the DOCKER_HOST or make sure the Docker socket “tcp://192.168.0.1:2376” exists”, and that the ‘s2i’ manual pages are hard to decipher. I had to dig into the source code to figure out the correct configuration to use. You need to set the following environment variables:
export DOCKER_HOST=tcp://192.168.0.1:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/

Once you set those, you should be able to use ‘s2i’ to develop containerized apps locally using images from the Red Hat Docker Registry.
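With those variables set, a build against the remote daemon looks something like this (the sample repository, builder image, and output tag are illustrative):

$ s2i build https://github.com/sclorg/django-ex registry.access.redhat.com/rhscl/python-36-rhel7 my-django-app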

Deserialization in Java, collection of thoughts

Other languages have long considered deserializing data a security issue, especially when it is done across a trust boundary such as a network. Java, however, has a long history of deserializing data sent across the network, starting with the predecessor to EJB, CORBA.

Unfortunately, lots of deserialization of untrusted data still goes on in modern Java applications. The reason it’s topical now is that recent gadget-chain tools created by Chris Frohoff and Gabriel Lawrence have made exploiting this type of weakness accessible to your average script kiddie.

Luckily, JBoss took steps years ago to reduce its exposure to deserialization by switching from a JMX MBean management and configuration model to a more secure management interface in JBoss AS 7. The new management interfaces only bind to the loopback network interface by default, and are secured with authentication out of the box.

That’s not to say that deserialization does not occur at all, and they’re taking steps to reduce their exposure further. Some of the tools you can use to find and plug holes in Java applications include network packet sniffing with mitmproxy, and I plan to experiment with Direct Defence’s SuperSerial extension for Burp Suite.

With the advent of JDK 9, the Java community is hoping to see some performance improvements in the Java Security Manager, such as JEP 232, as well as the introduction of value types. I also really like notsoserial, put together by Kantega as a direct response to these kinds of vulnerabilities, which uses bytecode instrumentation to blacklist risky classes from deserialization. Although I’d still consider it a band-aid approach.
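notsoserial attaches to the JVM as a Java agent at startup. A minimal sketch (the jar and application paths are illustrative; see the project README for the system properties that control which classes are blocked):

$ java -javaagent:notsoserial.jar -jar my-app.jar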

The Java community have to be aware that the recent discovery of gadget chains utilizing the Apache commons-collections weakness is just the tip of the iceberg. The real solution is avoiding Java deserialization altogether. That means avoiding JMX management over RMI, JMS ObjectMessage, and others. Where we can’t avoid it, we should at least make sure those endpoints are protected by authentication. Thanks to Will Sargent of Terse Systems for his inspiring blog post on the same topic.