
This post will touch briefly on the “why” of reproducible builds, but it is primarily a quick and dirty “how to” when building for Amazon Web Services. If you’re not familiar with the concept, reproducible builds (sometimes referred to as “verifiable builds”) are a methodology for building software such that the path from the source code to the compiled binary can be traced and confirmed. The Tor Project took this a step further and created a deterministic build process (in case you wanted more lingo in the opening paragraph) that makes sense for their threat model.

In a blog post, Mike Perry, the lead developer on the Tor Browser, sums up the problem of building complex software:

“The core problem is this: With the number of dependencies present in large software projects, there is no way any amount of global surveillance, network censorship, machine isolation, or firewalling can sufficiently protect the software development process of widely deployed software projects in order to prevent scenarios where malware sneaks into a development dependency through an exploit in combination with code injection, and makes its way into the build process of software that is critical to the function of the world economy.”

The problem is compounded when it comes to building software intended to be deployed to a cloud platform, especially if your product needs to compile against kernel source that only runs in your target cloud environment. Ultimately, you trust the provider to supply your make tools, and you trust that those tools have not been back-doored in some way that allows compromise of your final product.

But think about the potential of a malicious version of GCC sneaking into the package repositories to which you are given access. Shipping your unreleased code off to a third-party to compile always adds some level of risk and uncertainty (though depending on your threat model, it may be a perfectly acceptable risk).

I have no reason whatsoever to doubt the quality of Amazon’s security practices; from everything I have heard, they are exceptional. I believe they could provide us with a sufficiently secure build environment—which would make for a very boring blog post—but it is difficult to validate that the binaries built on AWS have not been tampered with. Instead, rolling AWS builds into the same build pipeline that does everything else for us makes a lot of sense.

It’s generally a good practice to limit the number of “snowflake” build machines in your environment, as the technical debt you accrue from too many of these can become a burden; a snowflake, by definition, is treated in a unique way.

Last year, my boss came to me and asked if I could figure out a way to build a kernel module against an AWS kernel without sending our unreleased source code to the cloud. No decisions had been made at this point, and it was more of an exploratory exercise.

At first, I was not optimistic that this was possible. The only results my Google-Fu returned were about doing the exact opposite thing: taking an AWS VM and getting it to boot locally using a non-AWS kernel.

It turns out there are some problems with the task I was given. The AWS kernel is stripped down and missing some important functionality generally required to boot outside of AWS. (I believe it’s missing some storage drivers, for starters.)

Fortunately, after a bit of trial and error and playing around with chroots, jails, containers and so on, I happened upon this workable solution. (Hopefully, when Google indexes this post, I can help save time for some other poor soul.)

We are going to build for the cloud with a VM that facilitates compiling kernel module source code against a kernel we’re not actually running, by:

  1. Creating a VM.
  2. Configuring a chroot.
  3. Downloading and installing RPM files from AWS that contain kernel source.
  4. Hacking your chroot into submission.
  5. Compiling against the downloaded AWS kernel source.

Step 1 – Set up a new CentOS 7+ VM

Install CentOS 7.2 in VirtualBox or some other hypervisor of your choice.

At this point, you might want to update the kernel to match the major version you’re building against in your chrooted environment. It’s certainly not required, but I generally try to make the kernel version in the base OS match[1] the version I intend to compile against. More than anything, this has to do with identifying which machine builds against which kernel in our build automation.[2]
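If you want a quick way to eyeball the match, a sketch like the following compares the running kernel against the target. The target version string here is the one from my notes below; substitute whatever your AWS instance reports:

```shell
# Target AWS kernel (the version from note [1]); substitute your own.
TARGET="4.1.13-19.30.amzn1.x86_64"
RUNNING="$(uname -r)"
# Compare just the major.minor pair, e.g. "4.1", which is all I try to match.
target_mm="$(echo "$TARGET" | cut -d. -f1-2)"
running_mm="$(echo "$RUNNING" | cut -d. -f1-2)"
if [ "$running_mm" = "$target_mm" ]; then
    echo "base OS kernel matches target major version ($running_mm)"
else
    echo "note: running $RUNNING, building against $TARGET"
fi
```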

Step 2 – Prepare the chroot

We are going to lay down the AWS kernel source inside of a chroot, so that we can trick our build tools into building against it. To do that, we create a few folders. As the root user, execute the following:

mkdir -p /aws/chroot/proc
mkdir -p /aws/chroot/dev

I chose the path arbitrarily during testing, and it denotes nothing special.

As part of the build process, we’ll need resources the system provides through /dev and /proc, so we bind-mount both into the chroot:

mount -o bind /proc /aws/chroot/proc
mount -o bind /dev /aws/chroot/dev
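These bind mounts won’t survive a reboot. If you intend to keep this build VM around, entries like the following in /etc/fstab (a sketch; adjust the paths to wherever you put your chroot) will restore them at boot:

```
/proc  /aws/chroot/proc  none  bind  0 0
/dev   /aws/chroot/dev   none  bind  0 0
```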

We’ll also need to install packages inside the chroot, so we need to create a chroot-specific RPM database:

mkdir -p /aws/chroot/var/lib/rpm
rpm --rebuilddb --root=/aws/chroot

We need the CentOS release RPM to install into the chroot. Install wget, then grab the release RPM:

yum install wget -y
wget http://mirror.esecuredata.com/centos/7.2.1511/os/x86_64/Packages/centos-release-7-2.1511.el7.centos.2.10.x86_64.rpm
rpm -i --root=/aws/chroot --nodeps centos-release-7-2.1511.el7.centos.2.10.x86_64.rpm

Finally, install yum and the rpm-build utility into your chroot. (You may not need rpm-build; it depends on what you’re building, but my build process did.)

yum --installroot=/aws/chroot install -y rpm-build yum

Now your chroot is all set up!
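Before moving on, it’s worth a quick check that the chroot can run its own package tooling. This is just a sketch: it needs root, and it prints a warning rather than failing if something is off:

```shell
# Try running yum from inside the chroot; if this succeeds, the
# bootstrap worked. Prints a warning instead of erroring out.
chroot /aws/chroot /usr/bin/yum --version >/dev/null 2>&1 \
    && echo "chroot yum works" \
    || echo "chroot bootstrap incomplete (or not running as root)"
```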

Step 3 – Grab the kernel source from AWS

SSH into your AWS instance and use the yumdownloader tool to grab the packages we need. When I did this, yumdownloader was pre-installed on my AWS VM; if you need to install it for whatever reason, it should be available via the package manager. Use yumdownloader to download the following packages:

yumdownloader kernel
yumdownloader kernel-devel
yumdownloader kernel-headers

I recommend putting the resulting RPMs somewhere you can wget them from inside your chroot. There are a ton of specific ways to do this, so I’ll give just one way to quickly serve up files using Python. Ensure that you have a directory created for the RPMs:

mkdir rpms
mv *.rpm rpms/
cd rpms/

Now that our files are there, we can start a simple web server to serve the content (Python will be pre-installed):

python -m SimpleHTTPServer

This will start a server in Python that can serve the contents of the ‘rpms’ directory. You can browse the files by going to http://<your AWS IP Address>:8000/, then you can either download the RPMs from here or copy the link location and use wget from inside your chroot.
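As an aside, SimpleHTTPServer is the Python 2 module name; on images that ship Python 3 the equivalent is python3 -m http.server. Here’s a self-contained sketch of the whole round trip, using a scratch directory, an arbitrary port (8123), and a fake RPM name, all purely for illustration:

```shell
# Serve a scratch directory in the background, fetch one file from it,
# then shut the server down -- the same round trip as the real transfer.
tmp="$(mktemp -d)"
cd "$tmp"
echo "pretend rpm payload" > demo.rpm
python3 -m http.server 8123 >/dev/null 2>&1 &
srv=$!
# wget http://<server>:8123/demo.rpm would do the same from another host;
# urllib keeps this sketch dependency-free. Retry while the server starts.
for i in 1 2 3 4 5; do
    if python3 -c "import urllib.request; urllib.request.urlretrieve('http://127.0.0.1:8123/demo.rpm', 'fetched.rpm')" 2>/dev/null; then
        break
    fi
    sleep 1
done
kill "$srv"
cat fetched.rpm
```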

Step 4 – Hacking your chroot into submission

Now that we’ve gathered everything we need, there are a few more things we’ll need to install from inside our chroot. First, switch into your chroot:

chroot /aws/chroot

To ensure we can wget the RPMs from the previous step, as well as to make compiling work, install the last few things we need from yum:

yum install grubby wget
yum group install "Development Tools"

If you’ve placed the kernel, kernel-devel, and kernel-headers packages somewhere reachable with wget, go ahead and download them into your chroot now:

mkdir rpms
cd rpms/
wget <your AWS IP Address>:8000/<each file>.rpm

You can visit your AWS IP in a browser and just copy the link address if you don’t want to type all of it.

Once you’ve downloaded all three, install them:

rpm -i *.rpm --nodeps

Hooray! Take a short break and pat yourself on the back. You now have the source tree for the Amazon kernel laid down inside a chroot.

Unfortunately, things still won’t build, because when you compile a kernel module, the build typically figures out which kernel to build against by querying uname. Right now, uname -r and uname -v are not going to return

4.1.13-19.30.amzn1.x86_64

and

#1 SMP Sat Oct 24 01:31:37 UTC 2015

respectively. If we want to deploy our kernel module in AWS, both the kernel version and the version magic string returned by uname must match what AWS expects.

The easiest way to overcome this problem is to simply replace uname with a bash script that echoes out what we want:

mv /bin/uname /bin/uname.bak

Then use your favorite text editor to create the following file:

#!/bin/bash
# Impersonate the target AWS kernel so the build tools see its values
# instead of the host's.
case "$1" in
    -r) echo "4.1.13-19.30.amzn1.x86_64" ;;
    -s) echo "Linux" ;;
    -m) echo "x86_64" ;;
    -v) echo "#1 SMP Sat Oct 24 01:31:37 UTC 2015" ;;
    *)  echo "x86_64" ;;
esac
exit 0

Save the file to:

/bin/uname

Then make it executable with chmod:

chmod +x /bin/uname

You should execute each query and verify that the responses match the values on the AWS instance you are building against.
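A quick way to eyeball all four queries at once (run this inside the chroot; the wrapper script should answer for the AWS kernel, not the host):

```shell
# Print each uname query the build tools are likely to make.
for flag in -s -r -v -m; do
    printf '%-3s %s\n' "$flag" "$(uname "$flag")"
done
```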

Finally, the symlink to the kernel source is broken when setting things up this way. Fix it by re-linking it:

unlink /lib/modules/4.1.13-19.30.amzn1.x86_64/build
ln -s /usr/src/kernels/4.1.13-19.30.amzn1.x86_64 /lib/modules/4.1.13-19.30.amzn1.x86_64/build
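To confirm the re-link worked, check that the build link now resolves to a tree containing the kernel’s top-level Makefile (on a machine without the AWS source tree this will report the link as broken, which is expected):

```shell
# The build symlink should point at a directory holding the kernel Makefile.
if [ -f /lib/modules/4.1.13-19.30.amzn1.x86_64/build/Makefile ]; then
    echo "kernel source tree linked correctly"
else
    echo "build symlink still broken (or wrong kernel version)"
fi
```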

Step 5 – Build against the AWS kernel

You can now compile a binary that can be installed on AWS! Every build is slightly different, but if you’re building a kernel module, you can easily verify it loaded after whatever install steps you normally take. For this example, I built some tripwire code, specifically twnotify:

# sudo cat /var/log/messages | grep twnotify | grep kernel
Dec 28 20:20:36 [AWS HOST] kernel: [ 5060.962737] twnotify: in init
Dec 28 20:20:36 [AWS HOST] kernel: [ 5060.982841] twnotify:verify_table sys_close=ffffffff811c6a20 *sys_close=fe89550000441f0f
Dec 28 20:20:36 [AWS HOST] kernel: [ 5060.986327] twnotify:verify_table sys_close=ffffffff811c6a20 *sys_close=fe89550000441f0f
Dec 28 20:20:36 [AWS HOST] kernel: [ 5060.991419] twnotify: init done: returning 0

As you can see, the module was loaded and returned 0.
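If you don’t have a module of your own handy, a throwaway “hello world” module is enough to smoke-test the chroot toolchain. This sketch (file names and paths are arbitrary) relies on the fake uname and the build symlink set up above:

```shell
# Write a minimal module and kbuild Makefile, then try to build it.
dir="$(mktemp -d)"
cd "$dir"
cat > hello.c <<'EOF'
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
EOF
cat > Makefile <<'EOF'
obj-m += hello.o
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules
EOF
# Inside the chroot this should produce hello.ko; on a machine without
# the kernel source tree, make will fail, which this sketch tolerates.
make all 2>/dev/null || echo "no kernel source tree here (expected outside the chroot)"
```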

And that’s how you start down the road of building a kernel module for AWS at home. Most of this process isn’t specific to AWS; it could be adapted to any cloud provider whose builds you want to fold into your existing build pipeline.

Feel free to ask questions in the comments below!

 


References

https://blog.torproject.org/category/tags/deterministic-builds

https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar-and-global-compromise

https://reproducible-builds.org/

Notes

[1] At the time I was working this out, the Amazon Linux kernel version was

4.1.13-19.30.amzn1.x86_64

Just replace the version with the actual current version if you’re following along at home.

[2] If you do want to update the kernel from source, make sure to do a

yum groupinstall "Development Tools"

as well as

yum install gcc ncurses ncurses-devel bc wget

You will need all of these to compile a new kernel. Actually compiling and installing a new kernel is beyond the scope of this blog post.