
Trust is important both in and out of cyberspace: just as we need to be able to trust the plumber who fixes a leaking faucet in our home, we also need to be able to trust that the software we choose is performing its job unmolested.

The field of trusted computing aims to develop techniques that reassure users and system administrators that the software running on their systems has not been compromised or replaced with malware, typically by measuring (hashing) the executable bits and bytes of a static application.

To this end, we caught up with Jacob Torrey (@JacobTorrey) to learn a little more about MoRE, the Measurement of Running Executables, which is the subject of his presentation this week at the THREADS conference, part of NYU-Poly’s Cyber Security Awareness Week (CSAW).

Torrey’s presentation provides a cohesive overview of the work performed on the DARPA Cyber Fast Track MoRE effort, which examined the feasibility of using x86 TLB splitting as a mechanism for periodic measurement of dynamically changing binaries. The effort led to a proof-of-concept system that splits the TLB for target applications, allowing them to be measured in order to detect code corruption.
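MoRE's actual mechanism sits in a hypervisor, desynchronizing the x86 instruction and data TLBs so that code fetches and measurement reads can be handled separately; that cannot be reproduced in user-space code. The core measurement idea, though — baseline the running code, then periodically re-hash it to detect corruption — can be sketched by analogy in plain Python, hashing a live function's code object (everything below is an illustrative stand-in, not MoRE's implementation):

```python
import hashlib

def measure(func) -> str:
    """Hash a function's bytecode and constants -- a stand-in for hashing
    the executable pages of a running program."""
    code = func.__code__
    return hashlib.sha256(code.co_code + repr(code.co_consts).encode()).hexdigest()

def greet():
    return "hello"

# Take a baseline measurement while the code is known-good...
baseline = measure(greet)

def check(func, expected: str) -> bool:
    """A periodic re-measurement: True iff the code still matches its baseline."""
    return measure(func) == expected

# ...then simulate runtime code corruption by swapping in other bytecode,
# the kind of in-memory modification a rootkit might perform.
def evil():
    return "pwned"

greet.__code__ = evil.__code__
```

After the swap, `check(greet, baseline)` fails, flagging that the running code no longer matches what was originally measured.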

“I work with all aspects of trust in computer systems. However, it can be difficult to trust software written by hundreds or thousands of people you’ve never met, with no real method to verify the program is performing as it should,” said Torrey, a research engineer at Assured Information Security (AIS) who leads the low-level computer architectures group and researches new methods to improve endpoint/workstation security and enhance trust in computer systems.

Unfortunately, Torrey says not all applications and software are amenable to these measurement techniques, and without a measurement it is extremely challenging to determine if the computer has been maliciously compromised.

“My research provides a way to use a low-level architectural design on modern systems, initially used by advanced root-kits, to provide measurement capabilities for dynamic applications so they can be verified and the user made aware of whether their browser or OS has been modified to [possibly] malicious ends,” Torrey said. “It feels pretty good to take a technique originally used by malware authors and put a defensive spin on it.”

Torrey’s session at THREADS is aimed at the architects of future computer and software systems: it highlights current problems, identifies steps to mitigate them, and argues that the notion of trust in a computer system should be considered fundamental in future designs.

“You can liken trusting a computer to trusting someone with a dissociative identity disorder who rapidly switches between hundreds of completely separate personas, where it is impossible to build a rapport and trust one identity when another one can take over and betray you,” Torrey said. “The technical project I will talk about provides one small but powerful tool to improve our trust in the digital world.”

Torrey hopes that the highly-technical research folks who attend his session will leave with a renewed appreciation of the intricacies of modern computer architectures, and how their design decisions can unintentionally open new possibilities for both positive and negative utilization.

“As I have built upon the work from my peers, I want to be able to give back to the infosec community and hopefully spur new and exciting developments,” said Torrey, who believes that the “least privilege principle” ultimately minimizes trust in both systems and their users.

“For the students and infosec professionals in attendance, I aim for them to come away with an increased awareness of how much trust they put in computer systems, and for them to strive to find ways to improve future systems and networks with that in mind,” Torrey continued.

But Torrey also warns that trusted computing can be a double-edged sword: if we check and measure everything, our productivity will suffer, much like taking the day off work in order to watch that plumber fix your faucet.

Additionally, Torrey says there are still many unanswered questions about what to do when you have detected your system is no longer trustworthy.

“Many trusted systems fail-safe and will not operate if a compromise is detected, which is a great trait if you are about to type in your banking password, but less so if you really want to see when the next bus is coming,” he said.

Torrey says another big question is how to infer whether a change is malicious from a simple measurement of an executable: did the hash change because of a software update or patch, or because a root-kit has taken hold?
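The problem is that a hash carries no notion of intent; one common (if brittle) response is to keep an allowlist of known-good hashes covering every shipped version, so a legitimate patch still matches while everything else is merely "unknown." A hypothetical sketch (the allowlist contents and names here are invented for illustration):

```python
import hashlib

# Hypothetical allowlist holding a hash for every version the vendor shipped,
# so a legitimate patch still matches while anything else does not.
KNOWN_GOOD = {
    hashlib.sha256(b"app v1.0 binary").hexdigest(),
    hashlib.sha256(b"app v1.1 binary").hexdigest(),  # the patched release
}

def classify(binary: bytes) -> str:
    """A measurement alone can't infer intent; all we can test is membership."""
    digest = hashlib.sha256(binary).hexdigest()
    return "known-good" if digest in KNOWN_GOOD else "unknown: update or root-kit?"
```

Note that an unknown hash still tells you nothing about *why* the bytes changed, which is exactly the gap Torrey describes.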

“If we continue down the rabbit-hole of measurements equaling trust we will drown in a sea of meaningless hashes, and then the challenge has simply migrated to trusting the measurements from the manufacturer,” he said.

Despite that risk, Torrey says he has seen some really innovative work recently in behavioral analytics and the inference of trust based on how an application behaves, as opposed to just a static measurement of the binary.

“You can see this trend growing with the contract-based security system in Android, in which an application must ask for permission to perform certain risky behaviors, in a sense forming a contract with the user, and then the operating system ensures the application doesn’t violate its contract by doing things it’s not supposed to,” Torrey said. “Behavioral trust is a much more natural way to impose limits and minimize exposure in the event of a ‘betrayal’ event.”
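The contract model Torrey describes can be sketched as a toy: an application declares a set of permissions up front, and the operating system refuses any action outside that declaration. The classes and permission names below are illustrative, loosely inspired by Android's permission manifest, and are not a real Android API:

```python
# Toy model of contract-based security. Nothing here is a real Android API;
# the "contract" is simply the set of behaviors the app declared up front.
class ContractViolation(Exception):
    pass

class App:
    def __init__(self, name: str, declared: set):
        self.name = name
        self.declared = set(declared)  # the contract agreed with the user

class Kernel:
    def perform(self, app: App, action: str) -> str:
        # The OS enforces the contract: undeclared behavior is refused.
        if action not in app.declared:
            raise ContractViolation(f"{app.name} never asked to {action}")
        return f"{app.name} performed {action}"
```

A flashlight app that declared only `CAMERA` could use the camera, while an attempt to read contacts would raise `ContractViolation` — the “betrayal” event the quote describes, caught at the moment it occurs rather than inferred from a hash.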

Torrey says that currently a huge cognitive leap is required to go from our everyday experience with trust to that of trusted computing and security as a whole, noting that an entire security awareness industry has sprung up around training users to avoid common pitfalls.

“It is incredible to me that we can function so naturally, making implicit trust decisions as we go about our daily life, yet the software and security fields have barely scratched the surface,” Torrey said. “If cyber-security can be made more similar to personal and societal security models, I foresee more informed users making better decisions with less need for security awareness training.”



Title image courtesy of ShutterStock