
We all rely on some form of security software each and every day. You rely on the Certificate Authority system to validate that the SSL certificate from this or any other site is genuine, and, even more transparently, you trust that the install of OpenSSL on the server your computer is sending passwords and credit card information to has been properly patched and is actually negotiating a secure connection free of issues like POODLE and Heartbleed.

When the average person hears about things like OpenSSL vulnerabilities in the news, it seems strange and foreign, so they’ll go on their way without giving it any more thought. Matters aren’t helped any by rampant misrepresentation of information security issues in the news.

But why do we trust some software and not other software? How can someone take a quick pass at some software, application, or code and determine how likely it is to be trustworthy?

Let’s take a closer look at a couple of applications that were touted as solutions to very real problems over the last few years. We’ll dive into the methods that could be used to uncover whether the applications in question are deserving of our trust. This is not intended to be a step-by-step guide for reverse engineering software (that can be learned elsewhere), but rather a way to get familiar with thinking about applications a bit more critically.

For the sake of this article, we’re going to look at a simplified methodology of software investigation. This methodology can essentially be broken into four categories: the smell test, the confirmation test, the review test, and the second opinion. The meaning of each should become apparent as we go over the two software examples, but engineers love lists of definitions, and I’m no exception.

Smell Test: According to Wiktionary, the idiomatic definition of the smell test is “an informal method for determining whether something is authentic, credible, or ethical by using one’s common sense or sense of propriety.” We’ll take it a step further. When I think of the smell test in the context of investigating software, I think, “Do the claims of this software sound reasonable?”

Confirmation Test: This test simply looks at what the software says it does and independently confirms those claims. You won’t always be able to perform the confirmation test, but you should whenever you can. For example, if a piece of software claims to make your communications unreadable, a confirmation test might be to capture those communications and verify they’re not in plain text.

Review Test: Are you able to view the source code of the software? Does the implementation of what the software claims to do seem sane and workable?

Second Opinion: Good security software should be peer reviewed by other security researchers to ensure it’s safe to use. If the software is hosted on GitHub, a good starting place to read those reviews is the project’s GitHub issue tracker. When someone finds a problem in the implementation, this is a likely place for them to report it to the developer.

This is not an official list, nor is it anything you will find in any textbook. This is simply an application of the scientific method laid out a bit differently and applied for our purposes. Equipped with a workable approach to use as our investigative guide, let’s dig into some software!

NQ Vault

Some astute readers have undoubtedly seen the article entitled How I Cracked NQ Vault’s “encryption” floating around. If you’ve not had the fun of reading this breakdown, I encourage you to do so. Rather than follow the author’s approach, though, let’s use the methodology outlined above.

To apply the smell test, we first need to know what this software claims to do. That’s easy enough to find on its Google Play Store page:

Do you worry about private data or secrets on your smartphone falling into the wrong hands? Vault makes it easy for you to fully control your privacy or secrets. Keep your pictures, videos, SMS, contacts, even Facebook messages private, and hide them from prying eyes. Protect your apps with a password or camouflage them for maximum privacy.

I would say that an app designed to encrypt your private data passes the smell test. After all, many phones come equipped with various ways to encrypt your data out of the box. There’s nothing unreasonable about data encryption. Now, if the software had promised to put you in communication with family members who have long since passed away—well—channeling the dead would be far more deserving of our scrutiny.

Having established that an app like this could plausibly work, the diligent security researcher should move on to the “confirmation test” and verify that the application does what it says it’s going to do. In order to confirm the application works as claimed, we need to understand what it looks like when something is encrypted. In the most basic sense, it looks like this:

E(k, m) = c

  • “E” is a function that takes two inputs: k and m;
  • “k” is the secret key;
  • “m” is the message you want to encrypt;
  • “c” is the resulting cipher text.

This process is entirely reversible as D(k,c)=m; the difference is that we’re calling a decryption function, D, and providing it with our secret key and the cipher text in order to produce the original message.
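
To make that notation concrete, here is a minimal Python sketch using the third-party cryptography package; the library choice is purely mine for illustration and has nothing to do with NQ Vault. It simply shows E(k, m) producing an unreadable cipher text c and D(k, c) recovering the original message m.

```python
# Minimal illustration of E(k, m) = c and D(k, c) = m.
# Assumes the third-party "cryptography" package is installed
# (pip install cryptography); this is not how NQ Vault works,
# only what real symmetric encryption looks like in practice.
from cryptography.fernet import Fernet

k = Fernet.generate_key()            # "k" -- the secret key
m = b"my private photo bytes ..."    # "m" -- the message to encrypt

c = Fernet(k).encrypt(m)             # c = E(k, m) -- the cipher text
print(c)                             # unreadable without k
print(m in c)                        # False: no plaintext survives verbatim

print(Fernet(k).decrypt(c) == m)     # D(k, c) = m -- fully reversible: True
```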

This might seem complex if you have no experience with cryptography or programming, but one thing should be immediately apparent regardless of your experience: if any portion of our encrypted file contains unaltered elements from our original input file, then we can conclude the software is not truly encrypting our input.

While the article goes into some detail about how you might go about writing software to recover a password to decrypt files encrypted with NQ Vault, we only have to see that large segments of the input file and the encrypted file appear the same to know that this application fails the confirmation test.
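
If you’d like to run a confirmation test of this sort yourself, a rough sketch might look like the following. The file names are hypothetical; the idea is simply to check whether recognizable chunks of the original file survive, byte for byte, inside the “encrypted” output.

```python
# Rough confirmation-test sketch: does the "encrypted" file still contain
# verbatim chunks of the original? The file names below are hypothetical.
CHUNK = 64  # look for 64-byte runs of the original inside the output

with open("photo.jpg", "rb") as f:
    original = f.read()
with open("photo.jpg.vault", "rb") as f:
    encrypted = f.read()

chunks = [original[i:i + CHUNK] for i in range(0, len(original), CHUNK)]
leaked = sum(1 for chunk in chunks if chunk in encrypted)

print(f"{leaked} of {len(chunks)} input chunks appear unmodified in the output")
# Anything much above zero means the tool is not really encrypting your data.
```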

Therefore, the app should probably not be trusted until these issues are addressed (and given that this is such basic cryptography, maybe the developers should not be trusted going forward).

That’s one example of a “security” application we can toss aside; let’s take a look at one more. We’ll see if we can flex a bit more of our investigative skills (and a little less of our coding skills) this time!

Detekt

You may remember the news stories that circulated when Detekt was first released. It was presented as an effective response for those who have reason to fear they may be under government surveillance. The website for Detekt describes it as follows:

Detekt is a free tool that scans your Windows computer for traces of FinFisher and Hacking Team RCS, commercial surveillance spyware that has been identified to be also used to target and monitor human rights defenders and journalists around the world.

Okay, so Detekt is looking for government spyware. So far, that seems reasonable. If you click the “How Does it Work?” link on their homepage, this may be the point at which your security researcher spidey-sense starts tingling. This link does not tell you how the software works—it tells you how to work the software. This is an important distinction.

However, this is a free and open source project, meaning we will likely be able to find out something about it on GitHub. From the GitHub README.md, we get a pretty good idea of how this software claims to work. In the author’s own words:

Detekt is a Python tool that relies on Yara, Volatility and Winpmem to scan the memory of a running Windows system (currently supporting Windows XP to Windows 8 both 32 and 64 bit and Windows 8.1 32bit).

Detekt tries to detect the presence of pre-defined patterns that have been identified through the course of our research to be unique identifiers that indicate the presence of a given malware running on the computer.

Great! This actually tells us exactly how the tool works. It’s pretty limiting that Windows 8.1 64-bit is not supported, as a 64-bit install is going to be much more common than a 32-bit one these days. While Detekt appears fairly limited so far, it does pass the smell test, in that the tool is based on technology that is known to work fairly well.
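
For a sense of what that underlying technology looks like in practice, here is a hedged sketch using the yara-python bindings. The rule and the target process ID are invented for this example, and this is my own toy illustration of pattern-based memory scanning, not Detekt’s actual code.

```python
# Toy illustration of Yara-based scanning, the technique Detekt builds on.
# The rule and the PID below are placeholders; scanning another process's
# memory generally requires administrator privileges.
import yara

RULE = r'''
rule example_spyware_marker
{
    strings:
        $marker = "hypothetical-unique-config-string"
    condition:
        $marker
}
'''

rules = yara.compile(source=RULE)   # compile the pattern definitions
matches = rules.match(pid=1234)     # scan a running process's memory by PID
for match in matches:
    print("Matched rule:", match.rule)
```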

We could (and if this weren’t an illustrative example for a blog post, should) certainly go investigate Yara, Winpmem and Volatility, but that’s a bit extreme for our example today. Let’s see if there is any way to confirm that this tool does what it says it’s going to do. Based on my reading of the description, it looks like Detekt simply searches for Yara patterns in memory. One trick I’ve picked up in reviewing open source projects is to dig through the GitHub commit history.

You can get an idea of issues such as what the developers failed to get working correctly (frequently indicated by a big “TODO” comment in the code), things they thought were working but didn’t and were removed, or possibly sensitive information that was later obscured or hidden in some way (for example, searching GitHub for “extension:conf server configuration” may reveal some juicy info). You never know what you’re going to find! I decided that the commit history for the .yar files might be a good place to look for some dirty laundry.
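
If you would rather dig through that history locally instead of clicking around the web interface, a quick sketch might look like this; it assumes you have already cloned the repository, and the path pattern for the rule files is a guess on my part.

```python
# Pull the change history for the Yara rule files from a local clone.
# Assumes the repository has already been cloned into ./detekt and that
# the rules live in files ending in .yar (both are assumptions).
import subprocess

log = subprocess.run(
    ["git", "log", "--patch", "--", "*.yar"],
    cwd="detekt",            # path to your local clone (hypothetical)
    capture_output=True,
    text=True,
    check=True,
)
print(log.stdout)            # read through removed rules and commit messages
```

Sure enough, one commit in that history stood out: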

[Screenshot: commit titled “Try to harden against false positives from AV memory”]

Notice the commit message: “Try to harden against false positives from AV memory.” This caught my attention immediately because it pointed to something I hadn’t considered while reading the descriptions of Detekt, and it opened up a line of inquiry I didn’t even know I had.

Namely, do other anti-virus suites detect the same things as Detekt? If Detekt has to modify its Yara rules so they don’t alert on the virus definitions that anti-virus products load into memory, that would mean traditional anti-virus suites are already looking for these malware families!

As I kept going through the GitHub commit history, I found another interesting change:

[Screenshot: commit removing Yara rules for six of the eight malware families Detekt previously detected]

Based on this commit, I can see that Detekt previously detected eight known pieces of malware. This commit removed detection for six of them. So, in addition to greatly reducing what it looks for, Detekt may be totally redundant to your existing anti-virus protection.

To confirm my suspicion that anti-virus already does what Detekt does (and does it better), I began the complex task of searching Google for things like “FinFisher FinSpy virus definition.” In a few minutes, I learned that FinFisher spyware is reported as “Win32/Belesak.D Trojan” by F-Secure. I also learned that it is already detected by 39 of the 57 engines checked by VirusTotal.

As we looked at this from the “confirmation test” point of view, we transitioned naturally into the next step: reviewing the code (which was surprisingly easy in this instance, as the commits told us what we needed to know to make a judgment call).

I’m not sure there’s much value in continuing to investigate Detekt beyond this point. I think it becomes clear that this software is not an effective tool when we summarize what we learned about it:

  • It looks for only two specific pieces of malware.
  • The malware it looks for is already detected by the vast majority of anti-virus suites.
  • It does not uninstall, quarantine, or otherwise interact with the malware it does find, according to the resistsurveillance.org homepage.
  • It seems to give users needlessly alarming messaging, as opposed to how any other anti-virus suite would present the detection of the same malware (typically, you would just see the name, for example “Win32/Belesak.D Trojan detected”).

For those keeping count, we have still not looked for any third-party review of Detekt, and instead have relied on our own intuition. It is possible that we could be totally wrong about how effective Detekt actually is! After doing a bit of searching, I found that Steve Lord had taken a look at Detekt. Steve even provided a more technical review which can be read here.

None of Steve’s findings bode well for Detekt. Steve’s conclusions echo my own. He says (rather eloquently):

Detekt may find the two pieces of malware it claims to find, it falls way short of the hype that surrounded it at launch and it’s extremely hacky, to the point where I can comfortably say that the tool takes several excellent open source projects and stitches them together to create something more like Frankenstein’s antimalware monster, coming apart at the seams on closer inspection. However, all of my interactions with Claudio have been positive and I’m sure he believes he’s doing it for the right reasons. It’s a hack, it’s a kludge, but who hasn’t written one?

With Steve’s review rounding out our investigation, we have only increased our confidence that this tool was probably a great learning project for the developers, but it fails in its objective of becoming an integral part of the average user’s, or even a professional researcher’s, tool set.

I hope this article has gotten you thinking about strategies for quickly evaluating security software without having to learn a bunch of coding (although, if you’re serious about evaluating security software, you won’t have a choice).

While it may be more fun to look for obvious and unambiguous failures on the part of the developer—like the NQ Vault example—don’t overlook investigating whether or not a piece of software actually solves a real security problem in a unique way so as to become a valuable tool in your security toolbox. It’s better to have a small set of trustworthy tools than a large set of semi-workable tools.