What is the moral obligation of a person who finds a possible weakness or flaw in a product? Is there a clear moral line to toe? Do the coloured hats really mean what we think they mean? What if it’s all upside down? What if people with the best of intent are actually making our security weaker?
Historically, we define White Hats as hackers who find vulnerabilities and disclose them to the vendor of the impacted product. On the opposite side, Black Hats are defined as people who don’t share with the impacted software or hardware company; we expect that they use the vulnerability to build potentially damaging exploits. There’s a lot of moral ambiguity about hackers who will only sell vulnerabilities for money and what colour their hat is; where we have that ambiguity, we often refer to them as Grey Hats. Witness the soul searching referenced by Daeken about the Onity lock disclosure.
What if the “responsible disclosure” of vulnerabilities is actually what a specific subset of attackers uses to set up their attacks? This is a position espoused in an OpEd piece by Andrew Auernheimer in Wired magazine. The article’s particular moral position (discussing the article, not the author) is that vulnerabilities should only be disclosed to someone the finder personally knows, for the purposes of social justice.
There is a lot of truth in the article – vendors are typically held accountable to shareholders, whose primary goal is profit. This means that if there’s no money to be made doing an activity, such as generating patches, most organizations will do whatever is likely to have the least possible cost.
Every company’s environment is a unique, special snowflake. This causes organizations to create complex testing scenarios before they roll new patches or releases into production, which opens a window of attack: a known vulnerability cannot be mitigated in the organization while the test-and-release-to-production process is occurring.
However, there is one section of Auernheimer’s Wired article that I feel is a bit misleading to a casual reader. Based on its linked source material, the OpEd piece states:
- None of the exploits used for mass exploitation were developed by malware authors.
- Instead, all of the exploits came from “Advanced Persistent Threats” (an industry term for nation states) or from whitehat disclosures.
- Whitehat disclosures accounted for 100 percent of the logic flaws used for exploitation.
To me, a reader who does not go to the source material and think it through will come away believing that the act of disclosing vulnerabilities only helps attackers. My read is that the conversation is more nuanced.
So, what exactly does the source material say? It’s a great presentation, well worth watching even without the article pointing to it. Pertinent to this particular blog post: starting at 22:30 in the Exploit Intelligence Project v.2 video, there is a discussion of how vulnerabilities were translated into malware usage. Three primary columns are called out:
- Documentation about an APT that was disclosed to the public and includes exploit details.
- Zero-day disclosures from prominent white hats, again containing the specific exploit code and details, that fit into the techniques, tactics, and procedures of a specific attack group.
- Vulnerabilities released by HP’s Zero Day Initiative, which by definition contain details of the exploit.
- A fourth column for unknown vulnerabilities exists, but the author (Dan Guido) posits that no one is finding unknown vulnerabilities. The malware authors are using what other people (white hats) are so kindly generating.
The additional fact that all three of the sources that were used contain specific exploit details feels highly relevant. Continue on to about the 31-minute mark, and the presentation starts talking about how the vulnerabilities that were used were taken directly from the disclosures, with zero modification. There is a really important key quote in the video about this:
“We can see that the sources of information they prefer… are sources that are more described over sources that are less described. So if something comes with exploit code that has been implemented, that’s been ‘tested’ on a live target and it worked, then it’s definitely going to get incorporated. … As we have less information, as we have more work we have to perform, we abuse those vulnerabilities less. All the way that if a vendor discloses a vulnerability that includes few details about how to exploit it, that’s almost a signal that it’s not ever going to be exploited, because it’s too difficult to reverse out that vulnerability to figure out what it is. It’s also of limited value, because I don’t know how effective it would be to spend that effort, whereas I have proven reliable code.”
I would posit that the truth is more nuanced than the Wired OpEd implies. Daeken’s concerns are real – the universe we operate in is complex. A software or hardware vendor cannot fix a problem they don’t know about. It is impossible for them to fully know and protect all possible attack vectors at original ship. (Although I do believe software vendors could be doing a better job setting the initial bar; but that’s a whole different post.) The vendor may not be the ultimate target of an attack using the vulnerability. In fact, these days it’s likely they are not. The Onity lock hack Daeken agonized over? If used maliciously, the impact is to hotel customers. The RSA hack? Also not really about the source organization (RSA), but about their customers.
If the key to usage is the amount of detail we provide (regardless of whether the vulnerability is discovered by a vendor or an independent researcher), then let’s not provide the fully detailed code paths to the public. We are not serving anyone well by handing over the best way to harm us. Let’s make attackers work for a living and spend their own resources creating exploits, instead of handing the exploits over.
Skull Wordle courtesy of Shutterstock