
Responsible disclosure is the gold standard for fixing security vulnerabilities. But as we all know, sometimes at least one stakeholder doesn’t hold up their end of the agreement.

Parties violate a responsible disclosure timeline for many reasons. Take the Zero Day Initiative, for instance. One of its security researchers discovered a vulnerability in Foxit’s PDF Reader back in May 2017. The firm reached out to Foxit about the flaw, but when the vendor took two months to say that it would not issue a fix, Zero Day Initiative decided to disclose the zero-day vulnerability ahead of its usual 120-day responsible disclosure deadline. Perhaps motivated by this disclosure, Foxit changed its mind and deployed a fix for the flaw a few weeks later.

This story fortunately had a happy ending. But not all instances of vulnerability discovery, or ethical hacking more broadly, do. Another researcher might have developed exploit code for the vulnerability and published it online. They could have seen their actions, however illegal, as a bid to force the vendor to fix the flaw, and therefore felt justified in the name of digital security.

In other cases, a penetration tester could have stolen corporate information, broken a crucial system, or exploited zero-day security vulnerabilities as part of their efforts to demonstrate the insecurity of a client’s network.

Such hypothetical scenarios raise the following question: should ethical hackers and security researchers protect an organization by any means necessary, including some illegal activity?

To figure out whether the end justifies the means for a security researcher charged with defending an organization or protecting ordinary web users, I turned to the information security community. I asked five regular contributors to The State of Security if there is a limit to what an ethical hacker should do. Their answers are presented below.

Keirsten Brager, security engineer | @hiddencybfigure

“I don’t believe that researchers should jeopardize their freedom or future for any organization. If the researcher is charged with a crime for illegally defending his company, it could expose the business to legal liability, cause a PR crisis, and/or negatively impact the bottom line. Even worse, the publicity could end up making the company a target for hackers, and the researcher could go to jail. Then what?

“As far as defenders becoming hackers is concerned, there is certainly value in having red team skills to become a better defender. However, let’s remember that most companies are functioning with understaffed or non-existent security teams, and attribution continues to be a challenge. There are also too many unknowns. What if a hacker is illegally using Company A’s resources to attack Company B, and Company B retaliates against Company A? What if it ends up being a protracted and expensive battle between the researchers and hackers, and the hacker has nation-state resources at his disposal?

“While ‘hacking back’ makes for a good soundbite, it takes already limited resources away from the more valuable work of securing the organization. When having these conversations, I hope the adults in the room stop and remember: security is a business issue. Therefore, sound business decisions should be made about how hacking back exposes the business to additional risk that could negatively impact shareholder value. We are here to enable the business to be profitable, not add unnecessary risk and expense.”

Kim Crawley, information security writer | @kim_crawley

“Something that’s legal isn’t necessarily good, and something that’s illegal isn’t necessarily bad. Nonetheless, being caught breaking the law can be devastating for a small- or medium-sized business.

“My ex-husband once developed a firewall that ‘shoots back.’ Basically, if the network detected an attack such as a distributed denial-of-service (DDoS) attack, the firewall would trigger a denial-of-service (DoS) condition in the other direction, send packets with ‘dead beef’ targeting the BIOS or a hard drive’s MBR, or some combination of actions. His patent for that firewall didn’t make him any money; that’s food for thought.
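The trigger logic behind such a reactive firewall can be illustrated with a short, hypothetical sketch: count packets per source in a sliding time window and flag any source that exceeds a flood threshold. All names and threshold values here are assumptions for illustration, not the patented design described above, and the retaliatory step is deliberately left as a stub, since actually shooting back carries the legal risks discussed throughout this piece.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 1.0     # sliding window length (assumed tuning value)
FLOOD_THRESHOLD = 100    # packets per window before a source is flagged

class FloodDetector:
    """Hypothetical per-source flood detector: the decision point at which
    a 'shoot back' firewall would choose to retaliate."""

    def __init__(self, window=WINDOW_SECONDS, threshold=FLOOD_THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.timestamps = defaultdict(deque)  # source IP -> arrival times

    def observe(self, src_ip, now=None):
        """Record one packet; return True if src_ip now looks like a flood."""
        now = time.monotonic() if now is None else now
        q = self.timestamps[src_ip]
        q.append(now)
        # Expire timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.threshold:
            self.respond(src_ip)
            return True
        return False

    def respond(self, src_ip):
        # A real 'shoot back' firewall would launch its counter-DoS here;
        # this sketch only marks the decision point and does nothing.
        pass

detector = FloodDetector()
flagged = False
# Simulate a 150-packet burst from one source within a few milliseconds.
for i in range(150):
    flagged = detector.observe("203.0.113.9", now=i * 0.001) or flagged
print(flagged)  # the burst exceeds the threshold, so the source is flagged
```

Note that the detection half is ordinary defensive rate-limiting; it is only the `respond` stub, if filled in offensively, that would cross the legal line the contributors warn about.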

“Overall, I would tread with extreme caution if you intend to react offensively to a cyber attack.”

Matthew Pascucci, cyber security practice manager | @MatthewPascucci

“We’ve seen the debate over security researchers being demonized by law enforcement in the news recently, and it’s an interesting question. The concerns from security researchers are valid, but wisdom and responsibility need to be exercised on both ends. These white hat researchers are looking to find vulnerabilities and push the industry to get a jump on them before malicious actors attempt to do the same for their own gain. It’s true that many of the vulnerabilities found and disclosed by researchers are then turned around into malicious tools, but without these disclosures, attackers could exploit these flaws without anyone even knowing about them.

“It’s because of this that security researchers should be responsible with disclosures and work with companies to patch vulnerabilities. I’m a big fan of bug bounties; they are very helpful to the researcher community. It’s a fine line to walk; many times, companies and law enforcement look at the traffic or software they’re seeing rather than the intent behind it, and they therefore lump researchers in with malicious actors. I think more work needs to be done from a legal perspective to protect researchers, to open companies to the benefits of bug bounties, and to ensure proper and responsible disclosure from researchers. There are multiple areas that need to be worked on to protect everyone involved.”

Bev Robb, information security writer | @teksquisite

“That’s a really tough question to answer. If it were 1997, I would say protect your organization by any means necessary, including illegal activity if it would benefit your organization (and all organizations as a whole).

“But two decades later, we find ourselves in the bitter reality of 2017: security researchers should not travel outside the protection of their home country and should not live in a country that has an extradition treaty with the United States. This means all security researchers who live in the United States or in a country that has an extradition treaty with the United States should play by the rules at all times.”

Craig Young, principal security researcher | @craigtweets

“Similar to an officer of the law, security researchers need to conduct themselves with the utmost respect for the law and individuals’ rights to privacy. The idea that it should somehow be OK for researchers to break the law in the pursuit of a worthy goal is a very slippery slope. While at times it can be frustrating having to abide by a separate set of rules from your adversary, this is in fact critical from a professional ethics point of view.

“This question does get a bit murkier when considering the appropriateness of the laws themselves. This is becoming more relevant in light of the 2013 amendments to the Wassenaar Arrangement, which include broadly defined ‘intrusion software’ as a controlled weapon. If not reined in, policy changes within the United States and other nations could deal a fatal blow to researchers and penetration testers intending to keep their activities legal. In this specific context, I could certainly identify with a security professional feeling justified in breaking the law rather than leaving enterprise networks grossly exposed to criminals.”


How about you? Do you think there are some cases where defenders are justified in becoming attackers? Let us know your thoughts in the comments!