The cybersecurity industry is more well-informed than most, but even so, misconceptions arise and spread, helped along by the fact that the rise in cybersecurity incidents has led to substantial “pop culture” intrigue with all things cybersecurity.
One of the more harmful of these misconceptions is the conflation of “hacker” and “attacker,” terms which are treated as interchangeable.
Hacker vs Attacker
“Hacker” is another name for an ethical researcher. It refers to someone who proactively explores systems, identifies vulnerabilities and alerts organizations to flaws that an attacker could use for malicious purposes. Hackers seek to disclose in good faith, alerting organizations that may or may not have vulnerability disclosure policies.
Although “hacker” is now sometimes mistakenly used to refer to an attacker, its origins are benign and even complimentary. The term arose to describe someone who was smart enough to “hack” their way through the security levels of a computer system or network.
An “attacker,” on the other hand, is just that: someone who gains unauthorized access to someone else’s network and computers for malicious purposes. An attacker probes for vulnerabilities, but unlike a hacker, exploits them without permission and without warning the organization. The motive may be monetary gain, as in ransomware attacks or cryptojacking (a scenario that gets especially costly when the victim’s computing resources are cloud-based and the attacker is racking up CPU usage fees), or the theft of user data for monetization on the dark web. Alternatively, it could be competitive advantage, such as using a remote access trojan (RAT) or an advanced persistent threat (APT) campaign to escalate privileges and extract intellectual property or other valuable data. Rather than attacking directly, some nefarious individuals create malware decoys such as mobile apps laced with keyloggers and trojans that steal banking and retail account passwords, enabling account takeovers.
Attackers could also be working on behalf of hostile nation states for espionage purposes by seeking all sorts of potentially catastrophic outcomes from exfiltrating confidential data to disrupting critical infrastructure and services. No matter the reason, they all have one thing in common – their actions are harmful and can have catastrophic consequences.
What’s the best way to stay out of an attacker’s range? Force attackers to move on to weaker prey by taking steps to make your organization a tougher target. Besides investing in your talent and security stacks, one of the best ways to strengthen your organization’s defenses is by working with the hacking community.
It’s hard to close holes you don’t know about. Some companies create bug bounty programs, inviting hackers to find bugs and rewarding them when they do. More cyber-savvy organizations, such as the U.S. Department of Defense, have established vulnerability disclosure programs (VDPs). Both methods have their advantages. The important thing is that every company choose one and run it as an ongoing program.
Unfortunately, only 7% of the Forbes Global 2000 currently have a published policy that:
- Recognizes the valuable contributions that security researchers can offer the organization and the global online community;
- Provides security researchers with clear guidelines for conducting vulnerability discovery, establishes the scope of the program, and explains how and whom to contact;
- Encourages and systematically enables the hacker’s reporting of discoveries without fear of prosecution or retribution, and provides the processes and channels to do so;
- Promises speedy and effective action to close vulnerabilities as quickly as possible, preventing or minimizing potential damage by attackers to the organization’s data and its stakeholders; and
- Provides reasonable rewards, whether in the form of compensation, references or other recognition.
We see too many cases when the hacking community draws an organization’s attention to a major gap in security, only to have the organization ignore the warnings and even target those well-intentioned hackers with legal threats. The recent statement by a popular social media company — that they don’t need a vulnerability program because they have a security team — is ludicrous on its face. It’s like saying: “I have a family doctor, so I don’t need a specialist.”
Some things to keep in mind:
- Be sure to get the Board of Directors and C-suite on board prior to launching the program. They have a clear obligation and particular vested interest in keeping the organization out of the “latest data breach” news cycles, but they may misunderstand how a VDP works or even the difference between “hackers” and “attackers.” A well-informed CIO and CISO can guide the C-suite and Board in making the right choices.
- A good VDP starts with a promise. The promise should state the organization’s commitment to securing the data of its customers, partners and other stakeholders; its willingness to be a party to a productive (and non-punitive) alliance with ethical hackers; and its readiness to be alerted to vulnerabilities and to act quickly on those that might otherwise impact its own or its ecosystem’s cybersecurity and data privacy. Make sure that promise is published, along with contact instructions, in an obvious place.
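One widely adopted “obvious place” for those contact instructions is a security.txt file served at /.well-known/security.txt, as standardized in RFC 9116. The sketch below is purely illustrative; the addresses and URLs are placeholders, and which optional fields you include is up to your program:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/hall-of-fame
Preferred-Languages: en
```

Contact and Expires are the two required fields; Policy is where you link the published promise itself, and Acknowledgments gives researchers the public recognition discussed above.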
- Defining what’s in scope will vary from organization to organization. Take the time to collaborate with colleagues and be as thorough and mindful as possible in this part of the process. Also, recognize that the first year or two are a “shake-down cruise” for your VDP or bug bounty program – expect to do some fine-tuning.
- Policies should be short, clean and concise. They should be written not for attorneys but for readers who may be English learners. For guidance, check out disclose.io.
- Decide how the organization will communicate with researchers and how much time you’ll require from them before they’re allowed to disclose the vulnerability (or even whether you’ll ask them not to and provide sufficient compensation in exchange for the NDA).
- Plan out how your internal teams validate, mitigate and, if applicable, externally disclose a security vulnerability.
- Develop a framework for how activities and results are summarized and reported to stakeholders and decision-makers.
- It’s also important that all of the organization’s external-facing staff be made aware of the program so that they can route incoming alerts swiftly, legally and correctly to those responsible for addressing them. Too many hackers have attempted to alert companies to vulnerabilities, only to be ignored or threatened.
- If it’s your first time, don’t go it alone. Partner with a platform such as Bugcrowd or HackerOne; in-house programs can be too much work to run and manage at first. Remember: crawl, walk and then run is the best approach to starting out.
Each of these aspects will take some planning, execution and fine-tuning. For example, you can work to make your VDP or bug bounty contact process clear, but you should also be ready for the unexpected. Hackers regularly contact companies via Twitter or a support email address to advise them of vulnerabilities. These people are doing a service for the company and have considered the potential risks. Make sure that whoever responds to them thanks them before routing them to the appropriate reporting mechanisms and contacts.
The research community should be thought of as valued partners whom you may not have met yet but who have unique skills for finding and alerting your organization to potentially ruinous vulnerabilities before a true attacker can exploit them. And that’s a partnership worth protecting.
About the Author: Chloé Messdaghi, president at Women of Security (WoSEC), founder of WeAreHackerz, ethical hacker advocate, podcaster, and vice president of strategy at Point3 Security, is an expert in the cybersecurity industry. She is a frequent speaker at cybersecurity conferences and events, and she is a trusted source to business and security media.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.