
An enterprise vulnerability management program reaches its full potential when it is built on well-established foundational goals that address the information needs of all stakeholders, when its output is tied back to the goals of the enterprise, and when it demonstrably reduces the organization's overall risk.

Vulnerability management technology can detect risk, but it requires a foundation of people and processes to make the program successful.

There are four stages to a vulnerability management program:

  • The process that determines the criticality of the asset, the owners of the assets and the frequency of scanning as well as establishes timelines for remediation;
  • The discovery and inventory of assets on the network;
  • The discovery of vulnerabilities on the discovered assets; and
  • The reporting and remediation of discovered vulnerabilities.

The first stage focuses on building a process that is measurable and repeatable. Stages two through four focus on executing the process outlined in stage one with an emphasis on continuous improvement. We’ll examine these stages in more detail below.

Stage One: The Vulnerability Scanning Process

1. The first step in this stage is to identify the criticality of the assets in the organization.

To build an effective risk management program, one must first determine what assets the organization needs to protect. This applies to computing systems, storage devices, networks, data types and third-party systems on the organization’s network. Assets should be classified and ranked based on their true and inherent risk to the organization.

Many facets need to be considered in developing an asset’s inherent risk rating such as physical or logical connection to higher classified assets, user access and system availability.

For example, an asset in the DMZ with logical access to an account database is going to have a higher criticality than an asset in a lab. An asset in production is going to have a higher criticality than an asset in a test environment. An internet routable web server will have a higher criticality than an internal file server.

However, even if an asset has a lower criticality, remediation on that asset should not be ignored. Attackers can leverage these oft-ignored assets to gain access and then traverse the network by compromising multiple systems until they reach the systems holding sensitive data. The remediation effort should always be prioritized in relation to overall risk.
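The ranking factors above can be sketched in code. This is a minimal illustration in Python; the environment names, weights and attributes are hypothetical, not drawn from any particular classification framework, and a real program should derive its weightings from its own risk model.

```python
from dataclasses import dataclass

# Hypothetical weights for inherent asset risk; tune these to your
# own risk framework rather than treating them as authoritative.
ENVIRONMENT_WEIGHT = {"production": 3, "dmz": 4, "test": 1, "lab": 1}

@dataclass
class Asset:
    name: str
    environment: str              # e.g. "production", "dmz", "test", "lab"
    internet_facing: bool
    touches_sensitive_data: bool  # logical access to higher-classified assets

def criticality(asset: Asset) -> int:
    """Rank an asset's inherent risk on a simple additive scale."""
    score = ENVIRONMENT_WEIGHT.get(asset.environment, 1)
    if asset.internet_facing:
        score += 3
    if asset.touches_sensitive_data:
        score += 4
    return score

# A DMZ host with logical access to an account database outranks a lab box.
dmz_host = Asset("web01", "dmz", True, True)
lab_host = Asset("lab07", "lab", False, False)
assert criticality(dmz_host) > criticality(lab_host)
```

Even a simple additive model like this makes the ranking repeatable and auditable, which is the point of stage one.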

2. The second step is to identify the system owner(s) for each system.

System owners are ultimately responsible for the asset, its associated risk and the liability if that asset becomes compromised. This step is critical in the success of the vulnerability management program, as it drives the accountability and remediation efforts within the organization.

If there is no one to take ownership of the risk, there will not be anyone to drive remediation of that risk.

3. The third step is to establish the frequency of scanning.

The Center for Internet Security, in its Top 20 Critical Security Controls, recommends that an organization should “run automated vulnerability scanning tools against all systems on the network on a weekly or more frequent basis.” Tripwire releases vulnerability signature (ASPL) updates on a weekly basis.

Scanning this frequently allows asset owners to track the progress of remediation efforts, identify new risks and reprioritize the remediation of vulnerabilities based on newly gathered intelligence.

When a vulnerability is first released, it may have a lower vulnerability score because there is no known exploit. Once a vulnerability has been around for some time, an automated exploit kit may become available which would increase the risk of that vulnerability. A system that was once thought to not be vulnerable may become susceptible to a vulnerability or set of vulnerabilities due to new software installed or a patch rollback.

There are many factors that could contribute to the risk posture of an asset changing. Frequent scanning ensures that the owner of the asset is kept up to date with the latest information. As an outer limit, vulnerability scanning should take place no less frequently than once per month.
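A scanning cadence like the one described above can be expressed as a small policy check. This is a sketch under assumed values (weekly interval, 30-day outer limit, both taken from the text); the function names are illustrative, not part of any product.

```python
from datetime import datetime, timedelta

# Assumed policy: weekly scans per CIS guidance, with a hard
# outer limit of 30 days between scans for any asset.
SCAN_INTERVAL = timedelta(days=7)
OUTER_LIMIT = timedelta(days=30)

def next_scan_due(last_scan: datetime) -> datetime:
    """When the next weekly scan should run."""
    return last_scan + SCAN_INTERVAL

def is_out_of_policy(last_scan: datetime, now: datetime) -> bool:
    """True if the asset has gone past the monthly outer limit."""
    return now - last_scan > OUTER_LIMIT

now = datetime(2023, 6, 1)
assert is_out_of_policy(datetime(2023, 4, 20), now)      # 42 days ago
assert not is_out_of_policy(datetime(2023, 5, 25), now)  # 7 days ago
```

Flagging out-of-policy assets this way feeds directly into the "percentage of assets not recently scanned" metric discussed later in the program.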

4. The fourth step in building this process is to establish and document timelines and thresholds for remediation.

Vulnerabilities that can be exploited in an automated fashion and that yield privileged control to an attacker should be remediated immediately. Vulnerabilities yielding privileged control that are more difficult to exploit, or are currently only exploitable in theory, should be remediated within 30 days. Lower-severity vulnerabilities can be remediated within 90 days.
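The tiers above map naturally to a small SLA lookup. The sketch below uses the timelines stated in the text (immediate, 30 days, 90 days); the function and its parameters are hypothetical names chosen for illustration.

```python
from datetime import date, timedelta

def remediation_deadline(found: date, *, automated_exploit: bool,
                         privileged: bool) -> date:
    """Map a vulnerability's traits to a remediation due date."""
    if automated_exploit and privileged:
        return found                       # remediate immediately
    if privileged:
        return found + timedelta(days=30)  # harder or theoretical exploits
    return found + timedelta(days=90)      # everything else

found = date(2023, 1, 1)
assert remediation_deadline(found, automated_exploit=True,
                            privileged=True) == found
assert remediation_deadline(found, automated_exploit=False,
                            privileged=True) == date(2023, 1, 31)
```

Documenting the tiers as data (or code) rather than tribal knowledge makes the exception process that follows much easier to audit.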

In the event of a system owner being unable to remediate a vulnerability within the approved time frame, a remediation exception process should be available.

As a part of this process, there should be a documented understanding and acceptance of the risk by the system owner along with an acceptable action plan to remediate the vulnerability by a certain date. Vulnerability exceptions should always have an expiry date.

Stage Two: Asset Discovery and Inventory

Asset discovery and inventory account for Critical Security Control numbers one and two. This is the foundation for any security program – information security or otherwise – as the defenders cannot protect what they do not know about.

Critical Security Control number one is to have an inventory of all authorized and unauthorized devices on the network. Critical Security Control number two is to have an inventory of authorized and unauthorized software installed on the assets on the organization’s network.

These two go hand in hand as attackers are always trying to identify systems that are easily exploitable to get into an organization’s network. Once they are in, they can leverage the control they have on that system to attack other systems and further infiltrate the network.

Ensuring that the information security team is aware of what is on the network allows them to better protect those systems and provide guidance to the owners of those systems to reduce the risk those assets pose.

There have been many cases where users deploy systems without informing the information security team. These could range from test servers to wireless routers plugged in under an employee's desk for added convenience. Without the appropriate asset discovery and network access control, these types of devices can provide an easy gateway for an attacker into the internal network.

Tripwire IP360 discovers assets within defined ranges and identifies the applications running on those assets before conducting a vulnerability scan.

Stage Three: Vulnerability Detection

Once all the assets on the network are identified, the next step is to identify the vulnerability risk posture of each asset.

Vulnerabilities can be identified through an unauthenticated or authenticated scan, or by deploying an agent to determine the vulnerability posture. An attacker typically starts with an unauthenticated view of a system, so scanning without credentials provides a view similar to that of a primitive attacker.

An unauthenticated scan is good for identifying some extremely high-risk vulnerabilities that an attacker could detect remotely and exploit to gain deeper access to the system. However, there are often vulnerabilities that can be exploited by a user downloading an attachment or clicking a malicious link that remain undetected using this method.

A much more comprehensive and recommended method for vulnerability scanning is to scan with credentials or deploy an agent. This allows for increased accuracy in the determination of the vulnerability risk of the organization. Vulnerability signatures specific to the operating system and installed applications that were detected in the discovery and inventory stage are run to identify which vulnerabilities are present.  Customers of Tripwire Enterprise can leverage an additional module of their Axon agents to enable vulnerability detection.

Vulnerabilities in locally installed applications can only be detected using this method. An authenticated IP360 vulnerability scan also identifies the vulnerabilities that an attacker would see from an external, unauthenticated vulnerability scan.

Many vulnerability scanners simply detect patch levels or application versions to provide a vulnerability posture reading. Tripwire IP360, however, provides a much more detailed analysis: its vulnerability signatures can determine factors such as (but not limited to) the removal of vulnerable libraries, the presence of registry keys and whether a reboot of the system took place for the remediation to apply.

Stage Four: Reporting and Remediation

Once the vulnerability scan is complete, a score is attached to each vulnerability using an exponential algorithm based on three factors:

  1. The skill required to exploit the vulnerability;
  2. The privilege gained upon successful exploitation; and
  3. The age of the vulnerability.

The easier the vulnerability is to exploit and the higher the privilege gained, the higher the IP360 risk score will be. In addition to this, as the vulnerability age increases, the score of the vulnerability also increases.
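To make the shape of such a score concrete, here is an illustrative sketch only: the actual IP360 scoring algorithm is proprietary, and the factor tables and exponential form below are invented for demonstration. It simply shows how skill, privilege and age can combine so that older, easier, higher-privilege flaws score dramatically higher.

```python
import math

# Invented factor tables -- NOT the real IP360 algorithm.
SKILL_FACTOR = {"automated": 3, "easy": 2, "hard": 1, "theoretical": 0}
PRIVILEGE_FACTOR = {"admin": 3, "user": 2, "info": 1}

def risk_score(skill: str, privilege: str, age_days: int) -> float:
    """Combine ease of exploit, privilege gained and age exponentially."""
    base = SKILL_FACTOR[skill] + PRIVILEGE_FACTOR[privilege]
    return base * math.exp(age_days / 365)  # score grows as the flaw ages

# An old, automated, admin-level flaw dwarfs a new theoretical one.
assert risk_score("automated", "admin", 730) > risk_score("theoretical", "info", 10)
```

The exponential age term is the key design idea: a vulnerability that lingers unpatched becomes progressively harder to ignore in the rankings.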

The first metric that should be captured is an overall baseline average IP360 risk score for the organization.

Successful Tripwire vulnerability management customers start by targeting a risk reduction of 10% to 25% year over year. As the program matures, a target IP360 risk score can be set for the organization to achieve. In the initial years, an average risk score per asset of below 5,000 is a good target.

Most mature organizations strive to have even lower averages and focus on addressing any single vulnerability with a score higher than 1,000.

The next metric that should be captured is the average IP360 risk score by owner.

The ownership of assets was identified in the first stage; therefore, each owner should be able to see the baseline IP360 risk score for their assets. Similar to the target for the overall organization, each owner should target reducing their average risk score by 10% to 25% year over year until they are below the accepted threshold for the organization.
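Computing per-owner averages and checking the year-over-year target is straightforward. The sketch below assumes a hypothetical scan export of (owner, asset, score) tuples and uses the 10% target floor mentioned above; the data and function names are illustrative.

```python
from collections import defaultdict

def average_score_by_owner(results):
    """Average IP360-style risk score per system owner."""
    totals, counts = defaultdict(float), defaultdict(int)
    for owner, _asset, score in results:
        totals[owner] += score
        counts[owner] += 1
    return {owner: totals[owner] / counts[owner] for owner in totals}

def met_reduction_target(last_year: float, this_year: float,
                         target: float = 0.10) -> bool:
    """True if the average dropped by at least the target fraction."""
    return this_year <= last_year * (1 - target)

scans = [("alice", "web01", 8000), ("alice", "db01", 4000),
         ("bob", "lab07", 2000)]
averages = average_score_by_owner(scans)
assert averages["alice"] == 6000.0
assert met_reduction_target(6000.0, 5200.0)  # roughly a 13% reduction
```

Publishing these per-owner averages side by side is what enables the friendly competition among system owners described next.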

System owners should be able to view their scores in comparison with other system owners to create a sense of competition among their peers. Those who have the lowest scores should be rewarded for their efforts.

In order to drive remediation, system owners need empirical vulnerability data outlining which vulnerabilities should be remediated, along with instructions on how to conduct the remediation. Reports should highlight the most vulnerable hosts, the highest-scoring vulnerabilities and/or specific highly vulnerable applications. This allows system owners to prioritize their efforts, focusing on the vulnerabilities whose remediation will reduce the most risk to the organization.

As new vulnerability scans are run, their metrics can be compared with those of previous scans to show risk trends as well as remediation progress.

Some metrics that can be used to track remediation are as follows:

  • What is the average vulnerability score of each asset by owner and overall?
  • How long does it take, on average, to remediate infrastructure-based vulnerabilities by owner and overall?
  • How long does it take, on average, to remediate application-based vulnerabilities by owner and overall?
  • What percentage of assets have not recently been scanned for vulnerabilities?
  • How many remotely exploitable vulnerabilities yielding privileged access are exposed on systems?
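Two of the metrics above can be computed directly from scan and ticket data. This is a minimal sketch assuming hypothetical record formats: (opened, closed) date pairs for remediation tickets, and a list of last-scan dates per asset with a 30-day staleness threshold.

```python
from datetime import date

def avg_days_to_remediate(records):
    """Average days from discovery to fix, given (opened, closed) pairs."""
    days = [(closed - opened).days for opened, closed in records]
    return sum(days) / len(days)

def pct_not_recently_scanned(last_scans, now, max_age_days=30):
    """Percentage of assets whose last scan is older than the threshold."""
    stale = sum(1 for scanned in last_scans
                if (now - scanned).days > max_age_days)
    return 100.0 * stale / len(last_scans)

tickets = [(date(2023, 1, 1), date(2023, 1, 15)),
           (date(2023, 1, 1), date(2023, 1, 31))]
assert avg_days_to_remediate(tickets) == 22.0

last_scans = [date(2023, 5, 1), date(2023, 3, 1), date(2023, 5, 20)]
assert round(pct_not_recently_scanned(last_scans, date(2023, 6, 1)), 1) == 66.7
```

Tracked month over month, these two numbers give the trend lines that the following paragraphs describe: remediation cycles shortening and scan coverage improving as the program matures.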

It is not uncommon for an organization to have a very high average vulnerability score with lengthy remediation cycles in the initial stages of building the program. The key is to show progress month by month, quarter by quarter and year by year.

The vulnerability risk scores and time to remediation should be decreasing as teams become more familiar with the process and become more educated on the risks that the attackers pose.

Vulnerability and risk management is an ongoing process. The most successful programs continuously adapt and are aligned with the risk reduction goals of the cybersecurity program within the organization. The process should be reviewed on a regular basis, and staff should be kept up to date with the latest threats and trends in information security.

Ensuring that continuous development is in place for the people, process and technology will ensure the success of the enterprise vulnerability and risk management program.

Interested in learning more about building a mature vulnerability management program? Click here to discover more.