
This is NOT another blog post about May’s WannaCry/WannaCrypt outbreak. Instead, the focus is on the lack of consensus on the proper prevention of and response to events like this outbreak. On a positive note, the WannaCry attacks showcase what a connected infosec community can do to minimize impact when things go terribly wrong. Kudos to @MalwareTechBlog, @MalwareJake, @johullrich and others for stepping up and sharing their findings!

The downside was the useless chatter, the blame-shifting, and the lack of practical advice for developing better prevention and response policies. None of it made things better; it only fed fear, panic, and bickering fueled by the media’s incessant sensationalism and apocalyptic reporting.

By far, the biggest controversy that surfaced dealt with system patching. This worm exploited a vulnerability in the SMB protocol that was patched by Microsoft in March 2017 (MS17-010). We later learned from Kaspersky Lab that 98 percent of all infected machines were running Windows 7 and not Windows XP, as the initial rants on Twitter suggested. Had you deployed the March patch, your Windows 7 systems would not have been vulnerable.
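For readers who want to verify this on their own systems, here is a minimal Python sketch (not from the original article) that checks a local Windows 7 host’s installed updates for the MS17-010 KB numbers. The KB identifiers below are the commonly cited ones for Windows 7 SP1 and should be confirmed against Microsoft’s MS17-010 bulletin for your exact build; the sketch also assumes the wmic utility is available on the host.

```python
"""Rough check for MS17-010 on a local Windows 7 SP1 host.

A minimal sketch only: the KB numbers listed are the commonly cited
MS17-010 updates for Windows 7 SP1 (security-only update and monthly
rollup). Confirm the correct KBs for your exact OS build against
Microsoft's MS17-010 bulletin before relying on this.
"""
import subprocess

# Assumed KB identifiers associated with MS17-010 on Windows 7 SP1.
MS17_010_KBS = {"KB4012212", "KB4012215"}


def installed_hotfixes():
    # 'wmic qfe get HotFixID' lists installed Windows updates by KB number.
    output = subprocess.check_output(
        ["wmic", "qfe", "get", "HotFixID"], text=True
    )
    return {
        line.strip()
        for line in output.splitlines()
        if line.strip().startswith("KB")
    }


def main():
    found = installed_hotfixes() & MS17_010_KBS
    if found:
        print("MS17-010 appears to be installed:", ", ".join(sorted(found)))
    else:
        print("No MS17-010 KB found; this host may be exposed to the SMBv1 flaw.")


if __name__ == "__main__":
    main()
```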

One side’s mantra was “patch your stuff!” The other’s: “we can’t patch; stuff will break!” Ultimately, what’s the correct approach? I say BOTH are part of a prudent protection and response plan. Here’s what I see happening.

First, we as a community have failed by allowing vendors and developers to avoid taking responsibility for maintaining operational compatibility throughout the entire lifecycle of the products, services and software we purchase. Vendors like Microsoft, Apple and others have excellent processes in place to respond to and mitigate security vulnerabilities. The problem lies downstream.

Vendors who develop software or devices that run on these platforms don’t have reliable processes in place for keeping their software or devices operationally validated as the platform changes. Users patch, things break, and the vendor is unable or at best extremely slow to provide a fix. The result is terrible or non-existent patch management, higher risk exposure and the ongoing “patch fear” mentality.

Unfortunately, end users have accepted this as the norm. We should be holding vendors to a higher standard. We put SLAs in place with many of our service providers and hold them accountable when services fail.

Why shouldn’t the mission-critical software and devices we purchase be held to the same standard? Wouldn’t we sleep better knowing our vendors have a vested financial interest in resolving operational and security issues within a pre-arranged time frame? Shouldn’t we be able to patch knowing that our vendors have us covered? Setting these expectations should be part of our security governance practices.

Second, we often fail to manage our patching processes correctly. We don’t have a good handle on the assets in place, the function of those assets, or their current patch levels. We don’t have processes for testing patch deployment in a non-production environment. Obviously, not every device will crash and burn when updates and patches are applied. Some might, but with the proper vetting process in place, we will significantly lower this possibility.
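To make that vetting process concrete, here is a toy Python sketch of a ring-based rollout driven by a simple asset inventory. Every hostname, role, and ring name in it is invented for illustration; in practice the inventory would come from a CMDB or vulnerability scanner rather than a hard-coded list.

```python
"""Toy illustration of a staged ("ring") patch rollout.

All asset data and ring names are invented for illustration.
The point is simply: know what you have, test first, then deploy.
"""
from dataclasses import dataclass


@dataclass
class Asset:
    hostname: str
    role: str        # business function of the asset
    os_version: str
    ring: str        # "test" assets receive patches before "production"


# Hypothetical inventory; in practice this comes from a CMDB or scanner.
INVENTORY = [
    Asset("lab-win7-01", "patch validation", "Windows 7 SP1", "test"),
    Asset("hr-app-01", "HR application server", "Windows Server 2008 R2", "production"),
    Asset("fin-db-01", "finance database", "Windows Server 2012 R2", "production"),
]


def rollout(patch_id: str):
    # Stage 1: deploy to the test ring and verify nothing breaks.
    for asset in (a for a in INVENTORY if a.ring == "test"):
        print(f"Deploying {patch_id} to {asset.hostname} ({asset.role}) for validation")

    # Stage 2: only after validation passes, deploy to production.
    for asset in (a for a in INVENTORY if a.ring == "production"):
        print(f"Deploying {patch_id} to {asset.hostname} ({asset.role})")


if __name__ == "__main__":
    rollout("MS17-010")
```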

Every business is different. In my experience over the past twenty-plus years, I have rarely had an issue deploying security patches. I am not saying issues haven’t happened, but in every case we were able to resolve them while maintaining the highest degree of patch coverage possible. Your mileage may vary.

The important message here is to patch as much as you can without breaking things and to know what’s on your networks. Forgotten devices have a way of biting you when you least expect it. Reducing your risk exposure by patching what you can as quickly as possible is key. Patching is not an all-or-nothing proposition. There is no such thing as absolute security; the best you can hope for is risk reduction.
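As one rough way to surface those forgotten devices, the following Python sketch sweeps a subnet for hosts answering on TCP 445 (the SMB port WannaCry abused) and flags any that are missing from a known-hosts list. The subnet and inventory values are placeholders, and you should only scan networks you own or are authorized to test.

```python
"""Minimal sweep for hosts answering on TCP 445 (SMB).

The subnet and the known-hosts list are placeholders; the point is
to surface forgotten devices that never made it into the inventory.
Only scan networks you own or are authorized to test.
"""
import ipaddress
import socket

SUBNET = "192.168.1.0/28"        # placeholder subnet
KNOWN_HOSTS = {"192.168.1.10"}   # placeholder inventory


def smb_open(ip: str, timeout: float = 0.5) -> bool:
    # connect_ex returns 0 when the TCP connection succeeds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, 445)) == 0


for addr in ipaddress.ip_network(SUBNET).hosts():
    ip = str(addr)
    if smb_open(ip) and ip not in KNOWN_HOSTS:
        print(f"Unknown host exposing SMB: {ip}")
```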

Concerns about patching some devices should not preclude you from patching those that are unlikely to have issues. Doing nothing borders on negligence; failing to test patches before deployment is just as negligent. Even if patching does break something, you should always have a rollback plan.

Keep in mind that, depending on your exposure factor, the downtime incurred due to patching issues might be insignificant compared to what could happen if a vulnerability is exploited. Know your risk profile and manage your patch process accordingly.
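If it helps to put numbers on that trade-off, here is a back-of-the-envelope Python sketch using the classic quantitative-risk arithmetic (single loss expectancy = asset value × exposure factor; annualized loss expectancy = SLE × annual rate of occurrence). Every figure in it is hypothetical and exists only to show the comparison.

```python
"""Back-of-the-envelope risk comparison; all figures are made up.

SLE = asset value x exposure factor
ALE = SLE x annual rate of occurrence
Compare the expected annual loss from leaving a flaw unpatched with
the expected annual cost of patch-related downtime.
"""
ASSET_VALUE = 500_000              # hypothetical value of the affected system ($)
EXPOSURE_FACTOR = 0.6              # hypothetical fraction of value lost per incident
ANNUAL_RATE_OF_OCCURRENCE = 0.5    # hypothetical: one exploit every two years

sle = ASSET_VALUE * EXPOSURE_FACTOR
ale_unpatched = sle * ANNUAL_RATE_OF_OCCURRENCE

PATCH_DOWNTIME_HOURS = 4           # hypothetical downtime per problem patch
DOWNTIME_COST_PER_HOUR = 2_000     # hypothetical cost of an hour of downtime ($)
PATCH_INCIDENTS_PER_YEAR = 2       # hypothetical number of problem patches per year

cost_patching = PATCH_DOWNTIME_HOURS * DOWNTIME_COST_PER_HOUR * PATCH_INCIDENTS_PER_YEAR

print(f"Expected annual loss if unpatched:       ${ale_unpatched:,.0f}")
print(f"Expected annual cost of patch downtime:  ${cost_patching:,.0f}")
```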

Third, many of us fail to maintain support contracts. Many vendors have outstanding support and response programs; the catch is that you must pay for them through ongoing support contracts. Often, security updates and patches exist but are only available under an active maintenance agreement. When we let these agreements lapse, we preclude ourselves from remediating known risks.

Operating devices and networks connected to the public Internet is no different from operating a motor vehicle on a public highway: both require knowing the rules of the road, operating within those parameters, and keeping your equipment maintained so that it operates safely. We all have a responsibility to maintain our systems and keep them in safe operating order. We should hold our suppliers accountable, implement intelligent patch management processes, and keep our maintenance contracts in order.

We know malicious actors will exploit any vulnerability they can. When we get owned because we failed to adopt prudent, common-sense security practices, we need to look in the mirror. Owning our problems is the first step to better security. Remember – there is no crying in infosec!


About the Author: Jim Nitterauer, CISSP, is currently a Senior Security Specialist at AppRiver, LLC. His team is responsible for global network deployments and manages the SecureSurf global DNS infrastructure, the SecureTide global spam and virus filtering infrastructure, and all internal applications, and it helps manage security operations for the entire company. He is also well-versed in ethical hacking and penetration testing techniques and has been involved in technology for more than 20 years.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.