
A standard is a set of guidelines and prescriptions that delineates specific requirements to be applied consistently across a field, either to ensure a minimum level of diligence or to promote a widely agreed-upon set of objective best practices.

Though information security as an industry is only slightly younger than the technologies it seeks to protect, the field has been inundated with literally thousands of standards, most of which are in a constant state of evolution.

Conforming to and complying with standards is costly; it has spawned an expansive industry of its own; and even if an organization goes to great lengths, there is still no guarantee it will hit the benchmarks outlined. Standards are, quite simply, a pain in the you-know-what.

Nonetheless, they are here to stay. To better understand the role standards play in the security industry, we spoke with Andrew Yeomans about the nature of security standards in general and their overall impact.

Yeomans is Vice President of Global Information Security at a large international investment bank and an executive board member of the Jericho Forum, an international information security thought-leadership group “dedicated to defining ways to deliver effective IT security solutions that will match the increasing business demands for secure IT operations in our open, Internet-driven, globally networked world.”

He is co-author of “Java Network Security”, the first book to cover secure multi-tier Java applications, and is also a member of the Executive Advisory Board of the ISSA UK chapter and Infosecurity Europe Advisory Council. His statements and opinions expressed here are his alone, and do not necessarily reflect those of his employer or the organizations of which he is a member.

Yeomans says that in the simplest of terms, infosec standards are primarily designed to accomplish one feat – to mitigate risk. The term “risk” is in and of itself a real minefield, because for the most part people do not agree on what “risk” actually means.

This is most evident in attempts to translate security efforts for the business side, a regular topic in our series on connecting security to the business. But the definition of risk is not the only obstacle to meaningful and productive conversations; there are also the problems of how to identify, measure, and prioritize actions to minimize risk.

“If you try to evaluate risk qualitatively, it’s often easy to see ways to reduce risk – but not so easy to see if it’s worth reducing,” Yeomans said. “And if you do it quantitatively, it’s too much like hard work.”
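
To see why the quantitative route feels like hard work, consider a minimal sketch in Python of the classic annualized loss expectancy (ALE) calculation. The model and every figure in it are our illustrative assumptions, not something Yeomans prescribes:

    # A minimal sketch of the classic annualized loss expectancy (ALE)
    # model. All figures are hypothetical illustrations, not estimates
    # drawn from the article.

    def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
        """ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)."""
        sle = asset_value * exposure_factor  # expected loss from one incident
        return sle * annual_rate

    # Hypothetical scenario: a $2M customer database, a breach exposing
    # 40% of its value, estimated to occur once every five years.
    ale = annualized_loss_expectancy(2_000_000, 0.40, 1 / 5)
    print(f"ALE: ${ale:,.0f}")  # -> ALE: $160,000

    # A $50k/year control "pays for itself" only if it cuts ALE by more
    # than $50k -- and every input above is itself a contested estimate.

Even this toy version demands three debatable estimates per risk, and a real register multiplies that across hundreds of asset-and-threat pairs.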

One distinguishing feature of some standards is that they exist to enhance security itself, like the SANS 20 Critical Controls with their focus on primary technical controls.

Still other widely adopted standards, such as those prescribed by the International Organization for Standardization (most notably ISO 27001), are designed to delineate best practices for security management systems. Yeomans points out that, hypothetically, an organization could have a quality security management system as described by ISO yet still be poor at actually implementing security controls as defined by the SANS 20 CSC.

“I liken this to the earlier ISO 9000 quality control standards. My example is a fast food shop making sawdust burgers,” Yeomans explains. “It could in principle get ISO 9000 certification for its processes, which guarantee that they make consistent quality sawdust burgers. It does not guarantee that they are of good quality or edible, just consistent.”

And that’s where the disconnect between an organization’s management system and its actual level of security becomes readily apparent.

“In the security world, I’ve seen systems where you have your risk register, with fully documented processes to ensure that everything has an audit trail of signatures – but still never require them to implement the changes that would actually improve security,” Yeomans pointed out.
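
To make that disconnect concrete, here is a hypothetical risk register entry sketched in Python; the structure and field names are invented for illustration and drawn from no particular standard:

    from dataclasses import dataclass, field

    # A hypothetical risk register entry: the audit trail is complete and
    # every process step is signed off, yet nothing obliges anyone to
    # implement the change that would actually reduce the risk.
    @dataclass
    class RiskRegisterEntry:
        risk: str
        owner: str
        signatures: list[str] = field(default_factory=list)  # full audit trail
        remediation_implemented: bool = False                 # ...never required

    entry = RiskRegisterEntry(
        risk="Unpatched internet-facing servers",
        owner="Infrastructure team",
        signatures=["risk analyst", "risk owner", "CISO delegate"],
    )

    process_compliant = len(entry.signatures) >= 3        # True: paperwork done
    actually_more_secure = entry.remediation_implemented  # False: risk untouched
    print(process_compliant, actually_more_secure)        # True False

The paperwork passes its audit while the risk itself goes untouched – exactly the failure mode Yeomans describes.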

Many standards also try to cover technologies in general, irrespective of how they are actually used. The end result is that we have server standards, Windows standards, desktop standards, crypto standards, etc. – all covering so many different application areas that they present far too many options to be understood, let alone applied.

Yeomans draws a comparison with an established industry, electrical installation, which has a reasonable number of distinct application areas: domestic wiring, such as for bathrooms and kitchens, and commercial applications, such as office buildings or factories. The standards specify detailed requirements for implementation in those particular areas.

“In a few cases discretionary decisions are required, where the requirements may be outside the framework, but this makes it easy to check for quality and safety of work,” Yeomans explained. “Now at a lower level of detail there are other standards like how thick is the wire, what material is it made from, what insulation is to be used, etc. So basically there are different standards for the manufacturers and for electrical installers.”

But that is not really the case in the IT security industry, Yeomans said.

“There are some low-level standards like the NIST 800 series that cover the details, but the ‘installer’ (system integrator) level is not covered particularly well. At best, there are some kits of parts available to create your own standard like ISO 27002 or SANS 20 CSC, both of which need careful interpretation,” Yeomans continued.

“Why not have a set of well-defined application areas, like the handling of personal info, of credit card info, of company secrets, or for public webservers, and then manufacturers can validate their products against those areas specific to the application of their products?”

So, for example, a vendor could get its products approved to carry PCI data, and prospective buyers could verify that approval and choose to purchase or reject the products based on fitness for their intended use.
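
As a rough illustration of how such a scheme could work, here is a hedged sketch in Python; the application areas, control names, and matching logic are entirely hypothetical, invented to illustrate the proposal rather than taken from any existing standard:

    # Hypothetical application-area profiles, in the spirit of Yeomans's
    # suggestion: each area lists the controls a product must be
    # validated against before it is fit for that use.
    PROFILES = {
        "pci-cardholder-data": {"encryption-at-rest", "encryption-in-transit",
                                "access-logging", "key-management"},
        "public-webserver": {"encryption-in-transit", "patch-attestation"},
    }

    def fit_for_area(validated_controls, area):
        """A buyer's check: does the product's validation cover the area's profile?"""
        missing = PROFILES[area] - validated_controls
        if missing:
            print(f"Not fit for '{area}'; missing: {sorted(missing)}")
            return False
        print(f"Fit for '{area}'")
        return True

    # A vendor's hypothetical validated control set:
    product = {"encryption-at-rest", "encryption-in-transit",
               "access-logging", "patch-attestation"}

    fit_for_area(product, "public-webserver")     # Fit for 'public-webserver'
    fit_for_area(product, "pci-cardholder-data")  # missing: ['key-management']

A buyer could then accept this product for a public webserver but reject it for cardholder data – precisely the kind of fitness-for-purpose decision Yeomans has in mind.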

Currently that’s not how it’s done, and the result may be that resources are being expended to mitigate risk where it is in reality low or non-existent, which is counterproductive for everyone in the chain. What do you think – is there a better way, as Yeomans suggests?

Feedback welcome…