According to a test I just took, I type at a speed of 94 words per minute. While typing the 92 words required for that test, I made 3 different mistakes. That’s a 3% error rate. Apparently the average error rate is about 8%.
As noted back in 2007 by a blog post on average typing speeds and error rates (fascinating to me at the time, at least):
“The implications of a 4-6% error-rate are enormous. If people are making that many errors, then good spellcheckers and auto correctors are essential. If one out of every 17-25 words is mistyped, then long command-lines seem like a very bad idea, because something like one out of every 20 commands would be in-error. Systems should be able to gracefully recover from bad input; because they will be inundated with it.”
Obviously, over the past eight years, this admonition has been heeded: I made several errors while typing this sentence, but each one was thoughtfully underlined in red by the software I use, enabling me to fix it almost immediately.
But what about the “long command-line” issue? And bad input to systems?
Apparently, we're still looking for answers to those problems, because there is still no equivalent safety net for bad input in the data center. And that's really what "bad input" ends up being called there: misconfiguration.
Oh, certainly editors of all kinds can catch spelling errors, but misconfigurations aren't simply typos. They're more along the lines of logic errors or errors of omission. Branching the wrong way on a policy can reverse its effect, letting the bad guys in and keeping the good guys out. Failing to include a line in a script that starts a process at its minimum required privilege level can open an opportunity for exploitation.
That's not to say that they can't be typos. Typing "801" as a port number when you meant "80" in a server or firewall configuration certainly is a misconfiguration, but it's not the kind of typo that a syntax highlighter or dictionary can catch, because maybe you really did mean to secure port 801.
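One way to catch that class of typo is to make the expected values explicit. The sketch below (hypothetical names and ports, not any particular product's configuration format) validates configured ports against an allowlist so a fat-fingered "801" gets flagged for human review instead of silently applied:

```python
# Ports this service is expected to listen on (an assumed allowlist).
ALLOWED_PORTS = {80, 443, 8080}

def validate_ports(config):
    """Return warnings for any configured port outside the allowlist."""
    warnings = []
    for name, port in config.items():
        if port not in ALLOWED_PORTS:
            warnings.append(f"{name}: port {port} is not in the allowlist")
    return warnings

# "801" was probably meant to be "80" -- the allowlist catches it.
print(validate_ports({"web": 80, "admin": 801}))
# ['admin: port 801 is not in the allowlist']
```

The point is not the specific check but the pattern: encoding intent somewhere the tooling can compare against, so a deviation becomes a question rather than a breach.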
But you didn’t, and now there’s been a breach and all heck is breaking loose. You wouldn’t be alone.
“Misconfigured servers have caused a spate of recent data breaches. In November 2013, approximately 2,000 Chicago Public Schools students’ personal information was exposed when a server was incorrectly configured; in January 2014, EasyDraft, which was processing payments for Bright Horizons Family Solutions, acknowledged that a misconfigured server had been exposing Bright Horizons customers’ names, bank routing numbers and bank account numbers since October 2012; and in May 2014, San Diego State University began notifying 1,050 people that a misconfigured server had exposed their names, Social Security numbers, birthdates and addresses.” – Misconfigured Server Causes Massive Data Breach at MBIA
IBM's 2014 Cyber Security Intelligence Index found that 95 percent of all security incidents involve human error, and Gartner predicts that by 2017, "75 percent of mobile security breaches will be the result of mobile application misconfiguration."
Misconfiguration is often the result of rushing through a task or failing to pay attention to the execution of repetitive tasks. Every task that involves configuring a device or system carries with it risk. The level of risk increases with the complexity of the task. Complexity might be measured in steps, or systems, or even a process. No matter how it’s measured, the more of it there is, the more risk there is. Eventually, a mistake is going to be made.
Consider a fairly typical data center with 500 servers, each running a hypervisor. Assume each hypervisor carries about 20 virtualized workloads, and each of those virtual machines requires, on average, five different network attributes that must be assigned (configured). That's 500 × 20 × 5 = 50,000 opportunities to make a mistake.
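The back-of-envelope arithmetic is simple enough to check:

```python
servers = 500            # physical hosts, each running a hypervisor
vms_per_server = 20      # virtualized workloads per hypervisor
attributes_per_vm = 5    # network attributes to configure per VM

opportunities = servers * vms_per_server * attributes_per_vm
print(opportunities)  # 50000
```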
And it only takes one to enable the breach that makes headlines.
Worse, this only counts servers and virtual workloads. It doesn’t count the total number of systems, devices, and services that have to be configured to deliver the more than 500 apps the average enterprise supports. And now we’re going to add mobile devices. And things. And the cloud.
The scale at which we are now enabled to make mistakes is staggering.
While we might not have auto-correcting, red-underlining technology to call out our mistakes, we do have tools at our disposal that can balance the need for speed against the requirement for security. DevOps and SDN promise to automate and orchestrate many of the manual tasks required to configure servers, systems, and apps. The focus of this automation is often speed, but it's also an opportunity to better secure the average of 25,000+ "things" connected to corporate networks. (Data Breach: The Cloud Multiplier Effect, Ponemon, June 2014)
DevOps brings to the table the notion of treating infrastructure as code. The use of repositories and "base" configurations enables speed and standardization that help optimize operations and improve overall stability.
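A side benefit of a version-controlled "base" configuration is that drift from it becomes detectable. A minimal sketch, assuming configurations can be represented as key-value pairs (the keys and values below are illustrative, not any real product's settings):

```python
# The "base" configuration lives in a repository and represents intent.
base = {"ntp": "pool.ntp.org", "ssh_root_login": "no", "port": 80}

# The configuration actually running on a server (note the typo'd port
# and the root-login change someone made by hand).
actual = {"ntp": "pool.ntp.org", "ssh_root_login": "yes", "port": 801}

# Report every setting where reality has drifted from intent,
# as (expected, actual) pairs.
drift = {k: (base[k], actual.get(k)) for k in base if actual.get(k) != base[k]}
print(drift)  # {'ssh_root_login': ('no', 'yes'), 'port': (80, 801)}
```

Run on a schedule, a check like this turns a silent misconfiguration into a reviewable diff.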
SDN (in some of its incarnations) offers a centralized point of control for policies governing everything from basic networking to access control and general security. Applying that same perspective to security means policies and configuration files should also be treated "as code": reviewed for accuracy and for compliance with corporate standards before they are deployed.
Likewise, treating every device and system on the network as "immutable" (even if it isn't) can force better security and eliminate mistakes. If corporate practices disallow changes to running configurations or policies and instead require that all such modifications be pushed through a security command-and-control process that includes review, it becomes much easier to catch logic errors and omissions.
Given the environment today and the growth forecasted in devices, systems, and things, security practitioners and leaders need to reevaluate how policies are put into practice and give serious consideration to embracing DevOps and SDN to reduce the mistakes that cause the breaches that make the headlines.
About the Author: Lori MacVittie is responsible for evangelism across F5’s entire portfolio including a broad set of network and application security solutions. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine with a focus on applications and security. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.