
Last month, Michal Nemcok blogged about the lack of security in the Progressive Insurance diagnostic monitoring dongle. By hacking the monitoring device, someone may be able to gain access to the car itself and change its behavior.

Now, this is serious stuff – vulnerabilities that might impact the operation of the thing that carries your body around town at 65 miles per hour every day. Your first reaction might well be one of indignation. How irresponsible can that R&D organization be to put us at such great risk?

If only it were so simple.

First of all, let’s establish that the value placed on security is a business decision. It’s an ongoing investment, and an expensive one at that. Companies constantly weigh the risk posed by potential security issues against the cost of protecting against them, just as they weigh other potential investments.

Which is not to let R&D off the hook – they absolutely have a responsibility to be informed and to make risks visible. But if a business decides not to concern itself with the risks that a lack of security poses, no R&D organization is going to be able to improve that situation – it’s too expensive and time-consuming to just work it into the process.

So, then, is the dongle that can compromise my car the fault of the manufacturer’s executives?

That’s probably closer to the truth, but before you get out the pitchforks, put yourself in the shoes of those executives. Say you have had the great idea to produce a device that monitors driving habits and have started up a little company around that idea. You, of course, have limited funds and are hoping to make a sale to a big insurance company, so that you can keep your startup afloat and maybe even grow it.

What do you do when your engineering team tells you that your dongle isn’t locked down and that they need a few months to improve the situation – and you’re not sure you can pay their salaries for those months without that sale? Are you going to make a potentially company-killing decision to secure the device, or are you going to go after the sale and plan to deal with the security issue down the road? After all, you don’t introduce a security problem at all if your company fails to survive and your product never sees the market.

The answer is that most people are probably going to choose to keep their business afloat because, well, their families need to eat. And because that dynamic likely exists for every new software and hardware company coming online today, we probably need to start thinking about the “public health” issues that the dynamic represents and what we can do about it.

How do we avoid a situation in which the new product innovations that we need consistently introduce vulnerabilities into our world? Certainly, we can raise the bar on information security education for our new engineers. That will help, but it won’t establish funding for the tools and time needed to really secure new products.

Do we demand information security standards on products the same way we have come to expect automobile safety standards? Are we prepared for the consequences, in terms of slower and costlier delivery of innovation? If we as consumers demand a safety requirement that makes delivery of that first product 30 percent more expensive, how many more startups will fail to deliver at all?

I wish I had an answer, but I don’t. As we hear more stories like this, though, we need to put away our indignation and think instead about how to solve this very real problem.

How can we, the people who need new and innovative products, encourage these very difficult decisions to be made in a way that protects us from new security issues?

The Executive’s Guide to the Top 20 Critical Security Controls

Tripwire has compiled an e-book, titled The Executive’s Guide to the Top 20 Critical Security Controls: Key Takeaways and Improvement Opportunities, which is available for download [registration form required].

Title image courtesy of ShutterStock