In the first and second parts of this series, we introduced the risks of the IoT / IoE world and addressed the mandatory security design considerations around the C-I-A triad; the concepts of “openness;” the secure system and SDLC; the 4 “A”s; as well as the term “non-repudiation.” To continue our overview, we will describe the important concepts of privacy, non-dual-use, defaults, and “human override” before we conclude this concept paper and make a few final suggestions.
Okay, this is a big one that might make or break the success of IoT / IoE.
It is key that any personal data, sensitive data, or biometrics be fully protected and tamper-proof (encrypted in transit, in use, and at rest). These systems (the data storage and handling CPUs) must not only be designed with security in mind but must also attain Common Criteria certification at EAL 6 (semi-formally verified design and tested) or EAL 7 (formally verified design and tested).
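Full encryption of stored data is best left to vetted, certified libraries and hardware, but the tamper-evidence half of this requirement can be sketched in a few lines. The following is a minimal illustration (the `seal` / `verify` names and the demo key are my own, not from any standard) of protecting a stored record with an HMAC tag so that any later modification is detectable:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 output length in bytes

def seal(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so later modification is detectable."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def verify(sealed: bytes, key: bytes) -> bytes:
    """Return the stored data only if its tag still matches; raise otherwise."""
    data, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    # compare_digest avoids timing side channels during the comparison
    if not hmac.compare_digest(tag, expected):
        raise ValueError("stored record was tampered with")
    return data

# Demo only: a real device would hold the key in a secure element, not in code.
key = b"\x00" * 32
sealed = seal(b"owner biometric template", key)
assert verify(sealed, key) == b"owner biometric template"
```

Note that this only detects tampering; confidentiality still requires encryption on top, which is exactly the part that the EAL evaluation process is meant to scrutinize.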
It’s clear that this requires time and a lot of money, but it should be deemed way too serious and risky to short-cut this important step. At the end of the day, IoE will affect everything, so we should make absolutely sure we know what we’re doing to help prevent security lapses from occurring in the future.
Regulation could also easily require vendors to put an EAL certification-level logo on any such IoT device, similar to the (up to) five-star safety rating shown on the window sticker of cars for sale in the US, so the buyer has an easy way to verify the security level before making a purchase decision.
The usage and leverage of these already existing standards and industry processes would support the further introduction of a truly secure and therefore privacy-enabled IoE.
NON-leverage principle / no dual-use
What do I mean by that? Well, unfortunately, many people in the past have taken short-cuts and misused unique credentials for their own benefit without recognizing the detrimental side-effects of such use.
For example: In the United States, Social Security Numbers (SSNs) are oftentimes misused as a unique identifier or authenticator of a person for credit applications and utility contracts (water, power, gas, phone, etc.). As a result, the frequent usage and storage (in numerous locations) of that sensitive data is the main root cause of identity theft in the US. Anyone who knows your SSN can open an account in your name. Worst of all, you will be held liable, and proving fraud and misuse is an exercise that you won’t want to go through.
So, any identifiers or unique IDs should be created separately from those common ones. And this goes both ways: those used for the IoT / IoE world must never be used elsewhere.
As already mentioned above with the EAL-verification process, testing is absolutely key.
I’d like to state that it’s important to use non-production (non-real, non-sensitive) data for testing. Far too often, the bad guys have found and acquired sensitive information not by hacking the (oftentimes better secured) production system but by hacking the 1:1 copy of the staging or test system that contained the same information. This needs to stop. You can create fictitious data that is as realistic as any truly sensitive data, and any data relationship can be modeled, as well.
Those arguing differently are only revealing bad coding practices or bad foreign-key designs in their database tables. And if nothing else helps, you can still scramble the entries in the data fields.
Another key consideration is the proper setting and handling of the so-called default values, default connectivity, default passwords, default accounts, and default anything.
If a user or car owner is not aware of, capable of, or willing to change the default values right when s/he buys the IoT device or car, then we will create issues all over the place, as we have for the last 40 years. So, we must walk the new owner of our IoT car through the process of setting the necessary values instead of relying on defaults. This process should not be skippable.
Is it really important to immediately enable all potential connectivity like Bluetooth, Wireless, Wired, USB, XYZ, you name it?
Enabling such connectivity can instead be done when the owner logs on with the super-user admin account, in a trusted, authenticated, logged (or tracked), and robust manner, rather than leaving everything wide open to potential risks just to make the first impression more user-friendly.
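A mandatory first-boot flow along these lines could look as follows. This is a hypothetical sketch (the configuration keys, password policy, and function name are my own assumptions, not a real device API): every interface ships disabled, and setup refuses the well-known default passwords instead of letting the owner skip the step.

```python
import secrets

# Everything off by default; each interface must be enabled explicitly,
# later, by the authenticated admin.
DEFAULT_CONFIG = {
    "bluetooth": False,
    "wifi": False,
    "usb_data": False,
    "remote_admin": False,
}

FORBIDDEN_PASSWORDS = {"admin", "password", "default", "12345678"}

def first_boot_setup(new_password: str) -> dict:
    """Walk the new owner through mandatory setup; refuse weak defaults."""
    if new_password.lower() in FORBIDDEN_PASSWORDS or len(new_password) < 12:
        raise ValueError("choose a strong, non-default password (12+ characters)")
    config = dict(DEFAULT_CONFIG)
    config["admin_password_set"] = True
    config["setup_id"] = secrets.token_hex(8)  # reference for the audit log
    return config
```

The design choice worth noting is that there is no code path that yields a usable device with the factory password still in place: the setup either completes with a strong credential or the device stays locked down.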
Last (but certainly not least), we should always create a human override option (synonym for the big red “STOP” button) for cases when life is at risk.
Always have an emergency exit option. With so much connected, decentralized artificial intelligence and autonomy, we should not forget that the human being can easily be overlooked, outpaced, or ignored. Since technology is there to support us humans (and not the other way around), and since we humans can still make better decisions in complex situations that affect life, we MUST ensure that this override preserves our control of the IoT.
This will also serve us well in the case that hackers make it too far: we can hit the stop button to prevent bad things from happening. Just make sure that button is absolutely tamper-proof (requiring physical presence).
As I stated recently at a CIO panel, it is important to foster more support for security throughout companies and organizations. Since many folks (including management and executives) have paid only lip service to security in the past, a new approach should be taken to create the right incentives for these management layers in all the various involved entities (especially the companies that create the above technologies).
Therefore, consider these key points:
- All leaders in a company should have a portion (an adjusted percentage rate depending on their job importance) of their annual bonus schemes allotted to the achievement of security goals. Remember, security is a leadership duty.
- As long as the NSA, GCHQ and others are sponsoring the dark market for zero-days, exploits, etc., we create the very problems we later want to solve. These agencies should find alternative ways to disrupt their adversaries.
- With “security” vendors releasing so-called proof-of-concepts (PoCs), which can easily be adapted into malicious code by the “dark force,” it is no wonder cyber security is still in its infancy. The security industry would be well advised to decide clearly which side it stands on.
Please apply the above principles and thoughts to all types of connected things, regardless of whether it’s a car (as used in the examples above), a machine, a steel mill, a manufacturing plant, a nuclear power station, an airplane controller, a SCADA system, or any other (Internet-)enabled device.
This will hopefully improve the current state of security, especially with the upcoming fourth wave of IoE.
About the Author: Michael Oberlaender has a broad, global, diversified background in various industries and markets, 27 years of IT including 17+ years of full-time security experience, and a strong focus on IT & security strategy. Michael is a globally recognized thought leader, book author (“C(I)SO – And Now What?”), and publisher; he has written numerous articles for security magazines and has also been a frequent speaker, panelist and moderator at security conferences. He holds a Master of Science in physics from the University of Heidelberg, Germany. He is a member of (ISC)², ISACA, ISSA, and InfraGard (FBI).
Michael currently serves as the Chief Information Security Officer of a large corporation operating across the US, Canada and the UK. His statements and opinions are his own and do not reflect those of any current or prior employer or customer.
To find out more about Michael´s book, “C(I)SO – And Now What?”, click here.
You can also follow Michael on Twitter here, and connect with him on LinkedIn here.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.