Previously, I proposed that security and economy are inextricably linked and that such a link has the potential to increase both national and personal prosperity. If you are a student of history, I do not believe you will have any difficulty accepting this hypothesis, particularly when you put aside any consideration of cultural and societal issues or constructs.

A sovereign entity can potentially achieve national prosperity through security and economy, but that construct may not be tenable over time. Therefore, how prosperity is achieved is where it gets tricky. Why? Because people see the world in different ways and want to live their lives differently.

Some Wounds Cannot Be Easily Healed

My hypothesis is relatively straightforward: all technological solutions to a cybersecurity problem that do not center on the human dimension or consider human decision-making are bound to fail. The hypothesis rests on this idea: human needs and wants, not technology, define interests, and many of those interests are in conflict or are even irreconcilable.

To illustrate what an irreconcilable position looks like, I point to a quote from Shimon Peres: “If a problem has no solution, it may not be a problem, but a fact, not to be solved, but to be coped with over time.”

I believe you can make a strong case that the “cyber problem” is in fact not a problem, but – as Shimon Peres says – a fact not to be solved but to be coped with over time. Why so?

When we take into account the human dimension of the cyber “problem,” we very quickly reach possibly irreconcilable positions. To negate the human dimension, people would actually need to buy into technologism, which asserts that technology is capable of shaping or improving human society.

Therefore, let us characterize the “cyber problem” the way Shimon Peres might have: the human-technology cyber conflict is not a problem to be solved, but a fact to be coped with over time.

It is for this exact reason that I have been saying for years: if your cybersecurity solutions eliminate or discount the human dimension, you may just as well be spending your time on ideas that are incredibly expensive, earn you strange looks, and could be making the situation worse.

So, we now have some indication that the cybersecurity problem is not a problem but a “cybersecurity fact” that we need to deal with. In other words, it is a management challenge that is shaped by interests and decided by humans.

Overlaying a Powerful Dimension

The interests of nations are not the only things decided by humans, as there is one more powerful group of entities that touches every issue of the cybersecurity fact we face: multinational corporations. They, too, have interests, and those interests do not necessarily line up with either of the systems already discussed.

Similarly, it would be foolish not to consider them as power brokers when 69 of the top 100 “economies” in the world are corporations, not nations (September 2016 figures).

In pursuit of their own interests, corporations leverage technology. For example, in order to provide a “consumer-friendly experience,” a bank might offer a mobile app to its customers and then, as a “matter of convenience,” start closing brick-and-mortar branches in favor of a fully mobile banking experience. A by-product of this disruptive behavior is the untold number of cybersecurity vulnerabilities it could create.

If that was not tricky enough already, let’s add this dimension to the “cybersecurity fact” that we need to manage: who owns the responsibility for keeping networks safe and secure, particularly as more and more corporations leverage technology for their own interests?

Allow me to elaborate with an example: if Bank X makes a massive shift to electronic and mobile banking and then begins to see an increase in cyberattacks from foreign attackers, does the nation in which Bank X is headquartered have any responsibility to protect Bank X?

Perhaps the issue is widespread across the entire industry. In that case, the threat is no longer isolated to Bank X or even the entire industry; it could potentially expand and become a national security interest.

Does that mean the state should intervene? And if so, who picks up the cost? Should the state regulate, fine, or tax corporations in this case to fund its intervention?

Difficult to Define Interests when “Definitions” are not Clearly Defined

Authorities and international bodies, such as INTERPOL, try to make distinctions but given the nature of this “cybersecurity fact” we are trying to manage, there is just too much blurring. And when interpretation is allowed to enter into the equation, rational thought can sometimes take a back seat.

This is where multilateralism gets tricky. Whose interests are really being served?

Going back to the article where I said today’s cybersecurity problems started in 1648, the Westphalian system established a set of norms that gave the world a “common playbook” on how to interpret certain constructs, such as borders and sovereign and legal boundaries. Effectively, the Westphalian system preserved the notion of sovereignty, something that multilateralism does not necessarily do.

And because the human dimension is always affected by emotion (former FBI lead negotiator Chris Voss explains as much in his book), we cannot assume that interests will be rational or serve some end that does not put self-interest above all others.

As I indicated earlier, this is why wide-reaching interpretations of multilateral agreements are rarely effective in practice. We place so much emphasis on “agreeing” on something that the result is often a watered-down document that is unenforceable, meaningless, or open to interpretation.

In the next article, I will open with an example I use often – a part of the Budapest Convention (also known as the Convention on Cybercrime) – that can reinforce my hypothesis: all technological solutions to a cybersecurity problem that do not center on the human dimension or consider human decision-making are bound to fail.


About the Author: George Platsis has worked in the United States, Canada, Asia, and Europe as a consultant and an educator and is a current member of the SDI Cyber Team. For over 15 years, he has worked with the private, public, and non-profit sectors to address their strategic, operational, and training needs in the fields of business development, risk/crisis management, and cultural relations. His current professional efforts focus on human factor vulnerabilities related to cybersecurity, information security, and data security by separating the network and information risk areas.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.