
On any journey we take through life, occasions will arise when we reach a juncture and recognise that somewhere far back we took a wrong turn, which has brought us to a less than ideal place – a position I believe we find ourselves in today with regard to mitigating cyber crime and its associated threats.

So, first of all, let’s take a deep breath and apply a little ‘transparent honesty’ to help us understand the situation some may find themselves locked into. The first revelation this brings is that within the past decade, we have seen a steady increase in successful security compromises, attacks and system invasions, amounting to billions in illicitly generated revenue being directed into the pockets of criminals.

Alongside this criminal success story, we may observe the impact suffered en masse of unauthorised access to billions of sensitive accounts and personal details – a situation which in 2015 already looks to be getting worse, with more named organisations being added to the list of the compromised!

Taking another transparent view, more facts appear in the guise of anonymised research which interviewed a sample of 100 security professionals, some of whom freely admitted that within their own organisations successful cyber-attacks and compromises could remain undetected and hidden for several months, if not years, before discovery. Indeed, one related research project observed that, on average, attackers were present within a client’s secured environment for 229 days before discovery.

That said, in my own real-life experience, I have observed ineffective security operations staff oversee incidents involving compromises by unknown actors for any number of days, weeks or even months, only to conclude that the unauthorised incursions could not be quantified well enough to confirm what had actually been accessed or compromised! But the most outrageous example I have seen was a utility company that seemed to operate a ‘OoOi’ (one-out-one-in) policy: as one unauthorised compromise was detected and removed, another was detected – and this is a big-name UK Plc!

Having now taken these views through the lens of ‘honesty,’ we may soon realise what the aggregated cyber threats of 2015 really represent in the bigger picture. However, the real world offers one more illustration. Consider the polished brand of a big, slick company based in the City of London. When it comes to worst-case scenarios, this organisation really does take the biscuit.

In this case, not only did security seem to be an optional extra, the situation got much worse, with the company’s ineffective IT directorship overseeing high levels of privileged access being bestowed upon multiple global end-points, whose users were granted the autonomy to apply critical changes to live, operationally critical assets as and when they felt the need existed (and all in the pursuit of doing the right thing).

And as if this were not enough, notwithstanding that these globally separated individuals were allowed to play with critical systems, they had not received adequate training on the platforms they were interfering with – the outcome of which was a significant event, manifesting in significant down-time and a significant impact on production and, of course, finances. However, as this proved to be a learning experience, the company did at least engage an organisation to run security and penetration testing against the deployment, the output of which concluded that while this was by no means the worst the testing team had seen, it came a very close second and required some urgent attention to bestow the basics of security upon their assets. The real point here is that this was an operational deployment which was supposed to be compliant with PCI DSS, ISO/IEC 27001 and a set of policies that had been agreed by their internal audit team.

One area of interest in this conversation may also be the implicit trust we can, and do, place in technology. We have deployed IDS/IPS, so there is no need to check it, because we know that it is working. There is absolutely no need to check logs – as I said, we have IDS/IPS, in which we trust. Of course, we don’t need to secure any internal assets, as the environment is protected by a firewall – in which we trust – so there is no need to check its logs either. And after all, we only have a few thousand employees and contractors connected to the network, and they have all signed the acceptable use policy (AUP), so no issues are envisaged here. All in all, we are running a very tight ship, right?
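As a minimal sketch of the opposite stance – trust, but verify – the check below treats a security tool’s own silence as a reportable event. The log paths and the one-hour threshold are purely illustrative assumptions, not references to any particular product:

```python
import datetime
import pathlib

# Hypothetical log locations -- adjust for your own IDS/firewall deployment.
WATCHED_LOGS = {
    "ids": pathlib.Path("/var/log/suricata/eve.json"),
    "firewall": pathlib.Path("/var/log/firewall.log"),
}

# Illustrative threshold: a busy sensor that writes nothing for an hour is suspect.
MAX_SILENCE = datetime.timedelta(hours=1)

def stale_logs(logs, now=None):
    """Return the names of logs that are missing or have not been written recently.

    A 'trusted' security tool that has gone silent may simply be broken;
    a log that has stopped growing is itself an event worth investigating.
    """
    now = now or datetime.datetime.now()
    stale = []
    for name, path in logs.items():
        if not path.exists():
            stale.append(name)  # missing log: the tool may not be running at all
            continue
        last_write = datetime.datetime.fromtimestamp(path.stat().st_mtime)
        if now - last_write > MAX_SILENCE:
            stale.append(name)  # tool is installed but apparently silent
    return stale
```

A check like this would run on a schedule, so that “we have IDS/IPS” becomes a claim the team re-verifies every hour rather than an article of faith.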

Possibly one of the key attributes inadvertently allowing adverse security events to occur is the implicit trust we have granted to the automation of security. And maybe it is this implicit trust that is “dumbing down” our reactions, gut feel and skill, detracting from some security professionals applying the prerequisite level of pragmatic knowledge and imagination to judge the state of security, and to question what the machines are doing, saying and thinking for us.

Here we may also look to some very old research from the early 1900s, when Yerkes & Dodson conducted an experiment on ‘dancing mice,’ observing their reactions under unstressed and stressful conditions. The resulting law, when applied to Homo sapiens, suggests that those who are not subject to any form of stimulation or expectation (say, because of reliance on automation) lose the ability to perform to satisfactory levels, whilst those placed in overstressed conditions demonstrate lower levels of effectiveness driven by stress. Now apply that thinking to an untrained, unsupported or even unqualified security operative responding to the stressed situation of a security incident, and it may become a little clearer why some such events do not always run to plan.

It is my belief that if we are to make progress in mitigating and denting the current onslaught of successful cybercrime invasions, we must start to apply what I call the ‘60/40’ Security Rule, with 60% representing the elements of training, human imagination and thinking outside the box, and the remaining 40% being the actual technology and tools we need to apply to defend our perimeters and information assets.

It may be time to face up to the fact that the current in-the-box thinking taken by some organisations, focusing on the known knowns, may no longer be enough to defend them from cyber-attack. The time has thus arrived to look top-down, bottom-up and sideways-on to understand the openness of the business to the prospect of security compromise, cyber intelligence-gathering exercises or data leakage, in order to understand how to mitigate the ever-present threat of what we now commonly refer to as the advanced persistent threat (APT).

We need to think as our attackers do, and train our operational teams to look for the unusual, check everything and trust nothing automated – and when they see that unusual log entry, not to dismiss it, but to ask the question: why?
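A team hunting for “the unusual” might start with something as simple as frequency analysis of log lines. The sketch below is illustrative Python (not any particular product’s API): it masks variable fields such as IPs and numbers so that routine entries collapse into a handful of common patterns, then surfaces the rare ones for a human to question:

```python
from collections import Counter
import re

def rare_entries(log_lines, threshold=0.01):
    """Flag log lines whose normalised pattern is unusually rare.

    IPs, hex strings and numbers are masked so that routine entries collapse
    into common templates; any line whose template accounts for less than
    `threshold` of the log stands out for a human analyst to ask 'why?'.
    """
    def normalise(line):
        line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<ip>", line)   # mask IPv4 addresses
        line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)       # mask hex values
        line = re.sub(r"\d+", "<n>", line)                    # mask remaining numbers
        return line

    patterns = [normalise(line) for line in log_lines]
    counts = Counter(patterns)
    total = len(patterns)
    return [line for line, pattern in zip(log_lines, patterns)
            if counts[pattern] / total < threshold]
```

This is deliberately crude – it knows nothing about attacks – but that is the point: it does not decide anything for the analyst, it merely narrows thousands of lines down to the few that deserve the question “why?”.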

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc. If you are interested in contributing to The State of Security, contact us here.