As a Professional Services Consultant, I have the pleasure of traveling all around the globe meeting clients and talking to a wide variety of IT security professionals who form the front line of defense against malware.

One of my favorite topics is how people got their start in their careers in IT, but when I start discussing my own early years and touch upon my university studies, I’m often surprised by the number of people who do a double take when I share my chosen subject.

Applied Psychology and Computing may seem like an unusual combination of topics to many until I explain just how much the two can take from each other. Whether we are using psychology to inform the design of an artificial intelligence’s strategies for learning or using psychological techniques to better understand how human-computer interactions can be enhanced, there are a lot of practical benefits to applying one field of science’s findings to the other.

Unfortunately, this is a trick not lost on malware designers, who are increasingly setting clever traps that leverage the psychology of end users and system administrators alike. Manipulation is an ancient art, but psychological studies are constantly revealing new facets to the science of “nudging” people into particular behaviors, and malware makers are more than happy to deploy these techniques.

In recent malware and spyware, we’ve seen timers that encourage users to act quickly without fully assessing the implications of their decisions, careful wording intended to evoke emotional responses and encourage “mistakes” that help viruses take over networks, and malware leveraging users’ habits to infect systems or spread further.

When we’re up against such threats, it’s easy to blame users, but never before has there been such an easy way to attack individuals with such strategies and in such large numbers. The problem is that when we are tricked, we (as individuals) often struggle to learn from the experience. Consider that even today, despite wide knowledge of pyramid schemes, far too many people are exploited by similar strategies in the real world.

So whilst it’s easy to lay the blame on users, the reality is that, just like software bugs, human errors are likely impossible to be eliminated completely – and old tricks may work just as well as the newer ones. Whilst user behavior is an element that malware makers can exploit, encouraging good user behavior alone cannot be a primary defense against these issues.

So what can we do when there’s a very real risk that a threat can’t be stopped from reaching your network because of an end user’s actions? For me, a big part of the equation is ensuring that every one of your pillars of prevention, detection and response is robust.

In this particular case, when I refer to prevention, of course, I’m talking about preventing further infection – making sure you’re managing your server infrastructure against attacks from your client network and keeping on top of your vulnerability management program. And when we’re talking about detection, this too has to go beyond just the initial stage of infection and far beyond user-based alerting or the hope that antivirus software will detect everything (because nothing is 100% effective). Instead, you need to make sure that you know what’s actually being tampered with by malicious agents on your network.

And finally, your response must go beyond the hope that users “learn their lesson” and that automated tools alone will resolve the issue. There needs to be an understanding about the real impact of an infection and a review process of the “how” that led to the situation in the first place.

Even with all of that in place, as detection and prevention technology continues to improve, so too will the trickery that malware makers can achieve. Given the improvements made in recent years in User Experience (UX) design across the software market, it’s likely only a matter of time before we start seeing more ransomware that uses new and unique concepts to encourage interaction.

I would not be surprised to see more malware that emulates popular software features like two-factor authentication onboarding workflows (to capture telephone numbers) or social media experiences. All that means you’ll need to keep educating your users continuously over the years to come.

The good news is that technology has opened up new avenues for educating users, with videos, email newsletters and social media making it easier than ever to reach out across the organization and help everyone understand the latest tricks being used against them. And psychology is helping us to better understand what makes information in these formats “stick,” so training the business becomes more effective and efficient.

Going back to my university days, in my final year I spent some time developing and studying the end-user effectiveness of alternative authentication methods (many of which seemed unusual at the time but are no longer uncommon – picture passwords, for example, were added to Microsoft Windows some years later) and found that there were some real benefits to taking a “human first” approach to solving IT security challenges.

Now, when we consider that the bad guys have come to an understanding about this in the past few years, perhaps those of us on the other side need to start learning more about this, too.