By Kai Roer, Praxis Security Labs
If you’re determined to be a criminal, then cybercrime is an understandable choice. AI- and GPT-based tools are easily available at low cost and are increasingly accessible to non-technical criminals. Rates of detection and criminal conviction are inadequate, and the low ‘costs of entry’ can make it a highly profitable endeavor.
But when cyber-defenders have access to equally sophisticated technology, why are rates of cybercrime as high as they are, and still accelerating with no sign of a slowdown? The answer is that one vital component of every organization’s cyber defenses cannot evolve as quickly as the technology used by attackers: the human brain. Or, more accurately, human behavior. Human behavior is, by far, the weakest link in your cybersecurity strategy: currently, 95% of all cybersecurity issues can be traced to what the WEF calls “human error”. I suggest that this terminology shows the authors have missed the point; humans do as humans will, and I prefer non-judgmental, non-blaming terms such as “human factors” or “human involvement”.
Whatever we call it, AI has accelerated the growth and the complexity of attacks, producing phishing attacks and scams that are harder for their victims to detect. Such attacks are designed to manipulate instincts and reactions developed over tens of thousands of years of human evolution, to trigger a specific action or reaction that gives the criminal their route in.
Are you lying to me?
Such social engineering-style attacks rely on the victim believing what they are seeing and what they are being told. Consciously (or with subconscious competence), we weigh a wealth of attributes in deciding whether to believe something. Among them: who is telling me this? Am I currently expecting something at least approximating it? Is it “normal”? Is it “reasonable”? Does it contain enough verifiable information for me to accept the unverified elements?
However, the vast majority of human lie detection relies on other, less conscious methods, which often relate to the messenger rather than the message itself. We assess intonation in speech, the emotional state of the speaker, hesitation, consistency between what they’re saying and how they’re trying to portray themselves, or too much or too little eye contact. Any one of these indicators (and many others) may flag that what you’re being told is suspicious, and that you should be commensurately cautious with both the message and its messenger.
Together, these two facts illustrate the scale of our problem: many of the instincts that would subconsciously warn us when the messenger is suspect are entirely circumvented in digital environments, while AI tools help ensure that the content of the messages themselves is plausible enough to pass the standard sniff tests.
But even this is not the biggest issue facing cybersecurity. The bigger issue is that we don’t recognize the extent to which we’ve lost our social threat defenses: our human brains have little understanding of how to navigate digital environments safely, precisely at a time when plenty of bad actors are trying to take advantage of that weakness.
Are lies part of your standard office day?
Let’s assume we know each other. A person whom you believe to be me approaches you in a completely dark room, hands you an object and tells you to eat it. Clearly you have no idea what this item is; even though you know me, you will almost certainly question what I have given you. In the physical world, being deprived of visual cues is typically enough to make you demand more evidence before accepting my information as truth.
Now imagine you are looking at your email inbox. A person whom you believe to be me – a person known to you and whom you trust – sends you an email with an attachment I’ve indicated you should download, even though you have no idea what it is. Even if your suspicions are raised, it’s unlikely that your response will match the heightened skepticism you feel in the first scenario.
Yes, the first scenario is pure nightmare material, right? But the second… well, that’s a standard day at the office for most of us. Research has shown that we are chronically and unjustifiably optimistic in cybersecurity scenarios; our brains fail to recognize the risks involved in the digital space and, often, opening attachments is simply your job. Criminals exploit this fact when they trick you.
In theory, one aspect can be remedied fairly easily: everyone needs to start treating the digital world with the same caution and skepticism we instinctively apply in physical environments. If everyone in an organization understands that the principles of trust, caution and verification apply just as vitally online as they do offline, that organization’s defenses against digital deception and social engineering will be significantly bolstered. In practice, however, our brains did not evolve to function in this digital, abstract space, which renders this theoretical remedy wishful thinking.
Staying ahead of the game: helping employees change their mindset
Ah, you say: Security Awareness Training (SAT). Actually, no. As analysts such as Forrester have recognized, SAT is no longer a sufficient strategy. We are now in the era of Human Risk Management (HRM), an approach that goes beyond traditional training methods to facilitate a mindset change among employees towards cybersecurity.
Here’s how you can get started.
Engage employees in tabletop exercises, cyber lunches or other activities focused on social interaction, or in exercises that simulate the loss of different senses. These scenarios should challenge employees to navigate various situations without relying on certain senses – mirroring the limitations we often face in digital communications. Like phishing simulations (a minimal sketch of one follows this list), real-life simulations are a powerful tool for helping people understand complicated or abstract concepts.
Run your scenarios as small-team discussions: organizing employees into small teams encourages more active participation and discussion, and will positively influence your security culture.
Don’t omit the debrief: after each exercise, conduct a properly facilitated debriefing session where teams can discuss their strategies and decisions. Consider nominating a facilitator to guide and encourage the discussion.
Solicit feedback after each exercise to help you gauge its success and learn how your employees feel about security and their own role in these interventions.
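If you want to pair the real-life exercises with a lightweight phishing simulation, the sketch below shows the basic mechanics. It is a minimal illustration only, built on Python’s standard library; the SMTP relay, sender address, recipients and landing-page URL are all hypothetical placeholders, and the per-recipient token is simply one way to match later clicks on the landing page back to individuals.

```python
# Minimal phishing-simulation sketch (Python standard library only).
# All hosts, addresses and URLs below are hypothetical placeholders.
import smtplib
import uuid
from email.message import EmailMessage

SMTP_RELAY = "smtp.example.internal"                   # hypothetical internal relay
LANDING_PAGE = "https://training.example.com/landing"  # benign awareness page
SENDER = "it-support@example.com"                      # plausible lure sender

recipients = ["alice@example.com", "bob@example.com"]  # pilot group

def build_simulation(recipient: str) -> tuple[EmailMessage, str]:
    """Create one simulated phishing email with a unique tracking token."""
    token = uuid.uuid4().hex  # lets you match landing-page clicks to recipients
    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expiry"
    msg.set_content(
        "Your password expires today. Verify your account here:\n"
        f"{LANDING_PAGE}?t={token}\n"
    )
    return msg, token

def run_campaign() -> dict[str, str]:
    """Send the simulation and return the token-to-recipient map."""
    tokens = {}
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        for recipient in recipients:
            msg, token = build_simulation(recipient)
            smtp.send_message(msg)
            tokens[token] = recipient
    return tokens

if __name__ == "__main__":
    for token, who in run_campaign().items():
        print(f"sent to {who}, tracking token {token}")
```

Whatever tooling you use, treat the click data as input to the debrief, not as grounds for blame: it tells you where the environment needs to change, not who to punish.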
As this list suggests, the good news is that getting started with HRM is very accessible, even for SMEs, which often bear the brunt of cyber attacks yet are frequently less well resourced to defend against them.
Changing human behavior often entails resisting instincts honed over thousands of years of evolution. Instead of pouring resources into training, assessments and similar actions that have been shown to deliver little protection against social engineering, consider changing and improving the environment and processes your team must operate within. For example, implement technology that makes it easy for employees to do the right thing; a small sketch of one such control follows below.
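As one concrete illustration of such a control, consider tagging inbound external email so that the social cues we lose online are replaced with an explicit visual one. The sketch below is a minimal, hypothetical example: the internal domain and banner text are assumptions, the hook into your mail flow is left to you, and most commercial mail gateways offer this same feature as configuration rather than code.

```python
# Minimal sketch of an "easy to do the right thing" control: prefix the
# subject of any message arriving from outside the organization, so the
# missing social cues are replaced by an explicit visual warning.
# INTERNAL_DOMAIN and BANNER are hypothetical placeholders.
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"   # hypothetical: your own domain
BANNER = "[EXTERNAL] "            # prefix employees learn to look for

def tag_external(msg: EmailMessage) -> EmailMessage:
    """Prefix the subject of any message sent from outside INTERNAL_DOMAIN."""
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rpartition("@")[2].lower()
    subject = msg.get("Subject", "")
    if domain != INTERNAL_DOMAIN and not subject.startswith(BANNER):
        del msg["Subject"]        # EmailMessage rejects duplicate Subject headers
        msg["Subject"] = BANNER + subject
    return msg

if __name__ == "__main__":
    demo = EmailMessage()
    demo["From"] = "someone@attacker.test"
    demo["Subject"] = "Invoice attached"
    print(tag_external(demo)["Subject"])   # -> "[EXTERNAL] Invoice attached"
```

The design point is that the control does the remembering for the employee: instead of asking people to summon skepticism on demand, the environment supplies the cue at exactly the moment it is needed.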
Ultimately, you may also want to consider external advice and solutions that will help you and your organization stay ahead of the bad guys.