The psychology of scams: how cybercriminals are exploiting the human brain

Last year, more than £11.4 billion was stolen from people in the UK by cybercriminals. As technology becomes more sophisticated, so do the methods cybercriminals use to commit their crimes. Our ever-growing reliance on technology in day-to-day life constantly exposes new vulnerabilities for cybercriminals to exploit, while at the same time AI has lowered the skill barrier, making it easier for even unsophisticated criminals to launch advanced attacks.

But it’s not just weaknesses in our technology that can put us at risk of being scammed. In a world where AI tools can clone voices in minutes to generate convincing deepfakes, create fake websites or write thousands of seemingly legitimate reviews in an instant, social engineering tactics are evolving at a terrifying rate, putting even the most cautious individuals and businesses at risk. 

Scammers’ psychological playbook 

In our busy lives, we rely on implicit trust in the systems, people and brands that surround us to oil the wheels of society. As we adopt AI systems, we're reinforcing those patterns further. Rushing on the daily commute or under pressure in a stressful workplace, we often go with the quickest choice rather than the safest one: we might not double-check a sender's email address or spot a bogus link, relying on that implicit trust to make decisions fast.

When we see a trusted, well-known brand or business, we automatically deem it safe because it appears legitimate and familiar. Scammers capitalize on the implicit trust we place in our day-to-day technology systems, and they exploit attentional bias: a cognitive bias that makes it harder to spot non-obvious threats when we are under stress and trying to do several things at once, which has become the norm for our working lives.

This means that for a threat to cut through the noise and cognitive stress of day-to-day work, it has to be genuinely attention-grabbing. Threats designed to imitate or impersonate our well-known systems don't raise that alarm, so they pass as safe precisely because they appear legitimate and familiar. Scammers tap into this cognitive bias to carry out their attacks, knowing it means people are less likely to question potential scams or threats. They also use impersonation, urgency and fear to manipulate victims into trusting them or acting quickly without verification.

This technique, known as social engineering, is the deliberate manipulation of people into giving away confidential information or performing actions that compromise security. It’s most commonly seen in personalized scams. By tapping into these cognitive shortcuts, scammers increase the chances of their attacks succeeding because when something feels familiar, we’re far less likely to question it.

Employees under pressure

Employees in the workplace can be particularly vulnerable to this kind of psychological scam. While companies often invest significant resources in cybersecurity systems to keep their infrastructure and revenue safe, the human risk their teams pose too often receives far less investment. In the midst of a hectic workday, an employee facing decision fatigue might approve a suspicious transaction without proper verification, or fail to question an email that appears to be from a senior colleague asking them to click a link or make an urgent bank transfer.

This is not simply a case of 'users are the problem'. Even with rigorous awareness training, overloaded employees will still face this issue. Under the fast-paced demands of modern business, especially when workloads are heavy and tasks pile up, our decision-making becomes measurably impaired, and it gets worse as the day goes on.

For this reason, research tells us that we make worse decisions at 6pm than we do at 10am. However rigorous the awareness training, high-stress, high-workload fields will always suffer the effects of decision fatigue, making staff more likely to be exploited in this kind of social engineering attack. Busy employees can easily overlook red flags, with potentially huge and damaging consequences for their organization.

AI generates highly convincing personalized messages that mirror the tone and style of a company or individual, allowing hackers to craft the perfect phishing email, one that often bypasses traditional email filters. Over 30.4 million phishing emails were detected across Darktrace's customer fleet between December 2023 and December 2024, and 70% of them successfully passed the widely used DMARC authentication check. With attack volumes continuously increasing, and AI-powered threats growing ever more sophisticated, human teams need support and augmentation if they are to defend themselves.
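
For context, DMARC works by letting a domain owner publish a policy in DNS, as a TXT record at _dmarc.yourdomain, telling receiving mail servers what to do with messages that fail SPF or DKIM checks. As a minimal sketch of how to inspect what a domain actually publishes (assuming Python with the third-party dnspython package installed; the domain queried is purely illustrative):

import dns.resolver

def get_dmarc_policy(domain):
    """Return the published DMARC TXT record for a domain, or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published at all
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.lower().startswith("v=dmarc1"):
            return text
    return None

policy = get_dmarc_policy("example.com")
print(policy or "No DMARC record published")

A weak policy such as p=none, or no record at all, means spoofed mail claiming to be from that domain faces little resistance, which helps explain why so much phishing passes or sidesteps authentication.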

How to protect your organization

The business impact of cybercrime goes further than financial losses and can lead to reputational damage, undoing trust that often takes years to build up. But there are steps you can take to make sure your organization isn't the next victim. Education and enhancing digital literacy are key to protecting employees and organizations from the fast-evolving ways cybercriminals operate.

This includes comprehensive employee training programs focused on recognizing and responding to social engineering attempts. Additionally, organizations should implement robust controls and guardrails around their employees, including multifactor authentication and domain-based message authentication on email. Day to day, that means making sure employees don't skip the simple steps: verifying senders, double-checking URLs, and always keeping a proactive mindset and a healthy dose of skepticism.
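
To make "double-checking URLs" concrete, here is a minimal illustrative Python sketch, using only the standard library, that accepts a link only if its host exactly matches a company allowlist; every domain named in it is hypothetical:

from urllib.parse import urlparse

# Hosts the (hypothetical) company has approved for internal links.
ALLOWED_HOSTS = {"intranet.example.com", "mail.example.com"}

def is_trusted_link(url):
    """Accept a link only if its host exactly matches the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    # Exact matching catches lookalikes such as "examp1e.com" and
    # suffix tricks such as "mail.example.com.evil.net".
    return host in ALLOWED_HOSTS

print(is_trusted_link("https://mail.example.com/login"))       # True
print(is_trusted_link("https://mail.example.com.evil.net/x"))  # False

The same exact-match discipline applies when a human reads a link: a lookalike domain only works because we skim for familiar fragments rather than checking the whole host.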

Equally, if not more, important is making sure cybersecurity measures are up to scratch and working in tandem with employees. With cybercriminals employing AI to advance their crimes, our defenses must do the same. It's inevitable that humans won't spot or prevent all malicious activity, so it's critical that cybersecurity systems adequately plug the gaps.

Security leaders should leverage AI to stay on the front foot, using advanced technology to identify threats that may appear harmless in other environments and evade traditional security tools. AI-driven cybersecurity systems that learn the behaviors and traits of an organization are an essential piece of the defense puzzle for businesses today.
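
As a toy illustration of that "learn the baseline, flag the outliers" approach, and emphatically not any vendor's actual method, an off-the-shelf unsupervised model such as scikit-learn's IsolationForest can be fitted to features of routine activity and asked to score new events; the features here are invented for the example:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" activity for one organization:
# column 0 = login hour of day, column 1 = MB uploaded per session.
normal_activity = np.column_stack([
    rng.normal(11.0, 2.0, 500),
    rng.normal(5.0, 1.5, 500),
])

# Learn the organization's baseline rather than known attack signatures.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A 3am login uploading 80 MB sits far outside the learned baseline.
suspicious_event = np.array([[3.0, 80.0]])
print(model.predict(suspicious_event))  # [-1] means flagged as anomalous

The design choice matters: because the model learns what is normal for this organization rather than matching known attack signatures, a novel threat can still stand out simply by being unusual.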

A smarter defense

As AI develops, cybercrime will only become more sophisticated, more affordable and more scalable. We've already seen the impact of the likes of ransomware-as-a-service crime groups, as well as wider social engineering methods, and these are only set to grow. Educating teams now to be more alert and digitally aware, while also investing in the likes of AI as a defense tool, is critical to staying secure in the complex cyber threat landscape we face today. The best defense is a strong partnership between human awareness and AI-enabled security.

We’ve compiled a list of the best firewall software.

This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
