AI Advances Elevate Threat Levels

Written by Michael McKinnon, CIO, Tesserent.

Recent advances in artificial intelligence (AI) have given cybercriminals new tools that raise the likelihood of successful cyber attacks, enabling them to create increasingly sophisticated and harder-to-detect social engineering attacks. Governments and businesses need to be aware of these risks and must take steps now to mitigate them.

Global socioeconomic disparities have pushed some people towards Internet scams and con artistry as a way out of poverty. In disadvantaged countries with extreme levels of unemployment, easy access to technology and Internet connectivity have created an environment where new cybercriminals can emerge. The result is a perfect storm: motivated, disenfranchised people using technology and increasingly sophisticated techniques to exploit victims globally for their livelihood.

There are also highly organised crime groups and nation-state actors with the time and resources to invest in advanced technologies, including AI. These bad actors are far more dangerous and their attacks can be harder to predict. AI-based social engineering attacks can be much more effective than traditional phishing scams because AI models can simulate human communication with remarkable precision.

Where some spammers try their luck by distributing scams to the masses and hoping a small number of victims respond, the increased effectiveness of AI lets attackers focus their efforts with far greater precision. AI language models can be used to learn the preferences and interests of potential victims, allowing scammers to tailor personalised, targeted attacks. This makes it much harder for users to detect whether they are being scammed.

One of the long-held barriers for criminals operating from overseas has been a lack of proficiency in English. Telltale signs such as bad grammar and spelling mistakes have made spam and scams easy to spot but, as AI technology becomes more widespread, cybercriminals can easily harness machine learning and deep learning to create ever more convincing social engineering attacks.

Cybercriminals can use natural language processing to generate realistic “pretexts”, the cover stories that persuade victims more easily. And while scammers have traditionally kept their content brief, perhaps because of the effort of producing accurate English translations, AI can create flawless content almost instantly that is longer, better constructed and, by extension, far more believable.

The risk of phishing attacks remains one of the highest-priority concerns for organisations today, and it could be made even worse with the help of AI. In addition to creating more convincing justifications and believable scenarios, machine learning can be used to generate realistic email addresses, names and other personal details that make scams all the more effective.

One of the most insidious types of email attack is business email compromise (BEC), where an employee’s email account is compromised and controlled by a cybercriminal masquerading as that employee. According to the Australian Cyber Security Centre, Australian businesses lost more than $98 million to BEC last financial year. AI allows an attacker to imitate an employee’s style and tone of writing – an approach that could evade even the best user behaviour monitoring we rely on today to combat BEC attacks.
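
To see how style-based monitoring can work, here is a minimal sketch in Python of a stylometric baseline check: it learns simple writing-style statistics from a sender’s past emails and flags a new message that deviates sharply. The features, thresholds and function names here are illustrative assumptions, not a description of any production BEC control.

```python
# Minimal sketch: flag emails whose writing style deviates from a
# sender's historical baseline. Features and thresholds are illustrative.
import re
import statistics

def style_features(text: str) -> list[float]:
    """Crude stylometric features for one message."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return [
        n_words / max(len(sentences), 1),      # words per sentence
        sum(len(w) for w in words) / n_words,  # average word length
        100 * text.count(",") / n_words,       # commas per 100 words
        100 * text.count("!") / n_words,       # exclamations per 100 words
    ]

def baseline(history: list[str]) -> tuple[list[float], list[float]]:
    """Per-feature mean and standard deviation over past emails."""
    cols = list(zip(*(style_features(t) for t in history)))
    means = [statistics.mean(c) for c in cols]
    stdevs = [statistics.stdev(c) if len(c) > 1 else 1.0 for c in cols]
    return means, stdevs

def is_anomalous(text: str, means, stdevs, z_threshold: float = 3.0) -> bool:
    """True if any feature sits more than z_threshold deviations out."""
    return any(
        abs(f - m) / (s or 1.0) > z_threshold
        for f, m, s in zip(style_features(text), means, stdevs)
    )

history = [
    "Hi team, please find the quarterly figures attached for review.",
    "Thanks for the update, let's discuss the budget at Tuesday's meeting.",
    "Could you send the revised contract when you have a moment, thanks.",
]
means, stdevs = baseline(history)
print(is_anomalous("URGENT!!! Wire the funds NOW! Email only!", means, stdevs))  # True
```

Real monitoring systems use far richer features, but the principle – model the legitimate sender, then flag deviations – is the same.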

But it won’t only be text-based AI language processing that gives cybercriminals the advantage. Deep fake audio and video content is evolving at a rapid pace and can clone the speaking voice of anyone you know, given enough recorded samples. This step-change in social engineering risk is well and truly on the cards, and it will no doubt be used to create messages that look or sound like friends and family asking you for money or other favours.

In a business setting, the possibilities of deep fake content stretch even further and could include fake CEO messages, financial statements and other public announcements that could cause chaos if used unethically. For publicly listed organisations, the threat of share price manipulation has never been higher.

To counteract this, organisations need to put the latest AI-driven security technology in place to detect and protect against deep fake attacks as soon as possible. User education combined with conventional detection technologies is no longer enough on its own to stay one step ahead of cybercriminals.

Combine AI with the power of automation at scale and you begin to see the true potential of AI-driven cybercrime: a dramatic reduction in the time it takes to develop, test and launch attack campaigns. By leveraging data science techniques such as clustering and predictive modelling, attackers can quickly identify patterns and assess which attacks are most likely to succeed in a given context.
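
To make the clustering idea concrete, here is a minimal sketch that groups short message texts by TF-IDF similarity with scikit-learn. The sample messages and cluster count are illustrative assumptions, and defenders can apply exactly the same technique to surface coordinated scam campaigns.

```python
# Minimal sketch of clustering message texts by similarity.
# Sample data and the cluster count are illustrative only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [
    "Your parcel is held at customs, pay the release fee",
    "Customs fee required before we can deliver your parcel",
    "Your account password expires today, reset it here",
    "Password expiry notice: reset within 24 hours",
]

vectors = TfidfVectorizer().fit_transform(messages)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Expect the two parcel-scam texts and the two password-reset texts
# to land in separate clusters.
for label, text in zip(kmeans.labels_, messages):
    print(label, text)
```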

AI in the hands of cybercriminals brings many potential issues. But there are also measures governments, the business community and the general public can take to protect themselves against such threats, and they’re more important than ever before.

Defending against the automation and AI that attackers will deploy means being proactive and using the same technology in our own defence. Investing in security solutions that can detect AI-driven attacks as they emerge is one of the keys to remaining secure online.
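
As a hedged illustration of what such detection can look like under the hood, the sketch below trains a tiny supervised phishing-text classifier. The inline dataset is purely illustrative; real products train on millions of labelled samples and use many more signals than raw text.

```python
# Minimal sketch of a supervised phishing-text classifier.
# The four labelled examples are toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is overdue, click here to pay immediately",
    "Here are the meeting notes from Tuesday's project review",
    "Lunch on Friday? The usual place at midday works for me",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Character n-grams cope better with deliberately obfuscated spellings
# than whole-word tokens do.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(emails, labels)

suspect = "Final notice: confirm your password to avoid account suspension"
print(model.predict_proba([suspect])[0][1])  # estimated probability of phishing
```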

Companies should also be aware of algorithms that can analyse user behaviour to identify malicious activity and block it in real time – thereby helping protect users against such attacks. In what almost seems like science fiction, there’s no doubt we are entering a realm where AI will be pitted against AI in the effort to protect humans from themselves.
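
A minimal sketch of that real-time idea: score each login event against what has previously been seen from the user and block, or step up authentication, when too many attributes are unfamiliar. The event fields, scoring and threshold are illustrative assumptions rather than any vendor's actual logic.

```python
# Minimal sketch of real-time behavioural blocking for login events.
# Fields, scoring and the block threshold are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    seen_countries: set = field(default_factory=set)
    seen_devices: set = field(default_factory=set)
    typical_hours: set = field(default_factory=set)  # hours 0-23 seen before

def score_login(profile: UserProfile, country: str, device: str, hour: int) -> int:
    """One point per unfamiliar attribute; higher means riskier."""
    return (
        (country not in profile.seen_countries)
        + (device not in profile.seen_devices)
        + (hour not in profile.typical_hours)
    )

def handle_login(profile, country, device, hour, block_at=2):
    if score_login(profile, country, device, hour) >= block_at:
        return "block"  # or trigger step-up authentication
    # Familiar enough: allow and fold the event into the baseline.
    profile.seen_countries.add(country)
    profile.seen_devices.add(device)
    profile.typical_hours.add(hour)
    return "allow"

alice = UserProfile({"AU"}, {"laptop-01"}, {9, 10, 17})
print(handle_login(alice, "AU", "laptop-01", 9))  # allow
print(handle_login(alice, "XX", "unknown-device", 3))  # block
```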

As AI technology continues to become more widespread, it is more important than ever that organisations are properly prepared for this evolving threat landscape.
