Machine learning, often oversold as artificial intelligence, is a double-edged sword for cybersecurity


Machine learning (ML), often oversold as artificial intelligence (AI), is a double-edged sword for businesses: while it drives cybersecurity advances, it can also give cybercriminals an edge. Malware researchers use ML to better understand online threats and security risks, but adversaries can use the same techniques to make their attacks harder to detect, more targeted and more successful. IT departments and security decision-makers need to understand the complexities of ML in cybersecurity and how to balance risk against reward. According to ESET, security professionals must stay one step ahead of savvy cybercriminals by applying ML in unique and effective ways that cybercriminals cannot.

ML, as a subcategory of AI, has already triggered radical shifts in many sectors, including cybersecurity. ML has helped security developers improve malware detection engines, increase detection speeds, reduce the latency of adding detection for entirely new malware families, and enhance abilities to spot suspicious irregularities. These developments lead to higher levels of protection for organisations against advanced persistent threats (APTs), as well as new and emerging threats.
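ESET's detection engines are proprietary, so the following is purely an illustrative sketch of one low-level signal such engines commonly feed into their models: the Shannon entropy of a file's bytes. High entropy is a classic indicator of packed or encrypted payloads, and it is the kind of "suspicious irregularity" feature an ML classifier might weigh alongside many others. The function names and the 7.2-bit threshold here are invented for illustration.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    # High byte entropy often indicates packed or encrypted code --
    # one of many features a detection engine might pass to a model.
    # The threshold here is an illustrative assumption, not a real tuning.
    return shannon_entropy(data) >= threshold

# Toy inputs: low-entropy "plain" bytes vs. a high-entropy synthetic payload.
plain = b"this program cannot be run in dos mode " * 50
packed = bytes((i * 131 + 17) % 256 for i in range(2048))
```

In practice a real engine would combine dozens of such features (section sizes, import tables, string statistics) rather than rely on any single heuristic, since legitimate compressed files also score high on entropy alone.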

That said, cybersecurity professionals are beginning to recognise that AI/ML has limits in combating online threats, and that the same advanced technologies are readily available to cybercriminals. According to an ESET survey, the vast majority of IT decision-makers are concerned about the growing number and complexity of future AI/ML-powered attacks, and the increased difficulty of detecting them. (1)

For example, in 2003, the Swizzor Trojan horse used automation to repack its malware once every minute. (2) As a result, each of its victims was served a polymorphically modified variant of the malware, complicating detection and enabling its wider spread. (3)
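The practical effect of per-victim repacking is that exact-match signatures stop working: every variant has a different cryptographic hash, so a blocklist keyed on the hash of one sample never matches the next. The byte strings below are invented stand-ins for two repacked variants; the sketch only demonstrates why hash-based matching fails.

```python
import hashlib

# Two hypothetical variants of the same payload, differing by a single
# trailing byte -- the kind of trivial change automated repacking produces.
variant_a = b"\x4d\x5a" + b"payload" + b"\x00"
variant_b = b"\x4d\x5a" + b"payload" + b"\x01"

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

# A blocklist containing hash_a will never match variant_b, even though
# the two samples are functionally identical. This is why polymorphic
# malware pushes defenders toward behavioural and ML-based detection.
```

This is precisely the gap that the ML-driven detection improvements described above aim to close: instead of matching exact bytes, engines learn features that survive repacking.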

Two-thirds of the almost 1,000 IT decision-makers surveyed by ESET agreed that new applications of AI/ML will increase the number of attacks on their organisations. Even more respondents thought AI/ML technologies will make future threats more complex (69 percent) and harder to detect (70 percent).

Nick FitzGerald, senior research fellow, ESET, said, “Amongst the recent hype regarding AI and ML, many organisations and security decision-makers fail to realise that these tools aren’t reserved for responsible, constructive use. Technological advances in AI/ML have an enormous transformative potential for cybersecurity defenders, however, cybercriminals are also aware of these new prospects.

“Cybercriminals might, for example, adopt ML to improve targeted attacks and thus become more difficult to uncover, track and mitigate. Cybersecurity developers can’t rely on ML to fight online threats when hackers are using that same technology. They must be realistic about the limitations of ML, and understand the consequences these advancements can have.”

While ML isn’t a silver bullet for cyberattacks, it is being effectively and intelligently incorporated into anti-malware protection products to improve detection of ever-evolving online threats.

References –
(1) – https://www.welivesecurity.com/wp-content/uploads/2018/08/Can_AI_Power_Future_Malware.pdf
(2) – http://www.virusradar.com/en/Win32_TrojanDownloader.Swizzor/detail
(3) – https://www.welivesecurity.com/2010/07/15/swizzor-for-dummies/
