ChatGPT: AI for Good or AI for Bad?


By Sean Duca, Vice President, Regional Chief Security Officer – Asia Pacific & Japan, Palo Alto Networks.

Science and technology have benefited humanity enormously over generations. By definition, science is the search for new knowledge, so how could it be bad? But the reality is that every tool has the potential to be used for good or ill, depending on the people wielding it.

In our relentless quest to mimic and decipher the human mind, we have ushered in the era of Artificial Intelligence (AI). ChatGPT, a text-based AI chatbot, is the latest tool to make headlines for bringing advanced AI to a mass audience. From fixing coding bugs and creating 3D animations to generating cooking recipes and even composing entire songs, ChatGPT has showcased the immense power of AI to unlock a world of incredible new abilities.

On the flip side, many consider AI a double-edged sword. In cybersecurity, experts today have access to AI-powered security tools and products that enable them to tackle large volumes of incidents with minimal human intervention. At the same time, however, amateur hackers can leverage the same technology to develop intelligent malware and execute increasingly sophisticated, stealthy attacks.

Is there a problem with the new chatbot?

Since ChatGPT launched in November 2022, tech experts and commentators worldwide have been concerned about the impact AI-generated content tools will have, particularly on cybersecurity. The question many are asking is: can AI software democratise cybercrime?

At the Black Hat and Defcon security conferences in Las Vegas, a team from Singapore's Government Technology Agency demonstrated that AI could craft phishing emails, and devilishly effective spear-phishing messages, far more convincing than any a human actor could produce.

Using OpenAI’s GPT-3 platform and other AI-as-a-service products, the researchers applied personality analysis to generate phishing emails customised to their colleagues’ backgrounds and individual characters. They then developed a pipeline that groomed and refined the emails before they reached their intended targets. To their surprise, the platform also automatically supplied highly relevant details, such as citing a Singaporean law when instructed to generate content for Singapore-based targets.

The makers of ChatGPT have stated that the AI-driven tool has in-built controls to challenge incorrect premises and reject inappropriate requests. Yet while the system technically has guardrails designed to prevent actors from using it for straightforwardly malicious ends, with a few creative prompts it generated a near-flawless phishing email that sounded ‘weirdly human’.

How do we tackle these challenges?

According to the Australian Cyber Security Centre (ACSC), total self-reported losses by Australian businesses hit with Business Email Compromise (BEC) attacks reached $98 million in 2022, up from $81.45 million in 2021 (an increase of roughly 20 per cent). This figure is only expected to climb given the availability of attack tools on the dark web for less than $10, the emergence of ransomware-as-a-service models, and AI-based tools such as ChatGPT, which collectively lower the barrier to entry for cybercriminals.

Considering the looming threat of an ever-smarter, more technologically advanced hacking landscape, the cybersecurity industry must be equally well resourced to fight AI-powered exploits. In the long run, however, the industry’s answer cannot be a vast team of human threat hunters sporadically tackling AI threats with guesswork.

On the positive side, autonomous response technologies are already being used to address threats without human intervention, but the need of the hour is intelligent action against these evolving threats. While organisations can ensure a baseline level of cyber security by implementing practices such as the ACSC’s Essential Eight mitigation strategies, that baseline does not guarantee protection from newer, more advanced threats. As AI-powered attacks become a part of everyday life, businesses, governments, and individuals must turn to emerging technologies such as AI and machine learning to generate their own automated responses, as the sketch below illustrates.
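To make that idea concrete, here is a minimal sketch of the pattern: a machine-learning classifier scores incoming mail and triggers a response action without waiting for an analyst. It assumes Python with scikit-learn; the toy training data, the 0.5 threshold, and the quarantine_email() helper are hypothetical placeholders for illustration, not a production design.

```python
# Illustrative sketch of ML-driven automated response to suspicious email.
# All data, the threshold, and quarantine_email() are hypothetical examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: in practice this would be thousands of labelled
# messages drawn from an organisation's own mail telemetry.
emails = [
    "Your invoice is attached, please review",           # benign
    "Team lunch is moved to Friday",                     # benign
    "Urgent: verify your account or it will be locked",  # phishing
    "Your payment failed, confirm card details here",    # phishing
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious

# Vectorise the text and fit a simple classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

def quarantine_email(message: str) -> None:
    """Hypothetical response action: in a real deployment this would
    call the mail gateway's API to hold the message for review."""
    print(f"Quarantined: {message[:40]}...")

# Automated response: score new mail and act without human intervention.
incoming = "Urgent: confirm your banking details to avoid suspension"
score = model.predict_proba([incoming])[0][1]  # probability of 'suspicious'
if score > 0.5:  # threshold would be tuned on real data
    quarantine_email(incoming)
```

The value of the pattern is the closed loop: the model's score directly drives a containment action, so routine threats are handled at machine speed while human analysts focus on the ambiguous cases.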

Using AI tools more responsibly and ethically

Following Australia’s recent high-profile hacks, it’s no surprise businesses are looking at ways to improve their cybersecurity posture. Implementing emerging technologies can no longer be ignored, especially with the Australian Securities and Investments Commission (ASIC) placing increased scrutiny on company directors who fail to prioritise cybersecurity.

However, businesses face a number of challenges in navigating the AI cybersecurity landscape, from technical complexity to the human element, and particularly in striking the right balance between machines, the people involved, and ethical considerations.

Establishing corporate policies is critical to doing business ethically while improving cybersecurity. We also need effective governance and legal frameworks that build trust that the AI technologies deployed around us will be safe and reliable while contributing to a just and sustainable world. The delicate balance between AI and people will therefore emerge as a key factor in a successful cybersecurity landscape, one in which trust, transparency, and accountability supplement the benefits of machines.
