Malicious Use Cases for AI

Malicious use cases of artificial intelligence (AI) will most likely emerge from targeted deepfakes and influence operations, according to the report Adversarial Intelligence: Red Teaming Malicious Use Cases for AI from Recorded Future.

The report also found that more advanced use cases, such as malware development and reconnaissance, will benefit from advancements in generative AI.

The report's authors tested four malicious use cases of AI to illustrate “the art of the possible” for threat actors.

The testing probed the capabilities and limitations of current AI models, ranging from large language models (LLMs) to multimodal image and text-to-speech (TTS) models.

All testing was conducted using a mix of off-the-shelf and open-source models to simulate realistic threat actor access.

The key findings from the report included:

Use Case #1: Using deepfakes to impersonate executives

  • Open-source capabilities currently allow for pre-recorded deepfake generation using publicly available video footage or audio clips, such as interviews and presentations.
  • Threat actors can use short clips (<1 minute) to train these models. However, acquiring and pre-processing audio clips for optimal quality continues to require human intervention (a sketch of typical clean-up steps follows this list).
  • More advanced use cases, such as live cloning, almost certainly require threat actors to bypass consent mechanisms on commercial solutions, as latency issues on open-source models likely limit their effectiveness in streaming audio and video.
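
The pre-processing the report refers to is ordinary speech-pipeline clean-up. As a rough illustration (not drawn from the report), the Python sketch below resamples a clip, trims silence, and peak-normalizes it using the librosa and soundfile libraries; the file names and thresholds are illustrative assumptions.

    # Rough sketch of routine audio clean-up before voice-model training.
    # File names and thresholds are illustrative assumptions, not from the report.
    import librosa
    import numpy as np
    import soundfile as sf

    RAW_CLIP = "interview_clip.wav"    # hypothetical source clip (<1 minute)
    CLEAN_CLIP = "clip_clean.wav"

    # Load as mono and resample to a common speech-model rate (16 kHz).
    audio, sr = librosa.load(RAW_CLIP, sr=16000, mono=True)

    # Trim leading/trailing silence; 30 dB below peak is a typical threshold.
    audio, _ = librosa.effects.trim(audio, top_db=30)

    # Peak-normalize so loudness is consistent across training samples.
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak

    sf.write(CLEAN_CLIP, audio, sr)
    print(f"wrote {CLEAN_CLIP}: {len(audio) / sr:.1f}s at {sr} Hz")

Even these routine steps involve judgment calls – choosing trim thresholds, removing background music or crosstalk – which is why the report assesses that human intervention is still required.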

Use Case #2: Influence operations impersonating legitimate websites

  • AI can be used to generate disinformation at scale, target it to a specific audience, and produce complex narratives in pursuit of influence goals.
  • AI can be used to automatically curate rich content (such as real images) based on generated text, in addition to assisting humans in cloning legitimate news and government websites.
  • The cost of producing content for influence operations will likely fall to roughly one-hundredth of what traditional troll farms and human content writers cost.
  • However, creating templates to impersonate legitimate websites remains a significant task, requiring human intervention to produce believable spoofs.

Use Case #3: Self-augmenting malware evading YARA

  • Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants and scripts, effectively lowering detection rates. (YARA is a tool that helps malware researchers identify and classify malware samples; a toy string-based rule follows this list.)
  • However, current generative AI models face several challenges: generating syntactically correct code, resolving linting issues, and preserving functionality after the source code is obfuscated.
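
To make “string-based” concrete, the Python sketch below compiles a toy YARA rule with the yara-python package and shows that renaming the matched literals is enough to defeat it. The rule, strings, and byte samples are invented for illustration; they are not taken from the report.

    # Toy illustration of a string-based YARA rule and why renaming literals
    # evades it. Rule, strings, and samples are invented for this example.
    import yara  # pip install yara-python

    RULE = r"""
    rule demo_stealer_strings
    {
        strings:
            $a = "StealBrowserPasswords"
            $b = "c2_beacon_interval"
        condition:
            any of them
    }
    """

    rules = yara.compile(source=RULE)

    original = b"... StealBrowserPasswords ... c2_beacon_interval ..."
    renamed = b"... CollectSavedCreds ... poll_timer ..."  # same logic, new names

    print(rules.match(data=original))  # matches: [demo_stealer_strings]
    print(rules.match(data=renamed))   # no matches: detection evaded

This also explains the caveat above: renaming strings is trivial, but doing it while keeping the program syntactically correct and functional is where current models struggle.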

Use Case #4: ICS and aerial imagery reconnaissance

  • Multimodal AI can be used to process public imagery and videos to geolocate facilities and identify industrial control system (ICS) equipment – the devices, networks, controls, and systems used to operate and/or automate industrial processes – and determine how that equipment is integrated into other observed systems (a simplified metadata-based sketch follows this list).
  • Translating this information into actionable targeting data at scale remains challenging, as human analysis is still required to process extracted information for use in physical or cyber threat operations.
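
The report's finding concerns multimodal models reasoning over the image content itself. As a simpler, grounded stand-in (not the technique the report tested), the Python sketch below reads GPS coordinates from a photo's EXIF metadata with Pillow; the file name is hypothetical.

    # Stand-in illustration: EXIF metadata is one machine-readable geolocation
    # signal in public imagery. This is not the multimodal technique the report
    # tested; the file name is hypothetical.
    from PIL import Image

    GPS_IFD = 0x8825  # EXIF pointer to the GPS information directory

    def to_degrees(dms, ref):
        """Convert EXIF (degrees, minutes, seconds) rationals to signed decimal."""
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    exif = Image.open("facility_photo.jpg").getexif()
    gps = exif.get_ifd(GPS_IFD)

    if gps:
        # Tag IDs per the EXIF spec: 1/2 = latitude ref/value, 3/4 = longitude.
        lat = to_degrees(gps[2], gps[1])
        lon = to_degrees(gps[4], gps[3])
        print(f"photo taken near {lat:.5f}, {lon:.5f}")
    else:
        print("no GPS metadata present")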

You can read the full report here.
