
We speak with Ryan Fetterman of Foundation AI at Splunk, in Boston at .conf25. For the past three years, Ryan has been part of the SURGe security research team at Splunk, which focuses on strategic security research and the modern-day problems of the blue team. Recently, SURGe joined with AI researchers who have also come to Cisco, and is now part of a team called Foundation AI, which is focused on developing security-domain language models.
Ryan highlights a recent case, first publicly reported in July 2025 by Ukraine's CERT-UA, which released a report on a novel malware strain dubbed LameHug and attributed it to APT28 with moderate confidence. The Python-based malware (delivered as .pif, .exe, and .py files compiled via PyInstaller) carries no hard-coded commands. Instead, it contains base64-encoded prompts that are decoded at runtime and sent to the Qwen2.5-Coder-32B-Instruct model through the Hugging Face API. The LLM responds with system-appropriate commands (e.g., for reconnaissance or document collection), which the malware immediately executes on the victim host, enabling truly dynamic, on-the-fly adaptation during an active attack.
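To make the mechanism concrete, here is a minimal, defanged sketch of the pattern CERT-UA describes: a base64-encoded prompt is decoded only at runtime and sent to a hosted model via the Hugging Face API, and the model's reply is treated as a command for the host. The prompt text, the use of huggingface_hub's InferenceClient, and the HF_TOKEN environment variable are illustrative assumptions, not LameHug's actual code, and the sketch prints the returned command rather than executing it.

```python
import base64
import os

from huggingface_hub import InferenceClient  # pip install huggingface_hub

# Illustrative, defanged reconstruction of the LameHug pattern, NOT its actual code.
# The prompt below is a hypothetical stand-in; the real malware ships its own
# base64-encoded prompts and decodes them only at runtime.
ENCODED_PROMPT = base64.b64encode(
    b"Reply with a single Windows shell command that lists the files in the "
    b"user's Documents folder. Output only the command, nothing else."
).decode()

# Hosted model on the Hugging Face API; an API token is required.
client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    token=os.environ.get("HF_TOKEN"),
)

prompt = base64.b64decode(ENCODED_PROMPT).decode()  # decoded only at runtime
response = client.chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=100,
)
command = response.choices[0].message.content.strip()

# LameHug would pass this straight to the OS; here we only print it.
print("model-suggested command:", command)
```

The notable property of this design is that command generation happens off-host: nothing in the binary itself reveals what the malware will do, only a prompt and a call to a legitimate, widely used API endpoint.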
From the defensive perspective, Ryan confirms there is a lot of opportunity to apply AI in the SOC, because so much of what the SOC does is fundamentally about producing and consuming logs and cyber threat intelligence (CTI), making sense of that data, generating reports, and sharing that information back out. These areas align with the core strengths of large language models: natural language understanding and natural language generation. But Ryan warns that attackers are on the same adoption curve; as much as defenders are trying to find the natural fit for AI solutions on defense, attackers are trying to do the same on offense.
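As one concrete illustration of that fit, here is a minimal sketch, reusing the same hosted-model client as above, of the kind of task Ryan describes: handing a raw log excerpt to an LLM and asking for an analyst-readable summary. The log lines, model choice, and prompt wording are assumptions for illustration, not a specific Splunk or Foundation AI workflow.

```python
import os

from huggingface_hub import InferenceClient

# Hypothetical auth-log excerpt; any SOC log source would work here.
LOG_EXCERPT = """\
2025-07-14T03:12:09Z sshd[4121]: Failed password for root from 203.0.113.7 port 52144
2025-07-14T03:12:11Z sshd[4121]: Failed password for root from 203.0.113.7 port 52160
2025-07-14T03:12:14Z sshd[4131]: Accepted password for svc_backup from 203.0.113.7 port 52199
"""

client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # illustrative model choice
    token=os.environ.get("HF_TOKEN"),
)

response = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are a SOC analyst assistant. Be concise and factual."},
        {
            "role": "user",
            "content": "Summarize what happened in these auth logs and flag "
            f"anything suspicious:\n{LOG_EXCERPT}",
        },
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)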
MySecurity Media attended .conf25 courtesy of Splunk.