The cybersecurity field depends on openness: researchers publish detailed analyses so defenders can understand attacker tactics and harden defences. But today’s AI-assisted development, an approach often called vibe coding, changes the dynamics. New research from Trend Micro demonstrates how large language models, paired with coding assistants, can translate public technical reports into workable code fragments, lowering the barrier for copycat campaigns and complicating attribution.
Trend Micro researchers asked a practical question: can an AI coding assistant turn a publicly available technical analysis into meaningful malware fragments? To test this, they used a published investigation, “The Espionage Toolkit of Earth Alux”, as the source material and fed the TTPs (Tactics, Techniques, and Procedures) it describes into modern LLMs and developer assistants.
The experiment produced Python and C fragments that mirrored persistence mechanisms and communication patterns described in the report. The figures below are taken directly from that experiment and show how quickly text can be turned into code that resembles attacker modules.
Figure 1 — AI interpreting published TTPs and outlining malicious script structure
Finding: With only the details in a public report, AI tools can produce code outlines that resemble first-stage malware components.
Figure 2 — AI planning script development from textual descriptions
Finding: Uncensored and open-source models make it trivial to bypass built-in guardrails, showing how easily such tools can be repurposed by attackers.
Figure 3 — Generated Python snippets simulating persistence and communication
Conclusion: Although the fragments are incomplete and require manual refinement, they provide a significant head start: enough for less experienced actors to prototype semi-functional tools and to mount convincing false-flag or copycat campaigns.
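That level of descriptive detail cuts both ways: defenders can turn it into detection logic just as quickly. Below is a minimal, hypothetical Python sketch (not code from the research) that audits common Linux autostart locations for recently modified entries, the kind of persistence artefact the generated fragments imitated. The paths and the seven-day window are illustrative assumptions.

```python
# Hypothetical defensive sketch: flag recently modified files in common
# Linux autostart locations. Paths and the 7-day window are illustrative.
import time
from pathlib import Path

AUTOSTART_DIRS = [
    Path("/etc/cron.d"),
    Path("/etc/systemd/system"),
    Path.home() / ".config" / "autostart",
]

def recent_autostart_entries(max_age_days: int = 7) -> list[Path]:
    """Return autostart files modified within the last max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for directory in AUTOSTART_DIRS:
        if not directory.is_dir():
            continue
        for entry in directory.iterdir():
            if entry.is_file() and entry.stat().st_mtime >= cutoff:
                hits.append(entry)
    return hits

if __name__ == "__main__":
    for path in recent_autostart_entries():
        print(f"Recently changed autostart entry: {path}")
```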
To give concrete context for why public reports can be so instructive to AI models: the Earth Alux investigation documents several advanced operational behaviours in enough procedural detail for an AI to “learn” from them. These kinds of specifics, when described in public reports, make it easier for an LLM to generate relevant code fragments or scaffolding that mirror real-world attacker patterns. (Full report: Trend Micro, The Espionage Toolkit of Earth Alux.)
Should researchers stop publishing technical details? Trend Micro’s position, and that of many practitioners, is that the benefits of transparent threat reporting still outweigh the risks: public analysis is what lets defenders understand attacker tactics and harden their defences. What must change is how publications and defenders account for AI-assisted misuse. Researchers and publishers should test how their write-ups might be interpreted by LLMs before release, and defenders must broaden the mix of signals they rely on.
“Transparency in security reporting has always been a cornerstone of community defence. Our findings show that while criminals can attempt to misuse these reports with AI tools, the benefits of sharing research far outweigh the risks. What changes is how we as an industry must think about attribution and the responsibility of testing how our publications might be interpreted by AI models.”
— Robert McArdle, Director of Forward Threat Research, Trend Micro
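What might such pre-publication testing look like in practice? The sketch below is one minimal, assumed approach, not a description of Trend Micro's process: report excerpts are passed to a caller-supplied query_model function (standing in for whatever LLM endpoint a team uses), and replies that look like source code are flagged. The code-detection heuristic is deliberately naive and purely illustrative.

```python
# Hypothetical pre-publication check: ask a model what tooling a report
# excerpt implies, then flag responses that contain code-like output.
import re
from typing import Callable

# Naive markers of source code in a reply; a real check would be richer.
CODE_MARKERS = re.compile(r"```|#include\s*<|import\s+\w+|def\s+\w+\(")

def response_contains_code(text: str) -> bool:
    """Rough heuristic: does the model's reply look like source code?"""
    return bool(CODE_MARKERS.search(text))

def audit_excerpts(excerpts: list[str],
                   query_model: Callable[[str], str]) -> list[str]:
    """Return the excerpts whose model responses include code-like output."""
    flagged = []
    for excerpt in excerpts:
        prompt = ("Based on this threat report excerpt, what would the "
                  f"tooling look like?\n\n{excerpt}")
        if response_contains_code(query_model(prompt)):
            flagged.append(excerpt)
    return flagged

if __name__ == "__main__":
    # Stub standing in for a real model endpoint, with a canned reply.
    stub = lambda prompt: "def connect_c2():\n    ..."
    print(audit_excerpts(["The implant persists via a scheduled task."], stub))
```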
This need for adaptation also extends to the critical issue of attribution, where AI-assisted misuse presents a significant challenge. Attackers have long used reuse, mimicry and false flags to frustrate investigators; vibe coding amplifies that capability by making another group’s documented tooling and style cheap to reproduce.
Frameworks that emphasise actor modelling (for example, the Diamond Model and intent-based analysis) will play a central role in distinguishing genuine actor fingerprints from AI-enabled mimicry.
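As a rough illustration of that idea (a sketch, not Trend Micro's methodology), the snippet below scores two intrusions across the Diamond Model's four vertices: adversary, capability, infrastructure and victim. High overlap in capability alongside low overlap in the harder-to-fake vertices is exactly the signature AI-enabled mimicry tends to leave; the 0.7 and 0.2 thresholds are arbitrary placeholders.

```python
# Illustrative sketch, not production attribution tooling: compare two
# intrusions across the Diamond Model's four vertices.
from dataclasses import dataclass, field

@dataclass
class Intrusion:
    adversary: set[str] = field(default_factory=set)       # personas, handles
    capability: set[str] = field(default_factory=set)      # tools, TTP IDs
    infrastructure: set[str] = field(default_factory=set)  # domains, IPs
    victim: set[str] = field(default_factory=set)          # sectors, regions

def jaccard(a: set[str], b: set[str]) -> float:
    """Set similarity in [0, 1]; 0 when both sets are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def mimicry_suspected(known: Intrusion, new: Intrusion) -> bool:
    """Flag when tooling overlaps strongly but harder-to-fake vertices do not."""
    return (
        jaccard(known.capability, new.capability) > 0.7
        and jaccard(known.infrastructure, new.infrastructure) < 0.2
        and jaccard(known.victim, new.victim) < 0.2
    )
```

In practice the vertex sets would be populated from incident data; the point is the shape of the comparison, not the specific numbers.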
Organisations should act now to adapt their posture. Above all, they should share intelligence responsibly: timely, contextualised sharing, coupled with operational guidance, remains critical to collective defence.
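For a concrete sense of what contextualised, machine-readable sharing can look like, here is a minimal sketch of a STIX 2.1-style indicator built with only the Python standard library; the pattern, hash and guidance text are placeholders, not real observables.

```python
# Minimal, standard-library-only sketch of a STIX 2.1-style indicator.
# All values below are illustrative placeholders, not real observables.
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern: str, guidance: str) -> dict:
    """Build a STIX 2.1-shaped indicator dict with operational context."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "valid_from": now,
        "pattern_type": "stix",
        "pattern": pattern,
        "description": guidance,  # the contextual, operational part
    }

if __name__ == "__main__":
    ioc = make_indicator(
        "[file:hashes.'SHA-256' = '0000...0000']",  # placeholder hash
        "Loader persists via scheduled task; hunt for recent task creation.",
    )
    print(json.dumps(ioc, indent=2))
```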
As the visuals from this research show, what once required weeks of technical expertise can now be prototyped in hours using AI. This democratisation of malware development raises the stakes for defenders, but it also highlights the power of collective innovation.
The lesson is not to retreat from transparency, but to adapt. By rethinking attribution, investing in advanced detection, and sharing intelligence more effectively, the industry can turn openness into resilience.
Cybersecurity has always been a cat-and-mouse game. With vibe coding, the pace accelerates, but so does the opportunity for smarter, community-driven defence.