MOUNTAIN VIEW, Calif. — State-sponsored fraud groups have developed artificial intelligence-powered malware that can generate malicious scripts and “change its code on the fly” to evade detection systems, according to a Google Threat Intelligence Group (GTIG) blog post.
In a report the company released at the same time, GTIG said this marks the first time its researchers have observed malware families using large language models during execution.

“While still nascent, this represents a significant step toward more autonomous and adaptive malware,” the report said.
‘Novel’ AI Operations
According to analysts, the development is one example of how threat actors are using AI not only to increase productivity but also to carry out “novel AI-enabled operations,” GTIG said. The groups have also been posing as students or researchers in prompts to bypass AI safety guardrails and obtain restricted information.
In addition, they are acquiring access to AI tools through underground digital marketplaces for phishing, malware creation and vulnerability research, according to the post.
“At Google, we are committed to developing AI responsibly and take proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the company said in the report. “We also proactively share industry best practices to arm defenders and enable stronger protections across the ecosystem.”
‘Intensifying Efforts’
Other companies reported separately that technology firms are intensifying efforts to address a security flaw in AI models involving indirect prompt injection attacks, in which hidden commands embedded in websites or email content can trick AI systems into providing unauthorized information.






