Cato researcher develops AI jailbreak technique that generates malware
A researcher has developed a new way to bypass the security protections in popular AI models and use them to create malware that steals passwords from the Chrome browser. The technique, named "Immersive World," was devised by a researcher with little coding experience, working with large language models (LLMs) including DeepSeek, OpenAI's ChatGPT, and Microsoft Copilot. The researcher built a fictional world called Velora, where creating malware is treated as a normal, legitimate activity. Framing requests inside this narrative allowed otherwise restricted programming and security topics to be discussed directly, sidestepping the models' guardrails.

The development highlights a concerning trend: even people with only basic skills can now use AI tools to create malicious software. Cato Networks researchers noted that LLMs are changing cybersecurity by lowering the barrier to generating harmful code.

The Velora narrative features three main characters: Dax, a systems administrator; Jaxon, a top malware developer; and Kaia, a security researcher. The researcher used these characters to steer the LLMs into generating code that extracts passwords from the Chrome Password Manager. Tests across several AI models showed the technique could produce working password-stealing code even though the models were never explicitly instructed to write malware.

Cato researchers notified DeepSeek, Microsoft, and OpenAI of the findings; only Microsoft and OpenAI responded.

The discovery comes as business adoption of AI accelerates. Reports show significant growth in the use of tools such as ChatGPT and Copilot, along with other models such as Google's Gemini, with industries like entertainment and transportation also seeing increased adoption.