Microsoft and OpenAI have warned that nation-state hackers are weaponizing artificial intelligence (AI) and large language models (LLMs) to enhance their ongoing cyberattacks.
For instance, OpenAI reported that China's Charcoal Typhoon used its services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
Another example is Iran's Crimson Sandstorm, which used LLMs to generate code snippets related to app and web development, create content likely for spear-phishing campaigns, and seek assistance in developing code to evade detection.
In addition, Forest Blizzard, the Russian nation-state group, is said to have used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships. Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely,” reads the new AI security report released by Microsoft on Wednesday in partnership with OpenAI.
Thankfully, no significant or novel attacks making use of LLM technology have been detected yet, according to the company. “Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI,” Microsoft noted in its report.
To respond to the threat, Microsoft has announced a set of principles shaping its policy and actions to combat abuse of its AI services by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates.
“These principles include identification and action against malicious threat actors’ use, notification to other AI service providers, collaboration with other stakeholders, and transparency,” the Redmond giant said.