Hackers working for nations such as China, Russia, Iran, and North Korea are already using ChatGPT to refine their cyberattacks. That is the finding of a report published on Wednesday by Microsoft and OpenAI, the companies behind the popular chatbot.
The report, shared on their websites, documents for the first time how these digital criminals employ the generative artificial intelligence tool to scout potential victims, sharpen manipulation techniques, and boost their productivity. In practice, ChatGPT is used to draft emails, translate documents, debug software, and streamline malware development.
“Groups of cybercriminals, nation-state actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an effort to understand the potential value for their operations and the security controls they may need to evade,” Microsoft states in its release.
Actors with malicious intent
Both companies have identified and shut down the accounts of five state-linked actors using AI to support cyber threats. One example is Forest Blizzard, an “extremely active” group tied to the GRU, the Russian military intelligence service, which has used ChatGPT to research satellite and radar technologies potentially relevant to military operations in the war in Ukraine. Also known as APT28 or Fancy Bear, the group was involved in cyberattacks against Hillary Clinton’s 2016 presidential campaign.
Other malicious actors detected by Microsoft include Charcoal Typhoon and Salmon Typhoon (China), Crimson Sandstorm (Iran), and Emerald Sleet (North Korea). All of them have used AI to craft phishing content, probe weaknesses in computer systems, or explore ways to evade detection by authorities.
Nevertheless, Microsoft and OpenAI have not found that their AI tools have been used to develop cyberattack techniques that are “especially novel or unique.”