Hacker communities have begun building their own intelligent chatbots to support cyber fraud and data theft, according to a recent report.
According to Europol, the European Union's law enforcement agency, at least several AIs based on large language models are already operating in the criminal underground. Two of them, WormGPT and FraudGPT, focus on creating malware, searching for vulnerabilities in security systems, and suggesting methods for fraud and for hacking electronic devices.
Another chatbot, Love-GPT, is used for romance scams. With its help, attackers have created fake profiles on Tinder and other dating apps to siphon money and valuable data from users of dating services.
Europol, the US Cybersecurity and Infrastructure Security Agency (CISA), and the UK National Cyber Security Centre have published guidance explaining the new threats and the measures that protect against attacks involving artificial intelligence.
The guidance says that artificial intelligence is already being used to organize large-scale phishing campaigns, in which users are sent messages containing links to fake sites. These fake pages are used to steal bank card details and other personal data. AI makes it possible to tailor such messages to individual users, increasing the attackers' chances of success.
The release of a ChatGPT analogue aimed at cybercriminals had been reported previously.