Since its release was announced, ChatGPT, one of the pioneers of this new age of artificial intelligence (AI), has been the subject of constant fascination, fear and controversy. While the debate intensifies between those who support the unrestricted development of artificial intelligence and those who want to halt a technology they see as a potentially grave threat to humanity, virtual ‘pirates’ are speeding up their race to seize control of it.
In recent months, the number of hackers discussing online how to manipulate the chatbot has multiplied sevenfold, according to a study by the cybersecurity company NordVPN.
The forums of the ‘Dark Web’, the internet’s back alley, invisible to search engines and conventional browsers, are filled with conversations on the subject: new posts about the AI tool rose from 120 in January of this year to 870 in February, an increase of 625%.
In addition, threads mentioning ChatGPT grew 145% (from 37 to 91 in a month), making the bot one of the hottest topics on the Dark Web.
A tool to sow chaos
At the start of the year, most of these threads focused on how bad actors could coax ChatGPT into producing basic malware, a vulnerability that its creator, OpenAI, has since fixed.
A month later, however, the trend among the hacker community has become more aggressive: the messages now outline plans to take control of the chatbot and use it to wreak havoc.
A fact worth worrying about: with ChatGPT under their control, cybercriminals could exploit its artificial intelligence to commit fraud on an industrial scale, targeting multiple victims at the same time.
And once a hacker manages to take control of the chatbot, they can strip away its security restrictions and use it to create malware and phishing emails, promote hate speech or churn out propaganda.
A cyber fraud production line
“For cybercriminals, this revolutionary artificial intelligence may be the missing piece of the puzzle for a whole series of scams,” says Marijus Briedis, cybersecurity expert at NordVPN. Social engineering, he explains, “takes up a lot of hackers’ time. But once these tasks are outsourced to the bot, they can set up a fraud production line.”
In addition, Briedis notes, “with ChatGPT’s use of machine learning, fraud attempts such as phishing emails, which are often identifiable by misspellings, can be made more realistic and persuasive.”
That is why the Dark Web discussions of the past month have moved on from simple “tricks” and workarounds designed to get ChatGPT to do something funny or unexpected, to taking full control of the tool and turning it into a weapon.
How do you protect yourself against AI-generated fraud?
Given this threat, NordVPN, the company behind the study, offers some tips for staying alert to possible new scams orchestrated with artificial intelligence tools.
- Avoid sharing personal data: AI chatbots are designed to learn from every conversation, improving their skills at “human” interaction but also building an ever more accurate profile of you that can be stored. If you are concerned about how your data might be used, avoid giving them personal information.
- Pay attention to the details: artificial intelligence is likely to open up new opportunities for online scammers, and an increase in phishing attacks can be expected as hackers use bots to craft ever more realistic scams. Traditional giveaways of phishing, such as poor spelling or grammar in an email, may be about to disappear, so check the sender’s address and look for inconsistencies in links or domain names (see the sketch after this list).
- Use an antivirus: hackers have already successfully manipulated chatbots into creating basic malware, so it is worth having an antivirus tool that can flag suspicious files and keep you safe if you download them.
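As a minimal illustration of the “inconsistencies in links or domain names” check mentioned above (this sketch is not part of the NordVPN study, and the example addresses are hypothetical), the idea is simply to compare the domain a link claims to be with the domain it actually points to:

```python
# Sketch: flag a phishing-style mismatch between the domain shown in a link
# and the domain it really leads to. Example values below are hypothetical.
from urllib.parse import urlparse

def domains_match(displayed_text: str, actual_url: str) -> bool:
    """Return True if the claimed domain matches where the link really goes."""
    shown = urlparse(
        displayed_text if "://" in displayed_text else "https://" + displayed_text
    ).hostname or ""
    real = urlparse(actual_url).hostname or ""
    # Loose comparison of the last two labels, e.g. 'online.mybank.com' vs 'mybank.com'
    return shown.split(".")[-2:] == real.split(".")[-2:]

# The email displays "mybank.com" but the link points somewhere else entirely:
print(domains_match("mybank.com", "https://mybank.com.security-check.ru/login"))  # False
# A legitimate subdomain of the same site passes:
print(domains_match("mybank.com", "https://online.mybank.com/login"))             # True
```

Well-crafted AI-generated phishing may read flawlessly, so this kind of mechanical check on addresses and links becomes more important than spotting spelling mistakes.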