AI Safety and Fraud: Understanding Risks in the ChatGPT Era


Since its introduction, ChatGPT has been a focal point of the new AI era, stirring a mix of fascination, concern, and debate. Supporters push for rapid AI progress, while critics warn of risks that could threaten humanity. In the background, a competitive race to control the technology has quietly intensified, with actors aiming to shape outcomes on a global stage.

In recent months, researchers have noted a surge in online discussions about exploiting chatbots, a trend highlighted in a study by the cybersecurity firm NordVPN. The dark web, largely invisible to ordinary search engines, saw a sharp spike in chatter about AI tools between January and February this year, with researchers reporting dramatic percentage increases. This rise underscores growing interest in, and potential for, misuse in the darker corners of the web.

Posts on private forums have also climbed, reflecting a growing focus on AI. The conversation is dominated by questions about how to harness or control systems such as ChatGPT, making chatbot abuse one of the hottest topics on the dark web and a recurring concern for cybersecurity professionals.

A tool to sow chaos

Early in the year, much of the discussion centered on malicious actors attempting to coax ChatGPT into producing harmful software. OpenAI, the chatbot's creator, acknowledged the vulnerability and subsequently addressed it. Yet conversations soon shifted toward more aggressive aims, with plans laid to hijack the chatbot and deploy it for harmful purposes.

Threat actors have discussed using ChatGPT to automate fraud, targeting many victims at once and exploiting the system's capabilities at scale. Once such control is gained, the model's safety guardrails can be stripped away, enabling the creation of malware, phishing campaigns, hate speech, or propaganda through the tool itself.

Cybercrime chains

NordVPN's cybersecurity expert Marijus Briedis notes that AI could become a critical piece of the fraud puzzle. He explains that social engineering traditionally takes time, but once those tasks are outsourced to AI, a production line of fraud can form. He adds that the machine learning that powers ChatGPT could be tailored to craft more convincing phishing messages and other fraudulent communications.

According to Briedis, the dark web discussions in recent weeks often portrayed these schemes as tricks or simple workarounds to push ChatGPT into behaving unexpectedly. The overarching worry is that criminals may leverage the technology to operate at scale and with less friction, turning a powerful tool into a weapon.

Protecting against AI-generated fraud

Given the threats outlined, NordVPN offers practical tips to spot and avoid AI-driven scams as they evolve. The guidance emphasizes vigilance and prudent use of chatbots in daily online activity.

  • Avoid sharing personal data: Chatbots learn from every conversation to improve their interactions, and what users disclose can be used to build profiles. To protect privacy, refrain from sharing sensitive information.
  • Scrutinize details: As scammers adopt more sophisticated tactics, phishing attempts may become harder to distinguish from legitimate messages. Verify sender information, and carefully check links and domain names for inconsistencies (see the sketch after this list).
  • Use antivirus software: Since some bad actors have manipulated chatbots into generating basic malware, robust security tools help detect suspicious files and block dangerous downloads.
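
Because AI-assisted phishing leans heavily on lookalike links, the "scrutinize details" advice can be made concrete. The Python sketch below compares a link's hostname against a small allowlist of trusted domains and flags near-misses and punycode hostnames. The TRUSTED_DOMAINS set and the 0.8 similarity threshold are illustrative assumptions, and this is a minimal heuristic rather than a substitute for proper security software.

```python
# Minimal sketch of a lookalike-link check, assuming a user-maintained
# allowlist of trusted domains. Illustrative only, not a production filter.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"paypal.com", "amazon.com", "mybank.example"}  # hypothetical allowlist

def check_link(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    # Exact match, or a legitimate subdomain of a trusted site.
    for domain in TRUSTED_DOMAINS:
        if host == domain or host.endswith("." + domain):
            return f"{url}: consistent with {domain}"
    # Punycode-encoded hostnames can disguise lookalike characters.
    if "xn--" in host:
        return f"{url}: SUSPICIOUS (punycode hostname may hide lookalike letters)"
    # A near-miss against a trusted domain is a classic typosquatting signal.
    for domain in TRUSTED_DOMAINS:
        if SequenceMatcher(None, host, domain).ratio() > 0.8:
            return f"{url}: SUSPICIOUS (resembles {domain} but does not match)"
    return f"{url}: unknown domain, verify through another channel"

if __name__ == "__main__":
    for link in ("https://www.paypal.com/signin",
                 "https://paypa1.com/verify",
                 "https://xn--pypal-4ve.com/login"):
        print(check_link(link))
```

Mail clients and browsers apply far richer signals (certificate checks, reputation feeds, machine-learning classifiers), but the underlying idea is the same: compare what a link claims to be against what it actually is.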

These recommendations aim to reduce exposure to AI-fueled fraud as tactics evolve. Ongoing education and cautious online habits remain essential for individuals and organizations alike. (NordVPN, 2024)
