AI Risks: WormGPT and the Threat of Unbounded Security Tools


An obscure developer has built a chatbot that resembles ChatGPT but is aimed at hackers and cybercriminals, trading on its lack of response restrictions. Reports from PCMag highlight this troubling creation.

Sources indicate the developer began offering access to WormGPT on a well-known hacker forum in June. Unlike mainstream AI assistants such as ChatGPT or Google Bard, this hacking-focused chatbot is said to answer questions about illegal activities and other sensitive topics without the usual safety guardrails.

The developer provided screenshots showing WormGPT responding to user prompts. In one example, the bot was asked to write a virus in Python and to offer guidance on organizing a cyberattack. These demonstrations underscore the platform's potential to facilitate unlawful activity rather than any benign use.

WormGPT is reportedly built on GPT-J, a large open-source language model released in 2021. The bot is described as having been trained on materials related to malware development, raising concerns about how training data shapes a model's capabilities and the attendant risk of misuse.

Early tests by the cybersecurity firm SlashNext evaluated WormGPT with real-world prompts. When asked to craft a phishing email, the bot produced a convincing message that directed recipients to a counterfeit link designed to harvest credentials. The vendor reportedly prices access at about €60 per month, or around €550 per year, a cost that, in the eyes of researchers, lowers the barrier to illegal activity and puts such tools within reach of a broader audience.

Former ChatGPT developers have responded to this new AI variant with sober warnings about the vulnerabilities and ethical pitfalls of releasing powerful language models without robust safety layers. Analysts argue that tools of this kind demand careful governance, ongoing monitoring, and clear boundaries to prevent harm while still enabling legitimate security research and defensive innovation.

Industry observers emphasize that WormGPT is a reminder of the dual-use nature of modern AI. On one hand, the same technology that powers helpful applications can be repurposed for fraud, data theft, and disruption. On the other hand, defenders can leverage similar models to detect suspicious patterns, write better security policies, and educate users about risk. The challenge lies in striking a balance that preserves innovation and experimentation while enforcing accountability and strong protections.

Experts caution that the existence of WormGPT does not mean responsible developers have relinquished control over AI capabilities. Rather, it signals a moment for more stringent verification, responsible distribution practices, and tighter access controls. The consensus among security professionals is clear: as AI tools become more capable, transparency, user education, and safe-by-default configurations become increasingly critical for safeguarding digital ecosystems.
