A new and dangerous AI-powered hacking tool is making waves across the cybercrime underworld — and experts say it could change the way digital attacks are launched.
Called Xanthorox AI, the tool was first spotted earlier this year on darknet forums and encrypted chat groups, where it’s being marketed as the “killer of WormGPT and all EvilGPT variants.” But this isn’t just another tweaked version of a chatbot. Xanthorox is something entirely different — and far more advanced.
Built for offense, not defense
Cybersecurity firm SlashNext refers to it as “the next evolution of black-hat AI.”
What makes it especially dangerous is how it’s designed: Xanthorox is not based on existing AI platforms like GPT. Instead, it uses five separate AI models, and everything runs on private servers controlled by the creators. That means no cloud APIs, no reliance on public infrastructure, and few ways for defenders to track or shut it down.
This “local-first” design helps it stay hidden and makes takedowns difficult.
“Xanthorox isn’t a jailbreak. It’s a ground-up offensive AI system,” boasts an anonymous seller in forum posts. “We built our own models, our own stack, and our own rules.”
A Swiss Army knife for hackers
Xanthorox comes packed with five distinct AI components, each optimized for a specific task:
- Xanthorox Coder handles tasks like generating malicious code, writing scripts, and exploiting software vulnerabilities.
- Xanthorox Vision can analyze images and screenshots to extract sensitive data or interpret visual content, which is useful for cracking passwords or reading stolen documents.
- Xanthorox Reasoner Advanced mimics human reasoning, helping attackers craft more believable phishing messages or manipulate targets through social engineering.
- It includes real-time voice and image handling modules, allowing hackers to control the AI via live voice commands and voice messages, or by uploading files such as .txt, .pdf, or .c source files.
- It features a live web scraper tool that pulls data from over 50 search engines for real-time reconnaissance.
These tools allow hackers to plan and launch fully automated attacks, including phishing campaigns, ransomware drops, and malware development. Xanthorox can also work offline if needed, making it useful even in isolated environments or where internet access is restricted.
A growing concern for defenders
Cybersecurity experts are sounding the alarm. Xanthorox’s modular design means it can quickly evolve, making it difficult for defenders to keep up. Traditional detection tools that spot specific threats may no longer be enough.
“Because Xanthorox AI’s LLM will continue to evolve, it’s likely its attacks will not remain the same,” Kris Bondi, CEO and co-founder of Mimoto, a security firm, told Dark Reading via email. “This adds another significant obstacle for enterprises that rely on after-incident forensics to inform how they fine-tune their detection-and-response capabilities.”
How can security teams respond?
Generative AI tools are increasingly being used for good, from writing code to supporting education. But platforms like Xanthorox show the dark side of the technology: it is autonomous, scalable, and customizable, a triple threat in the wrong hands.
While it’s unclear how widely Xanthorox is being used so far, its existence signals a new era of AI-powered cyber threats, one where attacks are more automated, more adaptive, and harder to stop.
For now, businesses are urged to strengthen email security, monitor for AI-generated phishing, and prepare for a wave of hyper-personalized, AI-driven attacks.