The AI Arms Race: How Generative Models Are Rewriting Cybersecurity
From phishing kits to deepfake fraud, GenAI is being weaponized and defenders are scrambling to adapt
Hi there, I’m experimenting with some changes on my Substack newsletter and will be adding posts around the AI space. I’ll keep these posts short and sweet with a quick TL;dr section up top for busy readers.
TL;dr
Bad actors are using generative AI and large language models (LLMs) to scale phishing, smishing, malware creation, and deepfake scams with terrifying speed and precision
Tools like WormGPT (not linked on purpose), prompt injection attacks, and website generators, such as Vercel’s v0, are lowering the barrier to cybercrime
Deepfake-based impersonation and AI-cloned business websites are emerging as serious threats to trust and commerce
Defenders are fighting back with AI-native security tools, zero-trust frameworks, and a stronger push for cross-sector collaboration
The dual-use nature of LLMs is accelerating an AI arms race, one where speed, adaptability, and foresight will determine who stays safe
How Generative AI & LLMs Are Supercharging Cyberattacks
With the rise of jailbroken LLMs like WormGPT and FraudGPT (not linked on purpose), cybercriminals now have access to generative tools that can automate phishing emails, malware scripts, and even reconnaissance for advanced attacks. These models remove the need for technical skills, turning what used to be complex social engineering into an industrialized process anyone can run with a credit card and a Tor browser.
According to TechRadar, researchers have seen these models generate highly convincing fraud emails that evade traditional filters, making them incredibly difficult to detect.
The Rise of Phishing as a Prompt
Attackers are now using tools like Vercel’s v0, a frontend website generator built on LLMs, to create phishing sites that replicate the look and feel of real services. Just a few words typed into the interface can output a full login page, complete with brand logos and UI kits, perfect for tricking unsuspecting users. As Axios reports, security researchers at Okta are already seeing this behavior in the wild.
At the same time, prompt injection, an adversarial technique that manipulates how an LLM interprets instructions, is becoming a serious concern. As outlined in Wikipedia and OWASP research, it allows attackers to hijack seemingly safe models, bypass safety controls, and coax them into producing harmful outputs or leaking sensitive data.
Deepfake Fraud Is Here, and It’s Targeting Small Businesses
Small business owners across the country are reporting a spike in fraudulent websites, impersonation scams, and fake customer support numbers using AI-generated content. According to Business Insider, these AI-powered frauds are so convincing that many entrepreneurs are spending more time taking down fake versions of their business than running the real one.
Add in deepfake audio and video impersonation, and you’ve got a recipe for disaster. It’s now trivial to clone a voice or generate a fake Zoom call with an “executive” demanding urgent wire transfers.
I’m not the only one who’s been getting strange video call requests or urgent texts from my CEO from numbers I don’t recognize.
How Cyber Defenders Are Fighting Back
Vendors like SentinelOne, CrowdStrike, and Microsoft are embedding LLMs into their detection engines, while enterprise IT teams are shifting to zero-trust architectures (ZTA) that validate every interaction. These changes go hand-in-hand with behavioral analytics, tracking usage patterns across endpoints and user sessions to detect suspicious behavior before damage occurs.
On the policy front, federal leaders are calling to reauthorize the Cybersecurity Information Sharing Act (CISA) to expand threat intelligence sharing across the public and private sectors.
The Catch: LLMs Are a Double-Edged Sword
LLMs are fundamentally dual-use. The ability to understand patterns, generate content, and simulate scenarios makes them perfect for red-team simulations and defense, but equally ideal for reconnaissance, evasion, and exploit planning. A recent arXiv paper shows how LLMs are already being used in zero-day analysis and exploit chain planning, automating the cognitive labor of a skilled hacker.
This makes the cybersecurity landscape less about code and more about velocity. The question isn’t whether we’ll see GenAI in future breaches. It’s whether defenders can outpace the next wave of attacks using those same tools.
What You Should Be Doing (Now)
Use layered defense: AI-native endpoint protection, but don’t forget the basics: 2FA, strong passwords, phishing training
Audit your AI tools: Review how your company uses LLMs. Is prompt sanitization in place? Are inputs and outputs being logged and verified?
Prepare your people: Train your staff on how deepfake scams and prompt attacks work. Awareness is your human firewall
Track the open-source underworld: Keep an eye on WormGPT, FraudGPT, and other dark web models; they’re evolving fast
Support good policy: Reauthorization of CISA and smart AI governance will determine whether we’re prepared for what’s next.
Thank you for reading and being a supporter of my humble newsletter 🙏.
You can also hit the like ❤️ button at the bottom of this email to support me or share it with a friend. It helps me a ton!