Web Desk
A recent cybersecurity report has raised the alarm over AI’s growing role in cybercrime.
Researchers say bad actors are now using AI to automate attacks, refine social engineering tactics, and bypass traditional security systems, making digital threats faster, more scalable, and harder to detect.
Experts warn that this trend could pave the way for fully autonomous AI criminal networks, marking a dangerous new chapter in the evolution of organized crime.
AI Supercharges Cybercriminal Operations
The report highlights how criminals have long repurposed technologies such as CCTV, chips, drones, GPS, and 3D printing for illegal purposes. As AI matures, its role in cybercrime is becoming increasingly central.
“Today’s criminal groups have access to increasingly powerful tools,” the report states, “and AI is now at the center of this shift.”
Former NSA cybersecurity expert Evan Dornbush notes that AI’s real strength lies in speed—not necessarily creativity.
“AI doesn’t make scams more original—it makes them faster,” he said. “Fraudsters can generate and improve scam messages quickly, making them harder to detect.”
However, Dornbush warned that unless the security community finds ways to reduce the profitability of these crimes, the problem will grow. “AI lowers costs for attackers. We must either raise their costs or cut their profits.”
Making Cybercrime Easier for Everyone
Lawrence Pingree, Vice President at Dispersive, issued a chilling warning: AI is making cybercrime easier and more accessible—even for those with minimal technical skills.
“The frightening part is that AI can help anyone launch scalable attacks. This is just the beginning,” he said.
Pingree added that as smaller, stealthier AI models become more efficient, they’ll be able to carry out multi-stage cyberattacks with little human guidance.
Combine that with deepfake technology, and the result could be a new breed of malware capable of highly targeted social engineering.
A “Game-Changer” for Cybercrime
Willy Leichter, CMO at AppSOC and a specialist in AI governance, called AI a game-changer for organized crime.
According to Leichter, AI allows fraudsters to scale phishing attacks while tailoring them to individual targets more convincingly.
He stressed that while legitimate AI tools must meet high standards of accuracy and safety, criminals need no such bar: their output only has to be good enough to trick users and bypass defenses.
“Criminals can fool even cautious users with AI-generated content,” Leichter warned.
“That’s why it’s critical for the cybersecurity community to respond with equally strong AI-driven defenses. Falling behind in this AI arms race is not an option.”