Web Desk
As tools like GPT-4o, Grok 3, and Midjourney grow more advanced and accessible, cybercrime threats are surging, especially in vulnerable regions like South Asia.
According to experts, generative AI is now being used to power sophisticated scams, disinformation, and biometric fraud, with ordinary users increasingly caught in the crosshairs.
Madhu Srinivas, Chief Risk Officer at global RegTech firm Signzy, said AI-generated images, videos, and documents are already being misused at a dangerous scale.
“The most alarming part is how these tools are targeting everyday users,” said Srinivas. “From deepfake sextortion to fake political content, the damage is real—and often irreversible.”
Top 5 Cybercrimes Driven by AI-Generated Images
1. Deepfake CEO Scams
Criminals use AI to clone the face or voice of top executives and trick employees into transferring money or leaking sensitive data.
2. Sextortion Threats
Attackers alter personal photos to create explicit deepfakes, then use them to blackmail victims—especially women and minors.
3. Political Manipulation
Fake images of protests or violent incidents are created to stir tension or sway voters, especially around election periods.
4. Biometric Spoofing
AI-generated faces and iris patterns are being used to bypass facial and iris recognition systems in banking, border control, and national security.
5. Marketplace & Dating Scams
Fraudsters use synthetic headshots to create fake profiles on platforms like Airbnb or Tinder, often as part of identity theft or money laundering.
South Asia’s Growing Vulnerability
Srinivas warned that South Asia is uniquely at risk due to:
High use of WhatsApp and Telegram for news and messages.
Low digital literacy in many areas.
Rising political polarization, making it easier to spread viral fake content.
“One AI-generated photo of a fake riot or rally can create chaos before anyone even knows it’s false,” he said.
Biometric Systems at Risk
The rise of synthetic images is also putting biometric authentication systems under pressure.
Banks, border authorities, and surveillance networks are all vulnerable to spoofing attacks using hyper-realistic AI faces or iris patterns.
Are AI Platforms Doing Enough?
Although companies like OpenAI and xAI have introduced safeguards such as digital watermarking, Srinivas says the pace of innovation is outstripping security efforts.
“Right now, even a novice can generate a fake face or ID. The guardrails aren’t strong enough.”
5 Solutions to Fight Back
Srinivas outlined a roadmap to combat the misuse of generative AI:
1. Mandatory watermarking and metadata for all AI-generated media.
2. Risk-based access controls for prompt inputs and outputs.
3. Open-source tools for verifying visual content (a basic metadata check is sketched after this list).
4. Transparent abuse reporting systems across platforms.
5. Global cooperation between governments, tech companies, and law enforcement.
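Items 1 and 3 lend themselves to a brief illustration. The sketch below, which assumes only the open-source Pillow imaging library and is not Signzy's tooling, shows the kind of surface-level check an open-source verifier might start with: scanning an image's embedded text and EXIF fields for generator markers. The marker list is a hypothetical example; real verification would rest on cryptographic provenance standards such as C2PA, since ordinary metadata is trivially stripped or forged.

```python
# pip install Pillow
# Illustrative sketch only: scan an image's embedded metadata for hints
# that it was produced by a generative-AI tool. This is NOT robust
# verification; metadata is easy to strip or forge. Production systems
# rely on cryptographic provenance (e.g., the C2PA standard).
import sys

from PIL import Image
from PIL.ExifTags import TAGS

# Keywords that sometimes appear in generator-written metadata.
# Hypothetical, illustrative list, not an official registry.
AI_MARKERS = ("midjourney", "stable diffusion", "dall-e",
              "generated", "grok", "firefly")

def scan_image_metadata(path: str) -> list[str]:
    """Return metadata entries that look like AI-generation hints."""
    hits = []
    with Image.open(path) as img:
        # Format-specific info, e.g. PNG text chunks.
        for key, value in img.info.items():
            text = f"{key}={value}".lower()
            if any(marker in text for marker in AI_MARKERS):
                hits.append(f"{key}: {value}")
        # EXIF tags (common in JPEGs), e.g. Software, ImageDescription.
        for tag_id, value in img.getexif().items():
            name = TAGS.get(tag_id, str(tag_id))
            text = f"{name}={value}".lower()
            if any(marker in text for marker in AI_MARKERS):
                hits.append(f"{name}: {value}")
    return hits

if __name__ == "__main__":
    findings = scan_image_metadata(sys.argv[1])
    if findings:
        print("Possible AI-generation markers found:")
        for line in findings:
            print(" ", line)
    else:
        print("No obvious markers; absence proves nothing.")
```

The ease of defeating a check like this is exactly why the first recommendation calls for mandatory, tamper-resistant watermarking rather than optional metadata.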
Call to Action: Strengthen Society’s Defenses
To keep pace with the threats, Srinivas calls for urgent reforms across sectors:
Law enforcement must boost digital forensics and modernize cybercrime laws.
Journalists should treat image verification like fact-checking.
Educators need to teach AI literacy so students can spot and challenge synthetic content.