A new bill in North Carolina aims to regulate artificial intelligence by making it illegal to create deceptive “deepfake” content.
Announced shortly after OpenAI’s new collaboration with the state treasurer, the bipartisan bill is gaining traction among lawmakers.
Under the proposed law, anyone creating deepfake videos — where individuals appear to say or do things they never actually did — could face misdemeanor charges.
The legislation also gives victims the right to sue if they’re misrepresented in such content.
At the same time, the bill offers legal protection for AI developers.
If a licensed professional misuses an AI tool and causes harm, the person wronged couldn’t sue the tool’s developer — only the professional who made the mistake.
Republican Rep. Jake Johnson of Polk County is leading the charge, working alongside Democrats Zack Hawkins and Vernetta Alston. Johnson emphasized that the goal isn’t to stifle innovation.
“We don’t want North Carolina to earn a reputation as a bad place for AI development or business,” he said. “This legislation is still evolving. We want to get it right and make the state a top destination for AI startups and projects.”
Some lawmakers raised concerns about political campaign ads: could campaigns use AI to mislead voters? Johnson clarified that using AI for background music or visuals is fine, but any attempt to falsify what a politician said or did, such as fabricating a mugshot, would be banned.
Previous attempts to curb deepfake ads have failed to become law, but this bill is showing early promise.
It passed its first committee review on Tuesday and now moves to the House Judiciary 3 Committee for further consideration.
The discussion around AI accountability and misinformation is heating up. And with OpenAI already offering its services to state officials, North Carolina could become a testing ground for responsible AI governance.