Key Takeaways
- State attorneys general from North Carolina and Utah launch AI safety task force with OpenAI and Microsoft
- Task force aims to develop basic safeguards for AI development, focusing on child protection
- Voluntary guidelines backed by potential joint legal action from state law enforcers
State attorneys general from North Carolina and Utah have partnered with OpenAI and Microsoft to establish an AI safety task force, creating the first major state-level initiative to regulate artificial intelligence development and protect consumers.
The bipartisan effort led by Democratic North Carolina AG Jeff Jackson and Republican Utah AG Derek Brown will develop voluntary safeguards for AI companies, with particular focus on preventing harm to children and identifying emerging risks.
Federal Regulation Vacuum
With no comprehensive federal AI legislation in place, state law enforcers are stepping in to fill the regulatory gap. Jackson expressed little confidence in Congress acting quickly on AI regulation, noting their inaction on social media and internet privacy issues.
“They did nothing with respect to social media, nothing with respect for internet privacy, not even for kids, and they came very close to moving in the wrong direction on AI by handcuffing states from doing anything real,” Jackson told CNN.
Diverging Company Approaches
The task force partners already show different safety philosophies. OpenAI’s Sam Altman recently stated the company would allow verified adults to engage in erotic conversations, while Microsoft’s Mustafa Suleyman emphasized creating “an AI you trust your kids to use” without romantic conversations for any users.
Enforcement Power
Though the safeguards will be voluntary, the task force provides a platform for state attorneys general to coordinate monitoring and potential legal action against companies that harm consumers. Jackson noted this distinguishes the effort from “a group of think tanks coming together” on principles.
Microsoft’s Kia Floyd stated the partnership reflects “a shared commitment to harness the benefits of artificial intelligence while working collaboratively with stakeholders to understand and mitigate unintended consequences.”
The initiative comes as AI safety concerns escalate, with reports of the technology fueling delusions, contributing to self-harm, and prompting companies to block young users from adult content.