Guardrails off for Anthropic: Firm tweaks AI safety policy amid heightened competition, lack of regulation—what changes?

The guardrails are off for Anthropic, a company founded by former OpenAI employees worried about the dangers of artificial intelligence.

Once focused on developing AI technology with safety in mind, Anthropic is now weakening its foundational safety principles, releasing a statement outlining its revised safety policy.

“We’re releasing the third version of our Responsible Scaling Policy (RSP), the voluntary framework we use to mitigate catastrophic risks from AI systems,” Anthropic said on Tuesday, marking the change.

Anthropic’s new safety policy: What changes?

In a statement to Business Insider, the company also said that amid heightened competition and a lack of government regulation, it would no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advances outpace its own safety measures.

Anthropic’s previous safety policy required it to pause training more powerful models if their capabilities outpaced the company’s ability to control them and ensure their safety — a measure that’s been removed in the new policy.

Explaining the shift, Anthropic said that the current policy environment with regard to the technology had “shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”

Further, the company’s chief science officer, Jared Kaplan, told Time Magazine that its responsible scaling policy had failed to keep pace with the AI race.

“We felt that it wouldn’t actually help anyone for us to stop training AI models. We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead,” Kaplan was quoted as saying by the publication.

Anthropic also said that it was “convinced” that “effective government engagement on AI safety is both necessary and achievable”, but added that it was “proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”

To that end, Anthropic will continue to provide safety recommendations for the AI industry, but the company will separate its own plans from its suggestions for the industry.

Tiff with Pentagon

The change comes as Anthropic remains embroiled in a dispute with the Pentagon, and a day after Defense Secretary Pete Hegseth gave the company’s CEO, Dario Amodei, a Friday deadline to roll back AI safeguards.

Failing to do so, Hegseth warned, would put Anthropic at risk of losing a $200 million defence contract and being put on a government blacklist, reported CNN.

That said, a source familiar with the developments told the news outlet that the change in Anthropic’s safety policy was not related to the Pentagon dispute.
