Do you have knowledge of ‘dirty bombs’? The bizarre question Anthropic and OpenAI are asking new hires

If you happen to be someone who understands how dangerous weapons are designed or handled, the artificial intelligence industry may be looking for you. In a surprising twist, leading technology companies are now recruiting experts in chemical weapons, explosives and radiological threats. The goal is not to build such weapons but to prevent AI tools from helping others do so. According to a BBC report, US AI firm Anthropic has advertised a role requiring expertise in chemical weapons defence and dirty bombs, while ChatGPT developer OpenAI is offering salaries of up to $455,000 for researchers focused on biological and chemical risks.

Why Anthropic and OpenAI are hiring experts in dirty bombs

As AI systems become increasingly capable of answering complex technical questions, companies are facing a new challenge. What if someone attempts to use these systems to obtain information about building weapons?

Anthropic’s job listing seeks candidates with experience in chemical weapons or explosives defence, along with knowledge of radiological dispersal devices, commonly known as dirty bombs.

The company says the role is intended to ensure that its AI models cannot be manipulated into generating harmful instructions.

According to the BBC, the expert would help strengthen safety policies and technical guardrails designed to prevent users from extracting dangerous information.

Anthropic is not the only company adopting this approach. OpenAI, the developer behind ChatGPT, has also advertised a position for a researcher specialising in biological and chemical risks.

The role focuses on studying how advanced AI models could potentially be misused and developing systems to prevent such behaviour. The company is offering salaries of up to $455,000 for experts who can help address these risks.

The hiring reflects growing recognition within the AI industry that powerful language models could inadvertently generate highly sensitive technical knowledge if proper safeguards are not in place.

Experts warn of regulatory gaps

While companies say these roles are meant to strengthen safeguards and prevent misuse, some researchers argue that exposing AI systems to sensitive weapons-related knowledge deserves closer scrutiny. As AI models grow more capable of synthesising complex technical information, experts question whether the risk of misuse can ever be fully eliminated once such knowledge becomes part of safety testing or evaluation.

Dr Stephanie Hare, a technology researcher and co-presenter of the BBC’s AI Decoded programme, has questioned whether it is entirely safe for AI systems to interact with information related to explosives or radiological weapons, even when the intention is to build protective guardrails. She also notes that there is currently no dedicated international treaty or regulatory framework governing how artificial intelligence systems should handle such sensitive knowledge.

Guardrails becoming a priority for AI developers

AI developers have increasingly warned that their technology could pose serious risks if misused. As a result, many companies are investing heavily in safety research.

Anthropic has previously stated that its AI systems should not be used in autonomous weapons or mass surveillance. Its co-founder Dario Amodei has argued that the technology is not yet reliable enough for such applications.

By hiring specialists who understand chemical weapons and explosive threats, companies hope to design safeguards that prevent AI from generating harmful instructions while allowing the technology to remain useful for research, education and legitimate problem-solving.

The unusual job listings reflect a growing reality in the AI era. As the technology becomes more powerful, the challenge is not just building smarter systems but ensuring they cannot be turned into dangerous tools.
