US artificial intelligence firm Anthropic is advertising for a specialist in chemical weapons and high‑yield explosives to strengthen safety measures and reduce the risk of its systems being misused.
In a job posting on LinkedIn, the company said it is looking for someone with at least five years’ experience in defence against chemical weapons and explosives, with expertise in radiological dispersal devices — commonly referred to as “dirty bombs”. Anthropic told the BBC the role is similar to other positions it has created in sensitive areas of risk mitigation.
The recruitment drive reflects growing concern within the AI industry about the potential for powerful systems to be used to design or instruct on the production of dangerous weapons. Anthropic's posting comes as other leading developers advertise related roles: OpenAI lists a vacancy for a researcher specialising in biological and chemical risks, with a salary reportedly up to $455,000 (£335,000), almost double what Anthropic is offering.
Experts have voiced unease about the strategy of hiring weapons specialists to help secure AI systems, warning that giving AI tools access to detailed knowledge about hazardous materials — even with safety guardrails — could itself pose dangers if not carefully controlled.
“Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?” asked Dr Stephanie Hare, tech researcher and co‑presenter of the BBC’s AI Decoded programme. “There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight.”
The issue has taken on added urgency amid heightened calls by the US government for AI firms to address misuse risks, even as the technology is being deployed in geopolitical contexts including recent military action in the Middle East and operations in Venezuela.
Anthropic's recruitment comes against the backdrop of its ongoing legal dispute with the US Department of Defense. The firm was recently labelled a "supply chain risk" by the Pentagon after insisting that its tools not be used for fully autonomous weapons or domestic mass surveillance, safeguards that drew criticism from senior defence officials and prompted Anthropic to file suit.
Anthropic's AI assistant, Claude, remains in use by some organisations, including in systems provided by Palantir, despite the controversy over the firm's relationship with the US military.
Author: Kieran Seymour
