AI Safety Researcher Quits Anthropic, Warns of ‘Perilous’ Times

A senior artificial intelligence safety researcher has resigned from leading AI firm Anthropic, issuing a stark warning about the direction of the world and the technology he once worked to safeguard.

Mrinank Sharma, who led a team focused on AI safety at the company, announced his departure in a resignation letter posted on social media platform X. In the message, he wrote that “the world is in peril,” citing not only artificial intelligence but also bioweapons and a range of interconnected global crises.

Sharma said he plans to return to the UK to study poetry and focus on writing, adding that he intends to “become invisible for a period of time.”

Anthropic, founded in 2021 by former employees of OpenAI, has positioned itself as a safety-focused alternative within the rapidly expanding generative AI sector. The company is best known for its chatbot Claude and has frequently highlighted its commitment to reducing the risks posed by advanced AI systems.

During his tenure, Sharma worked on research into AI safeguards, including efforts to address the tendency of generative systems to overly flatter users, the potential misuse of AI in biological threats, and the broader societal effects of human interaction with AI assistants.

While he said he valued his time at the company, Sharma suggested that aligning corporate decisions with ethical principles can be difficult. He wrote that organisations, including Anthropic, often face pressures that can conflict with their core values.

His resignation comes amid growing debate within the AI industry about commercial pressures and safety standards. This week, a researcher at OpenAI also stepped down, raising concerns about the company’s decision to introduce advertising into its flagship chatbot, ChatGPT.

Anthropic is structured as a “public benefit corporation” dedicated to ensuring AI systems remain aligned with human interests. The company has published safety evaluations of its models and warned of risks such as the cyber misuse of advanced AI tools.

However, it has not been without controversy. In 2025, Anthropic agreed to pay $1.5 billion to settle a class action lawsuit brought by authors who alleged their works had been used without permission to train AI systems.

Meanwhile, tensions between major AI players have become more visible. Anthropic recently released an advertisement criticising OpenAI’s move toward ads within ChatGPT, prompting a public response from OpenAI chief executive Sam Altman.

Industry observers say the departures highlight a broader unease among some researchers about how quickly AI systems are being commercialised. With companies racing to expand their products and revenue streams, debates over ethics, regulation and long-term societal impact are intensifying.

For Sharma, the next chapter will be far removed from Silicon Valley’s AI labs. In stepping away from one of the sector’s most prominent firms, he has added his voice to a growing chorus urging caution at what many describe as a pivotal moment for artificial intelligence.
