
OpenAI’s Head of Trust and Safety is Stepping Down

OpenAI’s Head of Trust and Safety, Dave Willner, has announced in a LinkedIn post that he is stepping down from his position.

Willner has been leading the artificial intelligence firm’s trust and safety team since February 2022, but will now transition into an advisory role so he can spend more time with his family.

This decision comes at a crucial time for OpenAI, as the company has faced increasing scrutiny from lawmakers, regulators, and the public concerning the safety of its AI products and their potential impact on society. The viral success of OpenAI’s AI chatbot, ChatGPT, in late 2022 intensified that attention and raised further questions about the safety measures in place.

During a Senate panel hearing in May, OpenAI’s CEO, Sam Altman, called for AI regulation and expressed concerns about the potential misuse of AI technology, particularly for manipulating voters and spreading disinformation ahead of upcoming elections.

In his post, Willner, who has previous experience at Facebook and Airbnb, acknowledged that OpenAI is currently in a high-intensity phase of development and that his role has expanded significantly since he joined the company.

OpenAI released a statement about Willner’s departure, praising his work in operationalizing their commitment to the safe and responsible use of AI technology. The company confirmed that their Chief Technology Officer, Mira Murati, will serve as the interim manager for the trust and safety team, while Willner will continue to advise the team until the end of the year.

Looking ahead, OpenAI is actively searching for a technically skilled lead to further its mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of its AI technology.

As part of their ongoing efforts to address concerns and establish responsible AI practices, OpenAI is collaborating with regulators in the United States and other regions to develop guidelines and guardrails for rapidly advancing AI technology. Recently, OpenAI joined forces with six other leading AI companies in making voluntary commitments, endorsed by the White House, to enhance the safety and trustworthiness of AI systems and products. This includes subjecting new AI systems to external testing before public release and clearly labeling AI-generated content.
