Microsoft signage is seen at the company's headquarters in Redmond, Washington, US

Microsoft Employee Raises Alarm Over AI Tool’s Creation of Offensive Images

A Microsoft employee has voiced concerns over the potential harm caused by the company’s artificial intelligence systems, particularly in generating offensive images, prompting a letter to the US Federal Trade Commission.

Shane Jones, a principal software engineering lead at Microsoft, highlighted issues with the AI text-to-image generator Copilot Designer, citing its tendency to produce inappropriate or offensive images, including sexualized depictions of women, even in response to benign prompts.

Jones, who specializes in testing products for vulnerabilities, expressed dismay that the tool is marketed as safe, including for children, despite known risks. He called for Copilot Designer to be withdrawn from public use until better safeguards are implemented, or, at a minimum, for its marketing to be restricted to adults.

This development underscores broader concerns surrounding AI-generated images and their potential for harm, especially in spreading offensive or misleading content. Jones’ letter follows recent incidents of AI-generated pornographic images and historically inaccurate depictions, prompting calls for increased scrutiny and regulation in the AI industry.

Jones’ efforts extend beyond the FTC, as he has also raised his concerns with Microsoft’s board of directors, urging investigations into the company’s responsible AI practices and disclosure of known risks to consumers. Despite encountering obstacles, including alleged interference from Microsoft’s legal department, Jones remains committed to addressing the risks associated with AI technology.
