Instagram Probes AI Accounts Sexualising Disabled People

Instagram's parent company, Meta, has launched an investigation into AI-generated profiles on the platform that appear to sexualise disabled people.

The move follows an investigation by the BBC, which identified dozens of accounts posting artificial images and videos of women portrayed with disabilities, including Down’s syndrome and vitiligo. Many of the images depict women with missing limbs, visible scars or using wheelchairs, often presented in sexualised poses or revealing clothing.

Several of the accounts have gained large followings in a short period. One profile claiming to represent conjoined twins attracted around 400,000 followers within months of joining the platform late last year.

Campaigners and charities have condemned the trend. Kamran Mallick, chief executive of Disability Rights UK, described the accounts as deeply disturbing, saying they exploit disabled identities for profit and entertainment while stripping people of dignity and agency.

Medical charities have also raised alarms. Gemini Untwined, which supports surgery for rare cases of conjoined twins, said depicting such conditions as entertainment was morally unacceptable and ignored the serious medical and emotional challenges faced by affected families.

Experts say the issue highlights broader concerns about generative artificial intelligence. Dr Amy Gaeta of the University of Cambridge, who studies AI, gender and disability, said the growing availability of image-generation tools has made it easier to create harmful content. She warned that biases in the data used to train AI systems can result in hypersexualised depictions of disabled people, sometimes even without explicit prompts.

UK media regulator Ofcom said it is monitoring how AI technology is evolving and assessing potential risks. The regulator noted that online safety rules require platforms to tackle illegal content and protect users, particularly children, from abusive or harmful material.

The Equality and Human Rights Commission also criticised the accounts highlighted by the BBC, calling them “deeply disturbing” and stressing the need for strong regulation to prevent harm in digital spaces.

Disability equality charity Scope said the images amounted to discrimination disguised as online content, warning that they are often built using real people’s images without consent and fuel harassment through poorly moderated comment sections.

In response, a Meta spokesperson said the company is investigating the reported content and reiterated that it removes material that promotes sexual exploitation or targets people based on protected characteristics, including disability.

However, researchers argue that moderation measures remain inadequate and can be bypassed. Dr Gaeta said stronger accountability from technology companies is needed, alongside broader efforts to address ableism and discrimination online.
