Locked Out and Branded: Instagram Users Speak Out on Wrongful Bans and Shattered Trust
For years, 26-year-old Yassmine Boussihmed poured her heart into building her boutique dress brand online. Her Instagram page was more than a storefront – it was a community. Then, one morning in April, it was gone.
Without warning, her account was shut down, accused of breaking Meta’s integrity rules. More than 5,000 followers vanished. So did a steady stream of customers. “I trusted social media to grow my business – and it helped – but it’s also the thing that’s let me down,” she said from her shop in Eindhoven.
When the BBC raised her case with Meta, her accounts reappeared. She cried with relief – only to watch her personal profile disappear again minutes later.
Yassmine is one of thousands of people worldwide who say they’ve been wrongly locked out of Instagram or Facebook, often accused of something far more serious: violating rules on child sexual exploitation (CSE).
Many say they were never told what post broke the rules. Several told the BBC the accusations have caused deep anxiety, reputational damage, and fears that police might knock on their door.
A Disturbing Accusation
Lucia, a 21-year-old from Austin, Texas, was banned for more than two weeks. The reason: an alleged breach of Meta’s CSE policy. She suspects the AI systems mistook a bikini photo of her and a friend – both adults – for underage imagery.
“It’s one of the most revolting accusations you can face,” she said. As an aspiring lawyer hoping to work in juvenile justice, she fears the suspension could tarnish her career. Her account was restored just hours after the BBC contacted Meta – no explanation given.
When the Community is Your Lifeline
For 55-year-old Duncan Edmonstone, Instagram isn’t just a pastime – it’s a source of strength in his battle with stage four lung cancer. He leans on private Facebook groups for medical advice and emotional support.
So when he was banned for nearly two weeks in June, accused of breaking cybersecurity rules, the loss hit hard. “These groups aren’t just social,” he said. “They can change the course of someone’s treatment.”
The Teacher Accused – Twice
Ryan, a former teacher in London, knows the emotional whiplash of being banned, reinstated, and banned again. Accused in May of breaching CSE rules, he spent weeks appealing. In June, a human reviewer allegedly confirmed the violation – only for Instagram to later send an apology: “We got this wrong.”
Then, hours after the BBC put questions to Meta about his case, he was banned again – this time from both Instagram and Facebook. His Facebook account returned two days later; Instagram remains locked.
“Sorry we called you a paedophile for two months – here’s your account back,” is how he sums up the experience. “It’s devastating. I’m constantly on edge.”
A Wave of Wrongful Bans?
More than 36,000 people have signed a petition accusing Meta of using AI systems to wrongly ban accounts – and to handle the appeals that follow. On Reddit, in Facebook groups, and across X (formerly Twitter), frustrated users swap stories and advice. Many believe the only way to reach a human is to pay for Meta Verified – a service that doesn’t always solve the problem.
Meta hasn’t publicly acknowledged any widespread issue. The company says AI plays a central role in its content moderation, alongside human reviewers. In July, it reported removing more than 635,000 accounts over sexualised content involving children.
For those caught in the crossfire, that explanation offers little comfort. “It’s not just about losing followers or photos,” said one banned user. “It’s about having your name linked to something vile, and no way to clear it.”