WeTransfer Clarifies AI Policy After Customer Backlash Over Terms of Service Update

WeTransfer has publicly confirmed that it does not use files uploaded to its platform to train artificial intelligence (AI) models, following a wave of backlash from users who raised concerns over recent changes to the company’s terms of service.

The file-sharing company faced criticism on social media after updating its legal terms, with some users interpreting the changes as giving WeTransfer the right to use their content for AI development or to sell it to third parties. The clause in question mentioned using content to “improve performance of machine learning models that enhance our content moderation process,” which alarmed creatives and professionals who rely on the service to share sensitive or original work.

In a statement to the BBC, a WeTransfer spokesperson clarified the company’s stance: “We don’t use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties.”

WeTransfer has since revised the wording in its terms of service, citing a desire to eliminate confusion. The company explained that the original clause was added to allow for potential use of AI in content moderation – to detect and block harmful or inappropriate material – not for training generative AI systems.

The updated terms now read: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.” These changes will take effect from August 8 for existing users.

Despite the clarification, some users in creative industries, such as illustrators and performers, expressed frustration and indicated they were considering switching to alternative platforms. Similar concerns surfaced in December 2023 when Dropbox faced comparable accusations, prompting it to reassure users that their files were not being used to train AI models.

Legal experts say the confusion reflects broader trust issues between consumers and tech companies. Mona Schroedel, a data protection lawyer at Freeths, told the BBC that sudden policy changes can leave users vulnerable, especially when companies have the power to alter terms unilaterally. “All companies are keen to cash in on the AI craze, and what AI needs more than anything is data,” she said.

Schroedel warned that while some updates may seem routine, they can carry “hidden risks,” particularly when services are widely used and users feel locked in. “People often have little choice but to accept changes, even if they’re uneasy about them,” she added.

WeTransfer said it will continue to listen to user feedback and ensure its policies are transparent going forward.
