WeTransfer says files not used to train AI after backlash


WeTransfer has confirmed it does not use files uploaded to its service to train artificial intelligence (AI) models.

The file-sharing company had faced criticism from customers on social media after changing its terms of service, which some interpreted as granting it the right to use uploaded files for AI training.

A WeTransfer spokeswoman told BBC News: “We don’t use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties.”

The firm has now updated its terms, saying it has “made the language easier to understand” to avoid confusion.

WeTransfer said the clause was initially added to “include the possibility of using AI to improve content moderation” and to identify harmful content.

The terms had said WeTransfer could use content for purposes “including to improve performance of machine learning models that enhance our content moderation process”.

It also included the right for WeTransfer to “reproduce, distribute, modify,” or “publicly display” files uploaded to the service.

Some users on social media interpreted this as WeTransfer giving itself the right to share or sell the files uploaded by users to AI companies.

People working in the creative industries, including an illustrator and an actor, posted on X saying they used the service to send work and were considering switching to alternative providers.

WeTransfer said it updated the clause on Tuesday, “as we’ve seen this passage may have caused confusion for our customers.”

Clause 6.3 in the terms of service now says: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”

The changes come into effect on 8 August for existing users.

The rival file-sharing platform Dropbox also had to clarify it was not using files uploaded to its service to train AI models, after a social media outcry in December 2023.

Tech outlet The Register commented at the time that, even though the claim turned out to be untrue, the strength of the reaction showed a lack of trust in tech companies among their users.

Mona Schroedel, a data protection specialist lawyer at Freeths, told BBC News terms of service and privacy policy changes “can come with hidden risks”.

“All companies are keen to cash in on the AI craze and what AI needs more than anything is data,” she said.

“So it is a skip and a hop to trying to use existing data for machine learning exercises under the guise of legitimate interest to improve service provision.”

Users can also be placed in a "difficult position" if the terms of a service they rely on or are embedded in suddenly change, she added, saying they may be left with little choice but to continue using it.

Additional reporting by Liv McMahon


