Adobe has clarified its Terms of Use update, which initially led users to believe their unpublished work was fodder for AI training.
Adobe says it does not train its Firefly generative AI models on user content and that it “will never assume ownership of a customer’s work.”
Backlash snowballed this week after Adobe users were notified of an update to its Terms of Use policy. The language that caught people’s attention was a sentence in Section 2.2 that said “Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience.”
Adobe clarified in a blog post that this part of the policy was not new, and that it refers specifically to moderating illegal content, such as child sexual abuse material, or content that violates its terms, like spam or phishing attempts. “Given the explosion of Generative AI and our commitment to responsible innovation, we have added more human moderation to our content submissions review processes,” said the blog post. In other words, Adobe has added more human moderation, not more automated moderation.
But because Adobe did not specify exactly what had been updated in the Terms, users assumed this section meant unpublished work, including confidential content, could be used to train its AI models. “Our commitments to our customers have not changed,” said the blog post. Adobe explained that its Firefly models are trained on licensed content, including Adobe Stock, and public domain content. Adobe also asserted that its users own the work they create on its apps.
What exactly did Adobe update in its Terms of Use?
In the blog post, the company highlighted in pink text where language had been changed. In the privacy section, “will only” access became “may” access, view, or listen to your Content, and Adobe added the qualifier “through both automated and manual methods, but only” in limited ways. Later in the paragraph, Adobe added the line “as further set forth in Section 4.1 below”; Section 4.1 is where illegal or banned content is detailed. The paragraph also added “including manual review,” which refers to the methods Adobe employs to screen content. The section also changed “child pornography” to “child sexual abuse material.”
So nothing has substantially changed in Adobe’s privacy policy, which may come as a relief to users who rely on Adobe’s apps for their creative pursuits. However, the reaction to Adobe’s vaguely worded update is an indication of the fear and mistrust creatives feel as generative AI threatens to upend their professions. AI models like ChatGPT, DALL-E, Gemini, Copilot, and Midjourney have come under fire for being trained on content scraped from the web, which in turn automates writing and image creation. Then there’s OpenAI’s Sora, which has not been publicly released but is believed to be trained on videos from YouTube and elsewhere.
All in all, it’s a tense time for creatives, and there’s good reason they’re wary of changes that may threaten their livelihoods.