3.1 Adobe’s Approach to Content Moderation
Adobe believes that maintaining engaging and trustworthy communities that foster creativity requires clear guidelines for acceptable behavior and robust processes for consistent content enforcement. Our Policies, which can be found in our Transparency Center, establish standards for user conduct across all Adobe products and services. We discover policy-violating and allegedly illegal content through in-product reporting, our publicly available reporting channels, and automated technologies.
When content violates our Policies, we take action against it globally. Our Policies typically already cover material that is locally illegal, but Adobe is also committed to respecting the applicable laws of the EU and its Member States. If we determine that content violates local law but does not otherwise violate our Policies, we may disable it locally by blocking it in the relevant jurisdiction.
Content Reporting Mechanisms
In-Product Reporting
In many Adobe products and services, users can report content they believe violates our Policies via in-product reporting options, which are detailed on a per-product basis here. For all other products and services, users and non-users may always contact abuse@adobe.com to file a report with Adobe’s Trust & Safety team. Whenever someone reports an alleged violation of our Policies, our team may review the content in question to determine whether a violation took place and take action on that content accordingly.
Reporting Forms
Anyone in the EU can report content on Adobe products or services that they believe violates applicable laws of the EU or its Member States through our Illegal Content Reporting Form. Reporters are asked to provide additional context about the allegedly illegal content, including the basis for the report and the country where they allege the law has been violated. When someone reports allegedly illegal content, our team may review the content in question to determine whether a violation of applicable law or our Policies took place and take action on that content accordingly.
Anyone around the world can report an intellectual property violation by visiting our infringement reporting form. We also accept notices via mail or fax as detailed in our Intellectual Property Removal Policy.
Intellectual Property Removal Policy
At Adobe, we respect the intellectual property rights of others, and we expect our users to do the same. As outlined above, intellectual property infringement is a violation of our Policies across all Adobe products and services. We disable content in response to complete and valid notices of infringement. When a complete and valid notice is filed with Adobe against a user regarding one or more pieces of allegedly infringing content, the user receives one ‘strike’ against their account. If the user receives three ‘strikes’ within a one-year period, their account will be terminated. Our Intellectual Property Removal Policy is set out here.
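The strike system described above can be thought of as a rolling one-year counter per account. The sketch below is purely illustrative; the class, method names, and data structures are assumptions for explanation and do not reflect Adobe's actual implementation.

```python
from datetime import datetime, timedelta
from typing import Dict, List, Optional

STRIKE_LIMIT = 3                      # three strikes...
STRIKE_WINDOW = timedelta(days=365)   # ...within a one-year period


class StrikeTracker:
    """Illustrative tracker for IP-infringement strikes against user accounts."""

    def __init__(self) -> None:
        self._strikes: Dict[str, List[datetime]] = {}

    def record_strike(self, account_id: str, when: Optional[datetime] = None) -> bool:
        """Record one strike for one valid notice, regardless of how many items it lists.

        Returns True if the account has reached the termination threshold.
        """
        when = when or datetime.utcnow()
        strikes = self._strikes.setdefault(account_id, [])
        strikes.append(when)
        # Only strikes within the trailing one-year window count toward termination.
        recent = [t for t in strikes if when - t <= STRIKE_WINDOW]
        return len(recent) >= STRIKE_LIMIT
```

Note that a single valid notice counts as a single strike even if it lists multiple infringing items, which is why the sketch records one entry per notice rather than per item.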
Abusive Content Detection
To enforce our Policies on a global scale, Adobe relies on a variety of tools and mechanisms to detect and remove potentially violative content that is hosted on our servers. We apply different measures depending on whether the content at issue is posted on an online platform (such as Behance) or shared using a publicly accessible link (such as a public Adobe Express page) (collectively, “publicly accessible content”), or is kept in private cloud storage. We do not apply any of these measures to locally stored content.
Fully Automated Tools
Our automated tools use multiple signals to detect and remove publicly accessible content that may violate our Policies. For example, these tools enable us to detect and automatically remove fraud, phishing, and spam content on products such as Adobe Express. They also enable us to detect and remove content on Behance that might violate our nudity or violence and gore policies. Classifiers assign scores to text, images, and videos detected across our products and services, and content is removed based on these scores. We never automatically remove content located in private storage. Using these automated models helps us detect more problematic content and make quicker enforcement decisions, which in turn helps keep our communities safe.
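As a rough sketch of how score-based automated removal of publicly accessible content might work, consider the following. The policy labels, threshold values, and function names are hypothetical assumptions for illustration only, not Adobe's actual configuration.

```python
from typing import Dict

# Hypothetical per-policy thresholds above which publicly accessible content is
# removed automatically; real systems tune such values per product and policy.
AUTO_REMOVE_THRESHOLDS: Dict[str, float] = {
    "fraud_or_phishing": 0.95,
    "spam": 0.90,
    "nudity": 0.97,
    "violence_and_gore": 0.97,
}


def automated_decision(scores: Dict[str, float], publicly_accessible: bool) -> str:
    """Map classifier scores (policy label -> value in [0, 1]) to an automated action."""
    if not publicly_accessible:
        # Content in private storage is never removed automatically.
        return "no_automated_action"
    for label, score in scores.items():
        threshold = AUTO_REMOVE_THRESHOLDS.get(label)
        if threshold is not None and score >= threshold:
            return f"auto_remove:{label}"
    return "no_action"
```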
Hybrid Tools
In addition to fully automated content removal, we sometimes supplement automatic detection of violative publicly accessible content with human review to ensure the accuracy of our actions. Classifiers assign scores to text, images, and videos detected across our products and services, and our Trust & Safety team reviews the detected content and takes appropriate enforcement action. In all situations, human review of content occurs only after it has been flagged by our abuse detection models or reported by another user. We also use this hybrid system of review to combat child sexual abuse material, which may also be stored in private cloud storage, as detailed below.
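A hybrid pipeline of this kind can be sketched as a classifier stage that only ever enqueues content for human review rather than removing it outright. The queue structure, field names, and threshold below are illustrative assumptions, not a description of Adobe's internal tooling.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReviewItem:
    content_id: str
    label: str
    score: float
    source: str  # "classifier" or "user_report"


@dataclass
class HumanReviewQueue:
    """Illustrative queue: content reaches human reviewers only when flagged or reported."""
    items: List[ReviewItem] = field(default_factory=list)

    def flag_from_classifier(self, content_id: str, label: str, score: float,
                             review_threshold: float = 0.7) -> None:
        # Below the review threshold, nothing is enqueued and no human sees the content.
        if score >= review_threshold:
            self.items.append(ReviewItem(content_id, label, score, "classifier"))

    def flag_from_user_report(self, content_id: str, label: str) -> None:
        # User reports always reach a reviewer; 1.0 is just a placeholder priority score.
        self.items.append(ReviewItem(content_id, label, 1.0, "user_report"))
```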
Tools Used to Combat Child Sexual Abuse Material (CSAM)
Adobe is deeply committed to keeping children safe online and doing our part to fight the spread of child sexual abuse material (CSAM). We have a zero-tolerance policy against any material uploaded to our servers, whether publicly accessible or kept in private storage, that sexualizes, sexually exploits, or endangers children. As part of these efforts, we employ several mechanisms combining automatic detection of content with human review, including:
- We utilize multiple methods to detect CSAM, such as sophisticated machine learning models and hash matching technology. These scans enable us to compare digital signatures (or “hashes”) of images and videos uploaded by any Adobe user to our servers against databases of known CSAM hashes (a simplified illustration of this hash-comparison approach follows this list). Adobe reports all confirmed CSAM to the National Center for Missing and Exploited Children (NCMEC), and our Trust & Safety team reviews all material reported to NCMEC to ensure a high level of accuracy and quality of all reports.
- Our Trust & Safety team will then review, report, and remove CSAM discovered by our machine learning models and hash matching technology, as well as by user reports or account investigations. We share hashes of previously undetected CSAM to the NCMEC-managed industry hash list to help prevent the redistribution of CSAM on other platforms. We continue to advance Adobe’s CSAM detection capabilities and build wellness features into our moderation tools.
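As referenced in the first item above, hash matching reduces each uploaded file to a digital signature and checks it against a list of known hashes, with matches queued for human review before any report is made. The sketch below is a generic, simplified illustration under assumed interfaces; production systems typically use perceptual or proprietary matching hashes that are robust to re-encoding, not a plain cryptographic digest as shown here.

```python
import hashlib
from typing import List, Set


def file_hash(path: str) -> str:
    """Compute a digest of a file's bytes (a stand-in for a real matching hash)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def check_upload(path: str, known_hashes: Set[str], review_queue: List[str]) -> None:
    """Queue any match against the known-hash list for human review before reporting."""
    if file_hash(path) in known_hashes:
        review_queue.append(path)
```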
Content Enforcement
Content Enforcement Teams
Adobe has several specially trained teams in place to ensure that reported content across our broad set of products and services is promptly reviewed and appropriately assessed:
- The Trust & Safety team is responsible for moderation of content on our user-generated content products and services (such as Behance) that violates the Adobe Content Policies.
- In addition, Adobe Stock has a team of individuals responsible for reviewing contributor submissions before they are offered for licensing as part of the Adobe Stock collection.
- Lastly, we have a team of IP agents who are specifically trained to handle IP infringement claims across all our products and services.
Our content enforcement teams receive detailed training on a variety of topics during onboarding and receive updates on new laws and relevant political or historical context. For particularly complex reports, content enforcement teams can consult with leadership or escalate to members of Adobe’s IP and Trust & Safety Legal and Policy teams, who may in turn consult with both internal and external specialists with expertise in the laws of the EU and its Member States.
Content and Account Actions
Adobe acts quickly to take appropriate action against violations of our Policies or applicable law. Our Trust & Safety, Stock and IP teams review content that has been reported or detected for potential violations. When we determine that content is violative, we take action that may include:
- Global Deactivation: We first review content for violations of our Policies, and violative content is deactivated globally. When we deactivate content, it is no longer available to users or non-users anywhere in the world.
- Local Block: Adobe may restrict access to content in each relevant jurisdiction if we determine that the content violates local law but does not violate our Policies. When we locally block content, that content is not visible to users or non-users in the relevant jurisdiction but remains visible elsewhere.
- Limiting Distribution: When we limit distribution of a piece of content, that content will remain on the product or service but may not be visible to certain users.
We consider several factors when determining the appropriate content enforcement action. As described above, we may make the decision to globally deactivate or locally block the content based on the nature of the violation or other context specific to each case. We also may decide on a case-by-case basis to allow certain content to remain on our products and services but take steps to ensure that it cannot be discovered accidentally or viewed by certain users.
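The precedence between these actions can be summarized as a simple decision flow, sketched below. The function name, inputs, and ordering are a plain-language reading of this section for illustration, not Adobe's internal decision logic.

```python
def enforcement_action(violates_policies: bool,
                       violates_local_law: bool,
                       limit_visibility_only: bool = False) -> str:
    """Map review findings to the enforcement actions described in this section."""
    if violates_policies:
        # Case by case, some violative content may stay up with reduced visibility
        # instead of being deactivated globally.
        return "limit_distribution" if limit_visibility_only else "global_deactivation"
    if violates_local_law:
        # Locally illegal but not policy-violating content is blocked only in the
        # relevant jurisdiction and remains visible elsewhere.
        return "local_block"
    return "no_action"
```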
Adobe may also restrict or limit the distribution or capabilities of any account for violations of our policies.
Notice and Appeals
For our in-scope products and services, Adobe provides email notice of content enforcement actions to impacted users and to the individuals or entities who reported the content.
Both impacted users and reporters can appeal our content enforcement decisions. If a user or reporter believes that our decision was made in error, they can file an appeal via our appeals form or via email. Some users may have additional appeal options or redress mechanisms available under their local law.
When we send an email to a user or reporter detailing an enforcement action, we typically include a link to our appeals form in that email. If an appeal is submitted via the form, the user or reporter will receive additional updates via email. If a user or reporter chooses to contact us via email instead of through the appeals form, we will likewise send additional updates via email.
3.2 Number of Content Moderation Actions Taken at Adobe’s Initiative⁶
Adobe considers content moderation actions taken at our own initiative to be actions taken on content available in the EU on the basis that the content violates our Policies or is deemed illegal, where that content was not formally reported to Adobe via an Article 9 order or Article 16 notice.
These content moderation actions include both proactive and reactive enforcement. Proactive enforcement occurs when an Adobe employee, contractor, or automated technology identifies potentially policy-violating content and takes action on it based on our Policies. Reactive enforcement occurs when a user or other external entity reports content to Adobe and we take action on that content if it violates our Policies.
| Type of Policy Violation | Adobe Creative Cloud storage⁷ | Adobe Document Cloud | Adobe Express | Adobe InDesign | Adobe Photoshop Express | Adobe Photoshop Lightroom | Adobe Portfolio | Adobe Stock | Behance |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Child Sexualization or Exploitation | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 374 |
| Fraud or Phishing | 164 | 537 | 550 | 113 | 0 | 0 | 9 | 0 | 26 |
| Hate content | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| Intellectual Property | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 22,342 | 0 |
| Nudity and Sexual Content | 0 | 0 | 1 | 0 | 479 | 92 | 0 | 0 | 55,209 |
| Posting of Private Information | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 51 |
| Profanity | 0 | 0 | 0 | 0 | 0 | 40 | 0 | 0 | 26 |
| Regulated Goods and Services | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Spam | 83 | 62 | 0 | 0 | 7 | 4 | 0 | 0 | 7,710 |
| Violence and Gore | 0 | 0 | 0 | 0 | 1 | 7 | 0 | 0 | 36,862 |
| Other⁸ | 1 | 2 | 0 | 0 | 7 | 34 | 0 | 61,720⁹ | 7 |