Responsible innovation in the age of generative AI.

Our approach to generative AI with Adobe Firefly is built on more than a decade’s experience integrating AI into our products. As we harness its power across our applications, we’re more committed than ever to thoughtful, responsible development.

Our approach to generative AI with Adobe Firefly.

We do not train Adobe Firefly on customer content, and we never have.
We only train Adobe Firefly on content where we have permission or rights to do so.
We do not mine content from the web to train Adobe Firefly.
We compensate creators who contribute to Adobe Stock for use of their content in training Adobe Firefly.
We defend the intellectual property rights of the creative community by advocating for the Federal Anti-Impersonation Right Act.
We do not claim any ownership of your content, including content you create with Adobe Firefly.
We believe in protecting creator rights and founded the Content Authenticity Initiative (CAI), which focuses on transparency around who owns content and how it was created.
We designed Adobe Firefly to avoid generating content that infringes copyright or other intellectual property rights, and to be safe for commercial use.
We explicitly prohibit and take steps to prevent third parties from training on customer content hosted on our servers (such as on Behance).
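
This page does not describe Adobe's specific technical measures, but one common, generic way a host can signal that crawlers must not collect content for AI training is a robots.txt disallow rule. The sketch below only illustrates that general mechanism using the Python standard library and example crawler names; it is an assumption for illustration, not how Behance or other Adobe services are actually configured.

from urllib.robotparser import RobotFileParser

# Example robots.txt rules: block two well-known AI training crawlers while
# leaving the site open to everything else. The rules and URL below are
# illustrative assumptions, not Adobe's actual configuration.
EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# A crawler that honors robots.txt checks permission before fetching a page.
for agent in ("GPTBot", "CCBot", "SomeOtherBot"):
    allowed = parser.can_fetch(agent, "https://example.com/portfolio/page")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")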

Training

AI is only as good as the data it's trained on, and which results count as appropriate depends on the use case. That's why we build datasets specifically for the needs of each of our businesses, so that results are diverse, ethical, and appropriate for the way the AI will be used.
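
As an illustration only, a use-case-specific dataset of this kind can be thought of as a filter over a catalog of assets: keep items we have permission to train on and that are tagged for the intended use. The field names and catalog below are hypothetical, not Adobe's actual data pipeline.

from dataclasses import dataclass

@dataclass
class Asset:
    uri: str
    licensed_for_training: bool   # do we have permission or rights to train on it?
    use_cases: set[str]           # which use cases the asset is suitable for

# Hypothetical catalog entries.
CATALOG = [
    Asset("stock/0001.jpg", True, {"illustration", "marketing"}),
    Asset("stock/0002.jpg", True, {"photo_editing"}),
    Asset("user/0003.jpg", False, {"illustration"}),  # no training permission: always excluded
]

def build_training_set(catalog: list[Asset], use_case: str) -> list[Asset]:
    """Keep only assets we may train on and that fit the target use case."""
    return [a for a in catalog if a.licensed_for_training and use_case in a.use_cases]

print([a.uri for a in build_training_set(CATALOG, "illustration")])
# -> ['stock/0001.jpg']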

Testing

We conduct rigorous, continuous testing of AI-powered features and products to mitigate harmful biases and stereotypes. This includes both automated testing and human evaluation.
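
As a rough sketch of what one automated check might look like (the threshold, labels, and classifier here are assumptions, not Adobe's test suite): generate many outputs for a neutral prompt, label a categorical attribute in each, and flag the prompt for human review if any single category dominates.

from collections import Counter

def skew_report(labels: list[str], max_share: float = 0.6) -> dict:
    """Share of each label, plus any label whose share exceeds max_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {label: count / total for label, count in counts.items()}
    flagged = {label: share for label, share in shares.items() if share > max_share}
    return {"shares": shares, "flagged": flagged}

# Stand-in data: 100 outputs for a neutral prompt, each labeled by a
# hypothetical attribute classifier.
labels = ["group_a"] * 78 + ["group_b"] * 22
report = skew_report(labels)
print(report["shares"])   # {'group_a': 0.78, 'group_b': 0.22}
print(report["flagged"])  # {'group_a': 0.78} -> exceeds threshold, route to human review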

Impact Assessments

Engineers developing any AI-powered feature submit an AI Ethics Impact Assessment, a multipart assessment designed to identify features and products that could perpetuate harmful biases and stereotypes. This lets our AI Ethics team focus their efforts on the features and products with the highest potential ethical impact, without slowing the pace of innovation.

Diverse human oversight

AI-powered features with the highest potential ethical impact are reviewed by a diverse, cross-functional AI Ethics Review Board. Diversity of personal and professional backgrounds and experiences is critical to identifying, from a variety of perspectives, potential issues that a less diverse team might not see.

Feedback

We provide feedback mechanisms so users can report potentially biased outputs and we can address any concerns. AI is an ongoing journey, and we want to work with our community to keep making our tools and products better for everyone. If you have a question about AI ethics or want to report a possible AI ethics issue, please contact us.