
Adobe updates Photoshop: new generative AI model for Firefly, more control and realism, but what about the dataset?


Above: AI-generated images published by Adobe to promote Firefly improvements

Adobe unveils a new update for Photoshop. Available in beta, it includes the new Firefly Image Model 3, the latest generative AI model developed by Adobe.
Here is a summary of the advances, as well as the ethical issues raised by this new version.

Example of an image generated using a reference image – example provided by Adobe, created using generative AI

Among the advancements brought by Firefly Image Model 3, here are the most noteworthy:

  • More realistic humans, especially when it comes to faces.
  • The ability to use reference images to guide image generation.
  • Text-to-image generation from scratch, using a blank canvas.
  • The ability to generate variants, which improves control over the outcome.
  • A function that enhances the sharpness and clarity of images.

Overall, the update brings results that are both more precise and more controllable. This second point is a major issue for generative AI, as the lack of control offered by many tools makes them difficult to use in production.

Image generated from a blank project using a prompt – example provided by Adobe

Ethics and sources: Adobe’s stance

Adobe has always positioned itself as an ethical company when it comes to generative AI. This contrasts with tools such as Midjourney, which are trained on images scraped online without the permission of artists and studios, which can lead to worrying results. Adobe, on the other hand, explains that Firefly was trained on Adobe Stock content used with permission as well as on royalty-free images.
With this choice, Adobe voluntarily deprives itself of training sources used by its competitors, which could result in a less advanced generative AI model. In return, however, Adobe touts “an ethical approach to artificial intelligence” and explains that pictures created using Firefly are “commercially safe,” meaning they can be used commercially without copyright infringement concerns.

Adobe also co-founded the Content Authenticity Initiative (CAI), a coalition of entities (companies, NGOs, academics) “working to promote adoption of an open industry standard for content authenticity and provenance”.
As a direct consequence of this initiative, Adobe embeds a “Content Credentials” system in images produced with Firefly, offering a clear and transparent record of the authorship of a work and whether or not AI was involved. The same system is also supported by companies such as Microsoft, Leica, and Publicis.
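For readers who want to check provenance themselves, here is a minimal, illustrative sketch (not Adobe’s own tooling): it calls the open-source c2patool command-line utility from the CAI/C2PA ecosystem to dump the Content Credentials embedded in an image. It assumes c2patool is installed and available on your PATH, and the file name used is purely hypothetical.

```python
# Minimal illustrative sketch: dump the Content Credentials (C2PA manifest)
# embedded in an image by calling the open-source `c2patool` CLI.
# Assumes c2patool is installed and on PATH; the file name below is hypothetical.
import subprocess
import sys

def dump_content_credentials(image_path: str) -> None:
    result = subprocess.run(["c2patool", image_path], capture_output=True, text=True)
    if result.returncode != 0:
        # Typically means the file carries no C2PA manifest, or the tool hit an error.
        print(result.stderr.strip(), file=sys.stderr)
        return
    # c2patool reports the manifest store (authorship, edits, whether AI was involved).
    print(result.stdout)

if __name__ == "__main__":
    dump_content_credentials("firefly_output.jpg")  # hypothetical example file
```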

Bloomberg’s revelations

But a few days ago, Bloomberg revealed that practice deviates from theory: the Adobe Stock images used to train Firefly include… images created with third-party generative AIs, such as Midjourney. About 5% of the training images are concerned, according to Adobe’s statement to Bloomberg.
So, while Firefly is not directly trained on images obtained online without the artists’ permission, this is indirectly the case for a small portion of the dataset.

The article thus reminds us of the importance of having as much information as possible about the datasets used to train AI models.

What about Firefly Image Model 3?

At the time Bloomberg published their article, only Firefly Image Model 2 had been released. We therefore posed three questions to Adobe:

  • Bloomberg recently published an article informing us that about 5% of Firefly’s training data is generated by AI via tools like Midjourney. Is this also the case with the new version of Firefly?
  • If so, doesn’t this call into question the “commercially safe” aspect to which some studios and artists are sensitive, whether for legal, ethical, or commercial reasons? What guarantees does Adobe provide?
  • Finally, does Adobe plan to release a more restricted version of Firefly, which would not be trained on content generated by third-party AIs?

Here is Adobe’s response:

Adobe has not sourced training data from other Generative AI providers.
Firefly generative AI models were trained on licensed content, such as Adobe Stock, and public domain content where copyright has expired.
Adobe Stock includes Generative AI content which is part of the Adobe Stock offerings, some of which may be used to train Adobe Firefly.
People who submit content to Adobe Stock are required to attest that they have the rights to submit the content and must flag whether they used Generative AI to create the asset they’re submitting.
You can find more information in the following links:

In summary, even if Adobe has not directly relied on other generative AI companies, images from these platforms are indeed present, indirectly, in the training database used for Firefly. And Adobe announces neither a change for the new version 3 nor a version of Firefly that would be purged of these third-party AI images.

In other words, using generative AI tools within Photoshop raises the same kind of ethical questions as using Midjourney.

It should be highlighted that, during the press conference, Adobe reaffirmed that it provides a “commercially safe” product. Furthermore, Adobe has been communicating about the Adobe Stock moderation team, whose mission is to eliminate visuals that do not respect the platform’s rules: presence of brands, logos, references to artist names, known personalities, or recognizable intellectual properties.
In other words, as long as the moderation team is effective, Firefly will probably not insert such elements, which would pose very direct legal issues, into your projects.

Questions remain

The next question is, of course, how studios and artists who use Adobe Firefly technology will react to Bloomberg’s revelations and Adobe’s statements.

3DVF will continue to closely monitor the developments and issues raised by generative AIs.

