What images should be labeled "Made with AI"?


Instagram, like many platforms, has started automatically labeling images as “Made with AI”. That sounds good, right? But the devil is in the details.

Some photographers have noticed that Instagram has added the “Made with AI” label to their real photos. But it’s not a bug.

It turns out that using Adobe Photoshop’s generative AI-powered editing tools, even to just remove a speck of dust from the lens, will result in the label being added. That’s because Adobe automatically adds metadata indicating AI tools were used to create or edit an image.

Here’s an example. I used the generative AI fill in Adobe Express to edit the very top right corner of a photo. See before and after (click to enlarge, arrow points to changed section).

Composite of before and after images showing a small area edited with generative AI fill.

You can check the Content Credentials metadata embedded in an image at contentcredentials.org/verify.
Output of Content Credentials for my photo showing the image was generated with an AI tool
When I do that for my edited photo, it indicates "this image was generated with an AI tool". That isn't quite right, as only a small portion of the image was generated with AI. It does correctly show that I used Adobe Firefly in Adobe Express to make "other edits".
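
If you'd rather inspect that metadata locally than upload the file, the open-source c2patool CLI from the Content Authenticity Initiative can dump the embedded manifest. Here's a minimal Python sketch that shells out to it; it assumes c2patool is installed and on your PATH, and "edited-photo.jpg" is a placeholder filename.

```python
import json
import subprocess

def read_manifest(path: str) -> dict:
    """Dump an image's embedded Content Credentials manifest as JSON
    by shelling out to the open-source c2patool CLI."""
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest store as JSON
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# "edited-photo.jpg" is a placeholder for the image you want to inspect.
print(json.dumps(read_manifest("edited-photo.jpg"), indent=2))
```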

Here’s why this is controversial:
  • Instagram (which likely relies on the Content Credentials metadata) applies the same “Made with AI” label whether the image was entirely generated or only a few pixels were changed.
  • Non-generative AI editing tools can make the same changes, but the resulting images aren’t labeled.
  • It’s easy to bypass the automatic labeling by sharing a screenshot of the edited image, or copy-pasting it into another document, rather than sharing the edited file itself (see the sketch after this list).
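
To make that last point concrete: any tool that re-encodes the pixels without copying the manifest strips the signal in one line. Here's a minimal Pillow sketch (the filenames are placeholders).

```python
from PIL import Image

# Re-encoding writes a brand-new file and silently drops the embedded
# Content Credentials (C2PA) manifest, which Pillow does not copy over.
# Both filenames are placeholders.
Image.open("edited-photo.jpg").save("no-more-label.jpg", quality=95)
```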
The problem is that the label focuses on the tool used, rather than the resulting image. Misleading images are misleading whether generative AI was used to create them or not.

And considering that smartphone cameras are “AI powered” now, and most photographers do at least a little editing (removing noise, adjusting the exposure or color balance, and so forth), it’s hard to know what shouldn’t be labeled.

The automated labeling makes it easy for platforms to look like they are doing something about fabricated images, without requiring them to make any judgment calls.

It seems pretty clear to me that the way this has been implemented isn't going to be useful. Photographers are (rightly) upset that their photos with minor edits are labeled the same way as completely fake images. 

For comparison, what does the Content Credentials verification tool show when a completely fabricated image is analyzed?

Firefly generated image: Photo realistic image of a selfie of a middle aged woman with shoulder length hair wearing a white cowboy hat.

Screenshot of Content Credentials Verify output for AI-generated image
The summary says "This image was generated with an AI tool", which is the same as the summary for my edited photo. 

But there is one big difference: under "Actions" it shows "Created a new file or content", where my edited image shows "Other edits: Made other changes". 

At the least, Instagram and other platforms could use a differently worded label based on this information ("created" rather than "edited").
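
To sketch what that could look like: in the C2PA spec, the manifest's c2pa.actions assertion records actions such as c2pa.created, and fully generated content carries the IPTC digitalSourceType of trainedAlgorithmicMedia. A platform could key its label wording off those fields instead of applying one blanket tag. This is an illustration of the idea, not Instagram's actual logic; the field names follow the public spec, but real manifests vary by tool, so treat them as assumptions.

```python
# Sketch: derive label wording from the C2PA actions assertion instead
# of one blanket "Made with AI" tag. Field names follow the public C2PA
# spec, but real manifests vary by tool -- treat them as assumptions.

AI_SOURCE = ("http://cv.iptc.org/newscodes/digitalsourcetype/"
             "trainedAlgorithmicMedia")

def choose_label(manifest: dict) -> str | None:
    # Gather every action recorded in c2pa.actions assertions.
    actions = [
        action
        for assertion in manifest.get("assertions", [])
        if assertion.get("label") == "c2pa.actions"
        for action in assertion.get("data", {}).get("actions", [])
    ]
    ai_actions = {
        a.get("action") for a in actions
        if a.get("digitalSourceType") == AI_SOURCE
    }
    if "c2pa.created" in ai_actions:
        return "Created with AI"  # the whole image was generated
    if ai_actions:
        return "Edited with AI"   # AI changed part of an existing photo
    return None                   # no AI involvement recorded

# Hypothetical manifest excerpt for a fully generated image:
generated = {
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [
            {"action": "c2pa.created", "digitalSourceType": AI_SOURCE},
        ]},
    }],
}
print(choose_label(generated))  # -> Created with AI
```

An image whose only AI action is an edit would come back as "Edited with AI", which matches the distinction the Actions field already records.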

But "edited" is still quite broad, and misleads because not all edited images have a label. 

I don't know what the solution is, but I don't think it's this.

Related: Matt Growcoot at PetaPixel says Instagram’s ‘Made With AI’ tag is inaccurate, misleading, and needs to go.
