Here’s How Google Will Help You Identify AI Images

Image of a Google building. Google is implementing a system that will help users differentiate between real and AI-generated or edited images. Credit: Anthony Quintano / CC BY 2.0 / Wikimedia Commons

Tech giant Google plans to launch technology that will help users determine whether an image was taken with a camera, altered by editing software such as Photoshop, or generated by AI models.

The feature is expected to arrive in Google’s search results during fall 2024, flagging images so users know whether an image is real, has been altered in any way, or is AI-generated. The information will most likely appear under the “About this image” label.

The technology Google plans to incorporate comes from the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry group developing standards for tracing the origins of digital content, including AI-generated images.

But what exactly is the C2PA, and how will it identify AI images?

C2PA authentication is a technical standard that attaches information to images about where they come from and how they were created. It works across both hardware and software to create a verifiable digital trail.
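The C2PA specification defines its own detailed manifest format; the sketch below is only a simplified, hypothetical illustration of the general idea the paragraph describes: binding signed provenance claims to a hash of the image so that later changes to the pixels can be detected. The function names and manifest structure are assumptions for illustration (using the third-party cryptography package), not the actual C2PA format or Google’s implementation.

```python
# Illustrative sketch only: a simplified stand-in for a C2PA-style
# provenance manifest. Real C2PA manifests are embedded in the image file
# and signed with certificates chained to a trust list.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_manifest(image_bytes: bytes, assertions: dict,
                  signing_key: ed25519.Ed25519PrivateKey) -> dict:
    """Bind provenance assertions to the image by signing its hash."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "assertions": assertions,  # e.g. capture device, edit history
    }
    signature = signing_key.sign(json.dumps(payload, sort_keys=True).encode())
    return {"payload": payload, "signature": signature.hex()}


def verify_manifest(image_bytes: bytes, manifest: dict,
                    public_key: ed25519.Ed25519PublicKey) -> bool:
    """Check the signature and that the image has not been altered."""
    payload = manifest["payload"]
    if hashlib.sha256(image_bytes).hexdigest() != payload["image_sha256"]:
        return False  # pixels changed after the manifest was issued
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]),
                          json.dumps(payload, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False


# Example: a camera signs the capture; an editing or AI tool would append
# a new, separately signed entry describing its changes.
key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = make_manifest(photo, {"device": "Example camera", "ai_generated": False}, key)
print(verify_manifest(photo, manifest, key.public_key()))            # True
print(verify_manifest(photo + b"edit", manifest, key.public_key()))  # False
```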

The system has been backed by major online players and corporations such as Amazon, Microsoft, Adobe, OpenAI, and now Google.

However, despite support from major players, adoption of the system has been slow, so much so that the integration between C2PA and Google Search will be the system’s first major test.

Google helped develop the latest C2PA technical standard. Version 2.1 of the standard will be used in an upcoming C2PA trust list that will enable search engines like Google Search to determine whether images are real or AI-generated.

Laurie Richardson, vice president of trust and safety at Google, explained: “For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate.”
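As a rough illustration of what the quote describes, the hypothetical sketch below shows how a verifier might consult a trust list to decide whether a claim such as “taken by a specific camera model” should be treated as validated. The list format and the names used here are assumptions for illustration, not the actual C2PA trust list or Google Search’s implementation.

```python
# Illustrative sketch only: a trust list maps approved signers (e.g. camera
# makers, editing tools) to the kinds of claims they may vouch for.
TRUST_LIST = {
    "example-camera-maker": {"capture_device"},
    "example-editing-tool": {"edit_history"},
}


def claim_is_trusted(signer: str, claim_type: str) -> bool:
    """A claim counts only if its signer is on the list and is
    authorised to make that kind of claim."""
    return claim_type in TRUST_LIST.get(signer, set())


# A claim that an image came from a specific camera model is only surfaced
# as validated if the signer is a recognised device maker.
print(claim_is_trusted("example-camera-maker", "capture_device"))  # True
print(claim_is_trusted("unknown-app", "capture_device"))           # False
```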

The system is also expected to work for Google Ads

Google also plans to incorporate this AI image-identifying technology into Google Ads to ensure transparency between users and advertisers. Laurie Richardson expanded on this subject, saying, “Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.”

Google is also expected to surface C2PA information to viewers on YouTube when content has been recorded with a camera. This update is expected to be available by the end of the year.

The implementation of C2PA, or at the very least the attempt to do so, shows that Google is interested in transparency and in helping users tell real images from AI-generated ones. However, implementing C2PA data is already proving to be extremely challenging.

This is mainly due to hardware limitations in the cameras themselves. Only a limited number of cameras from brands such as Leica and Sony support the C2PA technical standard, which adds key provenance information to the image itself. Nikon and Canon have agreed to adopt the standard in some of their newer cameras, but crucially, neither Apple nor Google has committed to adopting it for their phones.


