Researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) in the United States have developed a method to prevent images from being manipulated by AI.
The tool, called PhotoGuard, protects photographs from improper use by modifying pixels in a way that makes the image hard for artificial intelligence to process. According to the paper, these “perturbations” are imperceptible to the human eye but disrupt how AI models interpret the picture.
One way to put this into practice is an “encoder” attack, which alters the internal representation an AI model builds to describe the precise position and color of pixels in an image. After this modification, the model can no longer make sense of what it is looking at.
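A minimal sketch of that idea follows, assuming a PyTorch-style setup: a projected-gradient loop nudges the pixels so that an encoder's output drifts toward a chosen target representation, while an `eps` budget keeps the change invisible. The `encoder` here is a tiny stand-in module for illustration, not PhotoGuard's actual model, and the parameter values are placeholders.

```python
import torch
import torch.nn as nn

# Stand-in encoder: a tiny conv net playing the role of the image encoder
# in a generative model (an assumption for this sketch, not PhotoGuard's
# actual architecture).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 4, 4, stride=2, padding=1),
)

def encoder_attack(image, target_latent, eps=0.03, steps=40, step_size=0.005):
    """Projected gradient descent: nudge the pixels so the encoder's output
    for `image` drifts toward `target_latent`, within an L-inf budget `eps`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()            # step toward target
            delta.clamp_(-eps, eps)                           # keep change invisible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (image + delta).detach()

image = torch.rand(1, 3, 64, 64)           # placeholder photo
target_latent = torch.zeros(1, 4, 16, 16)  # e.g. the encoding of a blank image
protected = encoder_attack(image, target_latent)
print(float((protected - image).abs().max()))  # stays within the eps budget
```

The protected image looks unchanged to a person, but any model relying on that encoder now "sees" something close to the blank target instead of the real photo.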
The second technique, referred to as a “diffusion” attack, makes the image appear to an AI as though it were a different one. This is accomplished by selecting a target image and perturbing the pixels of the source image so that the model matches it to the target, masking the real content from the AI’s view.
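A sketch of this variant, under the same assumptions as above: the loop is identical, but the objective now compares the output of an editing pipeline against the chosen target image. The `pipeline` below is a toy stand-in; the real attack backpropagates through a diffusion model's denoising process, which is far more expensive.

```python
import torch
import torch.nn as nn

# Stand-in for an image-to-image editing pipeline (an assumption for this
# sketch; PhotoGuard's diffusion attack differentiates through an actual
# diffusion model).
pipeline = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid(),
)

def diffusion_attack(image, target_image, eps=0.03, steps=40, step_size=0.005):
    """Same projected-gradient loop as the encoder attack, but the loss now
    pulls the pipeline's *output* toward a chosen target image."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.mse_loss(pipeline(image + delta), target_image)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)                           # invisible budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # valid pixel range
        delta.grad.zero_()
    return (image + delta).detach()

source = torch.rand(1, 3, 64, 64)  # the photo being protected (placeholder)
target = torch.rand(1, 3, 64, 64)  # the unrelated image the AI should "see"
protected = diffusion_attack(source, target)
```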
Although the methods work, they are not yet foolproof. Cropping, flipping, or adding digital noise to a protected image can still strip out the perturbation, leaving the image open to manipulation again.
This comes at a time when generative artificial intelligence is booming, raising the risk that our photographs will be altered and used against us. That risk has already materialized, from images of Pope Francis donning a puffy jacket to a fake Mark Zuckerberg ranting about how he rules the world.