According to the news site Yassad News, quoting Engadget, as artificial intelligence grows more capable and companies race to differentiate their products from competitors, chatbots across the Internet now offer the ability to create and edit images. Companies like Shutterstock and Adobe are leaders in this field. But these AI capabilities also bring challenges, such as images being manipulated or stolen. Watermarking images reduces the risk of theft, and now PhotoGuard, a technique from MIT CSAIL, can help prevent image manipulation. PhotoGuard works by subtly perturbing pixels in an image, thereby disrupting an AI model's ability to understand the image.
The first method, the encoder attack, introduces these perturbations by targeting the AI model's latent representation of the target image, that is, the complex mathematical description of the position and color of every pixel in the image. By disturbing this representation, the attack essentially prevents the AI from understanding what it is looking at.
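The encoder-attack idea can be illustrated with a toy sketch. The code below is not PhotoGuard's actual implementation: it stands in for the model's encoder with a hypothetical random linear map and uses a simple signed-gradient (PGD-style) loop to nudge an image's pixels, within a small budget, so that its latent representation drifts toward that of a decoy (here, a plain gray image).

```python
import numpy as np

# Toy illustration of an encoder attack (NOT the official PhotoGuard code):
# perturb an image within an epsilon budget so a stand-in encoder maps it
# close to the latent representation of a decoy image.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64)) / 8.0       # hypothetical linear "encoder"

def encode(x):
    return W @ x                             # latent representation of x

image  = rng.random(64)                      # the photo to protect (flattened)
decoy  = np.full(64, 0.5)                    # decoy: a plain gray image
target = encode(decoy)                       # latent we want to move toward

eps, step, x = 0.05, 0.01, image.copy()      # small, near-invisible budget
for _ in range(200):
    grad = 2 * W.T @ (encode(x) - target)    # gradient of ||E(x) - z_target||^2
    x = x - step * np.sign(grad)             # signed-gradient descent step
    x = np.clip(x, image - eps, image + eps) # stay within the pixel budget
    x = np.clip(x, 0.0, 1.0)                 # keep valid pixel values

before = np.linalg.norm(encode(image) - target)
after  = np.linalg.norm(encode(x) - target)
print(before > after)                        # latent moved toward the decoy
print(np.max(np.abs(x - image)) <= eps + 1e-9)  # pixels barely changed
```

The point of the sketch is the trade-off PhotoGuard exploits: the pixel change stays imperceptibly small, yet the encoder's view of the image shifts toward something else entirely.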
The second method, the "diffusion" attack, is more advanced and computationally intensive: it camouflages an image as a different one in the AI's eyes. This method defines a target image and optimizes the perturbations so that the protected image resembles that target. Any edits an AI attempts on the protected photo will be applied toward the fake target instead, producing an unrealistic image. Hadi Salman, the paper's lead author and a PhD student at MIT, says:
"The encoder attack makes the AI model think that the input image to be edited is some other image, whereas the diffusion attack forces the model to make its edits toward a different target image." The technique is not foolproof, however, and attackers may try to reverse engineer protected images.
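The diffusion attack described above can be sketched in the same toy style. Again, this is not PhotoGuard's real code: a hypothetical two-stage linear pipeline stands in for the full diffusion editing model, and the perturbation is optimized so the pipeline's *output* (rather than just the latent) collapses toward a decoy target.

```python
import numpy as np

# Toy illustration of a "diffusion"-style attack (NOT the official code):
# target the output of the whole editing pipeline, here a hypothetical
# two-stage linear stand-in for a diffusion model.

rng = np.random.default_rng(1)
E = rng.standard_normal((8, 64)) / 8.0    # stand-in encoder
D = rng.standard_normal((64, 8)) / 3.0    # stand-in decoder/denoiser

def edit(x):
    return D @ (E @ x)                    # output of the full pipeline

image      = rng.random(64)               # photo to immunize
target_img = np.full(64, 0.5)             # decoy the edits should collapse to

eps, step, x = 0.05, 0.01, image.copy()
for _ in range(300):
    r = edit(x) - target_img
    grad = 2 * E.T @ (D.T @ r)            # gradient of ||edit(x) - target||^2
    x = np.clip(x - step * np.sign(grad), image - eps, image + eps)
    x = np.clip(x, 0.0, 1.0)              # keep valid pixel values

dist_before = np.linalg.norm(edit(image) - target_img)
dist_after  = np.linalg.norm(edit(x) - target_img)
print(dist_after < dist_before)           # edits pulled toward the decoy
```

Optimizing through the full pipeline is what makes this variant more expensive than the encoder attack, but also more severe: the model's eventual edits, not just its internal representation, are steered toward the decoy.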