Generative AI, misinformation, and content authenticity
As a philosopher, historian, and educator by training, and a technologist by profession, I find that this initiative really hits my sweet spot. The image below shows that, even before AI and digital technologies, it was possible to alter the public record by manipulating photographs.
Now, of course, spreading misinformation and disinformation is so much easier, especially on social networks. This series of posts from the Content Authenticity Initiative outlines ways in which the technology they are developing can prove whether or not an image has been altered.
Of course, unless verification is built into social networks, this is only likely to be useful to journalists and in a court of law. After all, people tend to reshare whatever chimes with their worldview.
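For the curious, the core idea behind this kind of verification is conceptually simple, even though the CAI's actual Content Credentials (built on the C2PA standard) carry far more, such as edit history and capture-device information. The sketch below shows just the underlying principle: sign the image bytes at creation time, and any later alteration invalidates the signature. It is a toy illustration, not the real C2PA API, and all the function names are my own.

```python
# Conceptual sketch only: real Content Credentials (C2PA) embed a signed
# manifest with edit history; this shows just the core sign/verify idea.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_image(image_bytes: bytes, private_key: ec.EllipticCurvePrivateKey) -> bytes:
    """Creator side: produce a signature bound to these exact bytes."""
    return private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

def is_unaltered(image_bytes: bytes, signature: bytes,
                 public_key: ec.EllipticCurvePublicKey) -> bool:
    """Verifier side: True only if the bytes match what was originally signed."""
    try:
        public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Demo: flipping a single byte of the "image" breaks verification.
key = ec.generate_private_key(ec.SECP256R1())
original = b"...image bytes..."
sig = sign_image(original, key)
assert is_unaltered(original, sig, key.public_key())
assert not is_unaltered(b"X" + original[1:], sig, key.public_key())
```

This is also why the "built into social networks" point matters: the signature only helps if whatever displays the image actually checks it.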
Although it varies in form and creation, generative AI content (a.k.a. deepfakes) refers to images, audio, or video that has been automatically synthesized by an AI-based system. Deepfakes are the latest in a long line of techniques used to manipulate reality — from Stalin's darkroom to Photoshop to classic computer-generated renderings. However, their introduction poses new opportunities and risks now that everyone has access to what was historically the purview of a small number of sophisticated organizations.

Source: From the darkroom to generative AI | Content Authenticity Initiative

Even in these early days of the AI revolution, we are seeing stunning advances in generative AI. The technology can create a realistic photo from a simple text prompt, clone a person's voice from a few minutes of an audio recording, and insert a person into a video to make them appear to be doing whatever the creator desires. We are also seeing real harms from this content in the form of non-consensual sexual imagery, small- to large-scale fraud, and disinformation campaigns.
Building on our earlier research in digital media forensics techniques, over the past few years my research group and I have turned our attention to this new breed of digital fakery. All our authentication techniques work in the absence of digital watermarks or signatures. Instead, they model the path of light through the entire image-creation process and quantify physical, geometric, and statistical regularities in images that are disrupted by the creation of a fake.
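To make "statistical regularities" a little more concrete, here is a toy example of one classic check from the image-forensics literature, not Farid's actual method (which models the physics of image formation): the leading digits of a natural photograph's DCT coefficients tend to follow Benford's law, and heavy editing or synthesis can disturb that distribution. Everything here, including the file name, is illustrative.

```python
# Toy forensic check: distance of an image's DCT leading-digit
# distribution from Benford's law. Illustrative only; a real detector
# would calibrate a threshold on known-authentic images.
import numpy as np
from PIL import Image
from scipy.fft import dctn

# Benford's law: P(first digit = d) = log10(1 + 1/d) for d in 1..9
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digit_distribution(image_path: str) -> np.ndarray:
    """Distribution of leading digits (1-9) of an image's 2-D DCT coefficients."""
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
    coeffs = np.abs(dctn(gray, norm="ortho")).ravel()
    coeffs = coeffs[coeffs >= 1.0]  # drop near-zero coefficients
    # Leading digit of c is floor(c / 10**floor(log10(c)))
    digits = (coeffs // 10 ** np.floor(np.log10(coeffs))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def benford_deviation(image_path: str) -> float:
    """L1 distance from Benford's law; larger values are (weakly) more suspicious."""
    return float(np.abs(first_digit_distribution(image_path) - BENFORD).sum())

# Hypothetical usage ("photo.jpg" is a placeholder):
# print(benford_deviation("photo.jpg"))
```

The appeal of checks like this, and of the far more sophisticated physical and geometric models the quote describes, is exactly the point made above: they need no watermark or signature to have been embedded at creation time.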