
AI-synthesized faces are here to fool you

No-one who’s been paying attention should be in the least surprised that AI-synthesized faces are now so good. However, we should probably be a bit concerned that research suggests they’re rated as “more trustworthy” than real human faces.

The researchers’ recommendation of “incorporating robust watermarks into the image and video synthesis networks” is all but impossible to enforce in practice, so we need to ensure that we’re ready for the onslaught of deepfakes.

This is likely to have significant consequences by the end of this year at the latest, with everything that’s happening in the world at the moment…

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

Source: AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS

Deepfake maps

There’s plenty to be concerned about in the world at the moment, and this just adds to the party. At a time when most of us navigate by following a blue dot around a smartphone screen, we’re susceptible to manipulation on a number of fronts.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. Another algorithm then helps refine the performance of the first by trying to detect when an image has been manipulated.

Source: Deepfake Maps Could Really Mess With Your Sense of the World | WIRED
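The quote describes the adversarial setup behind this kind of image trickery: one network generates, another tries to catch it, and each training step sharpens the other. As a loose illustration of that generator-versus-discriminator loop (not CycleGAN itself, which uses paired generators and cycle-consistency losses), here is a toy one-dimensional GAN in NumPy; the data distribution, learning rate, and step count are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Real data: samples from N(4, 1). The generator starts out
# producing samples centred at 0 and must learn to imitate them.
real_mean = 4.0
wd, bd = 0.1, 0.0   # discriminator: d(x) = sigmoid(wd*x + bd)
wg, bg = 1.0, 0.0   # generator: g(z) = wg*z + bg, with z ~ N(0, 1)
lr = 0.05

for step in range(3000):
    # --- discriminator step: push d(real) toward 1, d(fake) toward 0 ---
    x_real = rng.normal(real_mean, 1.0)
    z = rng.normal()
    x_fake = wg * z + bg
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    # gradients of binary cross-entropy w.r.t. the pre-sigmoid score
    gs_real = d_real - 1.0
    gs_fake = d_fake
    wd -= lr * (gs_real * x_real + gs_fake * x_fake)
    bd -= lr * (gs_real + gs_fake)

    # --- generator step: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal()
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    gx = (d_fake - 1.0) * wd   # chain rule back through the discriminator
    wg -= lr * gx * z
    bg -= lr * gx

print(f"generator now centred near {bg:.2f} (real data centred at {real_mean})")
```

Because the discriminator here is a simple logistic regression, it can only separate the two distributions by their means, so the generator mainly learns to shift its output toward the real data’s centre; the real networks in the paper play the same game with millions of parameters over pixels instead of a single number.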

Even while a thing is in the act of coming into existence, some part of it has already ceased to be

[Image: cat next to a laptop showing “System error” on screen]

💻 Zoom and gloom

🤖 ‘Machines set loose to slaughter’: the dangerous rise of military AI

📏 Wittgenstein’s Ruler: When Our Opinions Speak More About Us Instead The Topic

🤨 Inside the strange new world of being a deepfake actor

🎡 Japanese Amusement Park Turns Ferris Wheel Into Wi-Fi Enabled Remote Workspace


Quotation-as-title from Marcus Aurelius. Image from top-linked post.