Tag: machine learning (page 1 of 4)

AI-synthesized faces are here to fool you

No one who’s been paying attention should be in the least surprised that AI-synthesized faces are now so good. However, we should probably be a bit concerned that research suggests they are rated as “more trustworthy” than real human faces.

The researchers’ recommendation of “incorporating robust watermarks into the image and video synthesis networks” would be near-impossible to enforce in practice, so we need to ensure that we’re ready for the onslaught of deepfakes.

With everything that’s happening in the world at the moment, this is likely to have significant consequences by the end of this year at the latest…

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

Source: AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS

Brand-safe influencers and the blurring of reality

Earlier this week, in a soon-to-be-released episode of the Tao of WAO podcast, we were talking about the benefits and pitfalls of NGOs like Greenpeace partnering with influencers. The upside? Engaging with communities that would otherwise be hard to reach. The downside? Influencers can be unpredictable.

It’s somewhat inevitable, therefore, that “brand-safe” fictional influencers would emerge. As detailed in this article, not only are teams of writers creating metaverses in which several characters exist, but they’re also using machine learning to allow fans/followers to “interact” with those characters.

The boundary between the real and fictional is only going to get more blurred.

FourFront is part of a larger wave of tech startups devoted to, as aspiring Zuckerbergs like to say, building the metaverse, which can loosely be defined as “the internet” but is more specifically the interconnected, augmented reality virtual space that real people share. It’s an undoubtedly intriguing concept for people with a stake in the future of technology and entertainment, which is to say, the entirety of culture. It’s also a bit of an ethical minefield: Isn’t the internet already full of enough real-seeming content that is a) not real and b) ultimately an effort to make money? Are the characters exploiting the sympathies of well-meaning or media illiterate audiences? Maybe!

On the other hand, there’s something sort of darkly refreshing about an influencer “openly” being created by a room of professional writers whose job is to create the most likable and interesting social media users possible. Influencers already have to walk the delicate line between aspirational and inauthentic, to attract new followers without alienating existing fans, to use their voice for change while remaining “brand-safe.” The job has always been a performance; it’s just that now that performance can be convincingly replicated by a team of writers and a willing actor.

Source: What’s the deal with fictional influencers? | Vox

Generative art

We’re going to see a lot more of this in the next few years, along with the predictable hand-wringing about what constitutes ‘art’.

Me? I love it and would happily hang it on my wall — or, more appropriately, show it on my Smart TV.

Fidenza is my most versatile generative algorithm to date. Although it is not overly complex, the core structures of the algorithm are highly flexible, allowing for enough variety to produce continuously surprising results. I consider this to be one of the most interesting ways to evaluate the quality of a generative algorithm, and certainly one that is unique to the medium. Striking the right balance of unpredictability and quality is a difficult challenge for even the best artists in this field. This is why I’m so excited that Fidenza is being showcased on Art Blocks, the only site in existence that perfectly suits generative art and raises the bar for developing these kinds of high-quality generative art algorithms.

Source: Fidenza — Tyler Hobbs
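
As a toy illustration of how a single generative algorithm can keep producing surprising variety, here’s a minimal flow-field-style sketch in Python. To be clear, this is a hypothetical example and not Hobbs’s actual Fidenza code, which isn’t public; the angle_at and trace helpers are made up for illustration. Each run picks a different seed, traces particles through a smooth angle field, and writes the resulting composition out as an SVG.

    # Hypothetical flow-field sketch, not Tyler Hobbs's actual Fidenza algorithm.
    # Traces particles through a smooth angle field and writes the paths to an SVG.
    import math
    import random

    WIDTH, HEIGHT = 800, 800
    SEED = random.randint(0, 10_000)  # each seed yields a different composition
    random.seed(SEED)

    def angle_at(x, y):
        """A smooth deterministic angle field; real pieces typically use Perlin or simplex noise."""
        return math.sin(x * 0.005) * math.cos(y * 0.005) * math.pi

    def trace(x, y, steps=200, step_len=4):
        """Follow the field from a starting point, returning the path as (x, y) points."""
        points = [(x, y)]
        for _ in range(steps):
            a = angle_at(x, y)
            x, y = x + math.cos(a) * step_len, y + math.sin(a) * step_len
            if not (0 <= x <= WIDTH and 0 <= y <= HEIGHT):
                break
            points.append((x, y))
        return points

    # Scatter starting points across the canvas and trace a path from each one.
    paths = [trace(random.uniform(0, WIDTH), random.uniform(0, HEIGHT)) for _ in range(300)]

    with open(f"flowfield_{SEED}.svg", "w") as f:
        f.write(f'<svg xmlns="http://www.w3.org/2000/svg" width="{WIDTH}" height="{HEIGHT}">\n')
        for p in paths:
            pts = " ".join(f"{x:.1f},{y:.1f}" for x, y in p)
            f.write(f'<polyline points="{pts}" fill="none" stroke="black" stroke-width="1.5"/>\n')
        f.write("</svg>\n")

Change the seed or swap out angle_at and the same few dozen lines produce an entirely different piece, which is the “continuously surprising results” property Hobbs describes, just in a far cruder form.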