Brand-safe influencers and the blurring of reality

Earlier this week, in a soon-to-be-released episode of the Tao of WAO podcast, we were talking about the benefits and pitfalls of NGOs like Greenpeace partnering with influencers. The upside? Engaging with communities that would otherwise be hard to reach. The downside? Influencers can be unpredictable.

It’s somewhat inevitable, therefore, that “brand-safe” fictional influencers would emerge. As detailed in this article, not only are teams of writers creating metaverses in which several characters exist, but they’re using machine learning to allow fans/followers to “interact”.

The boundary between the real and fictional is only going to get more blurred.

FourFront is part of a larger wave of tech startups devoted to, as aspiring Zuckerbergs like to say, building the metaverse, which can loosely be defined as “the internet” but is more specifically the interconnected, augmented reality virtual space that real people share. It’s an undoubtedly intriguing concept for people with a stake in the future of technology and entertainment, which is to say, the entirety of culture. It’s also a bit of an ethical minefield: Isn’t the internet already full of enough real-seeming content that is a) not real and b) ultimately an effort to make money? Are the characters exploiting the sympathies of well-meaning or media illiterate audiences? Maybe!

On the other hand, there’s something sort of darkly refreshing about an influencer “openly” being created by a room of professional writers whose job is to create the most likable and interesting social media users possible. Influencers already have to walk the delicate line between aspirational and inauthentic, to attract new followers without alienating existing fans, to use their voice for change while remaining “brand-safe.” The job has always been a performance; it’s just that now that performance can be convincingly replicated by a team of writers and a willing actor.

Source: What’s the deal with fictional influencers? | Vox

Generative art

We’re going to see a lot more of this in the next few years, along with the predictable hand-wringing about what constitutes ‘art’.

Me? I love it and would happily hang it on my wall — or, more appropriately, show it on my Smart TV.

Fidenza is my most versatile generative algorithm to date. Although it is not overly complex, the core structures of the algorithm are highly flexible, allowing for enough variety to produce continuously surprising results. I consider this to be one of the most interesting ways to evaluate the quality of a generative algorithm, and certainly one that is unique to the medium. Striking the right balance of unpredictability and quality is a difficult challenge for even the best artists in this field. This is why I’m so excited that Fidenza is being showcased on Art Blocks, the only site in existence that perfectly suits generative art and raises the bar for developing these kinds of high-quality generative art algorithms.

Source: Fidenza — Tyler Hobbs
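
Fidenza's source code isn't public, but Hobbs has written elsewhere about the flow fields that underpin it. To make the point about a simple core structure producing endless variety more concrete, here's a minimal flow-field sketch in Python. Everything here (the field function, the parameters, the output filename) is my own illustrative assumption, not anything taken from Fidenza itself.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)  # change the seed for a new variation

def angle_at(x, y):
    # A smooth pseudo-random direction field built from a few sine waves.
    # Flow-field pieces typically use Perlin or simplex noise instead.
    return np.sin(1.7 * x) + np.cos(2.3 * y) + 0.5 * np.sin(0.6 * x + 0.9 * y)

fig, ax = plt.subplots(figsize=(6, 6))
for _ in range(300):                   # each iteration draws one curve
    x, y = rng.uniform(0, 10, size=2)  # random starting point
    xs, ys = [x], [y]
    for _ in range(200):               # follow the field in small steps
        a = angle_at(x, y)
        x += 0.03 * np.cos(a)
        y += 0.03 * np.sin(a)
        xs.append(x)
        ys.append(y)
    ax.plot(xs, ys, linewidth=rng.uniform(0.3, 2.0), alpha=0.6)

ax.set_axis_off()
plt.savefig("flow_field.png", dpi=200, bbox_inches="tight")

Change the seed and you get a different but recognisably related piece: the "continuously surprising results" Hobbs describes, in miniature.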

AI-generated misinformation is getting more believable, even to experts

I’ve been using thispersondoesnotexist.com for projects recently and, honestly, I wouldn’t be able to tell that most of the faces it generates on each refresh aren’t real people.

For every positive use of this kind of technology, there are of course negatives. Misinformation and disinformation are everywhere. This example shows how even experts in critical fields such as cybersecurity, public safety, and medicine can be fooled.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation—flagged and unflagged—has been aimed at the general public. Imagine the possibility of misinformation—information that is false or misleading—in scientific and technical fields like cybersecurity, public safety, and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

Source: False, AI-generated cybersecurity news was able to fool experts | Fast Company
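
The quoted study presumably involved a far more careful setup than anything shown here, but it's worth seeing just how low the barrier to machine-generated text already is. Here's a minimal sketch using Hugging Face's transformers library with the small, off-the-shelf GPT-2 model; the prompt and generation settings are my own illustrative choices, not the researchers'.

from transformers import pipeline

# Load a small, general-purpose language model. The researchers behind the
# study above used their own setup; this is just an off-the-shelf example.
generator = pipeline("text-generation", model="gpt2")

prompt = "A newly discovered vulnerability in industrial control systems"
result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])

Even this small, untuned model produces fluent-sounding technical prose on demand; a model adapted to domain-specific text would be harder still to spot.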