Mainstream social media is a behaviour-modification system

A couple of years ago I would have said that the analogy of an atom bomb exploding over our information ecosystem was a bit extreme. Not now.

I’ve said this over and over, that, really, this is like when 140,000 people died instantly in Hiroshima and Nagasaki. The same thing has happened in our information ecosystem, but it is silent and it is insidious. This is what I said in the Nobel lecture: An atom bomb has exploded in our information ecosystem. And here’s the reason why. I peg it to when journalists lost the gatekeeping powers. I wish we still had the gatekeeping powers, but we don’t.

So what happened? Content creation was separated from distribution, and then the distribution had completely new rules that no one knew about. We experienced it in motion. And by 2018, MIT writes a paper that says that lies laced with anger and hate spread faster and further than facts. This is my 36th year as a journalist. I spent that entire time learning how to tell stories that will make you care. But when we’re up against lies, we just can’t win, because facts are really boring. Hard to capture your amygdala the way lies do.

[…]

Today we live in a behavior-modification system. The tech platforms that now distribute the news are actually biased against facts, and they’re biased against journalists. E. O. Wilson, who passed away in December, studied emergent behavior in ants. So think about emergent behavior in humans. He said the greatest crisis we face is our Paleolithic emotions, our medieval institutions, and our godlike technology. What travels faster and further? Hate. Anger. Conspiracy theories. Do you wonder why we have no shared space? I say this over and over. Without facts, you can’t have truth. Without truth, you can’t have trust. Without these, we have no shared space and democracy is a dream.

Source: Maria Ressa: How Disinformation Manipulates Elections | The Atlantic

AI-synthesized faces are here to fool you

No-one who’s been paying attention should be in the least surprised that AI-synthesized faces are now so good. We should, however, be concerned that research suggests they are rated as “more trustworthy” than real human faces.

The researchers’ recommendation of “incorporating robust watermarks into the image and video synthesis networks” would be almost impossible to enforce in practice, so we need to ensure that we’re ready for the onslaught of deepfakes.

This is likely to have significant consequences by the end of this year at the latest, with everything that’s happening in the world at the moment…

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

Source: AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS

Audrey Watters on the technology of wellness and mis/disinformation

Audrey Watters is turning her large brain to the topic of “wellness” and, in this first article, talks about mis/disinformation. This is obviously front of mind for me given my involvement in user research for the Zappa project from Bonfire.

In February 2014, I happened to catch a couple of venture capitalists complaining about journalism on Twitter. (Honestly, you could probably pick any month or year and find the same.) “When you know about a situation, you often realize journalists don’t know that much,” one tweeted. “When you don’t know anything, you assume they’re right.” Another VC responded, “there’s a name for this and I think Murray Gell-Mann came up with it but I’m sick today and too lazy to search for it.” A journalist helpfully weighed in: “Michael Crichton called it the ‘Murray Gell-Mann Amnesia Effect,’” providing a link to a blog with an excerpt in which Crichton explains the concept.

Source: The Technology of Wellness, Part 1: What I Don’t Know | Hack Education