Generative AI, misinformation, and content authenticity

    As a philosopher, historian, and educator by training, and a technologist by profession, I find that this initiative really hits my sweet spot. The image below shows how, even before AI and digital technologies, it was possible to alter the public record by manipulating photographs.

    Now, of course, spreading misinformation and disinformation is so much easier, especially on social networks. This series of posts from the Content Authenticity Initiative outlines ways in which the technology they are developing can prove whether or not an image has been altered.

    Of course, unless verification is built into social networks, this is only likely to be useful to journalists and in a court of law. After all, people tend to reshare whatever chimes with their worldview.
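
    For a sense of how that kind of verification can work in principle, here is a minimal sketch in Python (assuming the cryptography package) that checks an image against a hash signed by its publisher. This is a conceptual illustration only: the file handling, the bare hash-plus-signature format, and the key names are my own assumptions, not the Content Authenticity Initiative’s actual Content Credentials format, which embeds far richer, standardised provenance metadata.

        # Conceptual sketch only: verify that an image still matches a hash its
        # publisher signed at creation time. Any edit to the file changes the
        # hash, so the signature check fails. File names and key handling are
        # hypothetical; this is not the CAI's real format.
        import hashlib

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


        def image_is_unaltered(image_path: str, signature: bytes, publisher_key: bytes) -> bool:
            """Return True if the image's current hash matches the publisher's signature."""
            with open(image_path, "rb") as f:
                digest = hashlib.sha256(f.read()).digest()
            try:
                Ed25519PublicKey.from_public_bytes(publisher_key).verify(signature, digest)
                return True
            except InvalidSignature:
                return False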

    Although it varies in form and creation, generative AI content (a.k.a. deepfakes) refers to images, audio, or video that has been automatically synthesized by an AI-based system. Deepfakes are the latest in a long line of techniques used to manipulate reality — from Stalin's darkroom to Photoshop to classic computer-generated renderings. However, their introduction poses new opportunities and risks now that everyone has access to what was historically the purview of a small number of sophisticated organizations.

    Even in these early days of the AI revolution, we are seeing stunning advances in generative AI. The technology can create a realistic photo from a simple text prompt, clone a person’s voice from a few minutes of an audio recording, and insert a person into a video to make them appear to be doing whatever the creator desires. We are also seeing real harms from this content in the form of non-consensual sexual imagery, small- to large-scale fraud, and disinformation campaigns.

    Building on our earlier research in digital media forensics techniques, over the past few years my research group and I have turned our attention to this new breed of digital fakery. All our authentication techniques work in the absence of digital watermarks or signatures. Instead, they model the path of light through the entire image-creation process and quantify physical, geometric, and statistical regularities in images that are disrupted by the creation of a fake.

    Source: From the darkroom to generative AI | Content Authenticity Initiative
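
    The techniques described in the quote above are far more sophisticated, but to give a flavour of what a “statistical regularity” check can look like, here is a toy sketch in Python (assuming numpy and Pillow) that measures how much of an image’s energy sits in the highest spatial frequencies, where synthesized images sometimes leave tell-tale artefacts. The image path is hypothetical and the measure is purely illustrative, not the researchers’ actual method.

        # Toy illustration, not the forensic methods quoted above: synthesized
        # images sometimes show anomalies in the Fourier spectrum, so measure
        # the share of spectral energy in the outermost frequencies. The file
        # name is hypothetical and no calibrated threshold is implied.
        import numpy as np
        from PIL import Image


        def high_frequency_energy_ratio(path: str) -> float:
            """Fraction of spectral energy beyond 75% of the maximum radius."""
            gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
            h, w = spectrum.shape
            yy, xx = np.ogrid[:h, :w]
            radius = np.hypot(yy - h / 2, xx - w / 2)
            outer = radius > 0.75 * radius.max()
            return float(spectrum[outer].sum() / spectrum.sum())


        # A real photograph and a generated image can then be compared:
        # print(high_frequency_energy_ratio("photo.jpg"))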

    An end to rabbit hole radicalization?

    A new peer-reviewed study suggests that YouTube’s efforts to stop people being radicalized through its recommendation algorithm have been effective. The study monitored 1,181 people’s YouTube activity and found that only 6% watched extremist videos, with most of these deliberately subscribing to extremist channels.

    Interestingly, though, the study cannot account for user behaviour prior to YouTube’s 2019 algorithm changes, which means we can only wonder how influential the platform was in terms of radicalization up to and including some pretty significant elections.

    Around the time of the 2016 election, YouTube became known as a home to the rising alt-right and to massively popular conspiracy theorists. The Google-owned site had more than 1 billion users and was playing host to charismatic personalities who had developed intimate relationships with their audiences, potentially making it a powerful vector for political influence. At the time, Alex Jones’s channel, Infowars, had more than 2 million subscribers. And YouTube’s recommendation algorithm, which accounted for the majority of what people watched on the platform, looked to be pulling people deeper and deeper into dangerous delusions.

    The process of “falling down the rabbit hole” was memorably illustrated by personal accounts of people who had ended up on strange paths into the dark heart of the platform, where they were intrigued and then convinced by extremist rhetoric—an interest in critiques of feminism could lead to men’s rights and then white supremacy and then calls for violence. Most troubling is that a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.

    […]

    The… research is… important, in part because it proposes a specific, technical definition of ‘rabbit hole’. The term has been used in different ways in common speech and even in academic research. Nyhan’s team defined a “rabbit hole event” as one in which a person follows a recommendation to get to a more extreme type of video than they were previously watching. They can’t have been subscribing to the channel they end up on, or to similarly extreme channels, before the recommendation pushed them. This mechanism wasn’t common in their findings at all. They saw it act on only 1 percent of participants, accounting for only 0.002 percent of all views of extremist-channel videos.

    Nyhan was careful not to say that this paper represents a total exoneration of YouTube. The platform hasn’t stopped letting its subscription feature drive traffic to extremists. It also continues to allow users to publish extremist videos. And learning that only a tiny percentage of users stumble across extremist content isn’t the same as learning that no one does; a tiny percentage of a gargantuan user base still represents a large number of people.

    Source: The World Will Never Know the Truth About YouTube’s Rabbit Holes | The Atlantic
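
    To make the study’s operational definition concrete, here is a hypothetical formalisation in Python. The field names and the numeric “extremity” scale are invented for illustration; they are not the researchers’ actual data model, only a way of showing how narrow the criteria for a “rabbit hole event” are.

        # Hypothetical formalisation of the paper's "rabbit hole event": a
        # recommendation pushes someone to content more extreme than anything
        # they previously watched or subscribed to. The extremity scale and
        # field names are invented for illustration.
        from dataclasses import dataclass


        @dataclass
        class View:
            channel: str
            extremity: int           # e.g. 0 = mainstream ... 3 = extremist
            via_recommendation: bool


        def is_rabbit_hole_event(view: View, prior_viewing_max: int, subscribed_max: int) -> bool:
            """True only if a recommendation led beyond the viewer's prior diet."""
            return (
                view.via_recommendation
                and view.extremity > prior_viewing_max   # more extreme than past viewing
                and view.extremity > subscribed_max      # not already subscribed at that level
            )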

    Bad Bard

    Google is obviously a little freaked out by tools such as ChatGPT and their potential to destroy large sections of its search business. However, it seems like they didn’t do even the most cursory checks of the promotional material they put out as part of the hurried launch for ‘Bard’.

    This, of course, is our future: ‘truthy’ systems leading individuals, groups, and civilizations down the wrong path. I’m not optimistic.

    Google Bard screenshot

    In the advertisement, Bard is given the prompt: "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?"

    Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanets. This is inaccurate.

    Source: Google AI chatbot Bard offers inaccurate information in company ad | Reuters

    Should we "resist trying to make things better" when it comes to online misinformation?

    This is a provocative interview with Alex Stamos, “the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet”. His argument is that social media companies (like Twitter) sometimes try too hard to make the world better, which he thinks should be “resisted”.

    I’m not sure what to make of this. On the one hand, I think we absolutely do need to be worried about misinformation. On the other, he does have a very good point about people being complicit in their own radicalisation. It’s complicated.

    I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

    […]

    The fundamental problem is that there’s a fundamental disagreement inside people’s heads — that people are inconsistent on what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, that the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

    […]

    Any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into effect. If you are super into privacy, then you have to also recognize that when you provide people private communication, that some subset of people will use that in ways that you disagree with, in ways that are illegal, and sometimes in some cases that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

    Source: Are we too worried about misinformation? | Vox

    Every complex problem has a solution which is simple, direct, plausible — and wrong

    This is a great article by Michał Woźniak (@rysiek) which cogently argues that the solution to the problem of misinformation and disinformation comes not through heavy-handed legislation, or even fact-checking, but rather through decentralisation of funding, technology, and power.

    I really should have spoken with him when I was working on the Bonfire Zappa report.

    While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

    This is especially valid within any domain that deals with complex knowledge that is highly nuanced, especially when stakes are high and emotions heat up. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

    […]

    Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that there can at all exist any reliable, trustworthy information source. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

    […]

    I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should instead be looking closely at how is it possible that it spreads so fast (and who benefits from this). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that would allow them to cut through biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

    Source: Fighting Disinformation: We’re Solving The Wrong Problems / Tactical Media Room

    Audrey Watters on the technology of wellness and mis/disinformation

    Audrey Watters is turning her large brain to the topic of “wellness” and, in this first article, talks about mis/disinformation. This is obviously front of mind for me given my involvement in user research for the Zappa project from Bonfire.

    In February 2014, I happened to catch a couple of venture capitalists complaining about journalism on Twitter. (Honestly, you could probably pick any month or year and find the same.) “When you know about a situation, you often realize journalists don’t know that much,” one tweeted. “When you don’t know anything, you assume they’re right.” Another VC responded, “there’s a name for this and I think Murray Gell-Mann came up with it but I’m sick today and too lazy to search for it.” A journalist helpfully weighed in: “Michael Crichton called it the ‘Murray Gell-Mann Amnesia Effect,’” providing a link to a blog with an excerpt in which Crichton explains the concept.

    Source: The Technology of Wellness, Part 1: What I Don't Know | Hack Education

    You cannot 'solve' online misinformation

    Matt Baer, who founded the excellent platform write.as, weighs in on misinformation and disinformation.

    This is something I’m interested in anyway given my background in digital literacies, but especially at the moment because of the user research I’m doing around the Zappa project.

    Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It's a fact of life, and one you can never totally design or regulate out of existence.

    I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

    […]

    As long as human interactions are mediated by a screen (or goggles in the coming “metaverse”), there will be a certain loss of truth, social clues, and context in our interactions — clues that otherwise help us determine “truthiness” of information and trustworthiness of actors. There will also be a constant chance for middlemen to meddle in the medium, for better or worse, especially as we get farther from controlling the infrastructure ourselves.

    Source: “Solving” Misinformation | Matt

    AI-generated misinformation is getting more believable, even by experts

    I’ve been using thispersondoesnotexist.com for projects recently and, honestly, I wouldn’t be able to tell that most of the faces it generates every time you hit refresh aren’t real people.

    For every positive use of this kind of technology, there are of course negatives. Misinformation and disinformation are everywhere. This example shows how even experts in critical fields such as cybersecurity, public safety, and medicine can be fooled.

    If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation—flagged and unflagged—has been aimed at the general public. Imagine the possibility of misinformation—information that is false or misleading—in scientific and technical fields like cybersecurity, public safety, and medicine.

    There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

    Source: False, AI-generated cybersecurity news was able to fool experts | Fast Company