AI-generated images in a time of war

    It’s one thing for user-generated content to be circulated around social media for the purposes of disinformation. It’s another thing entirely for Adobe’s stock image marketplace to be selling AI-generated ‘photos’ of destroyed buildings in Gaza.

    This article in VICE includes a comment from an Adobe spokesperson who references the Content Authenticity Initiative. But this just shifts the problem onto the user rather than the marketplace. People looking to download AI-generated images to spread disinformation don’t care about the CAI, and will actively look for ways to circumvent it.
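
    To be concrete about how thin a defence provenance metadata is: Content Credentials travel inside the image file alongside the pixels, so anything that re-encodes just the pixels into a fresh file silently drops them (a screenshot does the same job). Here’s a minimal sketch of the idea in Python, assuming Pillow is installed; the filenames are placeholders of my own.

        from PIL import Image

        # Open an image that may carry Content Credentials / EXIF / XMP provenance data.
        original = Image.open("ai_generated_photo.jpg")

        # Copy only the pixel values into a brand-new image object. None of the original
        # file's metadata segments (where provenance manifests are embedded) come along.
        stripped = Image.new(original.mode, original.size)
        stripped.putdata(list(original.getdata()))

        # The re-saved file looks identical to the eye but carries no provenance information.
        stripped.save("laundered_photo.jpg", quality=95)

    That’s the whole ‘circumvention’: no special tooling required, which is why labelling schemes mainly help the people who already want to behave well.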

    Screenshot of Adobe stock images site with AI-generated image titled "Destroyed buildings in Gaza town of Gaza strip in Israel, Affected by war."
    Adobe is selling AI-generated images showing fake scenes depicting bombardment of cities in both Gaza and Israel. Some are photorealistic, others are obviously computer-made, and at least one has already begun circulating online, passed off as a real image.

    As first reported by Australian news outlet Crikey, the photo is labeled “conflict between Israel and palestine generative ai” and shows a cloud of dust swirling from the tops of a cityscape. It’s remarkably similar to actual photographs of Israeli airstrikes in Gaza, but it isn’t real. Despite being an AI-generated image, it ended up on a few small blogs and websites without being clearly labeled as AI.

    […]

    As numerous experts have pointed out, the collapse of social media and the proliferation of propaganda has made it hard to tell what’s actually going on in conflict zones. AI-generated images have only muddied the waters, including over the last several weeks, as both sides have used AI-generated imagery for propaganda purposes. Further compounding the issue is that many publicly-available AI generators are launched with few guardrails, and the companies that build them don’t seem to care.

    Source: Adobe Is Selling AI-Generated Images of Violence in Gaza and Israel | VICE

    2024 is going to be a wild ride of AI-generated content

    It’s on the NSFW side of things, but if you’re in any doubt that we’re entering a crazy world of AI-generated content, just check out this post.

    As I’ve said many times before, the porn industry is interesting in terms of technological innovation. If we take an amoral stance, there are a lot of ‘content creators’ in that industry and, as the post I quote below points out, there are going to be a lot of fake content creators over the next few months and years.

    It is imperative to identify content sources you believe to be valuable now. Nothing new in the future will be credible. 2024 is going to be a wild ride of AI-generated content. We are never going to know what is real anymore.

    There will be some number of real people who will probably replace themselves with AI content if they can make money from it. This will result in doubting real content. Everything becomes questionable and nothing will suffice as digital proof any longer.

    […]

    Our understanding of what is happening will continue to lag further and further behind what is happening.

    Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed.

    Source: Post-truth society is near | Mind Prison

    An end to rabbit hole radicalization?

    A new peer-reviewed study suggests that YouTube’s efforts to stop people being radicalized through its recommendation algorithm have been effective. The study monitored 1,181 people’s YouTube activity and found that only 6% watched extremist videos, with most of these deliberately subscribing to extremist channels.

    Interestingly, though, the study cannot account for user behaviour prior to YouTube’s 2019 algorithm changes, which means we can only wonder how influential the platform was in radicalizing people before then, a period that included some pretty significant elections.

    Around the time of the 2016 election, YouTube became known as a home to the rising alt-right and to massively popular conspiracy theorists. The Google-owned site had more than 1 billion users and was playing host to charismatic personalities who had developed intimate relationships with their audiences, potentially making it a powerful vector for political influence. At the time, Alex Jones’s channel, Infowars, had more than 2 million subscribers. And YouTube’s recommendation algorithm, which accounted for the majority of what people watched on the platform, looked to be pulling people deeper and deeper into dangerous delusions.

    The process of “falling down the rabbit hole” was memorably illustrated by personal accounts of people who had ended up on strange paths into the dark heart of the platform, where they were intrigued and then convinced by extremist rhetoric—an interest in critiques of feminism could lead to men’s rights and then white supremacy and then calls for violence. Most troubling is that a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.

    […]

    The… research is… important, in part because it proposes a specific, technical definition of ‘rabbit hole’. The term has been used in different ways in common speech and even in academic research. Nyhan’s team defined a “rabbit hole event” as one in which a person follows a recommendation to get to a more extreme type of video than they were previously watching. They can’t have been subscribing to the channel they end up on, or to similarly extreme channels, before the recommendation pushed them. This mechanism wasn’t common in their findings at all. They saw it act on only 1 percent of participants, accounting for only 0.002 percent of all views of extremist-channel videos.

    Nyhan was careful not to say that this paper represents a total exoneration of YouTube. The platform hasn’t stopped letting its subscription feature drive traffic to extremists. It also continues to allow users to publish extremist videos. And learning that only a tiny percentage of users stumble across extremist content isn’t the same as learning that no one does; a tiny percentage of a gargantuan user base still represents a large number of people.

    Source: The World Will Never Know the Truth About YouTube’s Rabbit Holes | The Atlantic
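
    As an aside, the ‘rabbit hole event’ definition quoted above is concrete enough to sketch as a filter over viewing logs. Here’s a rough, hypothetical Python version: the field names and the numeric ‘extremism’ rating are my own stand-ins, not the paper’s actual coding scheme.

        from dataclasses import dataclass

        @dataclass
        class View:
            channel: str
            extremism: float           # hypothetical 0-1 extremism rating of the channel
            via_recommendation: bool

        def is_rabbit_hole_event(prior_views: list[View],
                                 subscriptions: dict[str, float],  # channel -> extremism rating
                                 current: View,
                                 similar_margin: float = 0.1) -> bool:
            """A recommended view of a channel more extreme than anything watched before,
            which the user was not already subscribed to (nor to similarly extreme channels)."""
            if not current.via_recommendation:
                return False
            if current.extremism <= max((v.extremism for v in prior_views), default=0.0):
                return False
            if current.channel in subscriptions:
                return False
            if any(rating >= current.extremism - similar_margin
                   for rating in subscriptions.values()):
                return False
            return True

        # A recommended jump from mildly partisan viewing to an extremist channel the user
        # never subscribed to counts; the same recommendation doesn't count if the user was
        # already subscribed to that channel.
        history = [View("news_channel", 0.2, False), View("pundit_channel", 0.4, True)]
        print(is_rabbit_hole_event(history, {"news_channel": 0.2},
                                   View("extremist_channel", 0.9, True)))   # True
        print(is_rabbit_hole_event(history, {"extremist_channel": 0.9},
                                   View("extremist_channel", 0.9, True)))   # False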

    Should we "resist trying to make things better" when it comes to online misinformation?

    This is a provocative interview with Alex Stamos, “the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet”. His argument is that social media companies (like Twitter) sometimes try too hard to make the world better, something he thinks should be “resisted”.

    I’m not sure what to make of this. On the one hand, I think we absolutely do need to be worried about misinformation. On the other, he does have a very good point about people being complicit in their own radicalisation. It’s complicated.

    I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

    […]

    The fundamental problem is that there’s a fundamental disagreement inside people’s heads — that people are inconsistent on what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, that the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

    […]

    Any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into effect. If you are super into privacy, then you have to also recognize that when you provide people private communication, that some subset of people will use that in ways that you disagree with, in ways that are illegal, and sometimes in some cases that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

    Source: Are we too worried about misinformation? | Vox

    Every complex problem has a solution which is simple, direct, plausible — and wrong

    This is a great article by Michał Woźniak (@rysiek) which cogently argues that the problem with misinformation and disinformation does not come through heavy-handed legislation, or even fact-checking, but rather through decentralisation of funding, technology, and power.

    I really should have spoken with him when I was working on the Bonfire Zappa report.

    While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

    This is especially valid within any domain that deals with complex knowledge that is highly nuanced, especially when stakes are high and emotions heat up. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

    […]

    Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that there can at all exist any reliable, trustworthy information source. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

    […]

    I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should instead be looking closely at how is it possible that it spreads so fast (and who benefits from this). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that would allow them to cut through biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

    Source: Fighting Disinformation: We’re Solving The Wrong Problems / Tactical Media Room

    AI-synthesized faces are here to fool you

    No-one who’s been paying attention should be in the least surprised that AI-synthesized faces are now so good. However, we should probably be a bit concerned by research suggesting that they are rated as “more trustworthy” than real human faces.

    The researchers’ recommendation of “incorporating robust watermarks into the image and video synthesis networks” would be kind of ridiculous to enforce in practice, so we need to ensure that we’re ready for the onslaught of deepfakes.
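
    To be clear, the researchers mean watermarks baked into the generator itself, and making those genuinely robust is a research field of its own. As a toy illustration of why simple approaches don’t survive everyday image handling, here’s a naive least-significant-bit watermark (all names mine, assuming numpy and Pillow are installed) that a single JPEG re-save wipes out:

        from io import BytesIO

        import numpy as np
        from PIL import Image

        def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
            """Naively hide watermark bits in the least significant bit of each pixel value."""
            flat = pixels.flatten()
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(pixels.shape)

        def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
            """Read the first n_bits least-significant bits back out."""
            return pixels.flatten()[:n_bits] & 1

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in "photo"
        watermark = rng.integers(0, 2, size=256, dtype=np.uint8)

        marked = embed_lsb(image, watermark)
        print((extract_lsb(marked, 256) == watermark).mean())  # 1.0: mark reads back perfectly

        # One lossy re-save (the kind of thing that happens whenever an image is shared on
        # social media) scrambles the low-order bits and the mark is gone.
        buffer = BytesIO()
        Image.fromarray(marked).save(buffer, format="JPEG", quality=85)
        buffer.seek(0)
        recompressed = np.array(Image.open(buffer))
        print((extract_lsb(recompressed, 256) == watermark).mean())  # ~0.5, i.e. chance level

    Real watermarking schemes are far cleverer than this, but the cat-and-mouse dynamic, and the incentive to strip the mark, is much the same.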

    This is likely to have significant consequences by the end of this year at the latest, with everything that’s happening in the world at the moment…

    Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.
    Source: AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS

    Audrey Watters on the technology of wellness and mis/disinformation

    Audrey Watters is turning her large brain to the topic of “wellness” and, in this first article, talks about mis/disinformation. This is obviously front of mind for me given my involvement in user research for the Zappa project from Bonfire.

    In February 2014, I happened to catch a couple of venture capitalists complaining about journalism on Twitter. (Honestly, you could probably pick any month or year and find the same.) “When you know about a situation, you often realize journalists don’t know that much,” one tweeted. “When you don’t know anything, you assume they’re right.” Another VC responded, “there’s a name for this and I think Murray Gell-Mann came up with it but I’m sick today and too lazy to search for it.” A journalist helpfully weighed in: “Michael Crichton called it the ‘Murray Gell-Mann Amnesia Effect,’” providing a link to a blog with an excerpt in which Crichton explains the concept.
    Source: The Technology of Wellness, Part 1: What I Don't Know | Hack Education

    You cannot 'solve' online misinformation

    Matt Baer, who founded the excellent platform write.as, weighs in on misinformation and disinformation.

    This is something I’m interested in anyway given my background in digital literacies, but especially at the moment because of the user research I’m doing around the Zappa project.

    Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It's a fact of life, and one you can never totally design or regulate out of existence.

    I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

    […]

    As long as human interactions are mediated by a screen (or goggles in the coming “metaverse”), there will be a certain loss of truth, social clues, and context in our interactions — clues that otherwise help us determine “truthiness” of information and trustworthiness of actors. There will also be a constant chance for middlemen to meddle in the medium, for better or worse, especially as we get farther from controlling the infrastructure ourselves.

    Source: “Solving” Misinformation | Matt

    AI-generated misinformation is getting more believable, even by experts

    I’ve been using thispersondoesnotexist.com for projects recently and, honestly, I wouldn’t be able to tell that most of the faces it generates every time you hit refresh aren’t real people.

    For every positive use of this kind of technology, there are of course negatives. Misinformation and disinformation are everywhere. This example shows how even experts in critical fields such as cybersecurity, public safety, and medicine can be fooled.

    If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation—flagged and unflagged—has been aimed at the general public. Imagine the possibility of misinformation—information that is false or misleading—in scientific and technical fields like cybersecurity, public safety, and medicine. There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.
    Source: False, AI-generated cybersecurity news was able to fool experts | Fast Company

    Epistemological chaos and denialism

    Good stuff from Cory Doctorow on how Big Tobacco invented the playbook and it’s been refined in other industries (especially online) ever since.

    Denial thrives on epistemological chaos: a denialist doesn’t want to convince you that smoking is safe, they just want to convince you that it’s impossible to say whether smoking is safe or not. Denial weaponizes ideas like “balance,” demanding that “both sides” of every issue be presented so the public can decide. Don’t get me wrong, I’m as big a believer in dialectical materialism as you are likely to find, but I also know that keeping an open mind doesn’t require that you open so wide that your brains fall out.

    The bad-faith “balance” game is used by fraudsters and crooks to sow doubt. It’s how homeopaths, anti-vaxers, eugenicists, raw milk pushers and other members of the Paltrow-Industrial Complex played the BBC and other sober-sided media outlets, demanding that they be given airtime to rebut scientists’ careful, empirical claims with junk they made up on the spot.

    This is not a harmless pastime. The pandemic revealed the high price of epistemological chaos, of replacing informed debate with cynical doubt. Argue with an anti-vaxer and you’ll soon realize that you don’t merely disagree on what’s true — you disagree on whether there is such a thing as truth, and, if there is, how it can be known.

    Source: I quit. | Cory Doctorow | Medium

    Fighting health disinformation on Wikipedia

    This is great to see:

    As part of efforts to stop the spread of false information about the coronavirus pandemic, Wikipedia and the World Health Organization announced a collaboration on Thursday: The health agency will grant the online encyclopedia free use of its published information, graphics and videos.

    Donald G. McNeil Jr., Wikipedia and W.H.O. Join to Combat Covid Misinformation (The New York Times)

    Compared to Twitter's dismal efforts at fighting disinformation, the collaboration is welcome news.

    The first W.H.O. items used under the agreement are its “Mythbusters” infographics, which debunk more than two dozen false notions about Covid-19. Future additions could include, for example, treatment guidelines for doctors, said Ryan Merkley, chief of staff at the Wikimedia Foundation, which produces Wikipedia.

    Donald G. McNeil Jr., Wikipedia and W.H.O. Join to Combat Covid Misinformation (The New York Times)

    More proof that the for-profit private sector is in no way more 'innovative' or effective than non-profits, NGOs, and government agencies.

    Perceptions of the past

    The History teacher in me likes this simple photo quiz site that shows how your perception of the past can easily be manipulated by how photographs are presented.