A few days ago I listened to an interesting episode of the Your Undivided Attention podcast, which approached questions around AI from the perspective of myth.

One of the points made was that we’ve lost the ability for councils of elders to stop things from happening when they’re likely to be dangerous to community cohesion. Now it’s “move fast and break things”. With AI, the ‘things’ could be democracy, civilization, or perhaps even the planet.

The token gestures from companies like OpenAI discussed in this article are like spitting in the wind. I mean, it’s great that people can’t just ask ChatGPT to create something impersonating a politician, and that images will be watermarked as AI-generated. But even I wouldn’t find it that hard to produce reasonably convincing deepfakes with the tools already available.

As I’ve found through my work on disinformation, people look for content that confirms their existing beliefs. This means disinformation doesn’t have to be particularly sophisticated to go viral. And by the time one piece is debunked, more has already appeared. It’s a game of whack-a-mole, except (to extend the metaphor) the moles have the potential to explode.


Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap, and yes, I did immediately think “if this stupid video is that good imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing and today updated its policies to begin to address the issue.

In addition to being firmer in its policies on election misinformation, OpenAI also plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by Dall-E “early this year”. Currently, Microsoft, Amazon, Adobe, and Getty are also working with C2PA to combat misinformation through AI image generation.

…Given that AI is itself a rapidly changing tool that regularly surprises us with wonderful poetry and outright lies, it’s not clear how well this will work to combat misinformation in the election season. For now, your best bet will continue to be embracing media literacy. That means questioning every piece of news or image that seems too good to be true, and at least doing a quick Google search if your ChatGPT one turns up something utterly wild.

Source: Here’s OpenAI’s big plan to combat election misinformation | The Verge
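
As an aside on those C2PA credentials mentioned in the excerpt: in JPEGs they live inside the image file itself, in APP11 segments holding JUMBF boxes whose manifest store is labelled ‘c2pa’. Below is a minimal, purely heuristic Python sketch of what looking for them might involve; the filename is just a placeholder, and this doesn’t verify anything cryptographically, which is what a proper C2PA validator would do.

```python
# Rough heuristic: does this JPEG appear to carry C2PA Content Credentials?
# C2PA manifests are embedded in APP11 (0xFFEB) segments as JUMBF boxes, with
# a manifest store labelled "c2pa". This does NOT validate the signature --
# that requires a real C2PA verifier.
import sys


def has_c2pa_markers(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return (
        b"\xff\xeb" in data  # JPEG APP11 marker
        and b"jumb" in data  # JUMBF superbox type
        and b"c2pa" in data  # C2PA manifest store label
    )


if __name__ == "__main__":
    # "image.jpg" is a placeholder filename for whatever you want to inspect
    path = sys.argv[1] if len(sys.argv) > 1 else "image.jpg"
    if has_c2pa_markers(path):
        print(f"{path}: C2PA-style provenance metadata appears to be present")
    else:
        print(f"{path}: no C2PA markers found (or they’ve been stripped)")
```

The catch, of course, is that metadata like this is trivially stripped by re-encoding or simply screenshotting an image, so even when the credentials are present they only help the people who go looking for them, which is rather my point about spitting in the wind.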