AI-generated image of Sam Altman wearing magnifying lenses while creating a miniature theatre scene

This article in The New York Times is about the launch of Sora 2, a new AI generative video tool from OpenAI. If you want to see how problematic the content it produces can be, check out this video from tech reporter Drew Harwell of The Washington Post.

This is a classic case of technological innovation moving well ahead of regulation. At a time when US politics has tipped over from libertarianism to authoritarianism, the chances of these kinds of things being used for disinformation are absolutely huge. I mean, we’re at the stage where I, who pride myself on being able to tell when something is fake, just can’t tell the difference.

When you’re being shown these kinds of things over and over again in your social media feeds, there’s simply no time to check what’s real and what’s not, so you end up believing anything. We’re in very weird, and very dangerous, times.

George Orwell famously said: “Who controls the past controls the future: who controls the present controls the past.” The video I link to above shows fake clips of Martin Luther King and JFK. We are, as the kids say, “so cooked.”

Sora — as well as Google’s Veo 3 and other tools like it — could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about A.I.’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.

Increasingly realistic videos are more likely to lead to consequences in the real world by exacerbating conflicts, defrauding consumers, swinging elections or framing people for crimes they did not commit, experts said.

[…]

Sora, which is currently accessible only through an invitation from an existing user, does not require users to verify their accounts — meaning they may be able to sign up with a name and profile image that is not theirs. (To create an A.I. likeness, users must upload a video of themselves using the app. In tests by The Times, Sora rejected attempts to make A.I. likenesses using videos of famous people.) The app will generate content involving children without issue, as well as content featuring long-dead public figures such as the Rev. Dr. Martin Luther King Jr. and Michael Jackson.

The app would not produce videos of President Trump or other world leaders. But when asked to create a political rally with attendees wearing “blue and holding signs about rights and freedoms,” Sora produced a video featuring the unmistakable voice of former President Barack Obama.

Until recently, videos were reasonably reliable as evidence of actual events, even after it became easy to edit photographs and text in realistic ways. Sora’s high-quality video, however, raises the risk that viewers will lose all trust in what they see, experts said. Sora videos feature a moving watermark identifying them as A.I. creations, but experts said such marks could be edited out with some effort.

[…]

“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” said Kristian J. Hammond, a professor who runs the Center for Advancing Safety of Machine Intelligence at Northwestern University. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.”

Source: The New York Times

Image: InfoCity