2024 is going to be a wild ride of AI-generated content

    It’s on the NSFW side of things, but if you’re in any doubt that we’re entering a crazy world of AI-generated content, just check out this post.

    As I’ve said many times before, the porn industry is interesting in terms of technological innovation. If we take an amoral stance, there are a lot of ‘content creators’ in that industry and, as the post I quote below points out, there are going to be a lot of fake content creators over the next few months and years.

    It is imperative to identify content sources you believe to be valuable now. Nothing new in the future will be credible. 2024 is going to be a wild ride of AI-generated content. We are never going to know what is real anymore.

    There will be some number of real people who will probably replace themselves with AI content if they can make money from it. This will result in doubting real content. Everything becomes questionable and nothing will suffice as digital proof any longer.

    […]

    Our understanding of what is happening will continue to lag further and further behind what is happening.

    Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed.

    Source: Post-truth society is near | Mind Prison

    What people are really using generative AI for

    As I’ve written several times before here on Thought Shrapnel, society seems to act as though the giant, monolithic, hugely profitable porn industry just doesn’t… exist? This despite the fact it tends to be a driver of technical innovation. I won’t get into details, but feel free to search for phrases such as ‘teledildonics’.

    So this article from the new (and absolutely excellent) 404 Media on a venture capital firm’s overview of the emerging generative AI industry shouldn’t come as too much of a surprise. As a society and as an industry, we don’t make progress on policy, ethics, and safety by pretending things aren’t happening.

    As a father, seeing this kind of news is more than a little disturbing. But we don’t deal with any of it by burying our heads in the sand, shaking our heads, or crossing our fingers.

    The Andreessen Horowitz (also called a16z) analysis is derived from crude but telling data—internet traffic. Using website traffic tracking company Similarweb, a16z ranks the top 50 generative AI websites on the internet by monthly visits, as of June 2023. This data provides an incomplete picture of what people are doing with AI because it’s not tracking use of popular AI apps like Replika (where people sext with virtual companions) or Telegram chatbots like Forever Companion, which allows users to talk to chatbots trained on the voices of influencers like Amouranth and Caryn Marjorie (who just want to talk about sex).

    […]

    What I can tell you without a doubt by looking at this list of the top 50 generative AI websites is that, as has always been the case online and with technology generally, porn is a major driving force in how people use generative AI in their day to day lives.

    […]

    Even if we put ethical questions aside, it is absurd that a tech industry kingmaker like a16z can look at this data, write a blog titled “How Are Consumers Using Generative AI?” and not come to the obvious conclusion that people are using it to jerk off. If you are actually interested in the generative AI boom and you are not identifying porn as a core use for the technology, you are either not paying attention or intentionally pretending it’s not happening.

    Source: 404 Media Generative AI Market Analysis: People Love to Cum

    We need to talk about AI porn

    Thought Shrapnel is a prude-free zone, especially as the porn industry tends to be a technological innovator. It’s important to say, though, that the objectification of women and the non-consensual generation of pornography are not just bad things but societally corrosive.

    By now, we’re familiar with AI models being able to create images of almost anything. I’ve read of wonderful recent advances in the world of architecture, for example. Some of the most popular AI image generators have filters to prevent abuse, but of course there are plenty of others that don’t.

    As this article details, a lot of porn has already been generated. Again, leaving aside prudishness about people’s kinks, there are all kinds of philosophical, political, and legal issues at play here. Child pornography is abhorrent; how is our legal system going to deal with AI-generated versions? What about the inevitable ‘shaming’ of people via AI-generated sex acts?

    All of this is a canary in the coalmine for what happens in society at large. And this is why philosophical training is important: it helps you grapple with the implications of technology, the ‘why’ as well as the ‘what’. I’ve got a lot more thoughts on this, but I actually think it would be a really good topic to discuss as part of the next season of the WAO podcast.

    “Create anything,” Mage.Space’s landing page invites users with a text box underneath. Type in the name of a major celebrity, and Mage will generate their image using Stable Diffusion, an open source, text-to-image machine learning model. Type in the name of the same celebrity plus the word “nude” or a specific sex act, and Mage will generate a blurred image and prompt you to upgrade to a “Basic” account for $4 a month, or a “Pro Plan” for $15 a month. “NSFW content is only available to premium members,” the prompt says.

    […]

    Since Mage by default saves every image generated on the site, clicking on a username will reveal their entire image generation history, another wall of images that often includes hundreds or thousands of AI-generated sexual images of various celebrities made by just one of Mage’s many users. A user’s image generation history is presented in reverse chronological order, revealing how their experimentation with the technology evolves over time.

    Scrolling through a user’s image generation history feels like an unvarnished peek into their id. In one user’s feed, I saw eight images of the cartoon character from the children’s show Ben 10, Gwen Tennyson, in a revealing maid’s uniform. Then, nine images of her making the “ahegao” face in front of an erect penis. Then more than a dozen images of her in bed, in pajamas, with very large breasts. Earlier the same day, that user generated dozens of innocuous images of various female celebrities in the style of red carpet or fashion magazine photos. Scrolling down further, I can see the user fixate on specific celebrities and fictional characters, Disney princesses, anime characters, and actresses, each rotated through a series of images posing them in lingerie, schoolgirl uniforms, and hardcore pornography. Each image represents a fraction of a penny in profit to the person who created the custom Stable Diffusion model that generated it.

    […]

    Generating pornographic images of real people is against the Mage Discord community’s rules, which the community strictly enforces because it’s also against Discord’s platform-wide community guidelines. A previous Mage Discord was suspended in March for this reason. While 404 Media has seen multiple instances of non-consensual images of real people and methods for creating them, the Discord community self-polices: users flag such content, and it’s removed quickly. As one Mage user chided another after they shared an AI-generated nude image of Jennifer Lawrence: “posting celeb-related content is forbidden by discord and our discord was shut down a few weeks ago because of celeb content, check [the rules.] you can create it on mage, but not share it here.”

    Source: Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale | 404 Media

    Censorship and the porn tech stack

    They say that technical innovation often comes from the porn industry, but the same is true of new forms of censorship.

    For those who don’t know or remember, Tumblr used to have a policy around porn that was literally “Go nuts, show nuts. Whatever.” That was memorable and hilarious, and for many people, Tumblr both hosted and helped with the discovery of a unique type of adult content.

    […]

    [N]o modern internet service in 2022 can have the rules that Tumblr did in 2007. I am personally extremely libertarian in terms of what consenting adults should be able to share, and I agree with “go nuts, show nuts” in principle, but the casually porn-friendly era of the early internet is currently impossible….

    […]

    If you wanted to start an adult social network in 2022, you’d need to be web-only on iOS and side load on Android, take payment in crypto, have a way to convert crypto to fiat for business operations without being blocked, do a ton of work in age and identity verification and compliance so you don’t go to jail, protect all of that identity information so you don’t dox your users, and make a ton of money. I estimate you’d need at least $7 million a year for every 1 million daily active users to support server storage and bandwidth (the GIFs and videos shared on Tumblr use a ton of both) in addition to hosting, moderation, compliance, and developer costs.

    Source: Matt on Tumblr | Why “Go Nuts, Show Nuts” Doesn’t Work in 2022
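    Not part of Matt’s post, but a quick back-of-the-envelope check makes that figure easier to grasp: $7 million a year spread across 1 million daily active users works out to roughly 58 cents per user per month, before any of the hosting, moderation, compliance, and developer costs he mentions.

```python
# Back-of-the-envelope check on the estimate quoted above.
annual_infra_cost = 7_000_000   # USD/year for storage + bandwidth
daily_active_users = 1_000_000

per_user_per_year = annual_infra_cost / daily_active_users
per_user_per_month = per_user_per_year / 12

print(f"${per_user_per_year:.2f} per user per year")    # $7.00
print(f"${per_user_per_month:.2f} per user per month")  # $0.58
```

    In other words, every free user has to be cross-subsidised by paying ones before any of the harder costs are even touched.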

    Image: Alexander Grey on Unsplash

    There’s no viagra for enlightenment

    This quotation from the enigmatic Russell Brand seemed appropriate for the subject of today's article: the impact of so-called 'deepfakes' on everything from porn to politics.

    First, what exactly are 'deepfakes'? Mark Wilson explains in an article for Fast Company:

    In early 2018, [an anonymous Reddit user named Deepfakes] uploaded a machine learning model that could swap one person’s face for another face in any video. Within weeks, low-fi celebrity-swapped porn ran rampant across the web. Reddit soon banned Deepfakes, but the technology had already taken root across the web–and sometimes the quality was more convincing. Everyday people showed that they could do a better job adding Princess Leia’s face to The Force Awakens than the Hollywood special effects studio Industrial Light and Magic did. Deepfakes had suddenly made it possible for anyone to master complex machine learning; you just needed the time to collect enough photographs of a person to train the model. You dragged these images into a folder, and the tool handled the convincing forgery from there.

    Mark Wilson

    As you'd expect, deepfakes bring up huge ethical issues, as Jessica Lindsay reports for Metro. It's a classic case of our laws not being able to keep up with what's technologically possible:

    With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera.

    Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.

    [...]

    This can be done to make a fake speech to misrepresent a politician’s views, or to create porn videos featuring people who did not star in them.

    Jessica Lindsay

    It's not just video, either, with Google's AI now able to translate speech from one language to another and keep the same voice. Karen Hao embeds examples in an article for MIT Technology Review demonstrating where this is all headed.

    The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

    Karen Hao
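    To make that architectural contrast concrete, here’s a deliberately toy Python sketch of the two pipelines Hao describes. None of these functions are real APIs; they’re placeholder stubs standing in for a speech recogniser, a text translator, a speech synthesiser, and a direct speech-to-speech model.

```python
# Toy sketch of the two speech translation architectures described above.
# Every model function here is a placeholder stub, not a real API.

def asr(audio: bytes) -> str:
    """Stand-in for speech recognition (audio -> text)."""
    return "transcribed text"

def translate_text(text: str) -> str:
    """Stand-in for text-to-text machine translation."""
    return "translated text"

def tts(text: str) -> bytes:
    """Stand-in for speech synthesis. It speaks in a generic voice,
    so the original speaker's voice and tone are lost at this step."""
    return b"synthesised audio, generic voice"

def direct_s2st(audio: bytes) -> bytes:
    """Stand-in for a direct speech-to-speech model: audio in, audio
    out, with no text intermediary at which the voice is discarded."""
    return b"translated audio, same voice"

def cascade_translate(audio: bytes) -> bytes:
    # Traditional pipeline: each step throws away more of the voice.
    return tts(translate_text(asr(audio)))

def direct_translate(audio: bytes) -> bytes:
    # The approach described above: one model, no text in the middle.
    return direct_s2st(audio)
```

    The structural point: in the cascade, the speaker’s voice is discarded at the very first step and can never be recovered; in the direct pipeline there is no step at which it gets thrown away.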

    The impact on democracy could be quite shocking, with the ability to create video and audio that feels real but is actually completely fake.

    However, as Mike Caulfield notes, the technology doesn't even have to be that sophisticated to create something that can be used in a political attack.

    There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.

    [...]

    In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here but it’s instructive. In double tracking you record one part — a vocal or solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different though, the track is “widened” feeling deeper, richer. The trick is for them to be different enough that it widens the track but similar enough that they blend.

    Mike Caulfield
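    Caulfield’s post doesn’t include code, but the technique is simple enough to sketch. Here’s a toy numpy version of double-tracking: mix a part with a copy of itself that’s delayed by a few milliseconds and very slightly detuned. The delay and detune values here are illustrative, not studio-standard.

```python
import numpy as np

SR = 44_100  # sample rate in Hz

def double_track(part: np.ndarray, delay_ms: float = 15.0,
                 detune: float = 1.003) -> np.ndarray:
    """Mix a part with a slightly delayed, slightly detuned copy of itself."""
    n = len(part)
    # Slight tone variation: resample the copy by a small factor.
    copy = np.interp(np.arange(n) / detune, np.arange(n), part)
    # Slight timing variation: delay the copy by a few milliseconds.
    delay = int(SR * delay_ms / 1000)
    copy = np.concatenate([np.zeros(delay), copy])[:n]
    # Close enough to blend as one track, different enough to "widen" it.
    return 0.5 * (part + copy)

# Toy "vocal": a 440 Hz tone standing in for a recorded part.
t = np.linspace(0, 1.0, SR, endpoint=False)
widened = double_track(np.sin(2 * np.pi * 440 * t))
```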

    This is where blockchain could actually be a useful technology. Caulfield often talks about the importance of 'going back to the source' — in other words, checking the provenance of what it is you're reading, watching, or listening to. There's potential here for checking that something is actually the original document/video/audio.
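    As a minimal sketch of what that could look like in practice: if the original publisher records a cryptographic hash of a file somewhere tamper-evident (a blockchain is one option; a signed post on an official site is another), anyone can check whether the copy they're looking at is byte-identical to the original. The filename and digest below are made up for illustration.

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file: a compact, tamper-evident fingerprint."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_original(path: str, published_digest: str) -> bool:
    """True only if the local copy hashes to the digest the source published."""
    return fingerprint(path) == published_digest

# Hypothetical usage: both the file and the digest are placeholders.
# matches_original("speech_clip.mp4", "9f86d081884c7d65...")
```

    The obvious limitation: a hash only proves the file is unchanged, so a benign re-encode fails the check just as a malicious edit does, and it says nothing about content that was fake from the start.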

    Ultimately, however, people believe what they want to believe. If they want to believe Donald Trump is an idiot, they'll read and share things showing him in a negative light. It doesn't really matter if it's true or not.


    Also check out: