This quotation from the enigmatic Russell Brand seemed appropriate for the subject of today’s article: the impact of so-called ‘deepfakes’ on everything from porn to politics.
First, what exactly are ‘deepfakes’? Mark Wilson explains in an article for Fast Company:
In early 2018, [an anonymous Reddit user named Deepfakes] uploaded a machine learning model that could swap one person’s face for another face in any video. Within weeks, low-fi celebrity-swapped porn ran rampant across the web. Reddit soon banned Deepfakes, but the technology had already taken root across the web, and sometimes the quality was more convincing. Everyday people showed that they could do a better job adding Princess Leia’s face to The Force Awakens than the Hollywood special effects studio Industrial Light and Magic did. Deepfakes had suddenly made it possible for anyone to master complex machine learning; you just needed the time to collect enough photographs of a person to train the model. You dragged these images into a folder, and the tool handled the convincing forgery from there.

Mark Wilson
As you’d expect, deepfakes bring up huge ethical issues, as Jessica Lindsay reports for Metro. It’s a classic case of our laws not being able to keep up with what’s technologically possible:
With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera.
Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.
This can be done to make a fake speech to misrepresent a politician’s views, or to create porn videos featuring people who did not star in them.

Jessica Lindsay
It’s not just video, either, with Google’s AI now able to translate speech from one language to another and keep the same voice. Karen Hao embeds examples in an article for MIT Technology Review demonstrating where this is all headed.
The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

Karen Hao
The impact on democracy could be quite shocking, with the ability to create video and audio that feels real but is actually completely fake.
However, as Mike Caulfield notes, the technology doesn’t even have to be that sophisticated to create something that can be used in a political attack.
There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.
In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here but it’s instructive. In double tracking you record one part — a vocal or solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different though, the track is “widened” feeling deeper, richer. The trick is for them to be different enough that it widens the track but similar enough that they blend.

Mike Caulfield
This is where blockchain could actually be a useful technology. Caulfield often talks about the importance of ‘going back to the source’ — in other words, checking the provenance of whatever it is you’re reading, watching, or listening to. There’s potential here for verifying that something is actually the original document, video, or audio.
Ultimately, however, people believe what they want to believe. If they want to believe Donald Trump is an idiot, they’ll read and share things showing him in a negative light. It doesn’t really matter if it’s true or not.
Also check out:
- The guy who made a tool to track women in porn videos is sorry (MIT Technology Review) — “Under GDPR, personal data (and especially sensitive biometric data) needs to be collected for specific and legitimate purposes. Scraping data to figure out if someone once appeared in porn is not that.”
- ‘The Next Backlash Is Going to Be Against Technology’ (Foreign Policy) — “We’ve seen the backlash against globalization. If anything, the dislocations and the adverse labor market implications of artificial intelligence, automation, new digital technologies—the impacts of those will be even larger.”
- MIT AI model is ‘significantly’ better at predicting breast cancer (Engadget) — “AI has potential to help fix the racial disparity in women’s healthcare as well. Since current guidelines for breast cancer are based on primarily white populations, this can lead to delayed detection among women of color.”
In my TEDx talk six years ago, I explained how the understanding and remixing of memes was a great way to develop digital literacies. At that time, they were beginning to be used in advertisements. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.
This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.
Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. An algorithm converts an image into a short numerical fingerprint, its pHash, that captures its visual features. Visually similar images produce similar pHashes, so similarity can be measured simply by comparing fingerprints.
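As a rough illustration, here’s a minimal sketch of the simplest perceptual-hash variant (the ‘average hash’) in Python. It uses only the standard library, and toy 8×8 grayscale grids stand in for real downsampled images; the function names and test images are my own, not the researchers’ code:

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255).
    Returns a 64-bit integer: each bit is 1 if that pixel is
    brighter than the image's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(h1 ^ h2).count("1")

# Two near-identical toy "images" (one pixel slightly brightened)
# and one completely inverted image.
img_a = [[10 * (r + c) for c in range(8)] for r in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] += 5                                # tiny edit
img_c = [[255 - v for v in row] for row in img_a]  # inverted

print(hamming(average_hash(img_a), average_hash(img_b)))  # → 0
print(hamming(average_hash(img_a), average_hash(img_c)))  # → 56
```

The small edit doesn’t flip any bits of the fingerprint, while the inverted image is 56 bits away — which is exactly why near-duplicate meme variants cluster together under this scheme.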
The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subgroup The_Donald, Twitter, 4chan’s politically incorrect forum known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.
Whereas some things ‘go viral’ by accident and catch the original author(s) off-guard, some communities are very good at making memes that spread quickly.
Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.
They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.
Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.
It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.
The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.
That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.
For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
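To make that analogy concrete, here’s a toy sketch (my own illustration, not the researchers’ method) that treats 64-bit pHash-like genomes as evolving under mutation, reproduction, and selection toward a hypothetical ‘successful meme’ target:

```python
import random

random.seed(0)  # reproducible run

TARGET = random.getrandbits(64)  # stand-in for a "fit" meme's pHash

def fitness(genome):
    """Bits shared with the target; 64 means an exact match."""
    return 64 - bin(genome ^ TARGET).count("1")

def mutate(genome, rate=0.02):
    """Reproduction with variation: flip each bit with probability `rate`."""
    for i in range(64):
        if random.random() < rate:
            genome ^= 1 << i
    return genome

# Start from random "memes", then iterate selection + reproduction.
population = [random.getrandbits(64) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                 # selection: fitter half lives on
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
print(fitness(best))  # well above the ~32 bits a random genome matches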
As the article states, right now it’s humans creating these memes. However, it won’t be long until we have machines doing this automatically. After all, it’s been five years since the controversy about the algorithmically-created “Keep Calm and…” t-shirts for sale on Amazon.
It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).
Source: MIT Technology Review