In my TEDx talk six years ago, I explained how the understanding and remixing of memes was a great way to develop digital literacies. At that time, they were beginning to be used in advertisements. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.
This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.
Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. This uses an algorithm to convert an image into a compact numerical fingerprint. Visually similar images produce similar fingerprints, or pHashes, so similarity can be measured by comparing the two fingerprints directly.
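To make the idea concrete, here is a toy sketch. Real pHash implementations apply a discrete cosine transform to a scaled-down greyscale image; the simpler "average hash" variant below (my own illustration, not the researchers' code) captures the same principle: reduce the image to a short bit string, then compare bit strings by Hamming distance.

```python
def average_hash(pixels):
    """Reduce an 8x8 greyscale image to a 64-bit fingerprint.

    Each bit records whether that pixel is brighter than the image's
    mean brightness -- a simplified stand-in for the DCT-based pHash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits: a small distance means visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# A smooth gradient image, a slightly brightened copy of it,
# and an unrelated checkered pattern (all invented test data).
gradient = [[32 * r + 4 * c for c in range(8)] for r in range(8)]
brighter = [[min(255, v + 10) for v in row] for row in gradient]
unrelated = [[(r ^ c) * 36 for c in range(8)] for r in range(8)]

print(hamming(average_hash(gradient), average_hash(brighter)))   # small
print(hamming(average_hash(gradient), average_hash(unrelated)))  # large
```

Because the hash thresholds each pixel against the image's own mean, uniformly brightening an image barely changes its fingerprint, which is exactly the robustness that makes this technique useful for tracking meme variants.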
The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subreddit The_Donald, Twitter, 4chan's "Politically Incorrect" board known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.
Whereas some things 'go viral' by accident and catch the original author(s) off guard, some communities are very good at making memes that spread quickly.
Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.
They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.
Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.
It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.
The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.
That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.
For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
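As a purely hypothetical illustration of that process (not anything the researchers built), a minimal genetic algorithm over 64-bit hash "genomes" might look like the sketch below. A real meme's "fitness" would be how widely it spreads; here, as a stand-in, fitness is simply similarity to an arbitrary target bit string.

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.02, generations=200, seed=1):
    """Evolve bit-string 'genomes' toward a target via mutation,
    crossover, and selection -- a toy model, with fitness standing in
    for how widely a meme variant spreads."""
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break
        parents = pop[:pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # crossover: splice two parents
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)             # mutation: rare random bit flips
        pop = parents + children
    return max(pop, key=fitness)
```

The point of the sketch is the one the researchers make: continuously producing variants and letting the environment select among them reliably finds "offspring" that fit that environment well.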
As the article states, right now it's humans creating these memes. However, it won't be long until we have machines doing this automatically. After all, it's been five years since the controversy about the algorithmically created "Keep Calm and…" t-shirts for sale on Amazon.
It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).
Source: MIT Technology Review