Tag: politics

Where memes come from

In my TEDx talk six years ago, I explained how understanding and remixing memes is a great way to develop digital literacies. At that time, memes were just beginning to appear in advertising. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.

This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.

Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. An algorithm converts an image into a compact numerical fingerprint; visually similar images produce similar fingerprints, so similarity can be measured as the distance between their pHashes.
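To make this concrete, here’s a toy sketch of average hashing (aHash), a simpler cousin of the pHash technique the researchers used (a true pHash works on frequency components via a discrete cosine transform, which I’ve skipped). The small grayscale grids below are stand-ins for real, downscaled pixel data:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance means visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 4x4 'images' and one very different one.
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 28, 218],
            [11, 198, 31, 221]]
variant  = [[12, 202, 29, 219],   # same image, slightly re-encoded
            [14, 208, 26, 216],
            [13, 204, 27, 217],
            [10, 199, 30, 220]]
inverted = [[255 - p for p in row] for row in original]

h_orig, h_var, h_inv = map(average_hash, (original, variant, inverted))
print(hamming(h_orig, h_var))   # 0: the re-encoded variant hashes identically
print(hamming(h_orig, h_inv))   # 16: every bit flips for the inverted image
```

This is why the technique scales to 100 million images: each image is reduced once to a short bitstring, and comparing any two images is then just a cheap bit-count.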

The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subreddit The_Donald, Twitter, 4chan’s politically incorrect forum known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.

Whereas some things ‘go viral’ by accident and catch the original author(s) off-guard, some communities are very good at making memes that spread quickly.

Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.

They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.

Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.

It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.

The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.

That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.

For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
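Nothing in the paper actually implements this, but a toy sketch makes the analogy concrete: treat a meme’s hash as a bitstring ‘genome’ and let a population of variants evolve through mutation and selection. The fitness function here is entirely hypothetical, a stand-in for whatever a community happens to reward:

```python
import random

random.seed(42)

# A stand-in for the hash of a hypothetically 'fit' meme in some community.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4

def fitness(genome):
    """Hypothetical fitness: how many bits match what the 'environment'
    (a community's tastes) rewards."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with small probability, like a remix tweaking a meme."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random variants; repeatedly select the fittest and remix them.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = [mutate(random.choice(survivors))   # reproduction + mutation
                  for _ in range(50)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))  # converges towards a full match
```

The point of the sketch is the researchers’ observation in miniature: a community that continuously produces many variants will, sooner or later, stumble on one that fits its environment.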

As the article states, right now it’s humans creating these memes. However, it won’t be long until we have machines doing this automatically. After all, it’s been five years since the controversy about the algorithmically generated “Keep Calm and…” t-shirts for sale on Amazon.

It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).

Source: MIT Technology Review

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor—together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks’ octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.


A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.


A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.


Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As Adam Greenfield points out (I quote him in Microcast #021, supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to the problems posed by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.


We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It’s a difficult balance. However, just as GDPR has put in place mechanisms to prevent the over-reaching of governments and of companies, I think we could think differently about perhaps organisations with non-profit status and community ownership that could provide some of the infrastructure being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it’s currently doing, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

Living in a dictatorship

The historian and social commentator in me found this fascinating. This article quotes a Twitter thread by G. Willow Wilson, who has lived in a dictatorship:

It’s a mistake to think a dictatorship feels intrinsically different on a day-to-day basis than a democracy does. I’ve lived in one dictatorship and visited several others—there are still movies and work and school and shopping and memes and holidays.

The difference is the steady disappearance of dissent from the public sphere. Anti-regime bloggers disappear. Dissident political parties are declared “illegal.” Certain books vanish from the libraries.

If you click through to the actual Twitter thread, Wilson continues:

The genius of a true, functioning dictatorship is the way it carefully titrates justice. Once in awhile it will allow a sound judicial decision or critical op-ed to bubble up. Rational discourse is never entirely absent. There is plausible deniability.

Of course this isn’t a dictatorship. It’s only a temporary state of affairs. And we’re doing it for your benefit:

So if you’re waiting for the grand moment when the scales tip and we are no longer a functioning democracy, you needn’t bother. It’ll be much more subtle than that. It’ll be more of the president ignoring laws passed by congress. It’ll be more demonizing of the press.

That’s what concerns me when people say that they don’t care about privacy and security. Technology can help with resistance to autocracy.

Source: Kottke.org

Tribal politics in social networks

I’ve started buying the Financial Times Weekend along with The Observer each Sunday. Annoyingly, while the latter doesn’t have a paywall, the FT does, which means that although I can quote from, and link to, this article by Simon Kuper about tribal politics, many of you won’t be able to read it in full.

Kuper makes the point that in a world of temporary jobs, ‘broken’ families, and declining church attendance, social networks provide a place where people can find their ‘tribe’:

Online, each tribe inhabits its own filter bubble of partisan news. To blame this only on Facebook is unfair. If people wanted a range of views, they could install both rightwing and leftwing feeds on their Facebook pages — The Daily Telegraph and The Guardian, say. Most people choose not to, partly because they like living in their tribe. It makes them feel less lonely.

There’s a lot to agree with in this article. I think we can blame people for getting their news mainly through Facebook. I think we can roll our eyes at people who don’t think carefully about their information environment.

On the other hand, social networks are mediated by technology. And technology is never neutral. For example, Facebook has gone from saying that it couldn’t possibly be blamed for ‘fake news’ (2016) to investigating the way that Russian accounts may have manipulated users (2017) to announcing that they’re going to make some changes (2018, NSFW language in link).

We need to zoom out from specific problems in our society to the wider issues that underpin them. Kuper does this to some extent in this article, but the FT isn’t the place where you’ll see a robust criticism of the problems with capitalism. Social networks can be, and have been, different — just think of what Twitter was like before it became a publicly-traded company, for example.

My concern is that we need to sort out these huge, society-changing companies before they become too large to regulate.

Source: FT Weekend