Every billionaire really is a policy failure

A closeup of a US hundred dollar bill (Benjamin Franklin side).

I don’t really understand people who look at billionaires as anything other than an aberration of the system. They are not, in any way, people to be looked up to, imitated, or praised.

What probably makes it easier for me is that I see pretty much every form of hierarchical organisation-for-profit as something to be avoided. The CEO who employs downward pressure on wages, resists unionisation, and enjoys the fruits of other people’s labour, is merely different in terms of scale.

If multi-millionaires exist outside the normal cycle of everyday life, billionaires certainly do. That alone makes them spectacularly unfit to be anywhere near the levers of power, to dictate economic policy, or to make pronouncements that anyone in their right mind should listen to.

It’s a mind-bogglingly large sum of money, so let’s try to make it meaningful in day-to-day terms. If someone gave you $1,000 every single day and you didn’t spend a cent, it would take you three years to save up a million dollars. If you wanted to save a billion, you’d be waiting around 2,740 years… All this shows how the personal wealth of billionaires cannot be made through hard work alone. The accumulation of extreme wealth depends on other systems, such as exploitative labor practices, tax breaks, and loopholes that are beyond the reach of most ordinary people.
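For what it's worth, the arithmetic checks out. Here's a quick back-of-the-envelope sketch (mine, not Teen Vogue's):

```python
# Back-of-the-envelope check of the $1,000-a-day arithmetic above.
DAILY_SAVING = 1_000

def years_to_save(target, daily=DAILY_SAVING, days_per_year=365):
    """How many years of saving `daily` dollars per day reaches `target`."""
    return target / daily / days_per_year

print(f"$1 million:  {years_to_save(1_000_000):.1f} years")       # ~2.7
print(f"$1 billion:  {years_to_save(1_000_000_000):,.0f} years")  # ~2,740
```

A million is about a thousand days away; a billion is a thousand times further.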

[…]

The notion that a billionaire has worked hard for every penny of their wealth is simply fanciful. The median U.S. salary is $34,612, but even if you tripled that and saved every penny for a lifetime, you still wouldn’t accumulate anywhere close to a billion dollars. Here, it’s also worth looking at Oxfam’s extensive study on extreme wealth, which found that approximately one-third of global billionaire fortunes were inherited. It’s not about working harder, smarter, or better. There are many factors built into our economic system that help extreme wealth to multiply fast. It’s a matter of being well-placed to benefit from the structures that favor capital and produce a profit off the back of exploitation.

[…]

Jeff Bezos could give every single one of his 876,000 employees a $105,000 bonus and he’d still be as rich as he was at the start of the pandemic.

[…]

It’s true that the billionaire class creates jobs and that wages have the potential to drive the economy, but that argument falters when workers barely have enough to survive. The potential to generate tax dollars from billion-dollar profits is enormous. Oxfam found that if the world’s richest 1% paid just 0.5% more in tax, we could educate all 262 million children who are currently out of school and provide health care to save the lives of 3.3 million. But given generous tax cuts and easily exploitable loopholes like the ability to register wealth in offshore tax havens, this rarely comes to pass.

[…]

Some favor the adoption of universal social security measures, paid for via progressive taxes. It’s been argued that Universal Basic Income, Guaranteed Minimum Income, and Universal Basic Services could aid prosperity in a world grappling with growing populations, societal aging, and climate breakdown. Piecemeal proposals are not enough to remedy a crisis of poverty in the midst of plenty. And a fair world would not further the acceleration of either.

Source: Teen Vogue

Image: Adam Nir

When everything is automated in an information vacuum, conspiracies abound

Man sitting on wall wearing a face mask with his arm resting on an Uber Eats delivery bag

I think it’s important to pay attention to what’s happening in the so-called “gig economy” as it’s effectively what capitalists would do to all of us if they could get away with it. In this case, The Guardian looks at couriers working for apps such as Uber Eats, Just Eat and Deliveroo.

Sure enough, the couriers have no real idea what’s going on in terms of allocation of work. So they turn to workarounds and conspiracy theories. I can’t imagine this being good for anyone’s mental health.

The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?

“We can never work out the algorithm,” one of the drivers says, requesting anonymity for fear of losing work. They wonder if the app ignores them if they’ve done a few jobs already that hour, and experiment with standing inside the restaurant, on the pavement or in the car park to see if subtle shifts in geolocation matter.

“It’s an absolute nightmare,” says the driver, adding that he permanently lost access to one of the platforms over a wait of “max five minutes” in getting to a restaurant while he finished another job for a different app. Sometimes he gets logged out for a couple of hours because his beard has grown, confusing the facial recognition software.

“It’s not at all like being an employee,” he says. He is regularly frustrated by having to challenge what appeared to be a shortfall in pay per job – sometimes just 10p, but at other times a few pounds. “There’s nobody you can talk to. Everything is automated.”

[…]

“Every worker should understand the basis on which they are paid,” said [James] Farrar, who has a lot of experience with gig economy apps. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people.

“You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

Source: The Guardian

Image: Sargis Chilingaryan

Monetising our own attention

Stock price chart

It has been A Week. So I’ve only just caught up with Jay Springett’s weeknote from last week, in which he talks about the $TRUMP memecoin. Money hasn’t had any intrinsic value since the major currencies left the gold standard decades ago. Memecoins are like cryptocurrencies on steroids.

TRUMP Coin has sort of got lost in the noise in UK media due to the TikTok shutdown. But it’s way way way more insane, and way more significant news. In the last 48 hours Trump’s net worth increased by FIFTY BILLION (ILLIQUID) DOLLARS. Just days before he becomes president.

Jay quotes himself about real time attention markets and ‘economic entertainment’. It’s fascinating, especially if you read books like Clay Shirky’s Here Comes Everybody back in the day:

The rise of real time attention markets, economic entertainment, prediction markets (and the coming era of Power Fandoms) are a kind of revenge of late 90’s early 00’s Utopianism. The idea of cognitive surplus. We’re starting to see the kinds of swarm/group intelligences predicted by Shirky / Tapscott – but distorted through contemporary capitalism’s relentless logic. It took super liquid markets and meme coins for them to emerge.

He and some others have been discussing what all this means, which led to a post by RM that channels the TV show Black Mirror. Even reading about this kind of stuff makes me feel about a million years old:

Personally, seeing your “value” as a volatile ticker must be truly psychologically draining. Imagine scaling that to a presidency. One day, your market cap soars; the next, an unpopular move collapses the coin. It’s like living in a Black Mirror episode where “market cap” equals self-worth and “24h volume” measures relevance.

[…]

Is this the world’s most ingenious social experiment, rewriting power, brand, and money dynamics? Or an accidental time bomb threatening presidential credibility? Unlike stocks reacting to politics, this directly monetizes an individual’s persona, allowing real-time buying and selling of reputation.

What does all of this mean in practice? I have no idea.

Source: thejaymo

Image: Maxim Hopman

Action stopping short of introducing compulsory national ID cards

Person holding black phone

It sounds like the UK government is preparing to bring in a dedicated app, initially for digital driving licences — as is happening elsewhere in the world — but eventually for everything from tax payments to benefit claims and reminding people what their National Insurance number is.

This is a fascinating area for me, for a few reasons. First, the technology mentioned (“allowing users to hide their addresses in certain situations”) makes me think this is very likely to be based on the Verifiable Credentials standard. This is the same standard that Open Badges, which I’ve been working on now for 14 years, is based on.

Second, there’s a huge resistance in this country to the idea of ID cards. That means initiatives such as this can aim for the kind of utility which ID cards would provide, but have to be presented in a way that is not ‘ID card-like’. Perhaps an app that focuses on providing immediate value in several areas will help with this.

Third, and finally, I’m delighted that it seems that the GOV.UK team which will be behind this have decided not to go with a solution based on Google/Apple wallets. It would have been a terrible decision to do that, akin to handing over the keys to the digital kingdom to non-state actors.

The virtual wallet is understood to have security measures similar to many banking apps, and only owners of respective licences will be able to access it through inbuilt security features in smartphones, such as biometrics and multi-factor authentication.

The voluntary digital option is to be introduced later this year, according to the Times. Possible features include allowing users to hide their addresses in certain situations, such as in bars or shops, and using virtual licences for age verification at supermarket self-checkouts.

The government is said to be considering integrating other services into the app, such as tax payments, benefits claims and other forms of identification such as national insurance numbers, but will stop short of introducing compulsory national ID cards, which were pushed for by former prime minister Tony Blair and William Hague.

Source: The Guardian

Image: Robin Worrall

At least until we’re dead, education’s purpose is to help us survive and thrive, not just get a job

A glass sphere on a log

Next time someone even suggests that education is merely the means of eventually finding ‘employment’ I’m just going to 301 redirect them to this magnificent rant by my extraordinarily talented colleague, Laura Hilliger.

I will be brief because some of my readers are not here for educational philosophy. For decades many in my network have championed actual education, the long-stretch goal of which is essentially self-actualisation. This is a term popularised by Maslow, but even Aristotle was pontificating about our human states of becoming. Education is, briefly, not only acquiring skills but realising our free will, potential and unique unicorn properties so that we can survive the shitshow that is existence. At least until we’re dead, education’s purpose is to help us survive and thrive, not just get a job.

In society, education is both contrasted and conflated with other terms like learning, training or skill development. The field is semantically messy, and at the end of the day many don’t care about actual education. For society writ-large, the purpose of education is not self-actualisation, but rather compliance, conformance and control. I’m not talking about educators, you fluffy, beautiful bandits of resistance leaders, I’m talking about the systems around and through which people have access to education. Learning to learn, being intellectually curious, bravely looking the human condition in the face – these are not economically responsible endeavours. Thus, they have traditionally been reserved for the privileged (and the possessed).

Source: Freshly Brewed Thoughts

Image: Look Up Look Down Photography

The time to prepare is now

Repeating image of four skulls with increasing doubling, blurring, ghosting, pixelation, and horizontal glitching.

Matt Webb thinks that countries need to be thinking about building a ‘strategic fact reserve’. It’s an interesting proposition but also… how has it come to this?!

[I]f I were to rank AI (not today’s AI but once it is fully developed and integrated) I’d say it’s probably not as critical a piece of infrastructure or capacity as energy, food or an education system.

But it’s probably on par with GPS. Which underpins everything from logistics to automating train announcements to retail.

[…]

I think we’re all assuming that the Internet Archive will remain available as raw feedstock, that Wikipedia will remain as a trusted source of facts to steer it; that there won’t be a shift in copyright law that makes it impossible to mulch books into matrices, and that governments will allow all of this data to cross borders once AI becomes part of national security.

Everything I’ve said is super low likelihood, but the difficulty with training data is that you can’t spend your way out of the problem in the future. The time to prepare is now.

[…]

Probably the best way to start is to take a snapshot of the internet and keep it somewhere really safe. We can sift through it later; the world’s data will never be more available or less contaminated than it is today. Like when GitHub stored all public code in an Arctic vault (02/02/2020): a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. Or the Svalbard Global Seed Vault.

But actually I think this is a job for librarians and archivists.

Source: Interconnected

Image: Kathryn Conrad

A vector for deciding who is disposable

A bird sitting on top of a dirt hill

I grew up under a government led by Margaret Thatcher. Thatcherism meant a rejection of solidarity, the welfare state, and unions, and a belief in neoliberalism, austerity, and British nationalism. It was an absolute breath of fresh air, therefore, when in 1997, as a 16 year old, I witnessed ‘New’ Labour sweeping to victory in the General Election.

What followed was revolutionary, at least in the place I grew up: Sure Start centres, investment in public services, and a real sense of togetherness throughout society. Labour lost power 15 years ago, and the period of Tory rule up to the middle of last year introduced Austerity 2.0, the polarisation of society, and chronic underfunding of the NHS and other essential services.

It’s surprising, therefore, that the first six months of Keir Starmer’s Labour government haven’t felt like much of a change from the Tory status quo. Perhaps the most obvious example of this is the recent announcement that AI will be ‘mainlined into the veins’ of the UK, using rhetoric one would expect from the right wing of politics. As I saw one person on social media put it, this would have been very different had Starmer and co been seeking the support of the TUC and the Joseph Rowntree Foundation.

I’ve been listening to Helen Beetham’s new podcast in which she interviews Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. It’s not one of those episodes where you can be casually doing something else and half-listening, which is why I haven’t finished it yet. It has, however, prompted me to explore Dan’s blog, which is where I came across this post on ‘AI as Algorithmic Thatcherism’, written in late 2023.

It’s extraordinarily disingenuous for the government to claim that the proposed move will ‘create jobs’, as the explicit goal of ‘efficiency’ is to remove bottlenecks. Those bottlenecks are usually human-shaped. Maybe we should stop speedrunning towards dystopia? We need to prepare for post-capitalism; it’s just a shame that our government is doubling down on hypercapitalism.

One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI’s shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn’t to “support” teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.

Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn’t provide insights as it’s just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

[…]

Shouldn’t we be resisting this gigantic, carbon emitting version of automated Thatcherism before it’s allowed to trash our remaining public services? It might be tempting to wait for a Labour victory at the next election; after all, they claim to back workplace protections and the social contract. Unfortunately they aren’t likely to restrain AI; if anything, the opposite. Under the malign influence of true believers like the Tony Blair Institute, whose vision for AI is a kind of global technocratic regime change, Labour is putting its weight behind AI as an engine of regeneration. It looks like stopping the megamachine is going to be down to ordinary workers and communities. Where is Ned Ludd when you need him?

Source: danmcquillan.org

Image: Mike Newbry

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures

Traffic cone in long grass

You may have noticed that nostalgia is, well, a vibe at the moment. Why is that? Because the present kinda sucks. Why does it suck? Because we live in completely unequal societies, increasingly ruled by demagogues.

Umair Haque, who was omnipresent on Medium pre-pandemic, now seems to have his own Ghost-powered publication and has written about post-capitalism. It’s long, with short paragraphs and lots of italicising. But he knows what he’s talking about.

I’ve excerpted the key points, but I’d recommend clicking through and looking at the bullet-point list of things he suggests reorientating one’s life and career towards. It was pretty reaffirming for me, in a January without quite enough work on, to know that getting a corporate job isn’t really a long-term solution.

The idea of late capitalism means all that. It means that people are immiserated, exploited, ruined, left desperate. That inequality soars. That there’s no future. That societies lose hope. But instead of coming together and having some kind of constructive revolution, and here we don’t have to agree with Marx, they have a fascist meltdown, which I think we can all agree is a Bad Thing.

People turn on one another. Societies shut down. Companies turn ultra-predatory. Cronyism runs rampant. Economies slide into depression. And instead of some form of positive collective action, the answer to all this tends to be conflict, and maybe even World War.

That’s late capitalism. It’s not just “this is dystopia” or “everything sucks” or even “I’m exploited to the bone.” It has that historical meaning, the very specific one: instead of doing anything positive, making wise decisions, people turn regressive, lose their thinking minds, turn on each other, and instead of the sort of class war Marx envisioned, turn to demagogues who end up starting very real ones instead.

[…]

If you’re middle aged, I’d bet that the above is already beginning to happen to you. You’re being forced out, at least if you’re in a corporate career. Every mistake isn’t just “I could lose the promotion,” it turned into “I could lose this job,” and now it’s, “that’s the end of my career, because I’ll never find another one.”

Understand that and face it. It is true. This trend of forcing middle aged people out—no matter what their accomplishments are—is here to stay now. It is never going away. This is what the “job market” is and will be for the rest of our lives, and probably beyond, because what did we learn earlier? Late capitalism recurs. It isn’t even a “stage,” as Marx’s descendants thought, but something more like a chronic condition. And we, unfortunately, have it.

[…]

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures. They might not know it yet. Their despair and bewilderment is a reflection of how little this guiding principle is discussed, understood, or talked about. That doesn’t mean that they all have to go out and be activists or revolutionaries, lol, not at all, we just discussed how being a creator is something that’s post-capitalist.

[…]

What does it mean to “be a post-capitalist"? Many of us are starting to find out. It means running a network, community, organization, thingie, maybe a business, in certain dimensions but not along strictly profit-maximizing capitalist lines, but more humanistic ones, in a sense, and that’s not a bad thing, when you think about it.

Source: the issue.

Image: Kevin Jarrett

One of the most disconnecting forces is our expectations of how others should be

A man sitting at a table talking to a woman

Years ago, I read The Art of Travel by Alain de Botton. It was my first introduction to Seneca’s observation that you can travel, but you can’t escape yourself.

This article by Philippa Perry — whose books How to Stay Sane and The Book You Wish Your Parents Had Read (and Your Children Will Be Glad That You Did) I’d highly recommend — points out that many of our problems stem from (how we conceptualise) our relationships with others.

Often, we believe the solution to our problems lies outside ourselves, believing that if we leave the job, the relationship, everything will be fine. Of course, that can sometimes be true and it’s important to be alert to situations which are truly damaging. But the path towards feeling more connected to others usually starts from within. We must examine how we talk to ourselves, uncover the covert beliefs we live by, and confront the darker aspects of our psyche. One of the most disconnecting forces is our expectations of how others should be – but learning to accept people and things we cannot change can help us become more sanguine.

Source: The Guardian

A certain brand of artistic criticism and commentary has become surprisingly rare

A skeleton, presumably representing Death, lifting his cloak to show some people a rainbow on a screen

Good stuff from Erik Hoel about, effectively, the need for more cultural criticism around the use of technology in society. Any article that appropriately quotes Neil Postman is alright by me, and the art (included here) from Alexander Naughton which accompanies the article? Wow.

[L]ately some decisions have been explicitly boundary-pushing in a shameless “Let’s speedrun to a bad outcome” way. I think most people would share the worry that a world where social media reactivity stems mainly from bots represents a step toward dystopia, a last severing of a social life that has already moved online. So news of these sorts of plans has come across to me about as sympathetically as someone putting on their monocle and practicing their Dr. Evil laugh in public.

Why the change? Why, especially, the brazenness?

Admittedly, any answer to this question will ignore some set of contributing causal factors. Here in the early days of the AI revolution, we suddenly have a bunch of new dimensions along which to move toward a dystopia, which means people are already fiddling with the sliders. That alone accounts for some of it.

But I think a major contributing cause is a more nebulous cultural reason, one outside tech itself, in that a certain brand of artistic criticism and commentary has become surprisingly rare. In the 20th century a mainstay of satire was skewering greedy corporate overreach, a theme that cropped up across different media and genres, from film to fiction. Many older examples are, well, obvious.

Source: The Intrinsic Perspective

The feedback has to be orders of magnitude faster than the situation being controlled

A roller coaster lit up at night with red lights

Tom Watson wrote up a workshop he ran on organisational resilience recently, quoting and linking to one of Roger Swannell’s weeknotes about feedback loops. The full quotation from Swannell, taken from his blog, reads:

One of the insights I found interesting is that for feedback loops to work effectively, the feedback has to be orders of magnitude faster than the situation being controlled. So, if we’re shipping fortnightly, then the feedback would have to be hourly in order for us to have any sense of what effect we’re having. In practice, it’s usually the other way round and feedback is much slower than the situation.
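Swannell's point can be illustrated with a toy simulation (my sketch, not anything from his post): a simple corrective process that acts on up-to-date readings settles on its target, while the identical process acting on stale readings keeps correcting past the target and oscillates.

```python
def run(delay, steps=40, gain=0.5, target=100.0):
    """Nudge a value toward `target` each step, but base each
    correction on a reading that is `delay` steps old."""
    history = [0.0]
    for _ in range(steps):
        observed = history[max(0, len(history) - 1 - delay)]  # stale reading
        history.append(history[-1] + gain * (target - observed))
    return history

fast = run(delay=0)  # feedback as fast as the changes: settles near 100
slow = run(delay=6)  # feedback lagging well behind: overshoots and oscillates
```

With `delay=0` the value converges smoothly; with `delay=6` it shoots past 100 and swings ever more wildly, which is exactly the "feedback slower than the situation" failure mode.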

Watson goes on to discuss this in terms of organisational resilience, mapping single-loop (Are we doing things right?), double-loop (Are we doing the right things?), and triple-loop learning (How do we decide what’s right?) onto the “Anticipate, Prepare, Respond, Adapt” approach.

Interestingly, the three things he suggests to help build organisational resilience (continual monitoring, open working, monthly reflections) are things central to our co-op:

[H]ere’s a couple of ideas to try that can help move our approach to learning forward.

  1. Make sure your monitoring and metrics allow you to answer the question “Are we doing things right?” in a timely manner. Short timeframes are generally better. Align to any decisions you need to make.

  2. Embrace open working - devote 20-30 minutes a week to allow you and your team to reflect on what is going well, what isn’t, what is challenging, what people are seeing.

  3. Put in monthly/quarterly sessions - maybe an hour where you explore the question “Why do we do it this way?” on a specific topic as a team. Use the weeknotes to start the culture of open reflection, use them to identify common topics that might be coming up.

Doing these 3 things will move you from being only in the Response phase, into anticipate and prepare phases. Or if you prefer from single to double loop learning.

Sources: Tomcw.xyz / Roger Swannell

Image: Aleksandr Popov

Who wants to have to speak the language of search engines to find what you need?

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.

It’s about a decade since I gave up on Google search. While I use Google services extensively for work and other areas of my life, search and personal email are not two of them. Instead, I use DuckDuckGo and, more recently, Perplexity Pro.

The latter is excellent, bypassing advertising and paid placements, acting as a natural language search agent for synthesising information. I tend to use it for information that would take several searches. Yesterday, for example, I gave it the following query: “I need a tool that can automatically take screenshots of a web page and then stitch them together. It should then make an animated gif, scrolling through the page from top to bottom. The website requires a login, so ideally it should be a Chrome browser extension.” It gave me several options, approaching my request from multiple angles as there wasn’t a solution that did exactly what I needed.

Although this article in MIT Technology Review mentions Perplexity, it weirdly focuses mainly on Google and OpenAI. There’s no mention that you can choose between LLMs in Perplexity (I use Claude 3.5 Haiku) and the two issues it raises are copyright and hallucinations, rather than sustainability and privacy. Claude 3.5 Haiku is one of the lighter weight models when it comes to environmental impact, but it still consumes a lot more energy (and water, to cool the data centres) than a single DuckDuckGo search.

And then, when it comes to privacy, while it’s great that an LLM can personalise results based on what it already knows about you, there’s an amount of trust there that I’m increasingly wary of giving to companies like OpenAI. I cancelled and then resubscribed to ChatGPT last week. I’m not sure how long I can stomach the Sam Altman circus.

Ultimately, agentic search, where you ask a question in natural language and it shows you the sources it used to synthesise the answer, is the future. Perplexity seems pretty fair in this regard, pulling in my colleague Laura’s post as part of a response about the way that technology has shifted power over the last century. For me, this kind of thing is even more of a reason to work in the open.

There’s a critical digital literacies issue here, one that’s hinted at in the last paragraph of the article (included below) and discussed in Helen Beetham’s podcast episode with Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. When “the answer” is presented to you, there’s less incentive to do the work of finding your own interpretation. I think that is definitely a risk. Although, given that the internet is a giant justification machine already, I’m not entirely sure it will necessarily make things worse — just perhaps make people a bit lazier.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

[…]

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

Source: MIT Technology Review

Image: Kathryn Conrad

AI slop as engagement bait

gray bucket on wooden table

A couple of months ago, I wrote a short post trying to define ‘AI slop’. It was the kind of post I write so that I can, myself, link back to something in passing as I write about related issues. It made me smile, therefore, that the (self-proclaimed) “world’s only lovable tech journalist” Mike Elgan included a link to it in a recent Computerworld article.

I’m surprised he didn’t link to the Wikipedia article on the subject, but then the reason I felt that I needed to write my post was that I didn’t feel the definitions there sufficed. I could have edited the article, but Wikipedia doesn’t include original content, and so I would have had to find a better definition instead of writing my own.

The interesting thing now is that I could potentially edit the Wikipedia article and include my definition because it’s been cited in Computerworld. But although I’ve got editing various pages of the world’s largest online encyclopedia on my long-list of things to do, the reality is that I can’t be doing with the politics. Especially at the moment.

According to Meta, the future of human connection is basically humans connecting with AI.

[…]

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed.

[…]

All this intentional AI fakery takes place on platforms where the biggest and most harmful quality is arguably bottomless pools of spammy AI slop generated by users without content-creation help from Meta.

The genre uses bad AI-generated, often-bizarre images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example).

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell.

Source: Computerworld

Image: pepe nero

LinkedIn has become a hellish waiting room giving off Beetlejuice vibes

The word 'LinkedIn' in white letters on a black background

I hate LinkedIn. I hate the performativity, the way it makes me feel unworthy, as if I’m not enough. I hate the way it promotes a particular mindset and approach to the world which does not mesh with my values. I also hate the fact that you can’t be unduly critical of the platform itself (try it: “something is wrong, try again later”, the pop-up message insists).

This post by Amy Santee, which I discovered via a link from Matt Jukes, lists a lot of things wrong with the platform. I’ve quit LinkedIn before, but then felt like I needed to return a decade ago when becoming a consultant. And that’s the reason I stay: with the demise of Twitter, the only reliable way I can get in touch with the remnants of my professional community is through LinkedIn.

It really sucks. I appreciate what Santee suggests in terms of connecting with people via a newsletter, but that feels too broadcast-like for me. I crave community, not self-serving replies on ‘content’.

The mass tech layoffs of 2022-2024 have resulted in an explosion of people looking for work in an awful market where there just aren’t enough jobs in some fields like recruiting, product design, user research, and even engineering and game development (the latter of which are faring better).

As a result, LinkedIn has become a hellish waiting room giving off Beetlejuice vibes, where unfortunate souls are virtually required to spend inordinate amounts of time scavenging for jobs, filling out redundant applications, performing professionalism and feigning excitement in their posts, and bootlicking the companies that laid them off.

[…]

We labor and post and connect and cross our fingers in desperation, getting sucked into the noise, customizing our feeds (this makes me imagine cows at a trough), scrolling and searching and DMing, trying to beat the algorithm (just post a selfie!) or do something to stand out, all with the hopes of obtaining a prized golden ticket to participate in capitalism for a damn paycheck. We feel bad about ourselves and want to give up when someone else somehow gets a job. We may joke about our own demise. We share that we are about to become homeless or that we’re skipping meals. We express our anger at the system, and we’re more aware than ever before of other people’s suffering at the hands of this system.

[…]

The way I experience the algorithm is that it seems to randomly decide whether or not my posts are worth showing other people, or at least it feels that way because I don’t understand how it works. Linkedin definitely isn’t forthright about it. In its current form, the algorithm can be prohibitive for getting my ideas out there, having conversations, sharing my podcast episodes and blog posts, getting people to attend my events, and doing any of the stuff I used to enjoy about this place.

[…]

The execs and shareholders of LinkedIn (acquired by Microsoft in 2016) are the primary beneficiaries in all of this, and they will do anything to keep their monopolistic grip on our time, our lives, and our data (we are the product, too). This is all on purpose. LinkedIn continues to win big from the explosion in user activity, ad revenue, subscriptions, job posting fees, unpaid AI training via “Top Voice” user content, and the gobs of our data we gift them, in exchange for the displeasure of being linked in until hopefully something else comes around.

Source: The Jaw Breaker Weekly-ish

Image: Kim Menikh

Must-reads for sports fans

Composite image of Lionel Messi from an article in The Athletic

One of the things that I spend a lot of my time doing every week is watching football (soccer). Yet I don’t write about it anywhere. Whether it’s watching one of our teenagers in matches every weekend, or professionals play in stadiums or on TV, there’s a reason it’s called “the beautiful game”.

As a sucker for all things Adidas, I accrue a number of points each year in their ‘Adiclub’ members area which are of a “use it or lose it” nature. Having heard good things about The Athletic, a publication created by two former Strava employees who sold it to The New York Times in 2022, I exchanged some points for a year’s subscription. I have to say I’m already hooked.

The depth is staggering and the use of images fantastic. Here, for example, are articles on the issues around redeveloping Newcastle United’s stadium, the reason that Trent Alexander-Arnold had such a poor game against Manchester United, and a wonderful article (which I sent to my daughter) about the art of ‘scanning’ for midfield players. The use of gifs in the latter is 👌

I realise that this reads like a sponsored post, and I’m not a big fan of the editorial cowardice shown by The New York Times, but this is me just pointing out the good stuff. If you consider yourself a sports fan, I’d highly recommend getting yourself access.

Source: The Athletic

Promising Trouble's advice on UK Online Safety Act compliance

Black and purple computer keyboard

The UK’s Online Safety Act is due to come into effect soon (17th March 2025), and everyone seems to be a bit confused about it. For example, I filled in the online self-assessment tool on behalf of one of our clients, who we helped set up an online forum last year. It looks like they’re going to have to carry out an impact assessment.

Rachel Coldicutt has been doing some work, including reaching out to Ofcom, the communications regulator. The best mental model I’ve got for what she’s found is that it’s a bit like GDPR. Except people are even less aware and organised.

For volunteers and small activist organisations, it just becomes yet another layer of bureaucracy to deal with. Although “small low-risk user-to-user services” are defined as those with “fewer than 7 million users”, I can imagine this will have a negative effect on people thinking about setting up, or continuing to run, online community groups.

Five things you need to run a small, low-risk user-to-user service. This is set out in more detail on pages 2-5 of this document and can be summarised as follows:

- an individual accountable for illegal content safety duties and reporting and complaints duties
- a content moderation function to review and assess illegal and suspected illegal content, with swift takedown measures
- an easy-to-find user complaints system and process, backed up by an appropriate process to deal with complaints and appeals, with the exception of manifestly unfounded claims
- easy-to-find, understandable terms and conditions
- the ability to remove accounts for proscribed organisations

Source: Promising Trouble

Image: 𝙂𝙧𝙚𝙜𝙤𝙧𝙮 𝙂𝙖𝙡𝙡𝙚𝙜𝙤𝙨

Bridging Dictionary

Screenshot of the Bridging Dictionary

On the one hand, I really like this new ‘Bridging Dictionary’ from MIT’s Center for Constructive Communication. On the other hand, it kind of presupposes that people on each side of the political spectrum argue in good faith and are interested in the other side’s opinion.

To be honest, it feels like the kind of website we used to see a decade ago, when we first started to see the impact of people getting their news via algorithm-based social media feeds.

The most interesting thing for me, given that I get the majority of my news from centrist and centre-left publications, is seeing which words tend to be used by, for example, Fox News. The equivalent here in the UK, I guess, would be GB News or the Daily Mail.

Welcome to the Bridging Dictionary, a dynamic prototype from MIT’s Center for Constructive Communication that identifies how words common in American political discourse are used differently across the political divide. In addition to contrasting usage by the political left and right, the dictionary identifies some less polarizing–or bridging–alternatives.

Source: BridgingDictionary.org

A feedback loop of nonsense and violence

3D render of a red maze with a blue ball in the middle. The ball can come out of one of two exits: 'True Facts' or 'Fake News'

Unless you’ve been living under a rock for the past few days, you should by now be aware of the news that Meta products, including Facebook and Instagram, will replace teams of content moderators with ‘community notes’.

People on social media seem to think that merely linking to a bad news story and telling their network that “this is bad” is in any way a form of protest or activism. Not using stuff is protest; doing something about Meta’s influence in the world is activism.

Anyway, the best take I’ve seen on this whole thing is, unsurprisingly, from Ryan Broderick, who not only diagnoses what’s happened over the last four years, but predicts what will happen as a result. The only good thing to come of this whole debacle is that there have been some fantastic parody news stories, including this one.

[C]ontent moderation, as we’ve understood it, effectively ended on January 6th, 2021… [T]he way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.

This explains Meta’s pivot to, first, the metaverse, which failed, and, more recently, AI, which hasn’t yet, but will. It explains YouTube’s own doomed embrace of AI and its broader transition into a Netflix competitor, rather than a platform for true user-generated content. Same with Twitter’s willingness to sell to Elon Musk, Google’s enshittification, and, relatedly, Reddit’s recent stagnant googlification. After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.

And after sleepwalking through the Biden administration and doing the bare minimum to avoid any fingers pointed their direction about election interference last year, the companies are now fully giving up. Knowing the incoming Trump administration will not only not care, but will even reward them for it.

[…]

[I]t is also safe to assume that the majority of internet users right now — both ones too young to remember a pre-moderated internet and ones too normie to have used it at the time — do not actually understand what that is going to look and feel like. But I can tell you where this is all headed, though much of this is already happening.

Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular. A world where if Meta does inspire conspiracy theories, race riots, or insurrections, no one will actually notice. Or, at the very least, be so divided on what happened that Meta doesn’t get blamed for it again.

Source: Garbage Day

Image: Hartono Creative Studio

The internet may function not so much as a brainwashing engine but as a justification machine

Illustration of the Edison multipolar dynamo

“Do your own research” is the mantra of the conspiracy theorist. It turns out that if you search for evidence of something on the internet, you’ll find it. Want proof that the earth is flat? There are plenty of nutjob articles, videos, and podcasts for that, as there are for almost anything you can possibly imagine.

This post by Charlie Warzel and Mike Caulfield for The Atlantic, which focuses on the attack on the US Capitol four years ago, is based on this larger observation about the internet as a ‘justification machine’. As an historian, I find it sad that when people refer to the “wider context” of a present-day event, they rarely go back more than a few months — or, at the most, a few years.

For example, I read a fantastic book on the history of Russia over the holiday period which really helped me understand the current invasion of Ukraine. I haven’t seen that mentioned once as part of the news cycle. It’s always on to the next thing, almost always presented through the partisan lens of some flavour of capitalism.

Lately, our independent work has coalesced around a particular shared idea: that misinformation is powerful, not because it changes minds, but because it allows people to maintain their beliefs in light of growing evidence to the contrary. The internet may function not so much as a brainwashing engine but as a justification machine. A rationale is always just a scroll or a click away, and the incentives of the modern attention economy—people are rewarded with engagement and greater influence the more their audience responds to what they’re saying—means that there will always be a rush to provide one. This dynamic plays into a natural tendency that humans have to be evidence foragers, to seek information that supports one’s beliefs or undermines the arguments against them. Finding such information (or large groups of people who eagerly propagate it) has not always been so easy. Evidence foraging might historically have meant digging into a subject, testing arguments, or relying on genuine expertise. That was the foundation on which most of our politics, culture, and arguing was built.

The current internet—a mature ecosystem with widespread access and ease of self-publishing—undoes that.

[…]

Conspiracy theorizing is a deeply ingrained human phenomenon, and January 6 is just one of many crucial moments in American history to get swept up in the paranoid style. But there is a marked difference between this insurrection (where people were presented with mountains of evidence about an event that played out on social media in real time) and, say, the assassination of John F. Kennedy (where the internet did not yet exist and people speculated about the event with relatively little information to go on). Or consider the 9/11 attacks: Some did embrace conspiracy theories similar to those that animated false-flag narratives of January 6. But the adoption of these conspiracy theories was aided not by the hyperspeed of social media but by the slower distribution of early online streaming sites, message boards, email, and torrenting; there were no centralized feeds for people to create and pull narratives from.

The justification machine, in other words, didn’t create this instinct, but it has made the process of erasing cognitive dissonance far more efficient. Our current, fractured media ecosystem works far faster and with less friction than past iterations, providing on-demand evidence for consumers that is more tailored than even the most frenzied cable news broadcasts can offer.

[…]

The justification machine thrives on the breakneck pace of our information environment; the machine is powered by the constant arrival of more news, more evidence. There’s no need to reorganize, reassess. The result is a stuckness, a feeling of being trapped in an eternal present tense.

Source: The Atlantic

Image: British Library

You will always be boring if you can't make your own choices

A hand reaching towards floating abstract shapes and spheres in various colours.

I like this post by Adam Singer as it builds on my last post about increasing one’s serendipity surface, as well as an article I published over a decade ago entitled ‘curate or be curated’. The latter covered some of the same ground as Singer’s post, riffing on the idea of the ‘filter bubble’.

Algorithms are literally everywhere in our lives these days, and coupled with AI we are likely to live templated lives. I’m currently composing this post while listening to music coming out of a speaker driven by the iPod I built a couple of years ago. I’m reading a book that I found in a second-hand bookstore. I hesitate to use the word ‘resistance’ but these are small ways in which I ensure that my world isn’t dictated by someone else’s choices for me.

We’ve never had more freedom, more choices. But in reality, most people are subtly funneled into the same streams, the same pools of ‘socially approved’ culture, cuisine and ideas. Remixes and memes abound, but almost no one shares anything weird, original or different. People wake up, perhaps with ambitions to make unique choices they believe are their own, only to find that the options have been filtered, curated, and ‘tailored to existing tastes’ by algorithms that claim to know them best. This only happens as these algorithms prioritize popularity or even just safe choices over individuality. They don’t lead you down our own path or really care what’s interesting and unknown, they lead us down paths proven profitable, efficient, safe. If you work in a creative sector (and many of us do) you already know how dangerous this is professionally, not to mention spiritually.

Algorithms might make for comfortable consumers, but they cannot produce thoughtful creators, and they are slowly taking your ability to choose from you. You might think you’re choosing, but you never really are. When your ideas, interests, and even daily meals are largely inspired by whatever was already approved, already done, already voted on and liked, you’re only experiencing life as an echo of the masses (or the machines, if personalized based on historic preference). And in this echo chamber, genuine discovery is rare, even radical.

Of course, it’s very easy to live like this, as we live in a society totally biased to pain avoidance and ease (it’s so ingrained much of the medical establishment only treats symptoms, not causes). There’s an unconscious allure in this conformity, a feeling of belonging, of social safety, it’s a warm blanket you aren’t alone in the cosmos. But at what cost? In blending into the mainstream wasteland, you risk losing something deeply human: your impulse to explore, the courage to confront the unfamiliar, the potential to define yourself on your own terms. You don’t get real creativity without courage, and no one has this until they stop looking to the crowd for consensus approval.

Source: Hot Takes

Image: Google Deepmind