A vector for deciding who is disposable

A bird sitting on top of a dirt hill

I grew up under a government led by Margaret Thatcher. Thatcherism was a rejection of solidarity, the welfare state, and unions, coupled with a belief in neoliberalism, austerity, and British nationalism. It was an absolute breath of fresh air, therefore, when in 1997, as a 16-year-old, I witnessed ‘New’ Labour sweeping to victory in the General Election.

What followed was revolutionary, at least in the place I grew up: Sure Start centres, investment in public services, and a real sense of togetherness throughout society. Labour lost power 15 years ago, and the period of Tory rule up to the middle of last year brought Austerity 2.0, the polarisation of society, and chronic underfunding of the NHS and other essential services.

It’s surprising, therefore, that the first six months of Keir Starmer’s Labour government haven’t felt like much of a change from the Tory status quo. Perhaps the most obvious example of this is the recent announcement that AI will be ‘mainlined into the veins’ of the UK, using rhetoric one would expect from the right wing of politics. As one person on social media put it, this would have been very different had Starmer and co been seeking the support of the TUC and the Joseph Rowntree Foundation.

I’ve been listening to Helen Beetham’s new podcast in which she interviews Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. It’s not one of those episodes where you can be casually doing something else and half-listening, which is why I haven’t finished it yet. It has, however, prompted me to explore Dan’s blog, which is where I came across this post on ‘AI as Algorithmic Thatcherism’, written in late 2023.

It’s extraordinarily disingenuous for the government to say that the proposed move is going to ‘create jobs’, as the explicit goal of ‘efficiency’ is to remove bottlenecks. Those are usually human-shaped. Maybe we should stop speedrunning towards dystopia? We need to prepare for post-capitalism; it’s just a shame that our government is doubling down on hypercapitalism.

One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI’s shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn’t to “support” teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.

Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn’t provide insights as it’s just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

[…]

Shouldn’t we be resisting this gigantic, carbon emitting version of automated Thatcherism before it’s allowed to trash our remaining public services? It might be tempting to wait for a Labour victory at the next election; after all, they claim to back workplace protections and the social contract. Unfortunately they aren’t likely to restrain AI; if anything, the opposite. Under the malign influence of true believers like the Tony Blair Institute, whose vision for AI is a kind of global technocratic regime change, Labour is putting its weight behind AI as an engine of regeneration. It looks like stopping the megamachine is going to be down to ordinary workers and communities. Where is Ned Ludd when you need him?

Source: danmcquillan.org

Image: Mike Newbry

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures

Traffic cone in long grass

You may have noticed that nostalgia is, well, a vibe at the moment. Why is that? Because the present kinda sucks. Why does it suck? Because we live in completely unequal societies, increasingly ruled by demagogues.

Umair Haque, who was omnipresent on Medium pre-pandemic, seems now to have his own Ghost-powered publication, and has written about post-capitalism. It’s long, with short paragraphs and lots of italicising. But he knows what he’s talking about.

I’ve excerpted the key points, but I’d recommend clicking through and looking at the bullet-point list of things he suggests reorientating one’s life and career towards. It was pretty reaffirming for me, in a January without quite enough work on, to know that getting a corporate job isn’t really a long-term solution.

The idea of late capitalism means all that. It means that people are immiserated, exploited, ruined, left desperate. That inequality soars. That there’s no future. That societies lose hope. But instead of coming together and having some kind of constructive revolution, and here we don’t have to agree with Marx, they have a fascist meltdown, which I think we can all agree is a Bad Thing.

People turn on one another. Societies shut down. Companies turn ultra-predatory. Cronyism runs rampant. Economies slide into depression. And instead of some form of positive collective action, the answer to all this tends to be conflict, and maybe even World War.

That’s late capitalism. It’s not just “this is dystopia” or “everything sucks” or even “I’m exploited to the bone.” It has that historical meaning, the very specific one: instead of doing anything positive, making wise decisions, people turn regressive, lose their thinking minds, turn on each other, and instead of the sort of class war Marx envisioned, turn to demagogues who end up starting very real ones instead.

[…]

If you’re middle aged, I’d bet that the above is already beginning to happen to you. You’re being forced out, at least if you’re in a corporate career. Every mistake isn’t just “I could lose the promotion,” it turned into “I could lose this job,” and now it’s, “that’s the end of my career, because I’ll never find another one.”

Understand that and face it. It is true. This trend of forcing middle aged people out—no matter what their accomplishments are—is here to stay now. It is never going away. This is what the “job market” is and will be for the rest of our lives, and probably beyond, because what did we learn earlier? Late capitalism recurs. It isn’t even a “stage,” as Marx’s descendants thought, but something more like a chronic condition. And we, unfortunately, have it.

[…]

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures. They might not know it yet. Their despair and bewilderment is a reflection of how little this guiding principle is discussed, understood, or talked about. That doesn’t mean that they all have to go out and be activists or revolutionaries, lol, not at all, we just discussed how being a creator is something that’s post-capitalist.

[…]

What does it mean to “be a post-capitalist”? Many of us are starting to find out. It means running a network, community, organization, thingie, maybe a business, in certain dimensions but not along strictly profit-maximizing capitalist lines, but more humanistic ones, in a sense, and that’s not a bad thing, when you think about it.

Source: the issue.

Image: Kevin Jarrett

One of the most disconnecting forces is our expectations of how others should be

A man sitting at a table talking to a woman

Years ago, I read The Art of Travel by Alain de Botton. It was long enough ago that it was my first introduction to Seneca’s observation that you can travel, but you can’t escape yourself.

This article by Philippa Perry — whose books How to Stay Sane and The Book You Wish Your Parents Had Read (and Your Children Will Be Glad That You Did) I’d highly recommend — points out that many of our problems stem from (how we conceptualise) our relationships with others.

Often, we believe the solution to our problems lies outside ourselves, believing that if we leave the job, the relationship, everything will be fine. Of course, that can sometimes be true and it’s important to be alert to situations which are truly damaging. But the path towards feeling more connected to others usually starts from within. We must examine how we talk to ourselves, uncover the covert beliefs we live by, and confront the darker aspects of our psyche. One of the most disconnecting forces is our expectations of how others should be – but learning to accept people and things we cannot change can help us become more sanguine.

Source: The Guardian

A certain brand of artistic criticism and commentary has become surprisingly rare

A skeleton, presumably representing Death, lifting his cloak to show some people a rainbow on a screen

Good stuff from Erik Hoel about, effectively, the need for more cultural criticism around the use of technology in society. Any article that appropriately quotes Neil Postman is alright by me, and the art (included here) from Alexander Naughton which accompanies the article? Wow.

[L]ately some decisions have been explicitly boundary-pushing in a shameless “Let’s speedrun to a bad outcome” way. I think most people would share the worry that a world where social media reactivity stems mainly from bots represents a step toward dystopia, a last severing of a social life that has already moved online. So news of these sorts of plans has come across to me about as sympathetically as someone putting on their monocle and practicing their Dr. Evil laugh in public.

Why the change? Why, especially, the brazenness?

Admittedly, any answer to this question will ignore some set of contributing causal factors. Here in the early days of the AI revolution, we suddenly have a bunch of new dimensions along which to move toward a dystopia, which means people are already fiddling with the sliders. That alone accounts for some of it.

But I think a major contributing cause is a more nebulous cultural reason, one outside tech itself, in that a certain brand of artistic criticism and commentary has become surprisingly rare. In the 20th century a mainstay of satire was skewering greedy corporate overreach, a theme that cropped up across different media and genres, from film to fiction. Many older examples are, well, obvious.

Source: The Intrinsic Perspective

The feedback has to be orders of magnitude faster than the situation being controlled

A roller coaster lit up at night with red lights

Tom Watson wrote up a workshop he ran on organisational resilience recently, quoting and linking to one of Roger Swannell’s weeknotes about feedback loops. The full quotation from Swannell, taken from his blog, reads:

One of the insights I found interesting is that for feedback loops to work effectively, the feedback has to be orders of magnitude faster than the situation being controlled. So, if we’re shipping fortnightly, then the feedback would have to be hourly in order for us to have any sense of what effect we’re having. In practice, it’s usually the other way round and feedback is much slower than the situation.
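As an aside, it’s easy to see why this is true with a toy simulation. The sketch below is mine, not anything from either post, and the numbers are picked purely for illustration: a value drifts a little every day, but can only be corrected when feedback arrives.

```python
# A toy model of feedback cadence (my own illustration, not from either post).
# A value drifts a little each day; we can only pull it back towards the
# target when feedback arrives, every `feedback_every` days.

import random

def mean_error(days: int, feedback_every: int, seed: int = 42) -> float:
    """Average distance from target for a given feedback cadence."""
    random.seed(seed)
    value, target, total_error = 0.0, 0.0, 0.0
    for day in range(days):
        value += random.uniform(-1, 1)       # the situation changes daily
        if day % feedback_every == 0:
            value -= 0.8 * (value - target)  # feedback arrives; we correct
        total_error += abs(value - target)
    return total_error / days

print(f"daily feedback:       {mean_error(1000, 1):.2f}")
print(f"fortnightly feedback: {mean_error(1000, 14):.2f}")
```

The slower the feedback, the further the situation has drifted by the time you can act on it, which is exactly the trap Swannell describes.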

Watson goes on to discuss Swannell’s point in terms of organisational resilience, mapping single-loop (Are we doing things right?), double-loop (Are we doing the right things?), and triple-loop learning (How do we decide what’s right?) onto the “Anticipate, Prepare, Respond, Adapt” approach.

Interestingly, the three things he suggests to help build organisational resilience (continual monitoring, open working, monthly reflections) are things central to our co-op:

[H]ere’s a couple of ideas to try that can help move our approach to learning forward.

  1. Make sure your monitoring and metrics allow you to answer the question “Are we doing things right?” in a timely manner. Short timeframes are generally better. Align to any decisions you need to make.

  2. Embrace open working - devote 20-30 minutes a week to allow you and your team to reflect on what is going well, what isn’t, what is challenging, what people are seeing.

  3. Put in monthly/quarterly sessions - maybe an hour where you explore the question “Why do we do it this way?” on a specific topic as a team. Use the weeknotes to start the culture of open reflection, use them to identify common topics that might be coming up.

Doing these 3 things will move you from being only in the Response phase, into anticipate and prepare phases. Or if you prefer from single to double loop learning.

Sources: Tomcw.xyz / Roger Swannell

Image: Aleksandr Popov

Who wants to have to speak the language of search engines to find what you need?

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.

It’s about a decade since I gave up on Google search. While I use Google services extensively for work and other areas of my life, search and personal email are not two of them. Instead, I use DuckDuckGo and, more recently, Perplexity Pro.

The latter is excellent, bypassing advertising and paid placements, acting as a natural language search agent for synthesising information. I tend to use it for queries that would otherwise take several searches. Yesterday, for example, I gave it the following query: “I need a tool that can automatically take screenshots of a web page and then stitch them together. It should then make an animated gif, scrolling through the page from top to bottom. The website requires a login, so ideally it should be a Chrome browser extension.” It gave me several options, approaching my request from multiple angles as there wasn’t a solution that did exactly what I needed.
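For the curious, here’s roughly what a scripted answer to that query might look like. This is a minimal sketch assuming Playwright and Pillow — my choice of tools for illustration, not necessarily what Perplexity suggested — and it sidesteps the login requirement with a placeholder URL:

```python
# Capture viewport screenshots while scrolling down a page, then stitch them
# into an animated GIF. A sketch assuming Playwright and Pillow:
#   pip install playwright pillow && playwright install chromium

from io import BytesIO
from PIL import Image
from playwright.sync_api import sync_playwright

URL = "https://example.com"  # placeholder; a real run would need the login handled
STEP = 600      # pixels scrolled between frames
FRAME_MS = 400  # how long each GIF frame is displayed

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    page.goto(URL, wait_until="networkidle")

    # Scroll from top to bottom, grabbing a screenshot at each step
    height = page.evaluate("document.body.scrollHeight")
    frames = []
    for y in range(0, height, STEP):
        page.evaluate(f"window.scrollTo(0, {y})")
        page.wait_for_timeout(200)  # let lazy-loaded content settle
        frames.append(Image.open(BytesIO(page.screenshot())))
    browser.close()

# Stitch the frames into a GIF that 'scrolls' through the page
frames[0].save("scroll.gif", save_all=True, append_images=frames[1:],
               duration=FRAME_MS, loop=0)
```

In practice the login is the awkward part, which is why a browser extension (or reusing a logged-in browser profile) was part of the original ask.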

Although this article in MIT Technology Review mentions Perplexity, it weirdly focuses mainly on Google and OpenAI. There’s no mention that you can choose between LLMs in Perplexity (I use Claude 3.5 Haiku), and the two issues it raises are copyright and hallucinations, rather than sustainability and privacy. Claude 3.5 Haiku is one of the lighter-weight models when it comes to environmental impact, but it still consumes a lot more energy (and water, to cool the data centres) than a single DuckDuckGo search.

And then, when it comes to privacy, while it’s great that an LLM can personalise results based on what it already knows about you, there’s an amount of trust there that I’m increasingly wary of giving to companies like OpenAI. I cancelled and then resubscribed to ChatGPT last week. I’m not sure how long I can stomach the Sam Altman circus.

Ultimately, agentic search, where you ask a question in natural language and it shows you the sources it used to synthesise the answer, is the future. Perplexity seems pretty fair in this regard, pulling in my colleague Laura’s post as part of a response about the way that technology has shifted power over the last century. For me, this kind of thing is even more of a reason to work in the open.
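The overall pattern behind that kind of agentic search is simple enough to sketch. Everything below is a toy — the helpers are hypothetical stand-ins for real search and LLM calls, not any actual product’s API:

```python
# A toy sketch of the 'agentic search' pattern: rewrite the question into
# queries, gather sources, then synthesise an answer that cites them.
# All helpers are stubs standing in for real search/LLM calls.

def generate_queries(question: str) -> list[str]:
    return [question]  # a real system would ask an LLM to decompose the question

def web_search(query: str) -> list[str]:
    return ["https://example.com/a", "https://example.com/b"]  # canned results

def fetch(url: str) -> str:
    return f"(contents of {url})"  # a real system would download and clean the page

def synthesise(question: str, sources: dict[str, str]) -> str:
    # A real system would prompt an LLM with the sources and require citations
    refs = ", ".join(f"[{i + 1}]" for i in range(len(sources)))
    return f"Answer to {question!r}, grounded in {refs}"

def agentic_search(question: str) -> tuple[str, list[str]]:
    urls = [u for q in generate_queries(question) for u in web_search(q)]
    sources = {u: fetch(u) for u in urls}  # dict de-duplicates repeated URLs
    return synthesise(question, sources), list(sources)

answer, cited = agentic_search("How has technology shifted power over the last century?")
print(answer)
print("Sources:", *cited, sep="\n  ")
```

The point, from a digital literacies perspective, is that the sources are chosen and summarised before you ever see them — which is precisely why showing them matters.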

There’s a critical digital literacies issue here, one that’s hinted at in the last paragraph of the article (included below) and discussed in Helen Beetham’s podcast episode with Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. When “the answer” is presented to you, there’s less incentive to do the work of finding your own interpretation. I think that is definitely a risk. Although, given that the internet is a giant justification machine already, I’m not entirely sure it will necessarily make things worse — just perhaps make people a bit lazier.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

[…]

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

Source: MIT Technology Review

Image: Kathryn Conrad

AI slop as engagement bait

gray bucket on wooden table

A couple of months ago, I wrote a short post trying to define ‘AI slop’. It was the kind of post I write so that I can, myself, link back to something in passing as I write about related issues. It made me smile, therefore, that the (self-proclaimed) “world’s only lovable tech journalist” Mike Elgan included a link to it in a recent Computerworld article.

I’m surprised he didn’t link to the Wikipedia article on the subject, but then the reason I felt that I needed to write my post was that I didn’t feel the definitions there sufficed. I could have edited the article, but Wikipedia doesn’t include original content, and so I would have had to find a better definition instead of writing my own.

The interesting thing now is that I could potentially edit the Wikipedia article and include my definition, because it’s been cited in Computerworld. But although I’ve got editing various pages of the world’s largest online encyclopedia on my long-list of things to do, the reality is that I can’t be doing with the politics. Especially at the moment.

According to Meta, the future of human connection is basically humans connecting with AI.

[…]

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed.

[…]

All this intentional AI fakery takes place on platforms where the biggest and most harmful quality is arguably bottomless pools of spammy AI slop generated by users without content-creation help from Meta.

The genre uses bad AI-generated, often-bizarre images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example).

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell.

Source: Computerworld

Image: pepe nero

LinkedIn has become a hellish waiting room giving off Beetlejuice vibes

The word 'LinkedIn' in white letters on a black background

I hate LinkedIn. I hate the performativity, and the way it makes me feel unworthy, as if I’m not enough. I hate the way that it promotes a particular mindset and approach to the world which does not mesh with my values. I also hate the fact that you can’t be unduly critical of the platform itself (try it: “something is wrong, try again later”, the pop-up message insists).

This post by Amy Santee, which I discovered via a link from Matt Jukes, lists a lot of things wrong with the platform. I’ve quit LinkedIn before, but felt like I needed to return a decade ago when I became a consultant. And that’s the reason I stay: with the demise of Twitter, the only reliable way I can get in touch with the remnants of my professional community is through LinkedIn.

It really sucks. I appreciate what Santee suggests in terms of connecting with people via a newsletter, but that feels too broadcast-like for me. I crave community, not self-serving replies on ‘content’.

The mass tech layoffs of 2022-2024 have resulted in an explosion of people looking for work in an awful market where there just aren’t enough jobs in some fields like recruiting, product design, user research, and even engineering and game development (the latter of which are faring better).

As a result, LinkedIn has become a hellish waiting room giving off Beetlejuice vibes, where unfortunate souls are virtually required to spend inordinate amounts of time scavenging for jobs, filling out redundant applications, performing professionalism and feigning excitement in their posts, and bootlicking the companies that laid them off.

[…]

We labor and post and connect and cross our fingers in desperation, getting sucked into the noise, customizing our feeds (this makes me imagine cows at a trough), scrolling and searching and DMing, trying to beat the algorithm (just post a selfie!) or do something to stand out, all with the hopes of obtaining a prized golden ticket to participate in capitalism for a damn paycheck. We feel bad about ourselves and want to give up when someone else somehow gets a job. We may joke about our own demise. We share that we are about to become homeless or that we’re skipping meals. We express our anger at the system, and we’re more aware than ever before of other people’s suffering at the hands of this system.

[…]

The way I experience the algorithm is that it seems to randomly decide whether or not my posts are worth showing other people, or at least it feels that way because I don’t understand how it works. Linkedin definitely isn’t forthright about it. In its current form, the algorithm can be prohibitive for getting my ideas out there, having conversations, sharing my podcast episodes and blog posts, getting people to attend my events, and doing any of the stuff I used to enjoy about this place.

[…]

The execs and shareholders of LinkedIn (acquired by Microsoft in 2016) are the primary beneficiaries in all of this, and they will do anything to keep their monopolistic grip on our time, our lives, and our data (we are the product, too). This is all on purpose. LinkedIn continues to win big from the explosion in user activity, ad revenue, subscriptions, job posting fees, unpaid AI training via “Top Voice” user content, and the gobs of our data we gift them, in exchange for the displeasure of being linked in until hopefully something else comes around.

Source: The Jaw Breaker Weekly-ish

Image: Kim Menikh

Must-reads for sports fans

Composite image of Lionel Messi from an article in The Athletic

One of the things I spend a lot of time doing every week is watching football (soccer). Yet I don’t write about it anywhere. Whether it’s watching one of our teenagers in matches every weekend, or professionals playing in stadiums or on TV, there’s a reason it’s called “the beautiful game”.

As a sucker for all things Adidas, I accrue a number of points each year in their ‘Adiclub’ members area which are of a “use it or lose it” nature. Having heard good things about The Athletic, a publication created by two former Strava employees who sold it to The New York Times in 2022, I exchanged some points for a year’s subscription. I have to say I’m already hooked.

The depth is staggering and the use of images fantastic. Here, for example, are articles on the issues around redeveloping Newcastle United’s stadium, the reason that Trent Alexander-Arnold had such a poor game against Manchester United, and a wonderful article (which I sent to my daughter) about the art of ‘scanning’ for midfield players. The use of gifs in the latter is 👌

I realise that this reads like a sponsored post, and I’m not a big fan of the editorial cowardice shown by The New York Times, but this is me just pointing out the good stuff. If you consider yourself a sports fan, I’d highly recommend getting yourself access.

Source: The Athletic

Promising Trouble's advice on UK Online Safety Act compliance

Black and purple computer keyboard

The UK’s Online Safety Act is due to come into effect soon (17th March 2025), and everyone seems to be a bit confused about it. For example, I filled in the online self-assessment tool on behalf of one of our clients, who we helped set up an online forum last year. It looks like they’re going to have to carry out an impact assessment.

Rachel Coldicutt has been doing some work, including reaching out to Ofcom, the communications regulator. The best mental model I’ve got for what she’s found is that it’s a bit like GDPR. Except people are even less aware and organised.

For volunteers and small activist organisations, it just becomes yet another layer of bureaucracy to deal with. Although “small low-risk user-to-user services” are defined as having “fewer than 7 million users”, I can imagine this will have a negative effect on people thinking about setting up, or continuing to run, online community groups.

Five things you need to run a small, low-risk user-to-user service. This is set out in more detail on pages 2-5 of this document and can be summarised as follows:

  1. have an individual accountable for illegal content safety duties and reporting and complaints duties

  2. a content moderation function to review and assess illegal and suspected illegal content, with swift takedown measures

  3. an easy-to-find user complaints system and process, backed up by an appropriate process to deal with complaints and appeals, with the exception of manifestly unfounded claims

  4. easy-to-find, understandable terms and conditions

  5. the ability to remove accounts for proscribed organisations

Source: Promising Trouble

Image: Gregory Gallegos

Bridging Dictionary

Screenshot of the Bridging Dictionary

On the one hand, I really like this new ‘Bridging Dictionary’ from MIT’s Center for Constructive Communication. On the other hand, it kind of presupposes that people on each side of the political spectrum argue in good faith and are interested in the other side’s opinion.

To be honest, it feels like the kind of website we used to see a decade ago, when we’d started to see the impact of people getting their news via algorithm-based social media feeds.

The most interesting thing for me, given that I get the majority of my news from centrist and centre-left publications, is seeing which words tend to be used by, for example, Fox News. The equivalent here in the UK, I guess, would be GB News or the Daily Mail.

Welcome to the Bridging Dictionary, a dynamic prototype from MIT’s Center for Constructive Communication that identifies how words common in American political discourse are used differently across the political divide. In addition to contrasting usage by the political left and right, the dictionary identifies some less polarizing–or bridging–alternatives.

Source: BridgingDictionary.org

A feedback loop of nonsense and violence

3D render of a red maze with a blue ball in the middle. The ball can come out of one of two exits: 'True Facts' or 'Fake News'

Unless you’ve been living under a rock for the past few days, you should by now be aware of the news that Meta products, including Facebook and Instagram, will replace teams of content moderators with ‘community notes’.

People on social media seem to think that merely linking to a bad news story and telling their network that “this is bad” is somehow a form of protest or activism. Not using stuff is protest; doing something about Meta’s influence in the world is activism.

Anyway, the best take I’ve seen on this whole thing is, unsurprisingly, from Ryan Broderick, who not only diagnoses what’s happened over the last four years, but predicts what will happen as a result. The only good thing to come of this whole debacle is that there have been some fantastic parody news stories, including this one.

[C]ontent moderation, as we’ve understood it, effectively ended on January 6th, 2021… [T]he way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.

This explains Meta’s pivot to, first, the metaverse, which failed, and, more recently, AI, which hasn’t yet, but will. It explains YouTube’s own doomed embrace of AI and its broader transition into a Netflix competitor, rather than a platform for true user-generated content. Same with Twitter’s willingness to sell to Elon Musk, Google’s enshittification, and, relatedly, Reddit’s recent stagnant googlification. After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.

And after sleepwalking through the Biden administration and doing the bare minimum to avoid any fingers pointed their direction about election interference last year, the companies are now fully giving up. Knowing the incoming Trump administration will not only not care, but will even reward them for it.

[…]

[I]t is also safe to assume that the majority of internet users right now — both ones too young to remember a pre-moderated internet and ones too normie to have used it at the time — do not actually understand what that is going to look and feel like. But I can tell you where this is all headed, though much of this is already happening.

Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular. A world where if Meta does inspire conspiracy theories, race riots, or insurrections, no one will actually notice. Or, at the very least, be so divided on what happened that Meta doesn’t get blamed for it again.

Source: Garbage Day

Image: Hartono Creative Studio

The internet may function not so much as a brainwashing engine but as a justification machine

Illustration of the Edison multipolar dynamo

“Do your own research” is the mantra of the conspiracy theorist. It turns out that if you search for evidence of something on the internet, you’ll find it. Want proof that the earth is flat? There’s plenty of nutjob articles, videos, and podcasts for that. As there is for almost anything you can possibly imagine.

This post for The Atlantic by Charlie Warzel and Mike Caulfield focuses on the attack on the US Capitol four years ago, and is based on this larger observation about the internet as a ‘justification machine’. As an historian, it makes me sad that when people refer to the “wider context” of a present-day event, they rarely go back more than a few months — or, at the most, a few years.

For example, I read a fantastic book on the history of Russia over the holiday period which really helped me understand the current invasion of Ukraine. I haven’t seen that kind of longer context mentioned once as part of the news cycle. It’s always on to the next thing, almost always presented through the partisan lens of some flavour of capitalism.

Lately, our independent work has coalesced around a particular shared idea: that misinformation is powerful, not because it changes minds, but because it allows people to maintain their beliefs in light of growing evidence to the contrary. The internet may function not so much as a brainwashing engine but as a justification machine. A rationale is always just a scroll or a click away, and the incentives of the modern attention economy—people are rewarded with engagement and greater influence the more their audience responds to what they’re saying—means that there will always be a rush to provide one. This dynamic plays into a natural tendency that humans have to be evidence foragers, to seek information that supports one’s beliefs or undermines the arguments against them. Finding such information (or large groups of people who eagerly propagate it) has not always been so easy. Evidence foraging might historically have meant digging into a subject, testing arguments, or relying on genuine expertise. That was the foundation on which most of our politics, culture, and arguing was built.

The current internet—a mature ecosystem with widespread access and ease of self-publishing—undoes that.

[…]

Conspiracy theorizing is a deeply ingrained human phenomenon, and January 6 is just one of many crucial moments in American history to get swept up in the paranoid style. But there is a marked difference between this insurrection (where people were presented with mountains of evidence about an event that played out on social media in real time) and, say, the assassination of John F. Kennedy (where the internet did not yet exist and people speculated about the event with relatively little information to go on). Or consider the 9/11 attacks: Some did embrace conspiracy theories similar to those that animated false-flag narratives of January 6. But the adoption of these conspiracy theories was aided not by the hyperspeed of social media but by the slower distribution of early online streaming sites, message boards, email, and torrenting; there were no centralized feeds for people to create and pull narratives from.

The justification machine, in other words, didn’t create this instinct, but it has made the process of erasing cognitive dissonance far more efficient. Our current, fractured media ecosystem works far faster and with less friction than past iterations, providing on-demand evidence for consumers that is more tailored than even the most frenzied cable news broadcasts can offer.

[…]

The justification machine thrives on the breakneck pace of our information environment; the machine is powered by the constant arrival of more news, more evidence. There’s no need to reorganize, reassess. The result is a stuckness, a feeling of being trapped in an eternal present tense.

Source: The Atlantic

Image: British Library

You will always be boring if you can't make your own choices

A hand reaching towards floating abstract shapes and spheres in various colours.

I like this post by Adam Singer as it builds on my last post about increasing one’s serendipity surface, as well as an article I published over a decade ago entitled ‘Curate or Be Curated’. The latter covered some of the same ground as Singer’s post, riffing on the idea of the ‘filter bubble’.

Algorithms are literally everywhere in our lives these days, and coupled with AI we are likely to live templated lives. I’m currently composing this post while listening to music coming out of a speaker driven by the iPod I built a couple of years ago. I’m reading a book that I found in a second-hand bookstore. I hesitate to use the word ‘resistance’ but these are small ways in which I ensure that my world isn’t dictated by someone else’s choices for me.

We’ve never had more freedom, more choices. But in reality, most people are subtly funneled into the same streams, the same pools of ‘socially approved’ culture, cuisine and ideas. Remixes and memes abound, but almost no one shares anything weird, original or different. People wake up, perhaps with ambitions to make unique choices they believe are their own, only to find that the options have been filtered, curated, and ‘tailored to existing tastes’ by algorithms that claim to know them best. This only happens as these algorithms prioritize popularity or even just safe choices over individuality. They don’t lead you down our own path or really care what’s interesting and unknown, they lead us down paths proven profitable, efficient, safe. If you work in a creative sector (and many of us do) you already know how dangerous this is professionally, not to mention spiritually.

Algorithms might make for comfortable consumers, but they cannot produce thoughtful creators, and they are slowly taking your ability to choose from you. You might think you’re choosing, but you never really are. When your ideas, interests, and even daily meals are largely inspired by whatever was already approved, already done, already voted on and liked, you’re only experiencing life as an echo of the masses (or the machines, if personalized based on historic preference). And in this echo chamber, genuine discovery is rare, even radical.

Of course, it’s very easy to live like this, as we live in a society totally biased to pain avoidance and ease (it’s so ingrained much of the medical establishment only treats symptoms, not causes). There’s an unconscious allure in this conformity, a feeling of belonging, of social safety, it’s a warm blanket you aren’t alone in the cosmos. But at what cost? In blending into the mainstream wasteland, you risk losing something deeply human: your impulse to explore, the courage to confront the unfamiliar, the potential to define yourself on your own terms. You don’t get real creativity without courage, and no one has this until they stop looking to the crowd for consensus approval.

Source: Hot Takes

Image: Google Deepmind

Luck = (Passionate) Doing x (Effective) Telling

Diagram illustrating the 'Surface Area of Luck' with a formula and a DOING vs. TELLING graph.

Back in 2016 I coined the term ‘serendipity surface’, which I defined as the inverse of an ‘attack surface’ when building software. In other words, you want to maximise your serendipity surface so that good and unexpected things happen to you. I discussed this on the Artificiality podcast last year, if you want to hear more.

Tim Klapdor talks of a ‘serendipity engine’ and I guess Thought Shrapnel could be considered that for me. As part of my reading for this eclectic blog and newsletter, I came across this post on the Model Thinkers website on ‘The Surface Area of Luck’ which has no date, but was indexed by The Internet Archive for the first time in 2021.

There’s some good, actionable advice in it, as well as links for further exploration. It also includes the above image and, as we know, all good ideas require an image :)

Luck, by definition, is about chance, but it’s not totally out of your control. So why not use this model to increase your chance of luck?

The Surface Area of Luck, or your chance of being lucky, is equivalent to the action you take towards your passion, multiplied by the number of people you effectively communicate your passion and activities to.

Put simply: Luck = (Passionate) Doing x (Effective) Telling.

Source: Model Thinkers

We need to do a lot better than outsourcing AI education to grifters with bombastic Twitter threads

Whiteboard with handwritten text: 'ZIF_LLM_MODE' and 'get-request_body().'

This is a fantastic long post from Simon Willison about things we learned about Large Language Models (LLMs) in 2024. The bit that jumped out at me was, unsurprisingly, the AI literacies angle to all this. As Willison points out, using an LLM such as ChatGPT, Claude, or Gemini seems straightforward as it’s a chat-based interface, but even with the voice modes there’s still a need to understand what’s going on under the hood.

People often want ‘training’ on new technologies, but it’s actually quite difficult to provide in this situation. While I think there are underlying literacies involved here, a key way of understanding what’s going on is to experiment. As with every other technology, there’s no substitute for messing about with stuff to see how it works — and where the limits are.

I’d recommend also having a look at Willison’s list of ‘artifacts’ he created using Claude in a single week. It’s also worth considering the analogy he makes with building out railway infrastructure in the 19th century, as it kind of works.

A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.

If anything, this problem got worse in 2024.

We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! … depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.

[…]

What are we doing about this? Not much. Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.

Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.

There’s a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can’t see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!

There is so much space for helpful education content here, but we need to do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.

Source: Simon Willison’s Weblog

Image: Bernd Dittrich

It's OK not to have an opinion on everything

Three overlapping phone-shaped pieces of glass in white, black, and translucent gray on a brown background.

My three-week breaks each year, usually in Spring, Summer, and Winter, are rejuvenating. One of the things I most enjoy about them is that I give myself permission to come off social media for a bit. While I’m not a user of TikTok, Instagram, Snapchat, or the like, even Mastodon or Bluesky can be an easy thing to reach for instead of doing something more interesting or useful.

There is no narrative to a social media feed. It’s just one thing after another, ordered either chronologically or algorithmically. Neither is great for trying to build a coherent picture of the world, especially given how emotionally charged social media posts can be. As a former Nuzzel user, I’ve found Sill useful for avoiding FOMO: it creates a digest of the most popular links that your network is sharing.

What I’ve found myself leaning into more recently is experts making sense of the world as it happens. Two good examples of this are The Rest is Politics and The Athletic, which make sense of the world of politics and football (soccer), respectively. Whether or not I agree with what the podcast host or article writer is saying, engaging with longer-form content provides much better context and helps me figure out what I think about a given situation.

Sometimes, of course, it’s OK not to have an opinion on something. This is not always understood or valued on social networks.

A 2023 study… [showed] how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

[…]

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

[…]

In December, psychologist Carlos Losada offered advice to EL PAÍS on how to avoid falling into the trap of doomscrolling — or, in other words, being consumed by the endless cycle of junk content amplified by algorithms. His recommendations included recognizing the problem, making a conscious effort to disconnect, and engaging in activities that require physical presence, such as meeting friends or playing sports.

Source: EL PAÍS

Image: Kelly Sikkema

The privileging of immediate, emotionally-charged, image-driven communication

Silhouette of a person holding a smartphone with the YouTube logo in front of their face.

Recently, when I met up with someone who was launching a new council website, they casually mentioned that their team had optimised it for a reading age of nine. This, apparently, is the average reading age of the UK adult population. A few years ago, my brother-in-law, who works for a church, showed me the way that they had started providing church updates in video format. YouTube and TikTok are by far the most-used apps by (western) teenagers.

Are we heading towards a post-literate society? This article by Sarah O’Connor quotes Neil Postman but I think it would be more appropriate to cite Walter Ong on secondary orality, a kind of orality that depends on literate culture and the existence of writing. For example, the updates provided by my brother-in-law’s church depend on there being a script, written updates to share with the congregation, and a programme of events to which they can refer.

Technological shifts reshape how we perceive and process information, and — as I mentioned in a recent post — we live in a world which privileges immediate, emotionally-charged, image-driven communication over slower, deliberate reflection. It’s a difficult thing to resist or change because, like fast food, it appeals to something innate.

(In passing, I would point out that the improved literacy proficiency of 16-24 year olds in England is probably due to the introduction of a phonics-based approach in early years, and to ensuring young people remain in education or training up to the age of 18.)

The implications for politics and the quality of public debate are already evident. These, too, were foreseen. In 2007, writer Caleb Crain wrote an article called Twilight of the Books in the New Yorker magazine about what a possible post-literate culture might look like. In oral cultures, he wrote, cliche and stereotype are valued, conflict and name-calling are prized because they are memorable, and speakers tend not to correct themselves because “it is only in a literate culture that the past’s inconsistencies have to be accounted for”. Does that sound familiar?

[…]

These trends are not unavoidable or irreversible. Finland demonstrates the potential for high-quality education and strong social norms to sustain a highly literate population, even in a world where TikTok exists. England shows the difference that improved schooling can make: there, the literacy proficiency of 16-24 year olds was significantly better than a decade ago.

The question of whether AI could alleviate or exacerbate the problem is more tricky. Systems like ChatGPT can perform well on many reading and writing tasks: they can parse reams of information and reduce it to summaries.

[…]

But, as [David] Autor [an economics professor at MIT] says, in order to make good use of a tool to “level up” your skills, you need a decent foundation to begin with. Absent that, [Andreas] Schleicher [director for education and skills at the OECD] worries that people with poor literacy skills will become “naive consumers of prefabricated content”.

Source: The Financial Times

Image: Rachit Tank

Resisting the Now Show

Person lying on a sofa reading a book in a dimly lit room.

Just before Christmas, I headed up to Barter Books with my family. It’s a great place where you can exchange books you no longer need for credit, which you can then spend on books that other people have brought in. I picked up Russia: A 1,000-Year Chronicle of the Wild East, a big thick history book by former BBC correspondent, Martin Sixsmith.

I finished it this morning; it was a fantastic read. Sixsmith serialised the book on BBC Radio 4 so it’s easier to follow than the usual history book, but still has plenty of Russian names and places for the reader to wrap their head around.

As Audrey Watters notes, reading can often be hard work. It’s tempting to want to read the summary, to optimise your information environment such that you can get on with the important stuff. Where “the important stuff” is, presumably, making money, arguing on the internet, or attempting to turn a lack of empathy for others into a virtue.

Appropriately enough, it’s difficult to adequately summarise Audrey’s argument in this post because it’s nuanced — as the best writing usually is. As she points out, an important part of reading widely is developing empathy. For example, while I still hold a very low opinion of Vladimir Putin, his actions make a lot more sense when put in the context of a 1,000-year narrative arc. It would have been difficult to come to that realisation watching a short YouTube video or social media thread.

Reading can be slow. It can be quite challenging work – and not simply because our attention has been increasingly conditioned, fragmented with distractions and disruptions. And yet from the considered effort of reading comes consideration. So it isn’t simply that we no longer read at length or read deeply; we no longer value contemplation.

[…]

If, as some scholars argue, learning to read does not just build cognition but helps develop empathy – that is, young readers become immersed in stories outside their own experience and thus see the world differently – what are the implications when adults cannot bother to tell stories to their children?

Source: Second Breakfast

Image: Matias North

Hamming questions

Pebbles with a stone featuring a question mark.

In his most recent newsletter, Ben James shared some “important snippets” from things that he read over the holidays. It included a post from 2019 on ‘The Hamming Question’, which I really like, and which focuses the mind somewhat. I perhaps need to think about what the Hamming questions are in the areas in which I work.

Mathematician Richard Hamming used to ask scientists in other fields “What are the most important problems in your field?” partly so he could troll them by asking “Why aren’t you working on them?” and partly because getting asked this question is really useful for focusing people’s attention on what matters.

Source: LessWrong