AI slop as engagement bait

gray bucket on wooden table

A couple of months ago, I wrote a short post trying to define ‘AI slop’. It was the kind of post I write so that I can, myself, link back to something in passing as I write about related issues. It made me smile, therefore, that the (self-proclaimed) “world’s only lovable tech journalist” Mike Elgan included a link to it in a recent Computerworld article.

I’m surprised he didn’t link to the Wikipedia article on the subject, but then the reason I felt that I needed to write my post was that I didn’t feel the definitions there sufficed. I could have edited the article, but Wikipedia doesn’t include original content, and so I would have had to find a better definition instead of writing my own.

The interesting thing now is that I could potentially edit the Wikipedia article and include my definition, because it’s been cited in Computerworld. But although I’ve got editing various pages of the world’s largest online encyclopedia on my long list of things to do, the reality is that I can’t be doing with the politics. Especially at the moment.

According to Meta, the future of human connection is basically humans connecting with AI.

[…]

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed.

[…]

All this intentional AI fakery takes place on platforms where the biggest and most harmful quality is arguably bottomless pools of spammy AI slop generated by users without content-creation help from Meta.

The genre uses bad AI-generated, often-bizarre images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example).

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell.

Source: Computerworld

Image: pepe nero

LinkedIn has become a hellish waiting room giving off Beetlejuice vibes

The word 'LinkedIn' in white letters on a black background

I hate LinkedIn. I hate the performativity, the way it makes me feel unworthy, as if I’m not enough. I hate the way that it promotes a particular mindset and approach to the world which does not mesh with my values. I also hate the fact that you can’t be unduly critical of the platform itself (try it: “something is wrong, try again later”, the pop-up message insists).

This post by Amy Santee, which I discovered via a link from Matt Jukes, lists a lot of things wrong with the platform. I’ve quit LinkedIn before, but felt like I needed to return a decade ago when I became a consultant. And that’s the reason I stay: with the demise of Twitter, the only reliable way I can get in touch with the remnants of my professional community is through LinkedIn.

It really sucks. I appreciate what Santee suggests in terms of connecting with people via a newsletter, but that feels too broadcast-like for me. I crave community, not self-serving replies on ‘content’.

The mass tech layoffs of 2022-2024 have resulted in an explosion of people looking for work in an awful market where there just aren’t enough jobs in some fields like recruiting, product design, user research, and even engineering and game development (the latter of which are faring better).

As a result, LinkedIn has become a hellish waiting room giving off Beetlejuice vibes, where unfortunate souls are virtually required to spend inordinate amounts of time scavenging for jobs, filling out redundant applications, performing professionalism and feigning excitement in their posts, and bootlicking the companies that laid them off.

[…]

We labor and post and connect and cross our fingers in desperation, getting sucked into the noise, customizing our feeds (this makes me imagine cows at a trough), scrolling and searching and DMing, trying to beat the algorithm (just post a selfie!) or do something to stand out, all with the hopes of obtaining a prized golden ticket to participate in capitalism for a damn paycheck. We feel bad about ourselves and want to give up when someone else somehow gets a job. We may joke about our own demise. We share that we are about to become homeless or that we’re skipping meals. We express our anger at the system, and we’re more aware than ever before of other people’s suffering at the hands of this system.

[…]

The way I experience the algorithm is that it seems to randomly decide whether or not my posts are worth showing other people, or at least it feels that way because I don’t understand how it works. Linkedin definitely isn’t forthright about it. In its current form, the algorithm can be prohibitive for getting my ideas out there, having conversations, sharing my podcast episodes and blog posts, getting people to attend my events, and doing any of the stuff I used to enjoy about this place.

[…]

The execs and shareholders of LinkedIn (acquired by Microsoft in 2016) are the primary beneficiaries in all of this, and they will do anything to keep their monopolistic grip on our time, our lives, and our data (we are the product, too). This is all on purpose. LinkedIn continues to win big from the explosion in user activity, ad revenue, subscriptions, job posting fees, unpaid AI training via “Top Voice” user content, and the gobs of our data we gift them, in exchange for the displeasure of being linked in until hopefully something else comes around.

Source: The Jaw Breaker Weekly-ish

Image: Kim Menikh

Must-reads for sports fans

Composite image of Lionel Messi from an article in The Athletic

One of the things that I spend a lot of my time doing every week is watching football (soccer). Yet I don’t write about it anywhere. Whether it’s watching one of our teenagers in matches every weekend, or professionals playing in stadiums or on TV, there’s a reason it’s called “the beautiful game”.

As a sucker for all things Adidas, I accrue a number of points each year in their ‘Adiclub’ members area, which are of a “use it or lose it” nature. Having heard good things about The Athletic, a publication created by two former Strava employees who sold it to The New York Times in 2022, I exchanged some points for a year’s subscription. I have to say I’m already hooked.

The depth is staggering and the use of images fantastic. Here, for example, are articles on the issues around redeveloping Newcastle United’s stadium, the reason that Trent Alexander-Arnold had such a poor game against Manchester United, and a wonderful article (which I sent to my daughter) about the art of ‘scanning’ for midfield players. The use of gifs in the latter is 👌

I realise that this reads like a sponsored post, and I’m not a big fan of the editorial cowardice shown by The New York Times, but this is me just pointing out the good stuff. If you consider yourself a sports fan, I’d highly recommend getting yourself access.

Source: The Athletic

Promising Trouble's advice on UK Online Safety Act compliance

Black and purple computer keyboard

The UK’s Online Safety Act is due to come into effect soon (17th March 2025), and everyone seems to be a bit confused about it. For example, I filled in the online self-assessment tool on behalf of one of our clients, whom we helped set up an online forum last year. It looks like they’re going to have to carry out an impact assessment.

Rachel Coldicutt has been doing some work, including reaching out to Ofcom, the communications regulator. The best mental model I’ve got for what she’s found is that it’s a bit like GDPR. Except people are even less aware and organised.

For volunteers and small activist organisations, it just becomes yet another layer of bureaucracy to deal with. Although “small low-risk user-to-user services” are defined as those with “fewer than 7 million users”, I can imagine this will have a negative effect on people thinking about setting up, or continuing to run, online community groups.

Five things you need to run a small, low-risk user-to-user service

This is set out in more detail on pages 2-5 of this document and can be summarised as follows:

  1. an individual accountable for illegal content safety duties and for reporting and complaints duties
  2. a content moderation function to review and assess illegal and suspected illegal content, with swift takedown measures
  3. an easy-to-find and easy-to-use complaints system and process, backed up by an appropriate process to deal with complaints and appeals, with the exception of manifestly unfounded claims
  4. easy-to-find, understandable terms and conditions
  5. the ability to remove accounts for proscribed organisations

Source: Promising Trouble

Image: 𝙂𝙧𝙚𝙜𝙤𝙧𝙮 𝙂𝙖𝙡𝙡𝙚𝙜𝙤𝙨

Bridging Dictionary

Screenshot of the Bridging Dictionary

On the one hand, I really like this new ‘Bridging Dictionary’ from MIT’s Center for Constructive Communication. On the other hand, it kind of presupposes that people on each side of the political spectrum argue in good faith and are interested in the other side’s opinion.

To be honest, it feels like the kind of website we used to see a decade ago, when we’d started to see the impact of people getting their news via algorithm-based social media feeds.

The most interesting thing for me, given that I get the majority of my news from centrist and centre-left publications, is seeing which words tend to be used by, for example, Fox News. The equivalent here in the UK, I guess, would be GB News or the Daily Mail.

Welcome to the Bridging Dictionary, a dynamic prototype from MIT’s Center for Constructive Communication that identifies how words common in American political discourse are used differently across the political divide. In addition to contrasting usage by the political left and right, the dictionary identifies some less polarizing–or bridging–alternatives.

Source: BridgingDictionary.org

A feedback loop of nonsense and violence

3D render of a red maze with a blue ball in the middle. The ball can come out of one of two exits: 'True Facts' or 'Fake News'

Unless you’ve been living under a rock for the past few days, you should by now be aware of the news that Meta products, including Facebook and Instagram, will replace teams of content moderators with ‘community notes’.

People on social media seem to think that merely linking to a bad news story and telling their network that “this is bad” is somehow a form of protest or activism. Not using stuff is protest; doing something about Meta’s influence in the world is activism.

Anyway, the best take I’ve seen on this whole thing is, unsurprisingly, from Ryan Broderick, who not only diagnoses what’s happened over the last four years, but predicts what will happen as a result. The only good thing to come of this whole debacle is that there have been some fantastic parody news stories, including this one.

[C]ontent moderation, as we’ve understood it, effectively ended on January 6th, 2021… [T]he way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.

This explains Meta’s pivot to, first, the metaverse, which failed, and, more recently, AI, which hasn’t yet, but will. It explains YouTube’s own doomed embrace of AI and its broader transition into a Netflix competitor, rather than a platform for true user-generated content. Same with Twitter’s willingness to sell to Elon Musk, Google’s enshittification, and, relatedly, Reddit’s recent stagnant googlification. After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.

And after sleepwalking through the Biden administration and doing the bare minimum to avoid any fingers pointed their direction about election interference last year, the companies are now fully giving up. Knowing the incoming Trump administration will not only not care, but will even reward them for it.

[…]

[I]t is also safe to assume that the majority of internet users right now — both ones too young to remember a pre-moderated internet and ones too normie to have used it at the time — do not actually understand what that is going to look and feel like. But I can tell you where this is all headed, though much of this is already happening.

Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular. A world where if Meta does inspire conspiracy theories, race riots, or insurrections, no one will actually notice. Or, at the very least, be so divided on what happened that Meta doesn’t get blamed for it again.

Source: Garbage Day

Image: Hartono Creative Studio

The internet may function not so much as a brainwashing engine but as a justification machine

Illustration of the Edison multipolar dynamo

“Do your own research” is the mantra of the conspiracy theorist. It turns out that if you search for evidence of something on the internet, you’ll find it. Want proof that the earth is flat? There are plenty of nutjob articles, videos, and podcasts for that. As there are for almost anything you can possibly imagine.

This post for The Atlantic by Charlie Warzel and Mike Caulfield focuses on the attack on the US Capitol four years ago, and is based on this larger observation about the internet as a ‘justification machine’. As an historian, I find it sad that when people refer to the “wider context” of a present-day event, they rarely go back more than a few months — or, at the most, a few years.

For example, I read a fantastic book on the history of Russia over the holiday period which really helped me understand the current invasion of Ukraine. I haven’t seen that mentioned once as part of the news cycle. It’s always on to the next thing, almost always presented through the partisan lens of some flavour of capitalism.

Lately, our independent work has coalesced around a particular shared idea: that misinformation is powerful, not because it changes minds, but because it allows people to maintain their beliefs in light of growing evidence to the contrary. The internet may function not so much as a brainwashing engine but as a justification machine. A rationale is always just a scroll or a click away, and the incentives of the modern attention economy—people are rewarded with engagement and greater influence the more their audience responds to what they’re saying—means that there will always be a rush to provide one. This dynamic plays into a natural tendency that humans have to be evidence foragers, to seek information that supports one’s beliefs or undermines the arguments against them. Finding such information (or large groups of people who eagerly propagate it) has not always been so easy. Evidence foraging might historically have meant digging into a subject, testing arguments, or relying on genuine expertise. That was the foundation on which most of our politics, culture, and arguing was built.

The current internet—a mature ecosystem with widespread access and ease of self-publishing—undoes that.

[…]

Conspiracy theorizing is a deeply ingrained human phenomenon, and January 6 is just one of many crucial moments in American history to get swept up in the paranoid style. But there is a marked difference between this insurrection (where people were presented with mountains of evidence about an event that played out on social media in real time) and, say, the assassination of John F. Kennedy (where the internet did not yet exist and people speculated about the event with relatively little information to go on). Or consider the 9/11 attacks: Some did embrace conspiracy theories similar to those that animated false-flag narratives of January 6. But the adoption of these conspiracy theories was aided not by the hyperspeed of social media but by the slower distribution of early online streaming sites, message boards, email, and torrenting; there were no centralized feeds for people to create and pull narratives from.

The justification machine, in other words, didn’t create this instinct, but it has made the process of erasing cognitive dissonance far more efficient. Our current, fractured media ecosystem works far faster and with less friction than past iterations, providing on-demand evidence for consumers that is more tailored than even the most frenzied cable news broadcasts can offer.

[…]

The justification machine thrives on the breakneck pace of our information environment; the machine is powered by the constant arrival of more news, more evidence. There’s no need to reorganize, reassess. The result is a stuckness, a feeling of being trapped in an eternal present tense.

Source: The Atlantic

Image: British Library

You will always be boring if you can't make your own choices

A hand reaching towards floating abstract shapes and spheres in various colours.

I like this post by Adam Singer as it builds on my last post about increasing one’s serendipity surface, as well as an article I published over a decade ago entitled curate or be curated. The latter covered some of the same ground as Singer’s post, riffing on the idea of the ‘filter bubble’.

Algorithms are literally everywhere in our lives these days, and coupled with AI we are likely to live templated lives. I’m currently composing this post while listening to music coming out of a speaker driven by the iPod I built a couple of years ago. I’m reading a book that I found in a second-hand bookstore. I hesitate to use the word ‘resistance’ but these are small ways in which I ensure that my world isn’t dictated by someone else’s choices for me.

We’ve never had more freedom, more choices. But in reality, most people are subtly funneled into the same streams, the same pools of ‘socially approved’ culture, cuisine and ideas. Remixes and memes abound, but almost no one shares anything weird, original or different. People wake up, perhaps with ambitions to make unique choices they believe are their own, only to find that the options have been filtered, curated, and ‘tailored to existing tastes’ by algorithms that claim to know them best. This only happens as these algorithms prioritize popularity or even just safe choices over individuality. They don’t lead you down our own path or really care what’s interesting and unknown, they lead us down paths proven profitable, efficient, safe. If you work in a creative sector (and many of us do) you already know how dangerous this is professionally, not to mention spiritually.

Algorithms might make for comfortable consumers, but they cannot produce thoughtful creators, and they are slowly taking your ability to choose from you. You might think you’re choosing, but you never really are. When your ideas, interests, and even daily meals are largely inspired by whatever was already approved, already done, already voted on and liked, you’re only experiencing life as an echo of the masses (or the machines, if personalized based on historic preference). And in this echo chamber, genuine discovery is rare, even radical.

Of course, it’s very easy to live like this, as we live in a society totally biased to pain avoidance and ease (it’s so ingrained much of the medical establishment only treats symptoms, not causes). There’s an unconscious allure in this conformity, a feeling of belonging, of social safety, it’s a warm blanket you aren’t alone in the cosmos. But at what cost? In blending into the mainstream wasteland, you risk losing something deeply human: your impulse to explore, the courage to confront the unfamiliar, the potential to define yourself on your own terms. You don’t get real creativity without courage, and no one has this until they stop looking to the crowd for consensus approval.

Source: Hot Takes

Image: Google Deepmind

Luck = (Passionate) Doing x (Effective) Telling

Diagram illustrating the 'Surface Area of Luck' with a formula and a DOING vs. TELLING graph.

Back in 2016 I coined the term ‘serendipity surface’, which I defined as the inverse of an ‘attack surface’ when building software. In other words, you want to maximise your serendipity surface so that good and unexpected things happen to you. It’s something I talked about on the Artificiality podcast last year, if you want to hear me discuss it further.

Tim Klapdor talks of a ‘serendipity engine’ and I guess Thought Shrapnel could be considered that for me. As part of my reading for this eclectic blog and newsletter, I came across this post on the Model Thinkers website on ‘The Surface Area of Luck’ which has no date, but was indexed by The Internet Archive for the first time in 2021.

There’s some good, actionable advice in it, as well as links for further exploration. It also includes the above image and, as we know, all good ideas require an image :)

Luck, by definition, is about chance, but it’s not totally out of your control. So why not use this model to increase your chance of luck?

The Surface Area of Luck, or your chance of being lucky, is equivalent to the action you take towards your passion, multiplied by the number of people you effectively communicate your passion and activities to.

Put simply: Luck = (Passionate) Doing x (Effective) Telling.

Source: Model Thinkers

We need to do a lot better than outsourcing AI education to grifters with bombastic Twitter threads

Whiteboard with handwritten text: 'ZIF_LLM_MODE' and 'get-request_body().'

This is a fantastic long post from Simon Willison about things we learned about Large Language Models (LLMs) in 2024. The bit that jumped out to me was, unsurprisingly, the AI literacies angle to all this. As Willison points out, using an LLM such as ChatGPT, Claude, or Gemini seems straightforward as it’s a chat-based interface, but even with the voice modes there’s still a need to understand what’s going on under the hood.

People often want ‘training’ on new technologies, but it’s actually quite difficult to provide in this situation. While I think there are underlying literacies involved here, a key way of understanding what’s going on is to experiment. As with every other technology, there’s no substitute for messing about with stuff to see how it works — and where the limits are.
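To make the “under the hood” point concrete, here’s a minimal sketch of what a chat interface is actually doing behind the scenes, using the OpenAI Python client. The model name, prompts, and temperature value are illustrative assumptions on my part, not recommendations. Fiddling with the system prompt and sampling parameters is exactly the kind of messing about that reveals how these tools behave, and where their limits are.

```python
# A minimal look at what a chat UI hides: every turn is an API call that
# sends a (usually invisible) system prompt, the full message history,
# and sampling parameters to the model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

history = [
    # The system prompt shapes every answer, but chat users never see it
    {"role": "system", "content": "You are a terse, sceptical assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative; substitute any chat model
    messages=history,      # the model has no memory beyond what's in this list
    temperature=0.7,       # higher values mean more varied, less predictable output
)

print(response.choices[0].message.content)
```

Change the system prompt, truncate the history, or crank the temperature up, and the answers shift noticeably. That’s the sort of limit-finding experimentation that formal ‘training’ rarely covers.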

I’d recommend also having a look at Willison’s list of ‘artifacts’ he created using Claude in a single week. It’s also worth considering the analogy he makes with building out railway infrastructure in the 19th century, as it kind of works.

A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.

If anything, this problem got worse in 2024.

We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! … depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.

[…]

What are we doing about this? Not much. Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.

Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.

There’s a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can’t see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!

There is so much space for helpful education content here, but we need to do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.

Source: Simon Willison’s Weblog

Image: Bernd Dittrich

It's OK not to have an opinion on everything

Three overlapping phone-shaped pieces of glass in white, black, and translucent gray on a brown background.

My three-week breaks each year, usually in Spring, Summer, and Winter, are rejuvenating. One of the things I most enjoy about them is that I give myself permission to come off social media for a bit. While I’m not a user of TikTok, Instagram, Snapchat, or the like, even Mastodon or Bluesky can be an easy thing to reach for instead of doing something more interesting or useful.

There is no narrative to a social media feed. It’s just one thing after another, ordered either chronologically or algorithmically. Neither is great for trying to build a coherent picture of the world, especially given how emotionally-charged social media posts can be. As a former Nuzzel user, I’ve found Sill useful for avoiding FOMO: it creates a digest of the most popular links that your network is sharing.

What I’ve found myself leaning more into recently is experts making sense of the world as it happens. Two good examples of this are The Rest is Politics and The Athletic, which make sense of the world of politics and football (soccer), respectively. Whether or not I agree with what the podcast host or article writer is saying, engaging with longer-form content provides much better context and helps me figure out what I think about a given situation.

Sometimes, of course, it’s OK not to have an opinion on something. This is not always understood or valued on social networks.

A 2023 study… [showed] how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

[…]

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

[…]

In December, psychologist Carlos Losada offered advice to EL PAÍS on how to avoid falling into the trap of doomscrolling — or, in other words, being consumed by the endless cycle of junk content amplified by algorithms. His recommendations included recognizing the problem, making a conscious effort to disconnect, and engaging in activities that require physical presence, such as meeting friends or playing sports.

Source: EL PAÍS

Image: Kelly Sikkema

The privileging of immediate, emotionally-charged, image-driven communication

Silhouette of a person holding a smartphone with the YouTube logo in front of their face.

Recently, when I met up with someone who was launching a new council website, he casually mentioned that his team had optimised it for a reading age of nine. This, apparently, is the average reading age of the UK adult population. A few years ago, my brother-in-law, who works for a church, showed me the way that they had started providing church updates in video format. YouTube and TikTok are by far the most-used apps by (western) teenagers.

Are we heading towards a post-literate society? This article by Sarah O’Connor quotes Neil Postman but I think it would be more appropriate to cite Walter Ong on secondary orality, a kind of orality that depends on literate culture and the existence of writing. For example, the updates provided by my brother-in-law’s church depend on there being a script, written updates to share with the congregation, and a programme of events to which they can refer.

Technological shifts reshape how we perceive and process information, and — as I mentioned in a recent post — we live in a world which privileges immediate, emotionally-charged, image-driven communication over slower, deliberate reflection. It’s a difficult thing to resist or change because, like fast food, it appeals to something innate.

(In passing, I would point out that the literacy proficiency of 16-24 year olds in England is probably due to the introduction of a phonics-based approach in early years, and to ensuring young people remain in education or training up to the age of 18.)

The implications for politics and the quality of public debate are already evident. These, too, were foreseen. In 2007, writer Caleb Crain wrote an article called Twilight of the Books in the New Yorker magazine about what a possible post-literate culture might look like. In oral cultures, he wrote, cliche and stereotype are valued, conflict and name-calling are prized because they are memorable, and speakers tend not to correct themselves because “it is only in a literate culture that the past’s inconsistencies have to be accounted for”. Does that sound familiar?

[…]

These trends are not unavoidable or irreversible. Finland demonstrates the potential for high-quality education and strong social norms to sustain a highly literate population, even in a world where TikTok exists. England shows the difference that improved schooling can make: there, the literacy proficiency of 16-24 year olds was significantly better than a decade ago.

The question of whether AI could alleviate or exacerbate the problem is more tricky. Systems like ChatGPT can perform well on many reading and writing tasks: they can parse reams of information and reduce it to summaries.

[…]

But, as [David] Autor [an economics professor at MIT] says, in order to make good use of a tool to “level up” your skills, you need a decent foundation to begin with. Absent that, [Andreas] Schleicher [director for education and skills at the OECD] worries that people with poor literacy skills will become “naive consumers of prefabricated content”.

Source: The Financial Times

Image: Rachit Tank

Resisting the Now Show

Person lying on a sofa reading a book in a dimly lit room.

Just before Christmas, I headed up to Barter Books with my family. It’s a great place where you can exchange books you no longer need for credit, which you can then spend on books that other people have brought in. I picked up Russia: A 1,000-Year Chronicle of the Wild East, a big, thick history book by former BBC correspondent Martin Sixsmith.

I finished it this morning; it was a fantastic read. Sixsmith serialised the book on BBC Radio 4 so it’s easier to follow than the usual history book, but still has plenty of Russian names and places for the reader to wrap their head around.

As Audrey Watters notes, reading can often be hard work. It’s tempting to want to read the summary, to optimise your information environment such that you can get on with the important stuff. Where “the important stuff” is, presumably, making money, arguing on the internet, or attempting to turn a lack of empathy for others into a virtue.

Appropriately enough, it’s difficult to adequately summarise Audrey’s argument in this post because it’s nuanced — as the best writing usually is. As she points out, an important part of reading widely is developing empathy. For example, while I still hold a very low opinion of Vladimir Putin, his actions make a lot more sense when put in the context of a 1,000-year narrative arc. It would have been difficult to come to that realisation watching a short YouTube video or social media thread.

Reading can be slow. It can be quite challenging work – and not simply because our attention has been increasingly conditioned, fragmented with distractions and disruptions. And yet from the considered effort of reading comes consideration. So it isn’t simply that we no longer read at length or read deeply; we no longer value contemplation.

[…]

If, as some scholars argue, learning to read does not just build cognition but helps develop empathy – that is, young readers become immersed in stories outside their own experience and thus see the world differently – what are the implications when adults cannot bother to tell stories to their children?

Source: Second Breakfast

Image: Matias North

Hamming questions

Pebbles with a stone featuring a question mark.

In his most recent newsletter, Ben James shared some “important snippets” from things that he read over the holidays. It included a post from 2019 on ‘The Hamming Question’ which I really like, and focuses the mind somewhat. I perhaps need to think about what the Hamming questions are in the areas in which I work.

Mathematician Richard Hamming used to ask scientists in other fields “What are the most important problems in your field?” partly so he could troll them by asking “Why aren’t you working on them?” and partly because getting asked this question is really useful for focusing people’s attention on what matters.

Source: LessWrong

Best of Thought Shrapnel 2024

Gold 3D render of the number '2024'.

Well, here we are at the end of another year! My sole criterion for inclusion in this ‘best of’ list is that the articles I reference made me think. Reinforcing my existing views, or being merely ‘interesting’ wasn’t enough to make it. So, after whittling down from twenty or so, here are my top ten Thought Shrapnel posts of 2024:

  1. De-bogging yourself — Adam Mastroianni’s topic is getting yourself out of a situation where you’re stuck, which he calls “de-bogging yourself”. I love the way he breaks it down into three different kinds of ‘bog phenomena’ and gives names to examples which fall into those categories.
  2. The importance of context — I can highly recommend this conversation between Adam Grant and Trevor Noah. The conversation they have about context towards the start is so important that I wish everyone I know would listen to it.
  3. Begetting Strangers — This is such a great article by Joshua Rothman in The New Yorker. Quoting philosophers, he concisely summarises the difficulty of parenting, examines some of the tensions, and settles on a position with which I’d agree.
  4. Man or bear IRL — This article by Laura Killingbeck is definitely worth reading in its entirety. Not only is it extremely well-written, it gives a real-world example to a hypothetical internet discussion. Killingbeck is a long-term ‘bikepacker’ and therefore the “man or bear” question is one she grapples with on a regular basis.
  5. Philosophy and folklore — I love this piece in Aeon from Abigail Tulenko, who argues that folklore and philosophy share a common purpose in challenging us to think deeply about life’s big questions. Her essay is essentially a critique of academic philosophy’s exclusivity and she calls for a broader, more inclusive approach that embraces… folklore.
  6. ‘Meta-work’ is how we get past all the one-size-fits-none approaches — Alexandra Samuel points out in this newsletter that a lot of the work we do as knowledge workers will increasingly be ‘meta-work’. Introducing a 7-step approach, she first of all outlines why it’s necessary, especially in a ‘neurovarious’ world.
  7. We become what we behold — An insightful and nuanced post from Stephen Downes, who reflects on various experiences, from changing RSS reader through to the way he takes photographs. What he calls ‘AI drift’ is our tendency to replace manual processes with automated ones.
  8. You don’t have to like what other people like, or do what other people do — Warren Ellis responds to a post by Jay Springett on ‘surface flatness’ by reframing the problem as… not one we have to worry about. It’s good advice: so long as you can sustain an income by not having to interact with online walled gardens, why care what other people do?
  9. 3 strategies to counter the unseen costs of boundary work within organisations — This article focuses on research that reveals people who do ‘boundary work’ within organisations, that is to say, individuals who span different silos, are more likely to suffer burnout and exhibit negative social behaviours.
  10. Dark data is a climate concern — I mean, yes, of course I knew that data files are stored on servers and that those servers consume electricity. But this is a good example of reframing. How many emails have I got stored that I will never look at again? How many files stored in the cloud ‘just in case’?

Thanks for reading and sharing Thought Shrapnel this year! I’ll be back in 2025 🎉

I'm increasingly uneasy about being a Spotify Premium subscriber

Union of Musicians and Allied Workers protesting at Spotify's corporate headquarters in San Francisco. A person in black jacket and face covering holds a sign demanding a penny per stream.

In 2009, seeing which way the wind was blowing, I decided to sell my CD collection and use the proceeds to fund streaming my music via Spotify. Fifteen years later, factoring in price rises and an upgrade to the family version, I’ve probably spent about £2,000. So I reckon I’m about even.

I’ve really enjoyed using Spotify. I like the way it’s available everywhere, including on my Google Home devices and in my car. It’s learned my tastes and I’ve discovered all kinds of music through the service.

However, I’ve felt increasingly guilty about the way that Spotify, and other music streaming services, treat artists. We’re now in a situation where artists have to tour to make a living. I’m not sure that’s necessarily healthy.

Also, given Sabrina Carpenter seems to show up on every playlist I ask Spotify to create at the moment (including ‘hardcore gym rap’!), I’m pretty sure they are also making a lot of money from paid placements. My unease is only compounded by the revelations in this article, which details the ways that Spotify have actively tried to reduce the amount of royalties paid to artists.

Perhaps it’s time to move on. Perhaps the answer is to go back to MP3s and use a platform such as Bandcamp? 🤔

According to a source close to the company, Spotify’s own internal research showed that many users were not coming to the platform to listen to specific artists or albums; they just needed something to serve as a soundtrack for their days, like a study playlist or maybe a dinner soundtrack. In the lean-back listening environment that streaming had helped champion, listeners often weren’t even aware of what song or artist they were hearing. As a result, the thinking seemed to be: Why pay full-price royalties if users were only half listening? It was likely from this reasoning that the Perfect Fit Content program was created.

After at least a year of piloting, PFC was presented to Spotify editors in 2017 as one of the company’s new bets to achieve profitability. According to a former employee, just a few months later, a new column appeared on the dashboard editors used to monitor internal playlists. The dashboard was where editors could view various stats: plays, likes, skip rates, saves. And now, right at the top of the page, editors could see how successfully each playlist embraced “music commissioned to fit a certain playlist/mood with improved margins,” as PFC was described internally.

[…]

Some employees felt that those responsible for pushing the PFC strategy did not understand the musical traditions that were being affected by it. These higher-ups were well versed in the business of major-label hitmaking, but not necessarily in the cultures or histories of genres like jazz, classical, ambient, and lo-fi hip-hop—music that tended to do well on playlists for relaxing, sleeping, or focusing. One of my sources told me that the attitude was “if the metrics went up, then let’s just keep replacing more and more, because if the user doesn’t notice, then it’s fine.”

[…]

In a Slack channel dedicated to discussing the ethics of streaming, Spotify’s own employees debated the fairness of the PFC program. “I wonder how much these plays ‘steal’ from actual ’normal’ artists,” one employee asked. And yet as far as the public was concerned, the company had gone to great lengths to keep the initiative under wraps. Perhaps Spotify understood the stakes—that when it removed real classical, jazz, and ambient artists from popular playlists and replaced them with low-budget stock muzak, it was steamrolling real music cultures, actual traditions within which artists were trying to make a living. Or perhaps the company was aware that this project to cheapen music contradicted so many of the ideals upon which its brand had been built. Spotify had long marketed itself as the ultimate platform for discovery—and who was going to get excited about “discovering” a bunch of stock music? Artists had been sold the idea that streaming was the ultimate meritocracy—that the best would rise to the top because users voted by listening. But the PFC program undermined all this. PFC was not the only way in which Spotify deliberately and covertly manipulated programming to favor content that improved its margins, but it was the most immediately galling. Nor was the problem simply a matter of “authenticity” in music. It was a matter of survival for actual artists, of musicians having the ability to earn a living on one of the largest platforms for music. PFC was irrefutable proof that Spotify rigged its system against musicians who knew their worth.

Source: Harper’s Magazine

People aren't unemployed because they're lazy

About a quarter of the British working age population (ages 16-64) does not have a job. There are many reasons for this, but the right-wing view on this is that “benefits are too generous.” I think we can put that to bed with this chart from the University of Bath (2019):

Chart showing UK in last place in terms of generosity around unemployment insurance amongst OECD countries

Reducing benefits that are already some of the lowest in the developed world isn’t likely to get people working again; it just causes misery and has knock-on effects such as an increase in the amount of shoplifting for food and other essential items.

Not only are British unemployment benefits low, but they’re also split in a way which is massively skewed towards housing benefit, as even commentators in the right-wing Sunday Times have to admit:

Chart comparing different countries' percentage 'replacement rate' of unemployment benefits compared to previous salary

Unsurprisingly, state-level economics is fiendishly difficult and nothing at all like running household finances. Here’s a very simple system diagram from an article published earlier this year in the journal Social Policy & Administration, which discusses 24 European countries and macroeconomic variables:

Simple system diagram linking economic and employment policies to job insecurity and job quality.

There are two things that it seems the British political class don’t want to talk about. The first is Brexit, an act of almost unimaginable economic harm that has meant 15% lower trade with the EU, and cost the economy over £140 billion so far. The second is the long-term health impact of the pandemic, with the related effects on the number of people working.

All in all, we need a grown-up conversation about this, based on data. But with Reform UK waiting in the wings, potentially financed by the world’s richest person, the chances are we’ll continue with knee-jerk reactions and shallow thinking for the foreseeable future.

Substack bros

Mug on desk with writing on it which reads: 'Everyone is entitled to my opinion'

Having a moral compass can sometimes make life more difficult. I literally turned down a ridiculously well-paid gig last month because it contravened my ethical code. While that particular example was relatively clear cut, it’s more difficult when it comes to things like platforms which are used for free. At what point does your use of it become out of alignment with your values?

Twitter turning to X is a good example of this, with some people leaving a long time ago (🙋) while others, for some inexplicable reason, are still on there. I’d argue that the next service to be recognised as toxic is probably going to be Substack. I hosted Thought Shrapnel there briefly for a few weeks at the end of last year, but left when they started platforming Nazis. They seem to be at it again (here’s an archive version as that link was down at the time of writing).

While I wanted to give that context, this post is actually about a particular style of writing that is popular on Substack. I discovered this via Robin Sloan’s newsletter, which (thankfully) is written in a style that is the opposite of the advice given by Max Read, a relatively successful Substacker. What Read says about being a “textual YouTuber” is spot-on. I can’t imagine anything more awful than watching video after video, but I will read and read until the proverbial cows come home.

The other thing which I think Read gets right is something I was discussing the other day (IRL I’m afraid, no link!) about how everyone wants Strong Opinions™ these days and to be the “main character.” My own writing these days is almost the opposite of that: slightly philosophical, with provisional opinions and, while introspective, not presenting myself as the hero of the story.

My standard joke about my job is that I am less a “writer” than I am a “textual YouTuber for Gen Xers and Elder Millennials who hate watching videos.” What I mean by this is that while what I do resembles journalistic writing in the specific, the actual job is in most ways closer to that of a YouTuber or a streamer or even a hang-out-type podcaster than it is to that of most types of working journalist. (The one exception being: Weekly op-ed columnist.) What most successful Substacks offer to subscribers is less a series of discrete and self-supporting pieces of writing–or, for that matter, a specific and tightly delimited subject or concept–and more a particular attitude or perspective, a set of passions and interests, and even an ongoing process of “thinking through,” to which subscribers are invited. This means you have to be pretty comfortable having a strong voice, offering relatively strong opinions, and just generally “being the main character” in your writing. And, indeed, all these qualities are more important than any kind of particular technical writing skill: Many of the world’s best (formal) writers are not comfortable with any of those things, while many of the world’s worst writers are extremely comfortable with them.

So, part of your job as a Substacker is “producing words” and part of your job is “cultivating a persona for which people might have some kind of inexplicable affection or even respect.”

Source: Read Max

Image: Steve Johnson

Navigating the clash of identity and ability

Distorted ('glitched') photo of a man

I had a great walk and talk with my good friend Bryan Mathers yesterday. He made the trip up from London to Northumberland, where I live, and we went walking in the Simonside Hills and at Druridge Bay.

One of our many topics of conversation was the various seasons of life, including our kids leaving home, doing meaningful work, and social interaction.

Our generation is perhaps the first where men getting help through therapy is at least semi-normal, where it’s OK to talk about feelings, and where there’s the beginnings of an understanding that perhaps work shouldn’t define a man’s life.

What’s interesting about this article in The Guardian by Adrienne Matei is the framing as a “clash of identity and ability.” I’m already experiencing this on a physical level, with my mind thinking I’m capable of running, swimming, and jumping much further than I’m able. It’s frustrating but, as the article points out, it’s also a nudge that I need to be thinking about my life differently as I approach 44 years old.

In 2023, researchers from the University of Michigan and the University of Alabama at Birmingham published a study exploring how hegemonic masculinity affects men’s approach to health and ageing. “Masculine identity upholds beliefs about masculine enactment,” the authors write, referring to the traits some men feel they must exhibit, including control, responsibility, strength and competitiveness. As men age, they are likely to feel pressure to remain self-reliant and avoid perceived weakness, including seeking medical help or acknowledging emerging challenges.

The study’s authors write that middle-aged men might try to fight ageing with disciplined health and fitness routines. But as they get older and those strategies become less successful, they have to rethink what it means to be “masculine”, or suffer poorer health outcomes. Accepting these identity shifts can be particularly difficult for men, who can exhibit less self-reflection and self-compassion than women.

[…]

[Dr Karen Skerrett, a psychotherapist and researcher] emphasizes there is no tidy, one size fits all way to navigate the clash of identity and ability: “There is just so much diversity that we can’t particularly predict how somebody is going to react to limitations,” she says.

However, in a 2021 research report she and her co-authors proposed six tasks to help people develop a “realistic, accommodating and hopeful” perception of the future: acknowledging and accepting the realities of ageing; normalizing angst about the future; active reminiscence; accommodating physical, cognitive and social changes; searching for new emotionally meaningful goals; and expanding one’s capacity to tolerate ambiguity. These tasks help people to recharacterize ageing as a transition that requires adaptability, growth and foresight, and to resist “premature foreclosure”, or the notion that their life stories have ended.

As we age, managing our own egos becomes a bigger psychological task, says Skerrett. We may not be able to do all the things we once enjoyed, but we can still ask ourselves how we can contribute and support others in meaningful ways. Focusing on internal growth and confronting hard truths with grace and clarity can ease confusion, shame and anger. Instead of clinging to lost identities, we can seek purpose in connection, legacy and gratitude.

Source: The Guardian

Smartphone bans are not the answer

Screenshot for 'Swiped' featuring the presenters and children in uniforms using smartphones

After reading that “every parent should watch” a Channel 4 TV programme called Swiped: The School That Banned Smartphones, I dutifully did so this afternoon. I’m off work, so needed something to do after wrapping presents 😉

I thought it was poor, if I’m honest. As a former teacher and senior leader, and the father of two teenagers (one of whom has a real issue with screen time), I thought it was OK-ish as a conversation starter. But the blunt instrument of a ‘ban’, as is apparently going to happen in Australia, just seems a bit laughable to be honest. How are you supposed to develop digital literacies through non-use?

It’s easy to think that a problem you and other people are experiencing should be solved quickly and easily by someone else. In this case, the government. But this is a systemic issue, and not as easy as the government ‘forcing’ tech platforms to do something about it. What about the chronic underfunding of youth activities and child mental health services, and the slashing of council budgets? Smartphones aren’t the only reason kids sit in their rooms.

In March 2025, the Online Safety Act comes into force. The intention is welcome, but as with the Australian ‘ban’ it’s probably going to be hard to make it work.

The kids in the TV experiment were 12 years old. If, at the end of 2024, you’re letting your not-even-teenager on a smartphone without any safeguards, I’m afraid you’re doing it wrong. If you’re allowing kids of that age to have their phones in their bedroom overnight, you’re doing it wrong. That’s not something you need a ban to fix.

Smartphones, just like any technology, aren’t wholly positive or wholly negative. There are huge benefits and significant drawbacks to them. What’s more powerful in this situation are social norms. If this programme helps to start a conversation, then it’s done its job. I’m just concerned that most people are going to take from it the message that “the government needs to sort this out.”

Source: Channel 4