But I blogged about that in detail a while back, shall I send you a link later?

Writing is a form of extended thinking. Or, at least it is for me. Which is why I think that blogging, either here on Thought Shrapnel, on my personal blog, on the WAO blog, or occasionally over at ambiguiti.es, is so useful.
Giles Thomas points out that a blog is the equivalent of showing your contributions to open source software via a GitHub profile. It’s a good analogy: by working openly and sharing your thinking, you create a link for every significant thought or connection you’ve made between ideas. That means, in my case at least, I can search for my name and the topic, and a bunch of things come up.
Although I haven’t included it in the quotation below, the question prompting Thomas' post is whether it’s worth blogging in the age of AI. It’s an unequivocal YES for me, but then I’m the kind of person who donated my doctoral thesis to the public domain. Nobody “owns” ideas, so by blogging you’re helping contribute to the sum total of human knowledge.
I said that you will be vanishingly unlikely to make a name for yourself with blogging on its own. But that doesn’t mean it’s pointless from a career perspective. You’re building up a portfolio of writing about topics that interest you. Imagine you’re in a job interview and are asked about X. You reply with the details you know, and add “but I blogged about that in detail a while back, shall I send you a link later?” Or if you’re aiming to close a contract with a potential consulting client in a particular area – wouldn’t it be useful to send them a list of links showing your thoughts on aspects of exactly that topic?
Your GitHub profile shows your contributions to open source and lets people know how well you can code. But your blog shows your contributions to knowledge, and shows how well you can think. That’s valuable!
Source: Giles' blog
Image: Markus Winkler
If a waiter has to explain the “concept” behind a menu there is something wrong with the menu

For those unaware, for the past 15 years, Jay Rayner has been the food critic for The Guardian and its sister publication, The Observer. The latter has a ‘food monthly’ supplement which is usually referred to by the acronym OFM.
In Rayner’s last column for OFM he dispenses lots of fantastic advice. Here are my favourite parts, some of which can be used as metaphors and are therefore more widely applicable.
Individual foods are not pharmaceuticals; just eat a balanced diet. There is nothing you can eat or drink that will detoxify you; that’s what your liver and kidneys are for. No healthy person needs to wear a glucose spike monitor; it’s a fad indulged by the worried well. As is the cobblers of being interested in “wellness”, because nobody is interested in “illness”. People have morals but food doesn’t, so don’t describe dishes as “dirty”. And stop it with the whole “clean eating” thing. It’s annoying and vacuous.
[…]
Tipping should be abolished. It’s wrong that restaurant staff should be dependent on the mood of the customer for the size of their wage. They should be paid properly. It works in Japan, France and Australia. It can work in the UK. All new restaurants should employ someone over 50 to check whether the print on the menu is big enough to be read, the lighting bright enough for it to be read by and the seats comfortable enough for a lengthy meal. If a waiter has to explain the “concept” behind a menu there is something wrong with the menu.
Source: The Observer
Image: Damien Santos
I call it the feediverse. It's not a joke.

Dave Winer has launched something called WordLand, which uses RSS as the protocol underpinning a federated social network. This is instead of ActivityPub, which underpins the Fediverse (Mastodon, Pixelfed, etc.), or ATProto, which powers Bluesky.
I immediately ran into an error about API calls, with no suggestion of how to fix it. I’m also not entirely sure how textcasting is different from just, well, blogging? The approach seems a bit post hoc: just as Delta Chat piggybacks on email for chat functionality, this uses blogs for microblogging 🤔
Thanks to John Johnston for bringing this to my attention, and for pointing me towards PootleWriter which looks simple and great for quickly getting things on the web.
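To make the ‘feeds as the social protocol’ idea a bit more concrete, here’s a minimal sketch (nothing to do with how WordLand itself is built, and the feed URLs are only examples) of merging a few RSS feeds into a single reverse-chronological timeline using Python’s feedparser library:

```python
# A minimal "feediverse" sketch: merge a few RSS feeds into one
# reverse-chronological timeline. Requires feedparser (pip install feedparser);
# the feed URLs are placeholders.
import time
import feedparser

FEEDS = [
    "https://thoughtshrapnel.com/feed",
    "https://scripting.com/rss.xml",
]

def timeline(feed_urls, limit=20):
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            # Not every feed provides a parsed date; skip entries that don't
            published = entry.get("published_parsed")
            if published:
                items.append((published, entry.get("title", ""), entry.get("link", "")))
    # Newest first, like any microblogging timeline
    items.sort(key=lambda item: item[0], reverse=True)
    return items[:limit]

if __name__ == "__main__":
    for published, title, link in timeline(FEEDS):
        print(time.strftime("%Y-%m-%d", published), title, link)
```

Replies, boosts, and all the other social niceties are where the hard work lies, of course, but the basic timeline really is just feeds.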
WordLand is designed to be the kind of editor you use in a social app like Bluesky or Mastodon, but with most of the features of textcasting.
WordLand is where we start to boot up a simple social net using only RSS as the protocol connecting users. Rather than wait for ActivityPub and AT Proto to get their acts together. I think we can do it with feeds and start off with immediate interop without the complexity of federation. I call it the feediverse. It’s not a joke, although it may incite a smile and a giggle. And that’s ok.
Source: Scripting News
Image: Juno Jo
The idea stood up to more than casual scrutiny

There is enough going on in the world and in my life at the moment that Thought Shrapnel does not need to deal with. Instead, dear reader, I present to you PJ Holden’s microfiction newsletter, A4, which reminds me of Jay Springett’s Start Select Reset zine.
A4 is a single A4 sheet of paper with seven little nano-fiction stories. The sheet is designed to be printed and folded in such a way that you end up with a lovely little standee with a pulp fiction like cover. (Or you could just read them on screen! but I promise, it’s worth the effort!)
Issue Zero was my test fire to see if the idea stood up to more than casual scrutiny and so far, a surprising number of people have downloaded it (I have the stats!)
Source: PJ Holden’s Blog
Image: from the author’s post
Sometimes life seems really short, and other times it seems impossibly long

Matt Muir links to My Life in Weeks by Gina Trapani, which she adapted from Buster Benson, who in turn got the idea from Tim Urban. You can create your own version at weeksofyour.life.
I like the idea of representing one’s life like this, for several reasons. First, as Urban’s initial post points out:
Sometimes life seems really short, and other times it seems impossibly long. But this chart helps to emphasize that it’s most certainly finite. Those are your weeks and they’re all you’ve got.
Personally, 2025 has been terrible for me so far. But we’re only a few weeks in! The rest of it could be great, who knows?
The boxes can also be a reminder that life is forgiving. No matter what happens each week, you get a new fresh box to work with the next week. It makes me want to skip the New Year’s Resolutions—they never work anyway—and focus on making New Week’s Resolutions every Sunday night. Each blank box is an opportunity to crush the week—a good thing to remember.
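The arithmetic behind these charts is part of their appeal. Here’s a quick sketch (with a placeholder birthday, and assuming the 90-year grid these charts typically draw) showing how few boxes there really are:

```python
# Rough "life in weeks" arithmetic: how many boxes on a 90-year grid
# are already filled in? The birthday below is just a placeholder.
from datetime import date

BIRTHDAY = date(1980, 12, 25)   # substitute your own date of birth
LIFESPAN_YEARS = 90             # the span these charts typically draw

weeks_lived = (date.today() - BIRTHDAY).days // 7
total_weeks = LIFESPAN_YEARS * 52

print(f"Weeks lived so far: {weeks_lived}")
print(f"Weeks on the chart: {total_weeks}")
print(f"Blank boxes remaining: {total_weeks - weeks_lived}")
```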
Source: (various)
Image: Screenshot from Gina Trapani’s site
Nostalgia tells you that your personal history wasn’t just scary or tragic; it helped make you who you are

I’ve been listening to an interesting interview over the past couple of days where Rick Rubin, the legendary music producer, interviews Will Smith. One of the things Smith says is that people have a real “thirst” for nostalgia at the moment, wanting to go back to a time when things were a little bit better.
This article by Olga Khazan in The Atlantic looks at some of the research into this topic, noting that reflecting on past times, even if they were tough, gives people a story, a sense of self, and a sense of solidarity with others. Ultimately, it seems, nostalgia is all about creating a sense of security, which absolutely makes sense.
Nostalgia for terrible things may sound absurd, but many people experience it, for reasons that speak to the way people make meaning of their lives. The central reason for this phenomenon, according to researchers who study nostalgia, is that humans look to our past selves to make sense of our present. Reflecting on the challenging times we’ve endured provides significance and edification to a life that can otherwise seem pointlessly difficult. The past was tough, we think, but we survived it, so we must be tough too.
To be sure, part of the explanation is that people tend to romanticize the past, remembering it more rosily than it actually was. Thanks to something called the “fading affect bias,” negative feelings about an event evaporate much more quickly than positive ones. As a difficult experience recedes in time, we start to miss its happier aspects and gloss over the challenges. And nostalgia is usually prompted by a feeling of dissatisfaction with the present, experts say, making the past seem better by comparison.
[…]
There are few large, robust studies on this topic, but some experimental research has shown that nostalgia provides a feeling of authenticity and a sense of connection between your past and present selves. Because of this, we often get nostalgic for consequential moments in our lives. “People are nostalgic for things that give their lives meaning or help them feel important,” says Andrew Abeyta, a psychology professor at Rutgers University.
[…]
Reminiscing about a difficult experience reminds you that at least you survived, and that your loved ones came to your aid. “The fact that those people did those things for you, or were there for you, reassures you that you have your self-worth,” Batcho said. Research by the psychologist Tim Wildschut and his colleagues found that people who wrote about a nostalgic experience went on to feel higher self-esteem than a control group, and they also felt more secure in their relationships.
Source: The Atlantic
Image: Jon Tyson
All things good should flow into the boulevard

Warren Ellis comments on how fractured and fragmented the world is now in terms of keeping up with what other people are thinking and producing. You can’t trust the algorithms any more, and there are precious few people doing the curatorial work across multiple streams — which is why I appreciate people like Jason Kottke, Tina Roth Eisenberg, Stephen Downes, Matt Muir and, of course, Ellis himself.
I was talking to a publisher friend last night about Patreon, on which he spends a lot of time looking at comics creators. I do not – I didn’t find out until last night that I still have an account on there, and I’m still not sure how that’s possible. Anyway. His thing was: he sees lots of work-in-progress and one page updates and stuff there, but how has it not become a primary delivery system for digital comics? Like, for your membership fee, or an extra dollar or whatever, here’s the first issue of my comic for you to read online or download, and the next one will be on this day next month, and so on. Maybe there’s a limited physical print edition that I’ll offer for sale a month later. And there’s no deal for collection, so maybe you’ll never see this again.
(It occurred to me this morning that any writer could do that with ebooks, too, and then whack them out to Amazon two months later.)
My thing was, does anyone really want to fracture common culture and a shared marketplace any more than it already is? And an hour later, I thought, common culture is a delusion of my age. Common platforms, perhaps, but platforms are contingent and temporary. We are all “creators” now.
Is there even a digital comics store and reading app that a majority of people use now?
(There is a supposed quote by Pericles I heard years ago but never sourced: “All things good should flow into the boulevard.”)
This note from my friend, which I summarise here to preserve it for myself, has gotten me thinking about that entire space. It’s less walled-off from the world than Kickstarter-style crowdfunding, perhaps? (I think Kickstarter and Backerkit et al are great: my concern is that work crowdfunded in that style doesn’t transmit anything into the general culture. Again, probably a fixed idea from my age and background.) I’m always wondering how much great work I might be missing simply because I can’t find it browsing around real or virtual shelves.
Source: Warren Ellis Ltd
Image: Jonas Stolle
The revolution, it turns out, is boringly iterative

Jessica Prendergrast is part of Onion Collective, which undertook an experimental research project last year funded by the Joseph Rowntree Foundation. In this first of a series of four essays, Prendergrast explores new models for transforming systems beyond capitalism, ultimately coming up with a new three-part Ebbing/Evolving/Emerging model, which they name Onion Collective’s Petal Model of Regenerative Transition.
I like the emphasis on language and metaphor in shaping our understanding of change, as well as the potential for innovation at the periphery of society. It’s a hopeful piece, which is what we need in such times. I’ve included the image of the model which gives some examples, to aid with understanding.
At Onion Collective, positioned as we are in the ‘niche’, the further we delved into system transformation or replacement, the more conscious we became of how all these models, without fail, position the radical as outliers — trying to break in — rather than centre-ing them as dominant forces of change, reinforcing their radicalism as oddity. Whether unintentionally or a symptom of internalised capitalism, this seemed to reflect how anything which challenges the status quo is targeted as ‘radical’ or ‘extreme’. Rebecca Solnit explores this phenomenon in her extraordinary book, Hope in the Dark. She explains how those who are marginalised, especially when they try to push through to the centre, are often portrayed as dangerous and unsavoury, defamed and even criminalised. This was as true for civil rights activists, suffragettes, and abolitionists, as it is now with climate activists and post growth academics. They are portrayed as rabble on the fringe, somehow both naive (or swampy or woke depending on your era) and dangerous — a kind of system-level dismissal or sniggering at those suggesting an alternative to the mainstream, and one which feels particularly galling when that mainstream is creaking (burning, flooding, dying) under the weight of the damage it has created.
[…]
To reflect where the radical power for change really lies, as a starting point, we wanted to convey emerging and alternative futures practitioners less like oddities or outliers and more like a new beginning at the heart of the model. We wanted a model that better represented the viewpoint and power of all those under the waterline (whether in the global south or left-behind places) and that could begin to change what was ‘thinkable’. In the metaphorical battle for hearts and minds, we wanted to find a way to position the dominant but damaging paradigm as on the edge — a far more logical placement in the sense of the ‘extremeness’ of a position that is destroying itself and the planet. And, we put the alternative future in the centre of the action rather than the outskirts of possibility.
[…]
Intentionally, the three rounds of petals are layered up on top of one another reflecting that the new lives alongside the old even as it envelops it and, as we learned from Gibson-Graham, that all sorts of non-dominant regime activity is always happening alongside the mainstream. The layering and overlap also recognises that most of those building alternative futures are operating in what we have described previously as a liminal space. They tend to be working in multiple arenas all at once, and balancing the contradictions and complexities of such all the time. They may be doing a fair bit of ebbing, evolving and emerging work at once, by virtue of existing in the contradictory reality of late-stage capitalism.
The revolution, it turns out, is boringly iterative.
[…]
An example version here shows a host of areas added. In this case, these are petal sets that felt especially relevant to our practice at Onion Collective. For example, our work is at the nexus of culture, community and climate work; it takes in explorations of land use and ownership; knowledge production and sharing and alternative demonstrations of economics in place and at a systems level.
[…]
Viewed from the centre of this flower, where it’s all fresh and new and emergent, far away from the browning decaying edges of the old regime’s petals, it becomes easier to imagine the end of capitalism. From here, to overplay and mix up the metaphors, the ice above the waterline could melt away. From here, looking at all the activity and dreams and hopes that are coalescing under the surface, it’s not so difficult to conceive that maybe, we’ve just been looking in the wrong place, blinded by the light of capitalism. After all, the history of the world tells us that dominant paradigms dominate only until they don’t anymore. Eventually they give way, either gently or in turbulence, to something else. Change is inevitable, new petals will unfurl and a different kind of flower will come into bloom.
Source: Onion Collective
Image: taken from the article
It's better than strapping clay crocodiles to people’s heads and praying for the best

As I have written about several times over the years, I am a migraineur. Migraines have been with me all my adult life, and I can’t really remember what life was like without them. Preventative medication makes me drowsy, so beyond triptans to take the edge off an attack, my only relief is rest.
I’ve sent this article in Nature to my immediate family, who seem to confuse certain migraine phases with neurodiversity. The diagram below, in particular, is extremely valuable to anyone who is a migraineur, or who knows one. It’s easy to focus on the visual disturbances and the cranial pain, but there’s much more to it than that.

And, as I’ve discussed before, post-migraine is an extremely fertile time for me, with it being the perfect time for creative pursuits, including coming up with new or innovative ideas. That being said, I’m not entirely sure that the benefits outweigh the drawbacks, which is why I would absolutely explore new drugs which help prevent them in novel ways.
For ages, the perception of migraine has been one of suffering with little to no relief. In ancient Egypt, physicians strapped clay crocodiles to people’s heads and prayed for the best. And as late as the seventeenth century, surgeons bored holes into people’s skulls — some have suggested — to let the migraine out. The twentieth century brought much more effective treatments, but they did not work for a significant fraction of the roughly one billion people who experience migraine worldwide.
Now there is a new sense of progress running through the field, brought about by developments on several fronts. Medical advances in the past few decades — including the approval of gepants and related treatments — have redefined migraine as “a treatable and manageable condition”, says Diana Krause, a neuropharmacologist at the University of California, Irvine.
[…]
Researchers are trying to discover what triggers a migraine-prone brain to flip into a hyperactive state, causing a full-blown attack, or for that matter, what makes a brain prone to the condition. A new and broader approach to research and treatment is needed, says Arne May, a neurologist at the University Medical Center Hamburg–Eppendorf in Germany. To stop migraine completely and not just headache pain, he says, “we need to create new frameworks to understand how the brain activates the whole system of migraine”.
[…]
Researchers found that changes in the brain’s activity start appearing at what’s known as the premonitory phase, which begins hours to days before an attack (see ‘Migraine is cyclical’). The premonitory phase is characterized by a swathe of symptoms, including nausea, food cravings, faintness, fatigue and yawning. That’s often followed by a days-long migraine attack phase, which comes with overwhelming headache pain and other physical and psychological symptoms. After the attack subsides, the postdrome phase has its own associated set of symptoms that include depression, euphoria and fatigue. An interictal phase marks the time between attacks and can involve symptoms as well.
[…]
The limbic system is a group of interconnected brain structures that process sensory information and regulate emotions. Studies that scanned the brains of people with migraine every few days for several weeks showed that hypothalamic connectivity to various parts of the brain increases just before a migraine attack begins, then collapses during the headache phase.
May and others think that the hypothalamus loses control over the limbic system about two days before the attack begins, and it results in changes to conscious experiences that might explain symptoms such as light- and sound-sensitivity, or cognitive impairments. At the same time, the breakdown of hypothalamic control puts the body’s homeostatic balance out of kilter, which explains why symptoms such as fatigue, nausea, yawning and food cravings are common when a migraine is building up, says Krause.
Migraine researchers now talk of a hypothetical ‘migraine threshold’ in which environmental or physiological triggers tip brain activity into a dysregulated state.
Source: Nature
Images: taken from the article
The consumption of generative AI as entertainment seems like another order of psychic submission

I quoted approvingly from the first part of R.H. Lossin’s essay in e-flux on “the relationship between art, artificial intelligence, and emerging forms of hegemony.” In the second part, she puts forward an even more explicitly Marxist critique, suggesting that being human involves both embodiment and emotion — something that AI can only ever imitate.
What I particularly appreciated in this second part was the focus on domination. I could have quoted more below, including one particularly juicy bit about Amazon’s Mechanical Turk, NFTs, and exploitation. You’ll just have to go and read the whole thing.
The liberal impulse to redress historic wrongs by progressively expanding the public sphere is nothing to scoff at. There couldn’t be a better time for marxists to climb down and admit the social value of including someone other than white heterosexuals in public discourse and cultural production. That said, counterhegemonic generative AI is a fantasy even if you define the diversification of therapy as counterhegemonic. In addition to causing disproportionate environmental harm, these elaborate experiments with computer subjectivity are always an exercise in labor exploitation and colonial domination. Materially, they are dependent on the maintenance and expansion of the extractive arrangements established by colonialism and the ongoing concentration of wealth and intellectual resources in the hands of very few men; ideologically they require increasing alienation and the elimination of difference. At best, these experiments offer us a pale reflection of intellectual engagement and collective social life. At worst, they contribute to the destruction of diverse communities and the very conditions for the solidarity required for real resistance.
[…]
The suggestion that a self-replicating taxonomy can produce knowledge and insights generally formulated over the course of a human life seems to defy reason. But this is exactly the claim being made by […] techno-boosterism at large: that a sophisticated enough machine can replicate the most complex human creations. This is, of course, just how machine production has always worked and evolved—each generation witnessing the disappearance of a set of skills and body of knowledge thought to be uniquely human. Art making, writing, and other highly skilled intellectual endeavors are not inherently more human, precious, or worthy of preservation than any skilled manufacture subsumed by the assembly lines of the past century. In the case of generative AI and other recent developments in machine learning, though, we are witnessing both the subsumption of cultural production by machines and the enclosure of vast swathes of subjective experience. Dramatic changes to production have always been accompanied by fundamental changes in the organization of social life beyond the workplace, but this is a qualitatively different phenomenon.
[…]
In the nineteenth century, Karl Marx observed that machinery is not just a means of production but a form of domination. In a mechanized, industrial economy, “labor appears […] as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system […] as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery.” This apparent totality of machinery “confronts [the worker’s] individual, insignificant doings as a mighty organism. In machinery, objectified labor confronts living labor […] as the power which rules it.” […] Theodor Adorno and Max Horkheimer described popular entertainment as a relentless repetition of the rhythms of factory production; a way for the workplace to haunt the leisure time of the off-duty worker. The consumption of generative AI as entertainment seems like another order of psychic submission.
Source: e-flux
Image: Rashaad Newsome Studio (taken from the essay)
That’s how we got in this mess to begin with

Ben Werdmuller points to this article and says that “self-sovereignty should be available to all” because “if only wealthy people can own their own stuff, the movement is meaningless.”
If I’m understanding the arguments that PJ Onori is making (below) and Ben is making (implicitly), then they’re conflating “owning your data” with having things “on a site you control.” I’ve got a microserver under the desk in my office. All of the data on there is “mine” in that I can physically pick it up and take it elsewhere. But… is this what we’re advocating for? It seems unrealistic.
What seems more realistic is having your stuff “on a site you control.” But what does “control” mean in this context? For most people it’s not technical control, because they won’t have the knowledge or skills. Instead, it’s power, which is the thing I think is missing from most arguments around Open Source and Free Software. The missing piece, I would argue, is creating democratic organisations such as cooperatives to give people, collectively, a way of pushing back against the combined power of Big Tech and nation states. Doing it individually is a fool’s errand.
PS The reason you’ll never hear me talk of “self-sovereignty” is mainly because of this book co-written by the father of arch-Tory Jacob Rees-Mogg.
It’s 2025. Read.cv is shutting down. WordPress is on fire. Twitter has completely melted down. Companies are changing their content policies en masse. Social networks are becoming increasingly icy towards anything outside of their walled garden. Services are using the content you post to feed proprietary LLMs. Government websites appear to be purging data. It’s a wild time.
[…]
Now, more than ever, it’s critical to own your data. Really own it. Like, on your hard drive and hosted on your website. Ideally on your own server, but one step at a time.
[…]
Is taking control of your content less convenient? Yeah–of course. That’s how we got in this mess to begin with. It can be a downright pain in the ass. But it’s your pain in the ass. And that’s the point.
Source: PJ Onori’s blog
Image: Alexander Sinn
Loose, liminal time with others used to be baked into life

I think it says something about the state of the world that articles have to be written encouraging us to hang out with others, and indeed how to do so. But here we are.
It’s easy to live an over-scheduled life, especially if you have kids. That makes it particularly difficult to make, or encourage other people to make, unscheduled calls. But that kind of thing is the spice of life. I need more serendipity in mine, for sure.
Nowadays… unstructured moments seem fewer and farther between. Socializing nearly always revolves around a specific activity, often out of the house, and with an implied start and end time. Plans are Tetris-ed into a packed calendar and planned well in advance, leaving little room for spontaneity. Then, when we inevitably feel worn out or like our social battery’s drained, we retreat inward under the pretense of self-care; according to pop culture, true rest can only happen at home, alone, often in a bubble bath or bed.
Of course, solo veg time can be rejuvenating (and necessary), but I think we’ve lost sight of how relaxing with loved ones can also fill our cup, and make us feel less lonely. And after talking with a couple of experts on the topic, I know I’m not the only one. […]
Loose, liminal time with others used to be baked into life. It’s been slowly wedged out thanks to smartphones, go-go-go lifestyles, a fiercely individualistic society, and a host of other cultural shifts.
[…]
Because there’s less pressure to perform or meet expectations, free-flowing togetherness also encourages authenticity, Dr. Stratnyer adds—and the ability to be your true self is no small thing. Social psychology researchers have found that showing up authentically in close relationships improves self-esteem; lowers levels of anxiety, depression, and stress; and is essential to building trusting, stable, satisfying relationships.
[…]
It can be as easy as saying, “Come over and let’s just hang out” or “Drop by whenever! I have no plans and would love to catch up.” When you extend invites like this, “you signal that the focus is on enjoying each other’s company rather than completing a list of activities,” Dr. Hafeez says. “With no rigid agenda, people are free to explore whatever feels right. The beauty of this kind of get-together is that things can unfold naturally, creating unforgettable memories.”
Source: SELF
Image: Javier Allegue Barros
Putting the news in its damn place

In his most recent newsletter, Warren Ellis mentioned something that I’ve been feeling, but feeling somewhat guilty about. Namely: it’s difficult to carve out space to live a flourishing life when you spend most of your days avoiding bad news.
Yes, I’m sharing some of it here — or at least, commentaries on some of it. How could I not? My feeds feature little else but people throwing their hands in the air about democracy and/or AI. But I think this is good advice from Ellis.
Thing is, not only is the news all the bloody same, all about the same country and the same handful of main characters, and every news service reports all the incremental updates to the same bloody stories every sixty seconds: but that constant battering tide of zone-flooding shit compresses time and shrinks space to think. And I want this year to feel like a year and not three bloody weeks.
It’s not about “taking a break from the news,” which various newsletters have suggested is now A Thing. And, you know, if you live in certain places right now, taking a break from the news might feel a luxury at best and a wilful ignoring of alarm bells at worst. On a single evening last week I talked to three people setting plans to bug out of the US.
It’s more about putting the news in its damn place and creating more space to live in.
Source: Orbital Operations
Image: Utsav Srestha
People think that fascism arrives in fancy dress

I said last week there are more historical authoritarian regimes to compare what’s happening around the world to than just Nazi Germany. I’m sick of my news feeds being full of people freaking out about what’s happening, as if this hasn’t been going on for years now.
I’m a reader of The Guardian and subscribe to the weekly print edition. But I’m finding the pointing-and-staring a little grating, which is why I appreciate this from Zoe Williams. I appreciate Carole Cadwalladr’s candid articles even more — although she does tend to post them on the Nazi-platforming Substack.
Like many people, I often feel as if I grew up with the Michael Rosen poem that starts: “I sometimes fear that / people think that fascism arrives in fancy dress.” In fact, it was written in 2014, but it was such a neat distillation that it instantly joined the canon of words that had always existed, right up there with clouds being lonely and parents fucking you up. Obviously, fascism arrives as your friend. How else would it arrive?
[…]
Between 1933 and 1939, the journalist Charlotte Beradt compiled The Third Reich of Dreams, in which she transcribed the nightmares of citizens from housemaids to small-business owners, then grouped them thematically, analysed them, and smuggled them to the US. They were published in 1968. A surprising, poignant number of them were about people dreaming that it was forbidden to dream, then freaking out in the dream because they knew they were illegitimately dreaming. There were amazingly prescient themes, of hyper-surveillance by the state before it had even begun, of barbarous violence, again, before it had started. But the paralysis theme was possibly the most recurrent and striking – people’s limbs frozen in Sieg Heils, voices frozen into silence, motifs of inaction from the most trivial to the most all-encompassing.
Source: The Guardian
Image: Mika Baumeister
⭐ Support Thought Shrapnel!
Join the Holographic Sticker Crew for a £5/month donation and keep Thought Shrapnel going. My Ko-fi page also links to ebooks and options for Critical Friend consultations 🤘
Updates (23rd Feb):
- Thanks to Adam Procter for becoming the first member of the crew!
- I’m exploring new horizons at the moment, so please let me know of any opportunities 🙂
- I made a thing called Album Shelf which you may like, and which I discuss in Weeknote 08/2025
Shaped into SNARF to spread

I should imagine many people who read Thought Shrapnel also read Stephen Downes' OLDaily, so may already have seen this by Jonah Peretti, CEO of BuzzFeed. What interested me was the acronym SNARF, which is as good a shorthand as any for differentiating between centralised, for-profit, highly algorithmic social networks and their opposite.
The quotation below comes from The Anti-SNARF Manifesto, which is linked from the sign-up page for a new social network which features an illustration of an island. That’s interesting symbolism; I wonder if it will use a protocol such as ActivityPub (which underpins Fediverse apps such as Mastodon) or ATProto (which is used by Bluesky)? It would be a bit of a ballsy move to start completely from scratch.
Given the number of boosts and favourites I’ve had on my Fediverse post asking people to add a content warning for things relating to US politics, I’d think that moderation is a potential differentiator. It would seem that people don’t simply want a completely straight reverse-chronological feed, but nor do they want to feel manipulated by an opaque algorithm. I’ll be following this with interest and I have, of course, signed up to be notified when it launches.
SNARF stands for Stakes/Novelty/Anger/Retention/Fear. SNARF is the kind of content that evolves when a platform asks an AI to maximize usage. Content creators need to please the AI algorithms or they become irrelevant. Millions of creators make SNARF content to stay in the feed and earn a living.
We are all familiar with this kind of content, especially those of us who are chronically online. Content creators exaggerate stakes to make their content urgent and existential. They manufacture novelty and spin their content as unprecedented and unique. They manipulate anger to drive engagement via outrage. They hack retention by withholding information and promising a payoff at the end of a video. And they provoke fear to make people focus with urgency on their content. Every piece of content faces ruthless Darwinian competition so only SNARF has the ability to be successful, even if it is inaccurate, hateful, fake, ethically dubious, and intellectually suspect.
This dynamic is causing many different types of content to evolve into versions of the same thing. Once you understand this you can see how much of our society, culture, and politics are downstream from big tech’s global SNARF machines. The political ideas that break through, from both Democrats and Republicans, need to be shaped into SNARF to spread. Through this lens, MAGA and “woke” are the same thing! They both are versions of political ideas that spread through raw negative emotion, outrage, and novelty. The news stories and journalism that break through aren’t the most important stories, but rather the stories that can be shaped into SNARF. This is why it seems like every election, every new technology, every global conflict has the potential to end our way of life, destroy democracy, or set off a global apocalypse! It is not a coincidence that no matter what the message is, it always takes the same form, namely memetically optimized media that maximizes stakes and novelty, provokes anger, drives retention, and instills fear. The result is an endless stream of addictive content that leaves everyone feeling depressed, scared, and dissatisfied.
[…]
But there is some hope, despite the growing revenue and usage of the big social media platforms. We are beginning to see the first cracks that suggest there might be an opportunity to fight back. A recent study by the National Bureau of Economic Research found that the majority of respondents would prefer to live in a world where TikTok and Instagram did not exist! There was generally a feeling of being compelled to use these projects because of FOMO, social pressure, and addiction. A large portion of users said they would pay money for TikTok and Instagram to not exist, suggesting these products have negative utility for many people. This challenges traditional economics which posits that consumers choosing a product means it provides positive utility. Instead, social media companies are using AI to manipulate consumer behavior for their own ends, not the benefit of the consumer. This aligns with what these researchers suspect is happening, namely that “companies introduce features that exacerbate non-user utility and diminish consumer welfare, rather than enhance it, increasing people’s need for a product without increasing the utility it delivers to them.”
Source: The Anti-SNARF Manifesto
Image: cropped from the background image on the above website
We’re hard-wired for addiction

I think what Scott Galloway is saying here is that unfettered capitalism, which allows companies to addict people to products detrimental to their health, is a bad thing? That seems pretty obvious.
What I think Americans are missing, to be honest, is a way of saying that they want ‘socialism’ without it being equated with ‘communism’. I see lots of tortuous statements about ‘post-capitalism’ and other terms. But the rest of the world understands balancing capitalism with government intervention for the health and flourishing of citizens as ‘socialism’, not ‘communism’.
The world’s most valuable resource isn’t data, compute, oil, or rare earth metals; it’s dopa, i.e., the fuel of the addiction economy, which runs the most valuable companies in history. Addiction has always been a component of capitalism — nothing rivals the power of craving to manufacture demand and support irrational margins.
[…]
Historically, the most valuable companies turn dopa into consumption. Over the last 100 years, 15 of the top 30 companies by cumulative compound return have been pillars of the addiction economy. The compounders cluster in tobacco (Altria +265,528,900%), the food industrial complex (Coca-Cola +12,372,265%), pharma (Wyeth +5,702,341%), and retailers (Kroger +2,834,362%) that sell both substances and treatments. To predict which companies will be the top compounders over the next century, consider this: Eight of the world’s 10 most valuable businesses turn dopa into attention, or make picks and shovels for these dopa merchants.
[…]
Now that everyone has a cellphone, we spend 70% less time with our friends than we did a decade ago. We’re addicted to our phones, and even when we’re not seeking our fix, our phones seek us out — notifying us on average 46 times per day for adults and 237 times per day for teens. In college, I spent too much time smoking pot and watching Planet of the Apes, but when I decided to venture on campus, my bong and Cornelius didn’t send me notifications.
[…]
We’re hard-wired for addiction. We’re also wired for conflict, as competing for scarce resources has shaped our neurological system to swiftly detect, assess, and respond to threats — often before we’re aware of them. As technology advances, our wiring makes us more powerful and more vulnerable. We produce dopa monsters at internet speed. We can wage war at a velocity and scale that risks extinction in the blink of an eye.
Source: No Mercy / No Malice
Image: Mishal Ibrahim
What burns people out is not being allowed to exercise their integrity instincts

In this wide-ranging article, Venkatesh Rao discusses a number of things, including the unfolding Musk/DOGE coup. I’m ignoring that for the moment, as anything I write about it will be out of date by next week. The two parts I found most interesting from Rao’s piece were: (i) his comparison of people who tolerate inefficiency and interruption versus those who don’t, and (ii) his assertion that burnout comes from not being able to exercise integrity.
The two are related, I think. When you have to do things a particular way, subsuming your identity and values to someone else’s, it denies a core part of who you are as a person. While it’s relatively normal to self-censor to present oneself as a particular type of person, doing so in a way which is in conflict with your values is essentially a Jekyll/Hyde problem. And we all know what happened at the end of that story.
A big tell of whether you are an “open-door” type person is whether you tolerate a high degree of apparent inefficiency, interruption, and refractory periods of reflection that look like idleness. All are signs that your mental doors are open and are taking in new input. Especially dissenting input that can easily be interpreted as disloyal or traitorous by a loyalty-obsessed paranoid mind. Input that forces you to stop acting and switch to reflecting for a while.
Conversely, if you’re all about “efficiency” and a “maniacal sense of urgency” and a desperate belief that your “first principles” are all you need, you will eventually pay the price. A playbook that worked great once will stop working. Even the most powerful set of first principles that might be driving you will leave you with an exhausted paradigm and nowhere to go.
[…]
What truly burns people out is not that their boss is too demanding, hot-tempered, or even sadistic. What burns people out is not being allowed to exercise their integrity instincts. Being asked to turn off or delegate their moral compass to others. Plenty of people have the courage, the desperation, the ambition, or all three, to deal with demanding and scary bosses. But not many people can indefinitely suspend integrity instincts without being traumatized and burning out.
Source: Contraptions
Image: Danylo Suprun
All intelligence is collective intelligence

The concept of ‘intelligence’ is a slippery one. It’s a human construct and, as such, privileges not only our own species, but those humans who at any given time have power and control over what counts as ‘intelligent’. There have been moves, especially recently, to ascribe intelligence to species that we don’t commonly eat, such as dolphins and crows.
But what about animals humans do eat? As a vegetarian I regularly feel guilty for consuming eggs and dairy; what kind of suffering am I causing sentient animals? But, I console myself, at least I don’t eat them any more.
A foolish consistency may be the hobgoblin of little minds, according to Emerson, but it is useful to have a consistent and philosophically sound position on things. This article by Sally Adee is a pretty long read, but worthwhile. It not only covers animal intelligence, but also that of plants, fungi, and (of course!) machines.
A small but growing number of philosophers, physicists and developmental biologists say that, instead of continually admitting new creatures into the category of intelligence, the new findings are evidence that there is something catastrophically wrong with the way we understand intelligence itself. And they believe that if we can bring ourselves to dramatically reconsider what we think we know about it, we will end up with a much better concept of how to restabilize the balance between human and nonhuman life amid an ecological omnicrisis that threatens to permanently alter the trajectory of every living thing on Earth.
No plant, fungus or bacterium can sit an IQ test. But to be honest, neither could you if the test was administered in a culture radically different from your own. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux,” the social scientist Richard Nisbett once told me. IQ tests are culturally bound, meaning that they test the ability to represent the particular world an individual inhabits and manipulate that representation in a way that maximizes the ability to thrive in it.
What would we find if we could design a test appropriate for the culture plants inhabit?
[…]
Electrophysiological readings, for example, have for a long time revealed striking similarities in the activity of humans, plants, fungi, bacteria and other organisms. It’s uncontroversially accepted that electrical signals coordinate the physical and mental activities of brain cells. We have operationalized this knowledge. When we want to peer into the mental states produced by a human brain’s 86 billion or so neurons, we eavesdrop on their cell-to-cell electrical communication (called action potentials). We have been measuring electrical activity in the brain since the electroencephalogram was invented in 1924. Analyzing the synchronized waves produced by billions of electrical firings has allowed us to deduce whether a person is asleep, dreaming or, when awake, concentrating or unfocused.
[…]
“The reality is that all intelligence is collective intelligence,” [developmental biologist Michael] Levin told me. “It’s just a matter of scale.” Human intelligence, animal swarms, bacterial biofilms — even the cells that work in concert to compose the human anatomy. “Each of us consists of a huge number of cells working together to generate a coherent cognitive being with goals, preferences and memories that belong to the whole and not to its parts.”
[…]
“We are not even individuals at all,” wrote the technologist and artist James Bridle in “Ways of Being,” a 2022 study of multiple intelligences. “Rather we are walking assemblages, riotous communities, multi-species multi-bodied beings inside and outside of our very cells.”
Bridle was referring to (among other things) the literal pounds of every human body that consists not of human cells but bacteria and fungi and other organisms, all of which play a profound role in shaping our so-called “human” intelligence.
[…]
If we can let go of the idea that the only locus of intelligence is the human brain, then we can start to conceive of ways intelligence manifests elsewhere in biology. Call it biological cognition or biological intelligence — it seems to manifest in the relationships between individuals more than in individuals themselves. […]
“The boundaries between humans and nature and humans and machines are at the very least in suspense,” wrote the philosopher Tobias Rees. Moving away from human exceptionalism, he argued, would help “to transform politics from something that is only concerned with human affairs to something that is truly planetary,” ushering in a shift from the age of the human to ‘the age of planetary reason.’
Source: NOEMA
Image: Landon Parenteau
From cheapfakes to deepfakes

I was listening on the radio to someone who was talking about AI. At first, I was skeptical of what they were saying, as it seemed to be the classic hand-waving of “machines will never be able to replace humans” without being specific. However, they did provide more specificity, mentioning how quickly we can tell, for example, if someone’s tone of voice is “I’m not really OK but I’m pretending to be.”
We spot when something isn’t right. Which is why it’s interesting to me that, while I got 10/10 on my first go on a deepfake quiz, that’s very much an outlier. I’m obviously not saying that I have some magical ability to spot what others can’t, but spending time with technologies and understanding how they work and what they look like is part of AI Literacies.
All of this reminds me of the 30,000 World War 2 volunteers who helped with the Battle of Britain by learning to spot the difference between, for example, a Messerschmitt Bf 109 and a Spitfire by listening to sound recordings, looking at silhouettes, etc.
Deepfakes have become alarmingly difficult to detect. So difficult, that only 0.1% of people today can identify them.
That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.
[…]
Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.
The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.
Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:
- The rapid evolution of AI and its ability to produce realistic deepfakes
- The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built attack technologies
- The vulnerability of traditional ID verification practices
Bud also pointed to the lower barriers of entry to deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that create convincing synthetic media within minutes.
Source: The Next Web
Image: Markus Spiske