It's better than strapping clay crocodiles to people’s heads and praying for the best

Conceptual illustration by Aleksandra Czudżak showing a person suffering from migraine.

As I have written about several times over the years, I am a migraineur. Migraines have been with me all my adult life, and I can’t really remember what life was like without them. Preventative medication makes me drowsy, so apart from some triptans for relief during an attack, all I can do is rest.

I’ve sent this article in Nature to my immediate family, who seem to confuse certain migraine phases with neurodiversity. The diagram below, in particular, is extremely valuable to anyone who is a migraineur, or who knows one. It’s easy to focus on the visual disturbances and the cranial pain, but there’s much more to it than that.

Illustration showing the cyclical nature of migraines

And, as I’ve discussed before, post-migraine is an extremely fertile time for me, with it being the perfect time for creative pursuits, including coming up with new or innovative ideas. That being said, I’m not entirely sure that the benefits outweigh the drawbacks, which is why I would absolutely explore new drugs which help prevent them in novel ways.

For ages, the perception of migraine has been one of suffering with little to no relief. In ancient Egypt, physicians strapped clay crocodiles to people’s heads and prayed for the best. And as late as the seventeenth century, surgeons bored holes into people’s skulls — some have suggested — to let the migraine out. The twentieth century brought much more effective treatments, but they did not work for a significant fraction of the roughly one billion people who experience migraine worldwide.

Now there is a new sense of progress running through the field, brought about by developments on several fronts. Medical advances in the past few decades — including the approval of gepants and related treatments — have redefined migraine as “a treatable and manageable condition”, says Diana Krause, a neuropharmacologist at the University of California, Irvine.

[…]

Researchers are trying to discover what triggers a migraine-prone brain to flip into a hyperactive state, causing a full-blown attack, or for that matter, what makes a brain prone to the condition. A new and broader approach to research and treatment is needed, says Arne May, a neurologist at the University Medical Center Hamburg–Eppendorf in Germany. To stop migraine completely and not just headache pain, he says, “we need to create new frameworks to understand how the brain activates the whole system of migraine”.

[…]

Researchers found that changes in the brain’s activity start appearing at what’s known as the premonitory phase, which begins hours to days before an attack (see ‘Migraine is cyclical’). The premonitory phase is characterized by a swathe of symptoms, including nausea, food cravings, faintness, fatigue and yawning. That’s often followed by a days-long migraine attack phase, which comes with overwhelming headache pain and other physical and psychological symptoms. After the attack subsides, the postdrome phase has its own associated set of symptoms that include depression, euphoria and fatigue. An interictal phase marks the time between attacks and can involve symptoms as well.

[…]

The limbic system is a group of interconnected brain structures that process sensory information and regulate emotions. Studies that scanned the brains of people with migraine every few days for several weeks showed that hypothalamic connectivity to various parts of the brain increases just before a migraine attack begins, then collapses during the headache phase.

May and others think that the hypothalamus loses control over the limbic system about two days before the attack begins, and it results in changes to conscious experiences that might explain symptoms such as light- and sound-sensitivity, or cognitive impairments. At the same time, the breakdown of hypothalamic control puts the body’s homeostatic balance out of kilter, which explains why symptoms such as fatigue, nausea, yawning and food cravings are common when a migraine is building up, says Krause.

Migraine researchers now talk of a hypothetical ‘migraine threshold’ in which environmental or physiological triggers tip brain activity into a dysregulated state.

Source: Nature

Images: taken from the article

The consumption of generative AI as entertainment seems like another order of psychic submission

'Being' is a digital griot who functions as a performance artist, poet, educator and healer

I quoted with approval from the first part of R.H. Lossin’s essay in e-flux on “the relationship between art, artificial intelligence, and emerging forms of hegemony.” In the second part, she puts forward an even more explicitly marxist critique, suggesting that being human involves both embodiment and emotion — something that AI can only ever imitate.

What I particularly appreciated in this second part was the focus on domination. I could have quoted more below, including one particularly juicy bit about Amazon’s Mechanical Turk, NFTs, and exploitation. You’ll just have to go and read the whole thing.

The liberal impulse to redress historic wrongs by progressively expanding the public sphere is nothing to scoff at. There couldn’t be a better time for marxists to climb down and admit the social value of including someone other than white heterosexuals in public discourse and cultural production. That said, counterhegemonic generative AI is a fantasy even if you define the diversification of therapy as counterhegemonic. In addition to causing disproportionate environmental harm, these elaborate experiments with computer subjectivity are always an exercise in labor exploitation and colonial domination. Materially, they are dependent on the maintenance and expansion of the extractive arrangements established by colonialism and the ongoing concentration of wealth and intellectual resources in the hands of very few men; ideologically they require increasing alienation and the elimination of difference. At best, these experiments offer us a pale reflection of intellectual engagement and collective social life. At worst, they contribute to the destruction of diverse communities and the very conditions for the solidarity required for real resistance.

[…]

The suggestion that a self-replicating taxonomy can produce knowledge and insights generally formulated over the course of a human life seems to defy reason. But this is exactly the claim being made by […] techno-boosterism at large: that a sophisticated enough machine can replicate the most complex human creations. This is, of course, just how machine production has always worked and evolved—each generation witnessing the disappearance of a set of skills and body of knowledge thought to be uniquely human. Art making, writing, and other highly skilled intellectual endeavors are not inherently more human, precious, or worthy of preservation than any skilled manufacture subsumed by the assembly lines of the past century. In the case of generative AI and other recent developments in machine learning, though, we are witnessing both the subsumption of cultural production by machines and the enclosure of vast swathes of subjective experience. Dramatic changes to production have always been accompanied by fundamental changes in the organization of social life beyond the workplace, but this is a qualitatively different phenomenon.

[…]

In the nineteenth century, Karl Marx observed that machinery is not just a means of production but a form of domination. In a mechanized, industrial economy, “labor appears […] as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system […] as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery.” This apparent totality of machinery “confronts [the worker’s] individual, insignificant doings as a mighty organism. In machinery, objectified labor confronts living labor […] as the power which rules it.” […] Theodor Adorno and Max Horkheimer described popular entertainment as a relentless repetition of the rhythms of factory production; a way for the workplace to haunt the leisure time of the off-duty worker. The consumption of generative AI as entertainment seems like another order of psychic submission.

Source: e-flux

Image: Rashaad Newsome Studio (taken from the essay)

That’s how we got in this mess to begin with

Red heart made out of binary digits

Ben Werdmuller points to this article and says that “self-sovereignty should be available to all” because “if only wealthy people can own their own stuff, the movement is meaningless.”

If I’m understanding the arguments that PJ Onori is making (below) and Ben is making (implicitly), then they’re conflating “owning your data” with having things “on a site you control.” I’ve got a microserver under the desk in my office. All of the data on there is “mine” in that I can physically pick it up and take it elsewhere. But… is this what we’re advocating for? It seems unrealistic.

What seems more realistic is having your stuff “on a site you control.” But what does “control” mean in this context? For most people it’s not technical control, because they won’t have the knowledge or skills. Instead, it’s power, which is the thing I think is missing from most arguments around Open Source and Free Software. The missing piece, I would argue, is creating democratic organisations such as cooperatives to give people, collectively, a way of pushing back against the combined power of Big Tech and nation states. Doing it individually is a fool’s errand.

PS The reason you’ll never hear me talk of “self-sovereignty” is mainly because of this book co-written by the father of arch-Tory Jacob Rees-Mogg.

It’s 2025. Read.cv is shutting down. WordPress is on fire. Twitter has completely melted down. Companies are changing their content policies en masse. Social networks are becoming increasingly icy towards anything outside of their walled garden. Services are using the content you post to feed proprietary LLMs. Government websites appear to be purging data. It’s a wild time.

[…]

Now, more than ever, it’s critical to own your data. Really own it. Like, on your hard drive and hosted on your website. Ideally on your own server, but one step at a time.

[…]

Is taking control of your content less convenient? Yeah–of course. That’s how we got in this mess to begin with. It can be a downright pain in the ass. But it’s your pain in the ass. And that’s the point.

Source: PJ Onori’s blog

Image: Alexander Sinn

Loose, liminal time with others used to be baked into life

silhouette photo of four people dancing on sands near shoreline

I think it says something about the state of the world that articles have to be written encouraging us to hang out with others, and indeed how to do so. But here we are.

It’s easy to live an over-scheduled life, especially if you have kids. That makes it particularly difficult to make, or encourage other people to make, unscheduled calls. But that kind of thing is the spice of life. I need more serendipity in mine, for sure.

Nowadays… unstructured moments seem fewer and farther between. Socializing nearly always revolves around a specific activity, often out of the house, and with an implied start and end time. Plans are Tetris-ed into a packed calendar and planned well in advance, leaving little room for spontaneity. Then, when we inevitably feel worn out or like our social battery’s drained, we retreat inward under the pretense of self-care; according to pop culture, true rest can only happen at home, alone, often in a bubble bath or bed.

Of course, solo veg time can be rejuvenating (and necessary), but I think we’ve lost sight of how relaxing with loved ones can also fill our cup, and make us feel less lonely. And after talking with a couple of experts on the topic, I know I’m not the only one. […]

Loose, liminal time with others used to be baked into life. It’s been slowly wedged out thanks to smartphones, go-go-go lifestyles, a fiercely individualistic society, and a host of other cultural shifts

[…]

Because there’s less pressure to perform or meet expectations, free-flowing togetherness also encourages authenticity, Dr. Stratnyer adds—and the ability to be your true self is no small thing. Social psychology researchers have found that showing up authentically in close relationships improves self-esteem; lowers levels of anxiety, depression, and stress; and is essential to building trusting, stable, satisfying relationships.

[…]

It can be as easy as saying, “Come over and let’s just hang out” or “Drop by whenever! I have no plans and would love to catch up.” When you extend invites like this, “you signal that the focus is on enjoying each other’s company rather than completing a list of activities,” Dr. Hafeez says. “With no rigid agenda, people are free to explore whatever feels right. The beauty of this kind of get-together is that things can unfold naturally, creating unforgettable memories.”

Source: SELF

Image: Javier Allegue Barros

Putting the news in its damn place

A stack of newspapers

In his most recent newsletter, Warren Ellis mentioned something that I’ve been feeling, but feeling somewhat guilty about. Namely: it’s difficult to carve out space to live a flourishing life when you spend most of your days avoiding bad news.

Yes, I’m sharing some of it here — or at least, commentaries on some of it. How could I not? My feeds feature little else but people throwing their hands in the air about democracy and/or AI. But I think this is good advice from Ellis.

Thing is, not only is the news all the bloody same, all about the same country and the same handful of main characters, and every news service reports all the incremental updates to the same bloody stories every sixty seconds: but that constant battering tide of zone-flooding shit compresses time and shrinks space to think. And I want this year to feel like a year and not three bloody weeks.

It’s not about “taking a break from the news,” which various newsletters have suggested is now A Thing. And, you know, if you live in certain places right now, taking a break from the news might feel like a luxury at best and a wilful ignoring of alarm bells at worst. On a single evening last week I talked to three people setting plans to bug out of the US.

It’s more about putting the news in its damn place and creating more space to live in.

Source: Orbital Operations

Image: Utsav Srestha

People think that fascism arrives in fancy dress

A group of people standing in front of a building, one is holding a sign that reads 'Burn Fascism not Fossil Fuels'

I said last week that there are more historical authoritarian regimes than just Nazi Germany to compare what’s happening around the world to. I’m sick of my news feeds being full of people freaking out about what’s happening, as if this hasn’t been going on for years now.

I’m a reader of The Guardian and subscribe to the weekly print edition. But I’m finding the pointing-and-staring a little grating, which is why I appreciate this from Zoe Williams. I appreciate Carole Cadwalladr’s candid articles even more — although she does tend to post them on the Nazi-platforming Substack.

Like many people, I often feel as if I grew up with the Michael Rosen poem that starts: “I sometimes fear that / people think that fascism arrives in fancy dress.” In fact, it was written in 2014, but it was such a neat distillation that it instantly joined the canon of words that had always existed, right up there with clouds being lonely and parents fucking you up. Obviously, fascism arrives as your friend. How else would it arrive?

[…]

Between 1933 and 1939, the journalist Charlotte Beradt compiled The Third Reich of Dreams, in which she transcribed the nightmares of citizens from housemaids to small-business owners, then grouped them thematically, analysed them, and smuggled them to the US. They were published in 1968. A surprising, poignant number of them were about people dreaming that it was forbidden to dream, then freaking out in the dream because they knew they were illegitimately dreaming. There were amazingly prescient themes, of hyper-surveillance by the state before it had even begun, of barbarous violence, again, before it had started. But the paralysis theme was possibly the most recurrent and striking – people’s limbs frozen in Sieg Heils, voices frozen into silence, motifs of inaction from the most trivial to the most all-encompassing.

Source: The Guardian

Image: Mika Baumeister

⭐ Support Thought Shrapnel!

Join the Holographic Sticker Crew for a £5/month donation and keep Thought Shrapnel going. My Ko-fi page also links to ebooks and options for Critical Friend consultations 🤘

Shaped into SNARF to spread

Illustration of an island in the middle of the sea

I should imagine many people who read Thought Shrapnel also read Stephen Downes' OLDaily, so may already have seen this by Jonah Peretti, CEO of BuzzFeed. What interested me was the acronym SNARF, which is as good as any for being a short way of differentiating between centralised, for-profit, highly algorithmic social networks, and their opposite.

The quotation below comes from The Anti-SNARF Manifesto, which is linked from the sign-up page for a new social network which features an illustration of an island. That’s interesting symbolism; I wonder if it will use a protocol such as ActivityPub (which underpins Fediverse apps such as Mastodon) or ATProto (which is used by Bluesky)? It would be a bit of a ballsy move to start completely from scratch.

Given the number of boosts and favourites I’ve had on my Fediverse post asking people to add a content warning to things relating to US politics, I’d think that moderation is a potential differentiator. It seems people don’t want a completely straight reverse-chronological feed, but nor do they want to feel manipulated by an opaque algorithm. I’ll be following this with interest and I have, of course, signed up to be notified when it launches.

SNARF stands for Stakes/Novelty/Anger/Retention/Fear. SNARF is the kind of content that evolves when a platform asks an AI to maximize usage. Content creators need to please the AI algorithms or they become irrelevant. Millions of creators make SNARF content to stay in the feed and earn a living.

We are all familiar with this kind of content, especially those of us who are chronically online. Content creators exaggerate stakes to make their content urgent and existential. They manufacture novelty and spin their content as unprecedented and unique. They manipulate anger to drive engagement via outrage. They hack retention by withholding information and promising a payoff at the end of a video. And they provoke fear to make people focus with urgency on their content. Every piece of content faces ruthless Darwinian competition so only SNARF has the ability to be successful, even if it is inaccurate, hateful, fake, ethically dubious, and intellectually suspect.

This dynamic is causing many different types of content to evolve into versions of the same thing. Once you understand this you can see how much of our society, culture, and politics are downstream from big tech’s global SNARF machines. The political ideas that break through, from both Democrats and Republicans, need to be shaped into SNARF to spread. Through this lens, MAGA and “woke” are the same thing! They both are versions of political ideas that spread through raw negative emotion, outrage, and novelty. The news stories and journalism that break through aren’t the most important stories, but rather the stories that can be shaped into SNARF. This is why it seems like every election, every new technology, every global conflict has the potential to end our way of life, destroy democracy, or set off a global apocalypse! It is not a coincidence that no matter what the message is, it always takes the same form, namely memetically optimized media that maximizes stakes and novelty, provokes anger, drives retention, and instills fear. The result is an endless stream of addictive content that leaves everyone feeling depressed, scared, and dissatisfied.

[…]

But there is some hope, despite the growing revenue and usage of the big social media platforms. We are beginning to see the first cracks that suggest there might be an opportunity to fight back. A recent study by the National Bureau of Economic Research found that the majority of respondents would prefer to live in a world where TikTok and Instagram did not exist! There was generally a feeling of being compelled to use these projects because of FOMO, social pressure, and addiction. A large portion of users said they would pay money for TikTok and Instagram to not exist, suggesting these products have negative utility for many people. This challenges traditional economics which posits that consumers choosing a product means it provides positive utility. Instead, social media companies are using AI to manipulate consumer behavior for their own ends, not the benefit of the consumer. This aligns with what these researchers suspect is happening, namely that “companies introduce features that exacerbate non-user utility and diminish consumer welfare, rather than enhance it, increasing people’s need for a product without increasing the utility it delivers to them.”

Source: The Anti-SNARF Manifesto

Image: cropped from the background image on the above website

We’re hard-wired for addiction

Blurred photo of man moving head

I think what Scott Galloway is saying here is that unfettered capitalism, which allows companies to addict people to products detrimental to their health, is a bad thing? That seems pretty obvious.

What I think Americans are missing, to be honest, is a way of saying that they want ‘socialism’ without it being equated with ‘communism’. I see lots of tortuous statements about ‘post-capitalism’ and other terms. But the rest of the world understands balancing government intervention with the health and flourishing of citizens as ‘socialism’, not ‘communism’.

The world’s most valuable resource isn’t data, compute, oil, or rare earth metals; it’s dopa, i.e., the fuel of the addiction economy, which runs the most valuable companies in history. Addiction has always been a component of capitalism — nothing rivals the power of craving to manufacture demand and support irrational margins.

[…]

Historically, the most valuable companies turn dopa into consumption. Over the last 100 years, 15 of the top 30 companies by cumulative compound return have been pillars of the addiction economy. The compounders cluster in tobacco (Altria +265,528,900%), the food industrial complex (Coca-Cola +12,372,265%), pharma (Wyeth +5,702,341%), and retailers (Kroger +2,834,362%) that sell both substances and treatments. To predict which companies will be the top compounders over the next century, consider this: Eight of the world’s 10 most valuable businesses turn dopa into attention, or make picks and shovels for these dopa merchants.

[…]

Now that everyone has a cellphone, we spend 70% less time with our friends than we did a decade ago. We’re addicted to our phones, and even when we’re not seeking our fix, our phones seek us out — notifying us on average 46 times per day for adults and 237 times per day for teens. In college, I spent too much time smoking pot and watching Planet of the Apes, but when I decided to venture on campus, my bong and Cornelius didn’t send me notifications.

[…]

We’re hard-wired for addiction. We’re also wired for conflict, as competing for scarce resources has shaped our neurological system to swiftly detect, assess, and respond to threats — often before we’re aware of them. As technology advances, our wiring makes us more powerful and more vulnerable. We produce dopa monsters at internet speed. We can wage war at a velocity and scale that risks extinction in the blink of an eye.

Source: No Mercy / No Malice

Image: Mishal Ibrahim

What burns people out is not being allowed to exercise their integrity instincts

Fire

In this wide-ranging article, Venkatesh Rao discusses a number of things, including the unfolding Musk/DOGE coup. I’m ignoring that for the moment, as anything I write about it will be out of date by next week. The two parts I found most interesting from Rao’s piece were: (i) his comparison of people who tolerate inefficiency and interruption versus those who don’t, and (ii) his assertion that burnout comes from not being able to exercise integrity.

The two are related, I think. When you have to do things a particular way, subsuming your identity and values to someone else’s, it denies a core part of who you are as a person. While it’s relatively normal to self-censor to present oneself as a particular type of person, doing so in a way which is in conflict with your values is essentially a Jekyll/Hyde problem. And we all know what happened at the end of that story.

A big tell of whether you are an “open-door” type person is whether you tolerate a high degree of apparent inefficiency, interruption, and refractory periods of reflection that look like idleness. All are signs that your mental doors are open and are taking in new input. Especially dissenting input that can easily be interpreted as disloyal or traitorous by a loyalty-obsessed paranoid mind. Input that forces you to stop acting and switch to reflecting for a while.

Conversely, if you’re all about “efficiency” and a “maniacal sense of urgency” and a desperate belief that your “first principles” are all you need, you will eventually pay the price. A playbook that worked great once will stop working. Even the most powerful set of first principles that might be driving you will leave you with an exhausted paradigm and nowhere to go.

[…]

What truly burns people out is not that their boss is too demanding, hot-tempered, or even sadistic. What burns people out is not being allowed to exercise their integrity instincts. Being asked to turn off or delegate their moral compass to others. Plenty of people have the courage, the desperation, the ambition, or all three, to deal with demanding and scary bosses. But not many people can indefinitely suspend integrity instincts without being traumatized and burning out.

Source: Contraptions

Image: Danylo Suprun

All intelligence is collective intelligence

Brown mushrooms on green grass during daytime

The concept of ‘intelligence’ is a slippery one. It’s a human construct and, as such, privileges not only our own species, but those humans who at any given time have power and control over what counts as ‘intelligent’. There have been moves, especially recently, to ascribe intelligence to species that we don’t commonly eat, such as dolphins and crows.

But what about animals humans do eat? As a vegetarian I regularly feel guilty for consuming eggs and dairy; what kind of suffering am I causing sentient animals? But, I console myself, at least I don’t eat them any more.

A foolish consistency may be the hobgoblin of little minds, according to Emerson, but it is useful to have a consistent and philosophically sound position on things. This article by Sally Adee is a pretty long read, but a worthwhile one. It not only covers animal intelligence, but that of plants, fungi, and (of course!) machines.

A small but growing number of philosophers, physicists and developmental biologists say that, instead of continually admitting new creatures into the category of intelligence, the new findings are evidence that there is something catastrophically wrong with the way we understand intelligence itself. And they believe that if we can bring ourselves to dramatically reconsider what we think we know about it, we will end up with a much better concept of how to restabilize the balance between human and nonhuman life amid an ecological omnicrisis that threatens to permanently alter the trajectory of every living thing on Earth.

No plant, fungus or bacterium can sit an IQ test. But to be honest, neither could you if the test was administered in a culture radically different from your own. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux,” the social scientist Richard Nisbett once told me. IQ tests are culturally bound, meaning that they test the ability to represent the particular world an individual inhabits and manipulate that representation in a way that maximizes the ability to thrive in it.

What would we find if we could design a test appropriate for the culture plants inhabit?

[…]

Electrophysiological readings, for example, have for a long time revealed striking similarities in the activity of humans, plants, fungi, bacteria and other organisms. It’s uncontroversially accepted that electrical signals coordinate the physical and mental activities of brain cells. We have operationalized this knowledge. When we want to peer into the mental states produced by a human brain’s 86 billion or so neurons, we eavesdrop on their cell-to-cell electrical communication (called action potentials). We have been measuring electrical activity in the brain since the electroencephalogram was invented in 1924. Analyzing the synchronized waves produced by billions of electrical firings has allowed us to deduce whether a person is asleep, dreaming or, when awake, concentrating or unfocused.

[…]

“The reality is that all intelligence is collective intelligence,” [developmental biologist Michael] Levin told me. “It’s just a matter of scale.” Human intelligence, animal swarms, bacterial biofilms — even the cells that work in concert to compose the human anatomy. “Each of us consists of a huge number of cells working together to generate a coherent cognitive being with goals, preferences and memories that belong to the whole and not to its parts.”

[…]

“We are not even individuals at all,” wrote the technologist and artist James Bridle in “Ways of Being,” a 2022 study of multiple intelligences. “Rather we are walking assemblages, riotous communities, multi-species multi-bodied beings inside and outside of our very cells.”

Bridle was referring to (among other things) the literal pounds of every human body that consists not of human cells but bacteria and fungi and other organisms, all of which play a profound role in shaping our so-called “human” intelligence.

[…]

If we can let go of the idea that the only locus of intelligence is the human brain, then we can start to conceive of ways intelligence manifests elsewhere in biology. Call it biological cognition or biological intelligence — it seems to manifest in the relationships between individuals more than in individuals themselves. […]

“The boundaries between humans and nature and humans and machines are at the very least in suspense,” wrote the philosopher Tobias Rees. Moving away from human exceptionalism, he argued, would help “to transform politics from something that is only concerned with human affairs to something that is truly planetary,” ushering in a shift from the age of the human to “the age of planetary reason.”

Source: NOEMA

Image: Landon Parenteau

From cheapfakes to deepfakes

Graffiti saying 'FAKE'

I was listening on the radio to someone who was talking about AI. At first, I was skeptical of what they were saying, as it seemed to be the classic hand-waving of “machines will never be able to replace humans” without being specific. However, they did provide more specificity, mentioning how quickly we can tell, for example, if someone’s tone of voice is “I’m not really OK but I’m pretending to be.”

We spot when something isn’t right. Which is why it’s interesting to me that, while I got 10/10 on my first go on a deepfake quiz, that’s very much an outlier. I’m obviously not saying that I have some magical ability to spot what others can’t, but spending time with technologies and understanding how they work and what they look like is part of AI Literacies.

All of this reminds me of the 30,000 World War 2 volunteers who helped with the Battle of Britain by learning to spot the difference between, for example, a Messerschmitt Bf 109 and a Spitfire by listening to sound recordings, looking at silhouettes, etc.

Deepfakes have become alarmingly difficult to detect. So difficult, that only 0.1% of people today can identify them.

That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.

[…]

Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.

The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.

Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:

  1. The rapid evolution of AI and its ability to produce realistic deepfakes

  2. The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built, attack technologies

  3. The vulnerability of traditional ID verification practices

Bud also pointed to the lower barriers of entry to deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that create convincing synthetic media within minutes.

Source: The Next Web

Image: Markus Spiske

Redefining terms like “hate speech” is obviously part of the fascist project

Image of banned words shared in Gizmodo article

The situation in the US is a slide into authoritarianism. That much is plain to see. Some people are wary of using the label ‘fascist’, perhaps because their only mental model of what’s going on is a hazy understanding of events from the 1930s in Nazi Germany.

However, there have been many authoritarian regimes that have done unspeakable harm to their people, including mass murder of populations. It starts with language, and ends with concentration camps (invented by the Spanish in Cuba, by the way). This article in Gizmodo includes a list of banned words that will get your National Science Foundation (NSF) grant funding application rejected.

For those looking for precedents of authoritarian regimes beginning their censorship efforts by targeting language immediately upon gaining power, you might want to check out this page I created with Perplexity which summarises some examples. It also gives references for further reading.

According to the Washington Post, a word like “women” appearing will get the content flagged, but it will need to be manually reviewed to determine if the context of the word is related to a forbidden topic under the anti-DEI order. Trump and his fellow fascists use terms like DEI to describe anything they don’t like, which means that the word “women” is on the forbidden list while “men” doesn’t initiate a review.

Straight white men are seen through the MAGA worldview as the default human and thus wouldn’t be suspicious and in need of a review. Any other type of identity is inherently suspect.

Some of the terms that are getting flagged are particularly eyebrow-raising in light of the Nazi salutes that Trump supporters have been giving since he took office. For example, the term “hate speech” will get a paper at NSF flagged for further review. Redefining terms like “hate speech” is obviously part of the fascist project.

Source: Gizmodo

Image: List of banned words for NSF grants shared by Ashenafi Shumey Cherkos

Capitalism would simply die if it met all of our needs, and our needs are not that hard to fill

Grayscale photo of a man carrying shopping bags while walking past a homeless man sitting on the pavement outside a Prada store

As promised, I’ve returned to e-flux with an essay from Charles Tonderai Mudede, a Zimbabwean-born cultural critic, urbanist, filmmaker, college lecturer, and writer. In it, he discusses the origins of capitalism, arguing that many have missed the point: capitalism is focused on luxury goods and their consumption, and therefore can never reach a steady state, an equilibrium where everyone’s needs are met.

It’s a long-ish read, and makes some fascinating digressions (I love the story about the tulip bulb misidentified as an onion), but what I’ve quoted below covers, I think, the main points being made.

Indeed, the key to capitalist products is not their use value but their uselessness, which is why so many goods driving capitalist growth were (and are) luxuries: coffee, tea, tobacco, beef, china, spices, chocolate, single-family homes, and ultimately automobiles—which define capitalism in its American moment. It’s no accident that the richest man of our times is a car manufacturer.

[…]

Capitalism has never been about use value at all, a misreading that entered the heart of Marxism through Adam Smith’s influence on Marx’s political economy. The Dutch philosopher Bernard Mandeville’s economics, on the other hand, represents a reading of capitalism that corresponds with what I call its configuration space, in which the defining consumer products are culturally actualized compossibilities—and predetermined, like luxuries associated with vice. The reason is simple: capitalism would simply die if it met all of our needs, and our needs are not that hard to fill.

This is precisely where John Maynard Keynes made a major mistake in his remarkable and entertaining 1930 essay “Economic Possibilities for Our Grandchildren.” He assumed that capitalism’s noble project was to alleviate its own scarcity, its own uneven distribution of capital. Yes, he really thought that the objective of capitalism was capitalism’s own death. And indeed, the late nineteenth-century neoclassical economists universally believed this to be the case. They told the poor to leave capital accumulation to the specialists, as it alone could eventually eliminate all wants and satisfy all needs. It’s just a question of time. It is time that justified the concentration of capital in a few hands, the hands of those who had it and did not blow it. And this fortitude, which the poor lacked, deserved a reward. The people provided labor, which deserved a wage; the rich provided waiting, which deserved a profit. […]

What was missing in Keynes’s utopia? Even with little distinction from socialism, what was missing was the basic understanding that capitalism is not about producing the necessities of life, but about using every opportunity to transfer luxuries from the elites to the masses. This is the point of Boots Riley’s masterpiece Sorry to Bother You (2018), a film that may be called surreal by those who have no idea of the kind of culture they are in. The real is precisely the enchantment, the dream. Capitalism’s poor do not live in the woods but instead, like Sorry to Bother You’s main character, Cassius “Cash” Green (played by LaKeith Stanfield), drive beat-up or heavily indebted cars; work, in the words of the late anarchist anthropologist David Graeber, “bullshit jobs”; and sleep in vehicles made for recreation (RVs) or tents made for quick weekend breaks from urban stress, or for the lucky ones, in garages (houses for cars). This is what poverty actually looks like in a society that’s devoted to luxuries rather than necessities.

[…]

Capitalism is not, at the end of the day, based on the production of things we really need (absolute needs), for if it was, it would have already become a thing of the past. Or, in the language of thermodynamics, it would have reached equilibrium. (Indeed, the nineteenth-century British political economist John Stuart Mill called this equilibrium “a stationary state.”)

[…]

For example, an apparent shortage of housing—an absolute need or demand, meaning every human needs to be housed—could easily be solved. But what do you find everywhere in a very rich city like Seattle? No developments that come close to satisfying widespread demand for housing as an absolute need. This fact should sound an alarm in your head. We are in a system geared for relative needs. And capital’s re-enchantment is so complete that it’s hard to find a theorist who has attempted to adequately (or systemically) recognize it as such. This kind of political economy (or even anti-political economy) would find its reflection in lucid dreaming. Revolution, then, is not the end of enchantment (“the desert of the real”) but can only be re-enchantment. We are all made of dreams.

Source: e-flux

Image: Max Böhme

The occupational classification of a conversation does not necessarily mean the user was a professional in that field

Various charts showing findings from the Anthropic report

I find this report (PDF) by Anthropic, the AI company behind Claude.ai, really interesting. First, I have to note that they’ve purposely used a report style that looks like it’s been published in an academic journal. But, of course, it hasn’t, which means it’s not peer-reviewed. I’m not saying this invalidates the findings in any way, especially as they’ve open-sourced the dataset used for the analysis.

Second, although they’ve mapped occupational categories, as the Anthropic researchers point out, “the occupational classification of a conversation does not necessarily mean the user was a professional in that field.” I’ve asked LLMs about health-related things, for example, but I am not a health professional.

Third, and maybe I’m an edge case here, but I use different LLMs for different purposes:

  • I primarily use ChatGPT for writing and brainstorming assistance, as well as converting one thing into another. For example, this morning I fed it some PDFs to extract skills frameworks as JSON.
  • I use Perplexity when searching for stuff that might take a while to find — for example, the solution to a technical problem that might be on an obscure Reddit or Stack Exchange thread.
  • I turn to Google’s Gemini if I want to have a conversation with an LLM, say if I’m preparing for a presentation or an interview.
  • I use Claude for code-related things because it can create interactive artefacts which can be useful.
  • Finally, for sensitive work, or if a client specifically asks, I use Recurse.chat to interact with local LLM models such as LLaVA and Llama.
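Since I mentioned PDF-to-JSON extraction above: the workflow boils down to building a prompt that demands strict JSON, then validating whatever comes back before trusting it. Here’s a minimal Python sketch of that pattern (the schema, helper names, and canned reply are illustrative assumptions, not my actual prompt or any particular model’s API):

```python
import json

# Toy sketch of an "extract a skills framework as JSON" workflow.
# In practice the text would come from a PDF (e.g. via a PDF library)
# and the reply from an LLM; here a canned reply stands in for the model.

EXPECTED_KEYS = {"framework", "skills"}

def build_prompt(document_text: str) -> str:
    """Assemble an instruction that asks the model for JSON only."""
    return (
        "Extract the skills framework from the document below. "
        'Reply with JSON only, shaped like {"framework": str, '
        '"skills": [{"name": str, "description": str}]}.\n\n'
        + document_text
    )

def parse_reply(reply: str) -> dict:
    """Parse the model's reply and sanity-check the top-level keys."""
    data = json.loads(reply)  # raises ValueError on malformed JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    return data

# A canned reply standing in for a real model call:
fake_reply = (
    '{"framework": "Essential Digital Skills", '
    '"skills": [{"name": "Communicating", "description": "…"}]}'
)
parsed = parse_reply(fake_reply)
print(parsed["framework"])  # prints: Essential Digital Skills
```

The validation step matters more than the prompt here: models drift, and a hard failure beats silently saving malformed JSON.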

What I’m saying, I suppose, is that there’s an element of horses for courses with all of this. Increasingly, people will use different kinds of LLMs, sometimes without even realising it. If Anthropic looked at my use of Claude, they’d probably think I had some kind of programming or data analysis job. Which I don’t. So let’s take this with a grain of salt.

The following extract is taken from the report:

Here, we present a novel empirical framework for measuring AI usage across different tasks in the economy, drawing on privacy-preserving analysis of millions of real-world conversations on Claude.ai [Tamkin et al., 2024]. By mapping these conversations to occupational categories in the U.S. Department of Labor’s O*NET Database, we can identify not just current usage patterns, but also early indicators of which parts of the economy may be most affected as these technologies continue to advance.

We use this framework to make five key contributions:

1. Provide the first large-scale empirical measurement of which tasks are seeing AI use across the economy… Our analysis reveals highest use for tasks in software engineering roles (e.g., software engineers, data scientists, bioinformatics technicians), professions requiring substantial writing capabilities (e.g., technical writers, copywriters, archivists), and analytical roles (e.g., data scientists). Conversely, tasks in occupations involving physical manipulation of the environment (e.g., anesthesiologists, construction workers) currently show minimal use.

2. Quantify the depth of AI use within occupations… Only ~4% of occupations exhibit AI usage for at least 75% of their tasks, suggesting the potential for deep task-level use in some roles. More broadly, ~36% of occupations show usage in at least 25% of their tasks, indicating that AI has already begun to diffuse into task portfolios across a substantial portion of the workforce.

3. Measure which occupational skills are most represented in human-AI conversations… Cognitive skills like Reading Comprehension, Writing, and Critical Thinking show high presence, while physical skills (e.g., Installation, Equipment Maintenance) and managerial skills (e.g., Negotiation) show minimal presence—reflecting clear patterns of human complementarity with current AI capabilities.

4. Analyze how wage and barrier to entry correlates with AI usage… We find that AI use peaks in the upper quartile of wages but drops off at both extremes of the wage spectrum. Most high-usage occupations clustered in the upper quartile correspond predominantly to software industry positions, while both very high-wage occupations (e.g., physicians) and low-wage positions (e.g., restaurant workers) demonstrate relatively low usage. This pattern likely reflects either limitations in current AI capabilities, the inherent physical manipulation requirements of these roles, or both. Similar patterns emerge for barriers to entry, with peak usage in occupations requiring considerable preparation (e.g., bachelor’s degree) rather than minimal or extensive training.

5. Assess whether people use Claude to automate or augment tasks… We find that 57% of interactions show augmentative patterns (e.g., back-and-forth iteration on a task) while 43% demonstrate automation-focused usage (e.g., performing the task directly). While this ratio varies across occupations, most occupations exhibited a mix of automation and augmentation across tasks, suggesting AI serves as both an efficiency tool and collaborative partner.

Source: The Anthropic Economic Index

Image: (taken from the report)

Flash fictions and creative constraints

Old blank postcard

In his most recent newsletter, Warren Ellis shares his belief that the ideal length of an email is “10 to 75 words.” He compares this with telegrams and postcards, using these as a creative constraint for what he calls ‘flash fictions’.

The average number of words on a postcard was between forty and fifty. The average number of words in a telegram was around fourteen. Last year, I started playing with flash fictions again for the first time in more than a decade. Here’s some.

[…]

The first ever time someone takes your hand, and the first thought you have is “this is everything” and the second is “what happens when it’s gone?” The space of time between those thoughts defines the shape of your life.

[…]

Flat gray post-funeral day, feeling like a human shovel as you dig into your mother’s hoarded life-debris. At the bottom of the midden of corner-shop crap, the book of her crimes. And you recognise your father’s chest tattoo covering its scabbed boards.

[…]

That point at the end of winter when your bones feel damp.

[…]

From my perspective, houses are roomy coffins with plumbing.

Source: Orbital Operations

Image: Jenny Scott

Surplus value must be distributed by and among the workers

Image of the Spanish-language ‘No Capitalista’ (Non-Capitalist) license clause

I’ve come across lots of different licenses in my time. Some, such as Creative Commons licenses, are meant to stand up in court. Others are more a form of artistic expression: a way of signalling to an in-group, and a ‘hands-off’ warning to those considered ‘out-group’.

My Spanish is still terrible, so I used DeepL to translate this ‘Non-Capitalist’ clause on the En Defensa del Software Libre website, which is used in addition to the standard Attribution and Share-Alike clauses.

Non-Capitalist - Commercial exploitation of this work is only permitted to cooperatives, non-profit organisations and collectives, self-managed workers' organisations, and where no exploitative relationships exist. Any surplus or surplus value obtained from the exercise of the rights granted by this Licence on the Work must be distributed by and among the workers.

Original Spanish:

No Capitalista - La explotación comercial de esta obra sólo está permitida a cooperativas, organizaciones y colectivos sin fines de lucro, a organizaciones de trabajadores autogestionados, y donde no existan relaciones de explotación. Todo excedente o plusvalía obtenidos por el ejercicio de los derechos concedidos por esta Licencia sobre la Obra deben ser distribuidos por y entre los trabajadores.

Source: En Defensa del Software Libre

The art of not being governed like that and at that cost

a painted sign on a wall that says question everything and smile

I haven’t yet listened to the episode of Neil Selwyn’s podcast entitled ‘What is “critical” in critical studies of edtech?’, but I couldn’t resist reading the editorial written by Felicitas Macgilchrist in the open-access journal Learning, Media and Technology.

Macgilchrist argues that we shouldn’t take the word ‘critical’ for granted, and outlines three ways in which it can be considered. I particularly like her approach to critique as a way of moving the conversation forward by “raising questions and troubling… previously held assumptions and convictions.”

Given what’s happening in the US at the moment, I’ve pulled out the Foucault quotation, as making it difficult to be governed is absolutely how to resist authoritarianism — in any area of life.

When Latour (2004) wondered if critique had ‘run out of steam’, this led to a flurry of responses about critical scholarship today. If, he wrote, his neighbours now thoroughly debunk ‘facts’ as constructed, positioned and political, then what is his role as a critical scholar? Latour proposes in response that ‘the critic is not the one who debunks, but the one who assembles’ (2004, 246). And, in this sense, ‘assembling’ joined proposals to see critical scholarship as ‘reparative’, rather than paranoid or suspicious (Sedgwick 1997), as ‘diffraction’, creating difference patterns that make a difference (Haraway 1997, 268) or as ‘worlding’, a post-colonial critical practice of creation (Wilson 2007, 210). These generative approaches have been picked up in research on learning, media and technology, for instance, analysing open knowledge practices (Stewart 2015) or equitable data practices (Macgilchrist 2019), and most explicitly in feminist perspectives on edtech (Eynon 2018; Henry, Oliver, and Winters 2019). […]

Generative forms of critique invite us to imagine other futures, and have inspired a range of speculative work on possible futures. Futurity becomes, in these studies, less about predicting the future or joining accelerationist or transhumanist futurisms, but about estranging readers from common-sense. SF ‘isn’t about the future’ (Le Guin [1976] 2019, xxi), it’s about the present, generating ‘a shocked renewal of our vision such that once again, and as though for the first time, we are able to perceive [our contemporary cultures’ and institutions’] historicity and their arbitrariness’ (Jameson 2005, 255). […]

If critique is not fault-finding or suspicion but, as one often cited source has it, the ‘art of not being governed like that and at that cost’ (Foucault 1997, 29; Butler 2001), then the critical work outlined here aims to identify how we are currently being governed, to question how this produces the acceptable or desirable horizons of ‘good education’, ‘good teaching’ or ‘good citizens’, and to speculate on alternatives.

Source: Learning, Media and Technology

Image: Marija Zaric

⭐ Become a Thought Shrapnel supporter!

Just a quick reminder that you can become a supporter of Thought Shrapnel by clicking here. Thank you to The Other Doug and ARTiFactor for their one-off tips last week!

We're all below the AI line except for a very very very small group of wealthy white men

A neural network comes out of the top of an ivory tower, above a crowd of people's heads (shown in green to symbolise grass roots). Some of them are reaching up to try and take some control and pull the net down to them. Watercolour illustration.

As a fully paid-up member of the Audrey Watters fan club, I make no apologies for including another one of her articles in Thought Shrapnel this week. This one has much that I could dwell on, but I’m trying not to post too much about the current digital overthrow of democracy in the US at the moment.

One could also say that I could stop posting as much about AI, but then that’s all my information feeds are full of at the moment. And, anyway, it’s an interesting topic.

While you should absolutely go and read the full text, I pulled the following out of Audrey’s post, which references something I’ve also seen Venkatesh Rao discuss: being above or below the “API line”. These days, it’s more like an “AI line”.

In 2015, an essay made the rounds (in my household at least) that argued that jobs could be classified as above or below the “API line” – above the API, you wield control, programmatically; below, however, your job is under threat of automation, your livelihood increasingly precarious. Today, a decade later, I think we’d frame this line as an “AI” not an “API line” (much to Kin’s chagrin). We’re all told – and not just programmers – that we have to cede control to AI (to “agents” and “chatbots”) in order to have any chance to stay above it. The promise isn’t that our work will be less precarious, of course; there’s been no substantive, structural shift in power, and if anything, precarity has gotten worse. AI usage is merely a psychological cushion – we’ll feel better if we can feel faster and more efficient; we’ll feel better if we can think less.

We’re all below the AI line except for a very very very small group of wealthy white men. And they truly fucking hate us.

It’s a line, it’s always a line with them: those above, and those below. “AI is an excuse that allows those with power to operate at a distance from those whom their power touches,” writes Eryk Salvaggio in “A Fork in the Road.” Intelligence, artificial or otherwise, has always been a technology of ranking and sorting and discriminating. It has always been a technology of eugenics.

Source: Second Breakfast

Image: CC-BY Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI