From cheapfakes to deepfakes

I was listening to someone on the radio talking about AI. At first, I was skeptical of what they were saying, as it seemed to be the classic hand-waving of “machines will never be able to replace humans” without anything specific to back it up. However, they did get more specific, mentioning how quickly we can tell, for example, if someone’s tone of voice is “I’m not really OK but I’m pretending to be.”
We spot when something isn’t right. Which is why it’s interesting to me that, while I got 10/10 on my first go on a deepfake quiz, that’s very much an outlier. I’m obviously not saying that I have some magical ability to spot what others can’t, but spending time with technologies and understanding how they work and what they look like is part of AI Literacies.
All of this reminds me of the 30,000 World War 2 volunteers who helped with the Battle of Britain by learning to spot the difference between, for example, a Messerschmitt Bf 109 and a Spitfire by listening to sound recordings, looking at silhouettes, etc.
Deepfakes have become alarmingly difficult to detect. So difficult, that only 0.1% of people today can identify them.
That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.
[…]
Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.
The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.
Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:
The rapid evolution of AI and its ability to produce realistic deepfakes
The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built, attack technologies
The vulnerability of traditional ID verification practices
Bud also pointed to the lower barriers of entry to deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that create convincing synthetic media within minutes.
Source: The Next Web
Image: Markus Spiske
Redefining terms like “hate speech” is obviously part of the fascist project

The situation in the US is a slide into authoritarianism. That much is plain to see. Some people are wary of using the label ‘fascist’ perhaps because their only mental model of what’s going on is a hazy understanding of events from the 1930s in Nazi Germany.
However, there have been many authoritarian regimes that have done unspeakable harm to their people, including mass murder of populations. It starts with language, and ends with concentration camps (invented by the Spanish in Cuba, by the way). This article in Gizmodo includes a list of banned words that will get your National Science Foundation (NSF) grant funding application rejected.
For those looking for precedents of authoritarian regimes beginning their censorship efforts by targeting language immediately upon gaining power, you might want to check out this page I created with Perplexity which summarises some examples. It also gives references for further reading.
According to the Washington Post, a word like “women” appearing will get the content flagged, but it will need to be manually reviewed to determine if the context of the word is related to a forbidden topic under the anti-DEI order. Trump and his fellow fascists use terms like DEI to describe anything they don’t like, which means that the word “women” is on the forbidden list while “men” doesn’t initiate a review.
Straight white men are seen through the MAGA worldview as the default human and thus wouldn’t be suspicious and in need of a review. Any other type of identity is inherently suspect.
Some of the terms that are getting flagged are particularly eyebrow-raising in light of the Nazi salutes that Trump supporters have been giving since he took office. For example, the term “hate speech” will get a paper at NSF flagged for further review. Redefining terms like “hate speech” is obviously part of the fascist project.
Source: Gizmodo
Image: List of banned words for NSF grants shared by Ashenafi Shumey Cherkos
Capitalism would simply die if it met all of our needs, and our needs are not that hard to fill

As promised, I’ve returned to e-flux with an essay from Charles Tonderai Mudede, a Zimbabwean-born cultural critic, urbanist, filmmaker, college lecturer, and writer. In it, he discusses the origins of capitalism, arguing that many have missed the point: capitalism is focused on luxury goods and their consumption, and therefore can never reach a steady state, an equilibrium where everyone’s needs are met.
It’s a long-ish read, and makes some fascinating digressions (I love the story about the tulip bulb misidentified as an onion), but what I’ve quoted below captures, I think, the main points being made.
Indeed, the key to capitalist products is not their use value but their uselessness, which is why so many goods driving capitalist growth were (and are) luxuries: coffee, tea, tobacco, beef, china, spices, chocolate, single-family homes, and ultimately automobiles—which define capitalism in its American moment. It’s no accident that the richest man of our times is a car manufacturer.
[…]
Capitalism has never been about use value at all, a misreading that entered the heart of Marxism through Adam Smith’s influence on Marx’s political economy. The Dutch philosopher Bernard Mandeville’s economics, on the other hand, represents a reading of capitalism that corresponds with what I call its configuration space, in which the defining consumer products are culturally actualized compossibilities—and predetermined, like luxuries associated with vice. The reason is simple: capitalism would simply die if it met all of our needs, and our needs are not that hard to fill.
This is precisely where John Maynard Keynes made a major mistake in his remarkable and entertaining 1930 essay “Economic Possibilities for Our Grandchildren.” He assumed that capitalism’s noble project was to alleviate its own scarcity, its own uneven distribution of capital. Yes, he really thought that the objective of capitalism was capitalism’s own death. And indeed, the late nineteenth-century neoclassical economists universally believed this to be the case. They told the poor to leave capital accumulation to the specialists, as it alone could eventually eliminate all wants and satisfy all needs. It’s just a question of time. It is time that justified the concentration of capital in a few hands, the hands of those who had it and did not blow it. And this fortitude, which the poor lacked, deserved a reward. The people provided labor, which deserved a wage; the rich provided waiting, which deserved a profit. […]
What was missing in Keynes’s utopia? Even with little distinction from socialism, what was missing was the basic understanding that capitalism is not about producing the necessities of life, but about using every opportunity to transfer luxuries from the elites to the masses. This is the point of Boots Riley’s masterpiece Sorry to Bother You (2018), a film that may be called surreal by those who have no idea of the kind of culture they are in. The real is precisely the enchantment, the dream. Capitalism’s poor do not live in the woods but instead, like Sorry to Bother You’s main character, Cassius “Cash” Green (played by LaKeith Stanfield), drive beat-up or heavily indebted cars; work, in the words of the late anarchist anthropologist David Graeber, “bullshit jobs”; and sleep in vehicles made for recreation (RVs) or tents made for quick weekend breaks from urban stress, or for the lucky ones, in garages (houses for cars). This is what poverty actually looks like in a society that’s devoted to luxuries rather than necessities.
[…]
Capitalism is not, at the end of the day, based on the production of things we really need (absolute needs), for if it was, it would have already become a thing of the past. Or, in the language of thermodynamics, it would have reached equilibrium. (Indeed, the nineteenth-century British political economist John Stuart Mill called this equilibrium “a stationary state.”)
[…]
For example, an apparent shortage of housing—an absolute need or demand, meaning every human needs to be housed—could easily be solved. But what do you find everywhere in a very rich city like Seattle? No developments that come close to satisfying widespread demand for housing as an absolute need. This fact should sound an alarm in your head. We are in a system geared for relative needs. And capital’s re-enchantment is so complete that it’s hard to find a theorist who has attempted to adequately (or systemically) recognize it as such. This kind of political economy (or even anti-political economy) would find its reflection in lucid dreaming. Revolution, then, is not the end of enchantment (“the desert of the real”) but can only be re-enchantment. We are all made of dreams.
Source: e-flux
Image: Max Böhme
The occupational classification of a conversation does not necessarily mean the user was a professional in that field

I find this report (PDF) by Anthropic, the AI company behind Claude.ai, really interesting. First, I have to note that they’ve purposely used a report style that looks like it’s been published in an academic journal. But, of course, it hasn’t, which means it’s not peer-reviewed. I’m not saying this invalidates the findings in any way, especially as they’ve open-sourced the dataset used for the analysis.
Second, although they’ve mapped occupational categories, as the Anthropic researchers point out, “the occupational classification of a conversation does not necessarily mean the user was a professional in that field.” I’ve asked LLMs about health-related things, for example, but I am not a health professional.
Third, and maybe I’m an edge case here, but I use different LLMs for different purposes:
- I primarily use ChatGPT for writing and brainstorming assistance, as well as converting one thing into another. For example, this morning I fed it some PDFs to extract skills frameworks as JSON (there’s a rough sketch of that kind of extraction below).
- I use Perplexity when searching for stuff that might take a while to find — for example, the solution to a technical problem that might be on an obscure Reddit or Stack Exchange thread.
- I turn to Google’s Gemini if I want to have a conversation with an LLM, say if I’m preparing for a presentation or an interview.
- I use Claude for code-related things because it can create interactive artefacts which can be useful.
- Finally, for sensitive work, or if a client specifically asks, I use Recurse.chat to interact with local models such as LLaVA and Llama.
What I’m saying, I suppose, is that there’s an element of horses for courses with all of this. Increasingly, people will use different kinds of LLMs, sometimes without even realising it. If Anthropic looked at my use of Claude, they’d probably think I had some kind of programming or data analysis job. Which I don’t. So let’s take this with a grain of salt.
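As an aside, that PDF-to-JSON step is easy enough to script if you’d rather not do it through a chat interface. Here’s a minimal sketch, assuming the pypdf and openai Python libraries; the model name, prompt wording, and output schema are my own illustrative choices rather than anything official.

```python
# Minimal sketch: pull the text out of a PDF and ask an LLM to return a
# skills framework as JSON. Model name, prompt, and schema are illustrative.
import json

from openai import OpenAI    # pip install openai
from pypdf import PdfReader  # pip install pypdf


def pdf_to_text(path: str) -> str:
    """Concatenate the extracted text of every page in the PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def extract_skills(path: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the model to summarise the document as a JSON skills framework."""
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    prompt = (
        "Extract the skills framework from the following document. "
        "Return JSON with a top-level 'skills' list, where each item has "
        "'name', 'description', and 'level' fields.\n\n" + pdf_to_text(path)
    )
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    print(json.dumps(extract_skills("skills-framework.pdf"), indent=2))
```

In practice you’d want to chunk anything longer than a few pages and sanity-check the returned JSON against the source document, but the basic shape really is that simple.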
The following extract is taken from the report:
Here, we present a novel empirical framework for measuring AI usage across different tasks in the economy, drawing on privacy-preserving analysis of millions of real-world conversations on Claude.ai [Tamkin et al., 2024]. By mapping these conversations to occupational categories in the U.S. Department of Labor’s O*NET Database, we can identify not just current usage patterns, but also early indicators of which parts of the economy may be most affected as these technologies continue to advance.
We use this framework to make five key contributions:
1. Provide the first large-scale empirical measurement of which tasks are seeing AI use across the economy …Our analysis reveals highest use for tasks in software engineering roles (e.g., software engineers, data scientists, bioinformatics technicians), professions requiring substantial writing capabilities (e.g., technical writers, copywriters, archivists), and analytical roles (e.g., data scientists). Conversely, tasks in occupations involving physical manipulation of the environment (e.g., anesthesiologists, construction workers) currently show minimal use.
2. Quantify the depth of AI use within occupations …Only ∼ 4% of occupations exhibit AI usage for at least 75% of their tasks, suggesting the potential for deep task-level use in some roles. More broadly, ∼ 36% of occupations show usage in at least 25% of their tasks, indicating that AI has already begun to diffuse into task portfolios across a substantial portion of the workforce.
3. Measure which occupational skills are most represented in human-AI conversations ….Cognitive skills like Reading Comprehension, Writing, and Critical Thinking show high presence, while physical skills (e.g., Installation, Equipment Maintenance) and managerial skills (e.g., Negotiation) show minimal presence—reflecting clear patterns of human complementarity with current AI capabilities.
4. Analyze how wage and barrier to entry correlates with AI usage …We find that AI use peaks in the upper quartile of wages but drops off at both extremes of the wage spectrum. Most high-usage occupations clustered in the upper quartile correspond predominantly to software industry positions, while both very high-wage occupations (e.g., physicians) and low-wage positions (e.g., restaurant workers) demonstrate relatively low usage. This pattern likely reflects either limitations in current AI capabilities, the inherent physical manipulation requirements of these roles, or both. Similar patterns emerge for barriers to entry, with peak usage in occupations requiring considerable preparation (e.g., bachelor’s degree) rather than minimal or extensive training.
5. Assess whether people use Claude to automate or augment tasks …We find that 57% of interactions show augmentative patterns (e.g., back-and-forth iteration on a task) while 43% demonstrate automation-focused usage (e.g., performing the task directly). While this ratio varies across occupations, most occupations exhibited a mix of automation and augmentation across tasks, suggesting AI serves as both an efficiency tool and collaborative partner.
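To make the second of those contributions concrete, here’s a toy sketch (mine, not Anthropic’s) of the depth-of-use metric: map each occupation to its tasks, work out what fraction of those tasks show any AI usage, and count how many occupations clear a given threshold. All of the occupations, tasks, and usage data below are invented purely for illustration.

```python
# Toy illustration of the "depth of AI use within occupations" metric.
# Everything here is made up; the real analysis maps Claude.ai
# conversations onto O*NET tasks.
occupation_tasks = {
    "Technical Writer": {"draft documentation", "edit copy", "interview experts"},
    "Data Scientist": {"clean data", "build models", "present findings"},
    "Construction Worker": {"pour concrete", "operate machinery", "clear site"},
}

# Hypothetical set of tasks that appeared in AI conversations
tasks_with_ai_use = {"draft documentation", "edit copy", "clean data", "build models"}


def depth_of_use(tasks: set) -> float:
    """Fraction of an occupation's tasks that show any AI usage."""
    return len(tasks & tasks_with_ai_use) / len(tasks)


depths = {occ: depth_of_use(tasks) for occ, tasks in occupation_tasks.items()}


def share_above(threshold: float) -> float:
    """Share of occupations with AI usage in at least `threshold` of their tasks."""
    return sum(d >= threshold for d in depths.values()) / len(depths)


print(depths)             # per-occupation depth of use
print(share_above(0.75))  # cf. the report's ~4% of occupations
print(share_above(0.25))  # cf. the report's ~36% of occupations
```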
Source: The Anthropic Economic Index
Image: (taken from the report)
Flash fictions and creative constraints

In his most recent newsletter, Warren Ellis shares his belief that the ideal length of an email is “10 to 75 words.” He compares this with telegrams and postcards, using these as a creative constraint for what he calls ‘flash fictions’.
The average number of words on a postcard was between forty and fifty. The average number of words in a telegram was around fourteen. Last year, I started playing with flash fictions again for the first time in more than a decade. Here’s some.
[…]
The first ever time someone takes your hand, and the first thought you have is “this is everything” and the second is “what happens when it’s gone?” The space of time between those thoughts defines the shape of your life.
[…]
Flat gray post-funeral day, feeling like a human shovel as you dig into your mother’s hoarded life-debris. At the bottom of the midden of corner-shop crap, the book of her crimes. And you recognise your father’s chest tattoo covering its scabbed boards.
[…]
That point at the end of winter when your bones feel damp.
[…]
From my perspective, houses are roomy coffins with plumbing.
Source: Orbital Operations
Image: Jenny Scott
Surplus value must be distributed by and among the workers

I’ve come across lots of different licenses in my time. Some, such as Creative Commons licenses, are meant to stand up in court. Others are more a form of artistic expression: a way of signalling to an in-group, and a ‘hands-off’ warning to those considered ‘out-group’.
My Spanish is still terrible, so I used DeepL to translate this ‘Non-Capitalist’ clause from the En Defensa del Software Libre website, which is used in addition to the standard Attribution and Share-Alike clauses.
Non-Capitalist - Commercial exploitation of this work is only permitted to cooperatives, non-profit organisations and collectives, self-managed workers' organisations, and where no exploitative relationships exist. Any surplus or surplus value obtained from the exercise of the rights granted by this Licence on the Work must be distributed by and among the workers.
Original Spanish:
No Capitalista - La explotación comercial de esta obra sólo está permitida a cooperativas, organizaciones y colectivos sin fines de lucro, a organizaciones de trabajadores autogestionados, y donde no existan relaciones de explotación. Todo excedente o plusvalía obtenidos por el ejercicio de los derechos concedidos por esta Licencia sobre la Obra deben ser distribuidos por y entre los trabajadores.
Source: En Defensa del Software Libre
The art of not being governed like that and at that cost

I haven’t yet listened to the episode of Neil Selwyn’s podcast entitled ‘What is ‘critical’ in critical studies of edtech?’, but I couldn’t resist reading the editorial written by Felicitas Macgilchrist in the open-access journal Learning, Media and Technology.
Macgilchrist argues that we shouldn’t take the word ‘critical’ for granted, and outlines three ways in which it can be considered. I particularly like her framing of critique as moving the conversation forward by “raising questions and troubling… previously held assumptions and convictions.”
Given what’s happening in the US at the moment, I’ve pulled out the Foucault quotation, as making it difficult to be governed is absolutely how to resist authoritarianism — in any area of life.
When Latour (2004) wondered if critique had ‘run out of steam’, this led to a flurry of responses about critical scholarship today. If, he wrote, his neighbours now thoroughly debunk ‘facts’ as constructed, positioned and political, then what is his role as a critical scholar? Latour proposes in response that ‘the critic is not the one who debunks, but the one who assembles’ (2004, 246). And, in this sense, ‘assembling’ joined proposals to see critical scholarship as ‘reparative’, rather than paranoid or suspicious (Sedgwick 1997), as ‘diffraction’, creating difference patterns that make a difference (Haraway 1997, 268) or as ‘worlding’, a post-colonial critical practice of creation (Wilson 2007, 210). These generative approaches have been picked up in research on learning, media and technology, for instance, analysing open knowledge practices (Stewart 2015) or equitable data practices (Macgilchrist 2019), and most explicitly in feminist perspectives on edtech (Eynon 2018; Henry, Oliver, and Winters 2019). […]
Generative forms of critique invite us to imagine other futures, and have inspired a range of speculative work on possible futures. Futurity becomes, in these studies, less about predicting the future or joining accelerationist or transhumanist futurisms, but about estranging readers from common-sense. SF ‘isn’t about the future’ (Le Guin [1976] 2019, xxi), it’s about the present, generating ‘a shocked renewal of our vision such that once again, and as though for the first time, we are able to perceive [our contemporary cultures’ and institutions’] historicity and their arbitrariness’ (Jameson 2005, 255). […]
If critique is not fault-finding or suspicion but, as one often cited source has it, the ‘art of not being governed like that and at that cost’ (Foucault 1997, 29; Butler 2001), then the critical work outlined here aims to identify how we are currently being governed, to question how this produces the acceptable or desirable horizons of ‘good education’, ‘good teaching’ or ‘good citizens’, and to speculate on alternatives.
Source: Learning, Media and Technology
Image: Marija Zaric
⭐ Become a Thought Shrapnel supporter!
Just a quick reminder that you can become a supporter of Thought Shrapnel by clicking here. Thank you to The Other Doug and ARTiFactor for their one-off tips last week!
We're all below the AI line except for a very very very small group of wealthy white men

As a fully paid-up member of the Audrey Watters fan club, I make no apologies for including another one of her articles in Thought Shrapnel this week. This one has much that I could dwell on, but I’m trying not to post too much about the ongoing digital overthrow of democracy in the US.
One could also say that I could stop posting as much about AI, but then that’s all my information feeds are full of at the moment. And, anyway, it’s an interesting topic.
While you should absolutely go and read the full text, I pulled the following out of Audrey’s post, which picks up on something I’ve also referenced Venkatesh Rao discussing: being above or below the “API line”. These days, it’s more like an “AI line”.
In 2015, an essay made the rounds (in my household at least) that argued that jobs could be classified as above or below the “API line” – above the API, you wield control, programmatically; below, however, your job is under threat of automation, your livelihood increasingly precarious. Today, a decade later, I think we’d frame this line as an “AI” not an “API line” (much to Kin’s chagrin). We’re all told – and not just programmers – that we have to cede control to AI (to “agents” and “chatbots”) in order to have any chance to stay above it. The promise isn’t that our work will be less precarious, of course; there’s been no substantive, structural shift in power, and if anything, precarity has gotten worse. AI usage is merely a psychological cushion – we’ll feel better if we can feel faster and more efficient; we’ll feel better if we can think less.
We’re all below the AI line except for a very very very small group of wealthy white men. And they truly fucking hate us.
It’s a line, it’s always a line with them: those above, and those below. “AI is an excuse that allows those with power to operate at a distance from those whom their power touches,” writes Eryk Salvaggio in “A Fork in the Road.” Intelligence, artificial or otherwise, has always been a technology of ranking and sorting and discriminating. It has always been a technology of eugenics.
Source: Second Breakfast
Image: CC-BY Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI
Philosophically discontinuous times?

You should, as they say, “follow the money” when people make pronouncements. And when they’re confusing, grand-sounding, and vague, full of big words that point to a radically different future, I’d argue that you should be wary. I’ve re-read this interview with Tobias Rees several times, and I’ve concluded that what he’s saying is… bollocks.
Rees is a “founder of… an R&D studio located at the intersection of philosophy, art and technology” while also being “a senior fellow of Schmidt Sciences’ AI2050 initiative and a senior visiting fellow at Google.” Oh, and he’s a former editor of NOEMA, where this interview is published. While some of what he says sounds relatively believable, I just can’t get over this statement:
What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.
The idea that AI is a “profound philosophical event” should start your eyes rolling, and I’d be surprised if they haven’t rolled out of your head by the time you finish the next bit:
The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs — of built, technical systems that are intelligent — is that they break this scaffold.
What that means is that an epoch that was stable for almost 400 years comes — or appears to come — to an end.
Poetically put, it is a bit as if AI releases ourselves and the world from the understanding of ourselves and the world we had. It leaves us in the open.
In general, when people start arbitrarily dividing history into epochs (think “second industrial revolution,” etc.) they usually don’t know what they’re talking about. Rees manages to mention a bunch of philosophers (Karl Jaspers, Karl Marx, Martin Heidegger, etc.) but it’s a scatter-gun approach. Again, I don’t really think he knows what he’s talking about:
The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don’t really see what the alternative is.
What does this mean? And then, table-flipping time:
As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.
We only live in “philosophically discontinuous times” if you haven’t been paying attention, and haven’t done your homework. Another reason to avoid techbro-adjacent philosophising. It’s just a waste of time.
Source: NOEMA magazine
Image: CC-BY Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Data is a Mirror of Us
That mask is kind of coming off in all sorts of ways now

I’d highly recommend listening to Helen Beetham’s latest podcast where she’s in conversation with Audrey Watters talking about AI. As you would expect, they eloquently critique AI as a tool of political and economic power, reinforcing right-wing authoritarianism, labour control, and racial hierarchies. The episode also covers AI’s deep ties to military surveillance, eugenics, and Silicon Valley libertarianism, with both of them arguing that it serves corporate and state interests rather than the public good.
The second half of the podcast episode was my favourite, where they highlight how AI in education standardises learning, erases diversity (“the bell curve of banality”), and reinforces existing biases, particularly privileging male whiteness. The myth of AI as a ‘neutral’ or ‘liberating force’ is well and truly skewered, with them instead positioning ‘Luddism’ as a form of resistance against its exploitative tendencies.
I’ve pulled out one particular exchange from the episode which comes after Helen mentions Sam Altman’s response to DeepSeek r1 — something that has been likened to a ‘Sputnik moment’. The insight I appreciate is the comparison to crypto, which Audrey says was “almost too literal” in terms of being “too obvious of a con”.
Helen Beetham: OK. So, well, it’s kind of predictable, but I think the underlying message is really interesting. So effectively what he says is great. That’s great. They’re going to challenge us to do this at smaller scale. But we still need the build out. We absolutely need every inch of data centre we can have, and we need every piece of compute we can have because we’re going to need a lot of AI. And I think this is the moment where the mask starts to slip, you know, because it’s been clear for over a year that they’re not interested in a viable product. They don’t care whether the use cases work or not. Not except kind of rhetorically and incidentally. They don’t care if it’s valuable. They don’t care what it fucks up. They care about controlling data and compute. And it’s a great much better than crypto was. It seems to be much more effective than crypto at amassing that intentionality, that state will, that capital in one place to build out the biggest possible amount of data centres that are under the control of these corporations in alliance with these militarised states. And then at the same time, to control massive amounts of amounts of data and that is the underlying project. I feel that that mask is kind of coming off in all sorts of ways now. I could say something about how that plays out in the UK, but I’d really like to hear what you think.
Audrey Watters: Well, I think it’s interesting that the crypto stuff was almost too literal, right? Because this was about the creation of money. Like, literally we’re going to make up a new currency, and wrest power away from the traditional arbiters of money, the government. So it was almost like too nakedly literal. But with the generative AI, now we’re just making up, you know, students' essays. We’re just creating videos, and somehow it seems like a less overt power grab. I mean, I think for obviously for people in education, for people who work in creative industries, it’s an obvious power grab, but I think that it’s almost as though the cryptocurrency was too much of a con. It was too obvious of a con.
Source: imperfect offerings podcast
Image: Better Images of AI
There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents

I usually find abstracts on academic papers a bit rubbish, but this ‘summary’ at the top of a research study is aces. As many people in the UK will have seen in the news over the last week, a study has shown that there’s “no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents”. As a result, “the findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form.”
This, of course, does not chime with what the public (parents, politicians, etc.) want to hear, so I imagine it will be widely ignored. In fact, when this was reported on in a radio news bulletin I heard, they immediately cut to a soundbite from a headteacher who had implemented a “no phones” policy and basically said it had worked for them. There are many problems with smartphone use by teenagers in schools. But then there are many problems with schools.
Background: Poor mental health in adolescents can negatively affect sleep, physical activity and academic performance, and is attributed by some to increasing mobile phone use. Many countries have introduced policies to restrict phone use in schools to improve health and educational outcomes. The SMART Schools study evaluated the impact of school phone policies by comparing outcomes in adolescents who attended schools that restrict and permit phone use.
Methods: We conducted a cross-sectional observational study with adolescents from 30 English secondary schools, comprising 20 with restrictive (recreational phone use is not permitted) and 10 with permissive (recreational phone use is permitted) policies. The primary outcome was mental wellbeing (assessed using Warwick–Edinburgh Mental Well-Being Scale [WEMWBS]). Secondary outcomes included smartphone and social media time. Mixed effects linear regression models were used to explore associations between school phone policy and participant outcomes, and between phone and social media use time and participant outcomes. Study registration: ISRCTN77948572.
Findings: We recruited 1227 participants (age 12–15) across 30 schools. Mean WEMWBS score was 47 (SD = 9) with no evidence of a difference between groups (adjusted mean difference −0.48, 95% CI −2.05 to 1.06, p = 0.62). Adolescents attending schools with restrictive, compared to permissive policies had lower phone (adjusted mean difference −0.67 h, 95% CI −0.92 to −0.43, p = 0.00024) and social media time (adjusted mean difference −0.54 h, 95% CI −0.74 to −0.36, p = 0.00018) during school time, but there was no evidence for differences when comparing usage time on weekdays or weekends.
Interpretation: There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents. The findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form, and indicate that these policies require further development.
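For anyone wondering what the “mixed effects linear regression models” mentioned in the Methods look like in practice, here’s a rough sketch using statsmodels. To be clear, this shows only the general shape of such a model (pupils nested within schools, with a random intercept per school); the column names and data file are hypothetical, and it is not the study’s actual analysis code.

```python
# Sketch of the kind of mixed effects linear model described in the Methods:
# WEMWBS wellbeing score as the outcome, school phone policy as the exposure,
# and a random intercept for each school. Columns and CSV are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("smart_schools.csv")  # hypothetical: one row per pupil

# Fixed effects for policy (restrictive vs permissive) plus example covariates;
# groups=df["school"] gives each school its own random intercept.
model = smf.mixedlm("wemwbs ~ policy + age + sex", data=df, groups=df["school"])
result = model.fit()

# The coefficient on `policy` is the adjusted mean difference in wellbeing.
print(result.summary())
```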
Source: The Lancet
Image: True Images/Alamy (via The Guardian)
Technology is a means of spreading misinformation, not the cause of misinformation

As a technologist and educator (former History teacher!) who wrote his doctoral thesis on digital literacies, I find that this article couldn’t be any more in my sweet spot if it tried. Dr Gordon McKelvie talks about his British Academy project about misinformation, focusing on queens “because they were prominent enough figures to be spoken about and blamed for the country’s ills.”
Although only coined in 2013, Brandolini’s law has always been in full effect: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." At least we live in a time when things can, in theory, be rebutted and debunked quickly. Back in medieval times, people would believe misinformation for years — if not their entire lives.
While a focus on the immediate problems confronting democratic states dealing with the spread of conspiracy theories is essential, we should not lose sight of the fact that misinformation was around long before the internet. If we look back into the distant past, we see the spread of conspiracy theories have been a common feature throughout human history. Technology is a means of spreading misinformation, not the cause of misinformation. […]
A key finding has been that fake news often becomes accepted historical fact. An example that illustrates this is the death of Anne Neville, wife of the infamous Richard III. We do not know the exact cause of her death, but it was probably natural causes. One contemporary source, however, claimed that the king needed to deny poisoning her in order to marry his niece. This is the only near-contemporary reference to such an event, written by the hostile Crowland chronicler. By the time Shakespeare was writing his ‘The Tragedy of Richard III’ a century later, this had become an accepted historical fact. Here, we see something that began as a piece of misinformation in the fifteenth century transformed into an accepted historical fact in the sixteenth century. […]
When we look elsewhere in medieval Europe, we see other examples of misinformation premised on existing prejudices. During the First Crusade, mistrust between the Catholic crusaders and the Greek Orthodox Byzantine Empire led to conspiracy theories that the Byzantines were colluding with Muslims against their fellow Christians. When the first wave of the Black Death hit Europe in 1348, Jews were thought to have spread the disease by poisoning wells, simply to kill Christians. In both examples, pre-existing beliefs and fears meant that misinformation and conspiracy theories flourished quickly. […]
Misinformation was a key feature of medieval politics and society. Examining the spread of fake news, or conspiracy theories, in the centuries before even the printing press, never mind the internet, helps us understand how they flourish and their appeal. […]
Historians have an important part to play in fleshing out our understanding of misinformation. We are indeed living in an age of mistrust, but certainly not the first, and almost definitely not the last.
Source: The British Academy
Image: Walters Art Museum
Once you have a 360 view, you can redirect resources to insiders and cut off the opposition

I’ve held off posting anything about what’s currently going on in the USA, as apparently it’s all very confusing even if you’re paying full attention. What did make me sit up and take notice, though, was Jason Kottke’s use of a screengrab from Mad Max: Fury Road when summarising a Bluesky thread by Abe Newman about Elon Musk’s seizure of key parts of the government’s information systems.
For those who haven’t seen the film (one of my favourites, especially the Black & Chrome Edition), it’s the perfect analogy. A character by the name of Immortan Joe, a dictator in a post-apocalyptic landscape, is revered as a god by his followers. He dominates the economy by controlling the only supply of fresh water, which he turns on from time to time, saying “Do not, my friends, become addicted to water. It will take hold of you, and you will resent its absence!” I’ve included a gif above that shows the moment from the film.
Newman links to reporting detailing the systems Musk now controls: payment, personnel, and operations. But seeing them as part of a bigger strategy is important:
The first point is to make the connection. Reporting has seen these as independent ‘lock outs’ or access to specific IT systems. This seems much more a part of a coherent strategy to identify centralized information systems and control them from the top.
Newman continues:
So what are the risks. First, the panopticon. Made popular by Foucault, the idea is that if you let people know that they are being watched from a central position they are more likely to obey. E.g. emails demanding changes or workers will be added to lists…
The second is the chokepoint. If you have access to payments and data, you can shut opponents off from key resources. Sen Wyden sees this coming.
Divert to loyalists. Once you have a 360 view, you can redirect resources to insiders and cut off the opposition.
Source: Kottke.org
Clinical studies have indicated that creatine might have an antidepressant effect

Along with about six different supplements, I add creatine to my protein smoothies every day I do exercise. Which, to be fair, is most days ending in a ‘y’. Too much of the white powder and I get angry but, as a male vegetarian, it’s important that I get some in my diet.
It turns out that creatine isn’t just good for building and maintaining muscle mass, though; it’s also good for mental health — and combining it with various forms of therapy is especially beneficial.
More recently, researchers have begun to look at the broader systemic effects of creatine supplementation. Of particular interest has been the relationship between creatine and brain health. Following the discovery of endogenous creatine synthesis in the human brain, research quickly moved to understand what role this compound plays in things like cognition and mood.
Most studies linking brain benefits to creatine supplementation are either small or preliminary but there are enough clues to suggest that something positive could be going on here. For example, one oft-cited clinical trial from 2012 found creatine supplementation can effectively augment anti-depressant treatment. The trial was small (just 52 subjects, all women) but after eight weeks it found those subjects taking creatine supplements with their SSRI antidepressant were twice as likely to achieve remission from depression symptoms compared to those just taking antidepressants.
A recent article reviewing the research on creatine supplementation and depression pointed to several physiological mechanisms that could plausibly explain how this compound could improve mental health. Alongside citing several small trials that found positive results from creatine supplementation, the article concludes by stating: “Creatine is a naturally occurring organic acid that serves as an energy buffer and energy shuttle in tissues, such as brain and skeletal muscle, that exhibit dynamic energy requirements. Evidence, deriving from a variety of scientific domains, that brain bioenergetics are altered in depression and related disorders is growing. Clinical studies in neurological conditions such as PD [Parkinson’s Disease] have indicated that creatine might have an antidepressant effect, and early clinical studies in depressive disorders – especially MDD [Major Depressive Disorder] – indicate that creatine may have an important antidepressant effect.”
Source: New Atlas
Image: HowToGym
The idea that this might in any way appeal to 'newcomers' is bananas to me

It’s hard not to agree with John Gruber’s analysis of Openvibe, an app that allows you to mash together all of the different decentralised social networks (Mastodon, Bluesky, Threads, etc.) into one timeline. He doesn’t like it, and I have never liked the idea.
That’s partly because it’s confusing, but even if you managed to provide a compelling UX, the rhetorics of interactive communication differ from one social network to the next. People on one network interact using different norms and approaches than on others, which means different literacies are involved. I’d argue that mashing it all together only really serves people who wish to ‘broadcast’ messages to multiple places at the same time.
I really don’t see the point of mashing the tweets from two (or more!) different social networks into one unified timeline. To me it’s just confusing. I don’t love the current situation where three entirely separate, thriving social networks are worth some portion of my attention (not to mention that a fourth, X, still kinda does too). But when I use each of these platforms, I want to use a client that is dedicated to each platform. These platforms all have different features, to varying degrees, and they definitely have different vibes and cultural norms. Pretending that they all form one big (lowercase-m) meta platform doesn’t make that pretense cohesive. Mashing them all together in one timeline isn’t simpler. It sounds simpler but in practice it’s more cacophonous.
The idea that this might in any way appeal to “newcomers” is bananas to me. The concept of streaming multiple accounts from multiple networks into one timeline is by definition a bit advanced. In my experience, for very obvious reasons, casual social network users only use the first-party client. They’re confused even by the idea of using, say, an app named Ivory to access a social network called Mastodon. The idea of explaining to them why they might want to use an app named Openvibe to access Mastodon, Bluesky, and Threads (and the weirdo blockchain network Nostr) is like trying to explain to your dog why they should stay out of the trash. There’s a market for third-party clients (or at least I hope there is), but that market is not made up of “newcomers”.
Source: Daring Fireball
The inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through

I’ve been following the development of Are.na since the early days of leading the MoodleNet project. It’s a great example of a platform that serves a particular niche of users (“connected knowledge collectors”) really well.
In this Are.na editorial, Elan Ullendorff — a designer, writer, and educator — talks about the course he teaches. In it, he helps students research and map algorithms, before writing their own, and releasing them to the world.
I write a newsletter, teach a course, and run workshops all called “escape the algorithm.” The implicit joke of the name’s particularity (not “escape algorithms” but “escape the algorithm”) is that living outside of algorithms isn’t actually possible. An algorithm is simply a set of instructions that determines a specific result. The recommendation engine that causes Spotify to encourage you to listen to certain music is a cultural sieve, but so were, in a way, the Billboard charts and radio gatekeepers that preceded it. There have always been centers of power, always been forces that exert gravitational pulls on our behavior.
The anxiety isn’t determined by the presence or absence of code. It comes from a lack of transparency and control. You are susceptible whether or not TikTok exists, whether or not you delete it. Logging off is one tool, but it will not alone cure you.
Instead of withdrawing, I encourage my students to dive deeper, engaging with platforms as if they were close reading a work of literature. In doing so, I believe that we can not only better understand a platform’s ideological premises, but also the inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through. And in so doing, we might move beyond the flight response towards a fight response. Or if it is a flight response, let it be a flight not just away from something, but towards something.
[…]
Resisting the paths most traveled invites us to look at the platforms we use with a critical eye, leading us to new forms of critique, making visible parts of the world and culture that are out of our view, and inspiring entirely new ways of navigating the web.
Take Andrew Norman Wilson’s ScanOps, a collection of Google Books screenshots that include the hands of low-paid Google data entry workers, or Chia Amisola’s The Sound of Love, which curates evocative comments on Youtube songs. Then there’s Riley Walz’s Bop Spotter (a commentary on ShotSpotter, gunshot detection microphones often licensed by city governments), a constantly Shazam-ing Android phone hidden on a pole in the Mission district.
Source: Are.na
Image: Андрей Сизов
⭐ Become a Thought Shrapnel supporter!
Hi everyone, Doug here. Just to let you know that it’s now possible to support Thought Shrapnel on a monthly basis!
Don’t worry, nothing’s changing other than your ability to ensure the sustainability of this publication, receive a holographic sticker to go on your water bottle (or whatever), and have your name listed as a supporter.
We used to have around 60 supporters of Thought Shrapnel back in the day, so I hope you’ll consider becoming one of the first to get this new (rare!) holographic sticker. This is part of a little February experimentation…
Description of Things and Atmosphere

My daughter was complaining that, now she’s in high school, her English teacher demands more of her writing. I happened to have just read a post at Futility Closet about the notebooks of F. Scott Fitzgerald, which gives examples of him coming up with vividly atmospheric descriptions of scenes. I shared it with her, so hopefully she’ll use it as inspiration.
While I’m not a fan of overly-long descriptions just for the sake of it, this writing is sublime. It makes me want to re-read The Great Gatsby.
In the light of four strong pocket flash lights, borne by four sailors in spotless white, a gentleman was shaving himself, standing clad only in athletic underwear upon the sand. Before his eyes an irreproachable valet held a silver mirror which gave back the soapy reflection of his face. To right and left stood two additional menservants, one with a dinner coat and trousers hanging from his arm and the other bearing a white stiff shirt whose studs glistened in the glow of the electric lamps. There was not a sound except the dull scrape of the razor along its wielder’s face and the intermittent groaning sound that blew in out of the sea.
Source: The Notebooks of F. Scott Fitzgerald
Image: Illia Plakhuta
Cozy comfort for gamers

More articles about games should be games themselves, in my opinion! I loved this, and there’s a write-up of how and why it was created here.
I spend enough time on screens, so haven’t really got into the ‘cozy’ genre, but I know that it’s a huge thing. Games that you can play on your own terms and that provide a bit of escapism are (as the article describes) proven to be as good as meditation and other forms of deep relaxation.
The gaming industry is larger than the film and music industries combined globally. A growing sector is the subgenre dubbed “cozy games.” They are marked by their relaxing nature, meant to help players unwind with challenges that are typically more constructive than destructive. Recent research explores whether this style of game, along with video games more generally, can improve mental health and quality of life.
These play-at-your-own-pace games attract both longtime gamers and newcomers. […]
There’s no hard definition for a “cozy game.” If the game gives the player a cozy, warm feeling then it fits.
[…]
These games can provide a space for people to connect in ways they may not in the real world. Suzanne Roman, who describes herself as an autistic advocate, said gaming communities can be lifelines for neurodivergent people, including her own autistic daughter who celebrated her 18th birthday in lockdown. “I think it’s just made them more confident people, who feel like they fit in socially. There’s even been relationships, of course, that have formed in the real world out of this.”
Source: Reuters