The "U-shaped curve" of cognitive offloading to AI tools

Auto-generated description: A visual framework illustrates the Cognitive Offloading Paradox, showing how the depth of learning changes with varying degrees of AI offloading, from doing it all oneself to committing to AI delegation.

Almost a year ago, I responded here on Thought Shrapnel to what I thought was a terrible paper which claimed to show, via brain scans, that using LLMs was bad for students' cognitive development.

As Philippa Hardman notes in this article, the academic literature has begun to catch up with what people actually using these tools already know:

The theoretical picture sharpened in 2025–26. Favero et al. (2025) warned that cognitive offloading undermines learning outcomes unless the mental effort that’s freed up gets redirected towards other meaningful tasks.

Then, in March 2026, Lodge & Loble went a step further, arguing that cognitive offloading isn’t inherently harmful to learners — it can be beneficial or detrimental, and the difference depends entirely on what happens with the freed-up cognitive capacity.

So over the course of 2025 and into 2026, the field was starting to move beyond “AI is bad for learning” toward a harder question: when is it bad, and when might it actually help? But the empirical evidence to answer that question — across a large sample, across cultures, with a clear mechanism — didn’t exist yet.

It turns out that context matters, as does your mental model of what you can/should do with a tool. The scientific detail is in Hardman’s post, but I like her summary:

Zone 1 — No offloading. The learner does everything manually. AI isn’t part of the process. They carry the full cognitive load: reading every source, writing every draft, organising every dataset. Learning happens, but it’s slow and capacity-constrained. There’s no freed-up bandwidth for higher-order reflection because every minute is spent on execution.

Zone 2 — Scattered, half-hearted offloading. The learner uses AI for a bit here and there — fixing a sentence, checking a fact, tidying a paragraph. This is where most current AI use in learning sits, and it’s the worst zone. The learner is still carrying almost all of the cognitive load, but now they’ve added the friction of managing the AI: deciding what to ask, evaluating whether the output is useful, switching between their own work and the tool. More effort, no meaningful benefit. This is what the negative studies measured.

Zone 3 — Committed, strategic offloading. The learner delegates entire categories of substantive work to AI: all the source summarisation, the full first-pass literature review, the complete data organisation. The cognitive savings are large enough to genuinely free capacity — and that freed capacity gets invested in the work AI can’t do: critiquing frameworks, questioning assumptions, constructing original arguments, making judgement calls. This is where the paradox kicks in. This is where transformative learning lives.

So, essentially, there’s a “U-shaped curve” of adaptation, as anyone familiar with Charles Handy’s Sigmoid Curve will be aware. It’s definitely worth clicking through to read Hardman’s “Cheat Sheet” which helps reframe learning activities.

Source: Dr Phil’s Newsletter

Image: Claude Opus 4.7

"We have two Microsoft Outlooks and neither one is working"

Auto-generated description: A comparison of Earth images from Apollo 17 in 1972 and Artemis II in 2026 is humorously captioned with quotes from Neil Armstrong and Reid Wiseman.

So disappointing. I mean, who chooses Microsoft for anything mission-critical?

Source: That kafka Joke

The Journey Home

Auto-generated description: A small boat floats on a stylized, colorful sea under a radiant orange sun with birds flying nearby.

I came across this via Are.na, and then wanted to find out more about the “beautiful, melancholy genius of Matthew Wong”. This image is part of a triptych that you can view on the Christie’s auction house website.

Source: Are.na

'Folk software' - not 'vibe coding'

Auto-generated description: A serene landscape depicts small houses with smoking chimneys, a path, rolling green hills, and snow-capped mountains under a bright sun.

I’m thankful to Pete Cohen for sharing this article with me, which enables me to put aside the awful term ‘vibe coding’ once and for all. I’ve created a bunch of software over the past few months, which you can see here: dynamicskillset.com/tools.

The bit of software I use the most, though, isn’t on that list. You can absolutely have a look at the source code but, really, I made Overflow just for me. It’s an app for my Mac which plays music from my Plex music collection on my home server. I tinker with it quite a bit, and now the app is 90% how I want it.

So, yes, folk software. I like it.

I’ve… made a CRM that works the way I work, a serendipity engine, a hype decay tracker, a draft graveyard and heaps more things.

I’ve started calling this folk software.

The songs of folk music emerged from communities rather than studios, passed around and adapted, never focus-grouped. Folk music didn’t disappear when recorded music arrived. It just stopped being the only option. For a while, if you wanted music, you either made it yourself or knew someone who could.

Software has been in its “recorded music” era for decades. If you wanted a tool, you either bought what the industry offered or you didn’t have it. The threshold for creation was high enough that almost everything had to be commercially justified. Will enough people pay? Can we get budget for this?

That threshold just collapsed.

Tools like Claude Code, Cursor, Replit mean the calculus is now simply: do I want this? The answer can be yes for an audience of one. And sometimes that audience will turn out to be more than just you.

Source: Move37

Image: Claude

Our communication currently often takes place via platforms over which we have no control

Auto-generated description: A crumpled piece of paper on the ground displays the words WhatsApp respects and protects your privacy.

I’ve never used WhatsApp, and the only Meta account I’ve ever had was for Facebook when it first came out. Which makes me a bit of an outlier, I know.

But it seems that other people are cottoning on to the fact that US Big Tech companies do not have the best interests of European users at heart. I use and recommend Signal, but even that - if not ‘Big Tech’ - is US-based.

This article talks about how European governments are switching to encrypted apps under their control. I applaud the move! For more like this, see the first TechFreedom Dispatch.

Governments in France, Germany, Poland, the Netherlands, Luxembourg and Belgium have started rolling out in-house messaging services for officials to exchange sensitive information, in an effort to stop staff from using popular encrypted apps and switch to local alternatives they can control. Defense alliance NATO also has its own messenger, and the European Commission plans to make the switch by the end of the year.

The move toward government-controlled messaging apps is part of Europe’s search for alternatives to American technology, sparked by fears of being strategically dependent on Washington. WhatsApp is owned by U.S. tech giant Meta, while Signal is run by a U.S.-based non-profit and managed by a large community of open-source software enthusiasts.

The effort to unplug from American companies also reflects growing recognition among governments of the vulnerabilities of mainstream messaging apps for sharing sensitive information between politicians.

“Our communication currently often takes place via platforms over which we have no control,” Willemijn Aerdts, the Netherlands’ digital minister, told POLITICO in a statement. “In a world where technology is increasingly being used as a tool of power, that poses a risk.”

Source: Politico

Image: Tushar Mahajan

Note to self

Auto-generated description: A contemplative message encourages letting go by highlighting the absence of an audience, approval, or roles to play.

Source: Are.na

Scamming tourists in Nepal

Auto-generated description: A red helicopter is parked on a snowy mountain slope with a backdrop of majestic, snow-covered peaks.

This is wild 🤯

The mechanics of the fake rescue racket are straightforward: stage a medical emergency, call in a helicopter, check a tourist into a hospital, and file an insurance claim that bears little resemblance to what actually happened. But the sophistication lies in how each link in the chain is compensated, and how difficult it is for a foreign insurer — operating from Australia and the United Kingdom — to verify events that occurred at 3,000 metres in a remote Himalayan valley.

The CIB investigation identifies two primary methods for manufacturing an “emergency.”

The first involves tourists who simply don’t want to walk back. After completing a demanding trek — an Everest Base Camp trek, for instance, can take up to two weeks on foot — guides offer an alternative: pretend to be sick, and a helicopter will come. The guide handles the rest.

The second method is more troubling. At altitudes above 3,000 metres, mild symptoms of altitude sickness are common. Blood oxygen saturation can drop, hands and feet tingle, headaches develop. In most cases, rest, hydration or a gradual descent is all that is needed. But guides and hotel staff, according to the CIB investigation, have been trained to terrify trekkers at precisely this moment. They tell them they are at risk of dying, that only immediate evacuation will save them. In some cases, investigators found that Diamox (Acetazolamide) tablets, used to prevent altitude sickness, were administered alongside excessive water intake to induce the very symptoms that would justify a rescue call.

In at least one case cited in the investigation, baking powder was mixed into food to make tourists physically unwell.

Once a “rescue” is called, the financial choreography begins. A single helicopter carries multiple passengers. But separate, full-price invoices are submitted to each passenger’s insurance company, as if each had their own dedicated flight. A $4,000 charter becomes a $12,000 claim. Fake flight manifests and load sheets are fabricated. At the hospital, medical officers prepare discharge summaries using the digital signatures of senior doctors who were never involved in the case. In some cases, these are done without those doctors’ knowledge. Fake admission records are created for tourists who were, in some documented instances, drinking beer in the hospital cafeteria at the time they were supposedly receiving treatment.

[…]

Between 2022 and 2025, investigators identified 4,782 foreign patients treated across the implicated hospitals. Of these, 171 cases were confirmed as fake rescues. Over that period, Era International Hospital received deposits of more than $15.87 million linked to these activities. Shreedhi International Hospital received over $1.22 million.

Among rescue operators, Mountain Rescue Service conducted 171 fraudulent rescues out of 1,248 total charter flights, claiming approximately $10.31 million from insurers. Nepal Charter Service carried out 75 fake rescues from 471 flights, claiming $8.2 million. Everest Experience and Assistance was linked to 71 suspicious rescues from 601 flights, with insurance claims totalling $11.04 million.

In one instance that illustrates the brazenness of the scheme, police documented a case in which four tourists were rescued on a single helicopter flight, on the same date, using the same helicopter and manifest. Insurance claims were nonetheless submitted as multiple separate rescues, with the total rescue bill reaching $31,100, plus a separate hospital bill of $11,890.

Source: The Kathmandu Post

Image: Alexander Aashiesh

The system can generate options. It cannot supply ownership.

Auto-generated description: A table contrasts the capabilities of AI and LLMs with tasks that humans still need to perform, across four layers: generation, pattern matching, optimization, and scaling.

The above table is included in a fantastic article by Raj Nandan Sharma entitled Good Taste the Only Real Moat Left. He offers a nuanced view of working with LLMs, arguing that, yes, of course there is the lazy, ‘slop’ version of AI that involves what he calls “passive selection”. But what’s much more interesting and valuable is active shaping.

There is a strong version of the “taste matters” argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one.

That is a useful role, but it is also too small.

Historically, important work did not emerge from detached selection alone. It emerged from co-creation under constraint. Builders argued with reality, with collaborators, with budgets, with materials, with timelines, and with the consequences of getting things wrong.

That friction matters. It is where depth comes from.

Once you see that, the risk becomes clearer: if human value is reduced to curation, the human becomes a discriminator in a mostly machine-driven loop.

The analogy to machine learning is imperfect but useful. In generative adversarial setups, the discriminator exists to help the generator improve. Once the generator is good enough, the discriminator is not the part that ships.

The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.

Source: Raj Nandan Sharma

'Google Docs' for Markdown?

Auto-generated description: A dark-themed user interface displays a collaborative Markdown editor with sections for Markdown features, suggestions, and comments, alongside a task list and chat menu.

A couple of months ago, Matt Webb shared a tool called mist which I’d describe as Etherpad with Markdown and track changes. I don’t think I even shared it here, because, although it was cool, it wasn’t Open Source, and therefore I didn’t think it would last very long.

Happily, Matt’s not only open-sourced it, but made it really easy to deploy via Cloudflare. Happy days!

What I love about Markdown is that it’s document-first. The formatting travels with the doc. I can’t tell you how many note-taking apps I’ve jumped between with my exact same folder of Markdown notes.

The same should be true for collaboration features like suggested edits. If somebody makes an edit to your doc, you should be able to download it and upload to a wholly different app before you accept the edit; you shouldn’t be tied to a single service just because you want comments.

(And of course the doc should still be human-readable/writeable, and it’s cheating to just stuff a massive data-structure in a document header.)

So mist mixes Markdown and CriticMarkup – and I would love it if others picked up the same format. If apps are cheap and abundant in the era of vibing, then let’s focus on interop!

With mist itself:

Several people have asked for the ability to self-host it. The README says how (it’s all on Cloudflare naturally). You can add new features to your own fork, though please do share upstream if you think others could benefit.

This is the first time I’ve come across CriticMarkup, which is a layer on top of Markdown for ‘track changes’. The way that it’s done in mist is explained here.
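For a sense of what CriticMarkup looks like in practice, here’s a minimal sketch (my own, not mist’s actual implementation) showing the five marks from the CriticMarkup spec and what “accepting all edits” means for each of them:

```python
import re

# The five CriticMarkup marks layered on top of plain Markdown:
#   {++insertion++}  {--deletion--}  {~~old~>new~~}
#   {==highlight==}  {>>comment<<}
def accept_all(text: str) -> str:
    """Return the text with every CriticMarkup edit accepted."""
    text = re.sub(r"\{~~(.*?)~>(.*?)~~\}", r"\2", text)  # substitutions: keep the new text
    text = re.sub(r"\{\+\+(.*?)\+\+\}", r"\1", text)     # insertions: keep the inserted text
    text = re.sub(r"\{--(.*?)--\}", "", text)            # deletions: drop the deleted text
    text = re.sub(r"\{==(.*?)==\}", r"\1", text)         # highlights: keep the text, drop the marks
    text = re.sub(r"\{>>(.*?)<<\}", "", text)            # comments: drop entirely
    return text

print(accept_all("Hello {--cruel --}{++bright ++}world{>>nice!<<}"))
# → Hello bright world
```

The point Matt Webb makes about interop holds here: because the edits are just inline text, any tool that understands the notation can accept or reject them — no proprietary data structure required.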

Source: interconnected

Ideas are not products, as much as corporations would like them to be

Auto-generated description: A vintage-style light bulb hangs against a wall with concentric black lines creating an optical illusion.

It’s one thing believing that Intellectual Property (“IP”) is absolute bollocks, and it’s another thing living your life under capitalism. It’s the reason that my doctoral thesis is CC0 licensed (i.e. “donated to the public domain”) and all of the tools I’ve been building recently are AGPL-licensed (“specifically designed to ensure cooperation with the community”).

In this essay, Jenny Odell, author of the excellent How to Do Nothing, tells the story of a Japanese farmer who rediscovered the old ways, paying more attention to the seasons. The main thrust of what she has to say, though, is about where ideas come from.

Essentially, everything is emergent, and all your brain is doing is making links between things. Which is why I don’t have any problem in using LLMs as part of my workflow.

Why is it that when we sit down and try to force an idea, nothing comes—or, if we succeed in forcing it, it feels stale and contrived? Why do the best ideas appear uninvited and at the strangest times, darting out at us like an impish squirrel from a shrub?

The key, in my opinion, has to do with what you think it is that’s doing the producing, and where. It’s easy for me to say that “I” produce ideas. But when I’ve finished something, it’s often hard for me to say how it happened—where it started, what route it took, and why it ended where it did.

[…]

Ideas are not products, as much as corporations would like them to be. Ideas are intersections between ourselves and something else, whether that’s a book, a conversation with a friend, or the subtle suggestion of a tree. Ideas can literally arise out of clouds (if we are looking at them). That is to say: ideas, like consciousness itself, are emergent properties, and thinking might be more participation than it is production. If we can accept this view of the mind with humility and awe, we might be amazed at what will grow there.

Source: The Creative Independent

Image: JACQUELINE BRANDWAYN

Related Are.na collection: How to grow an idea

Games from Hacker News "Show HN" threads

Auto-generated description: A dark-themed poster features The HN Arcade with options to Browse Games or Submit a Game in orange.

I browse Hacker News most days, and earlier today came across a wonderfully addictive game called STARFLING.

When I shared it on our gaming chat, Adam Procter noticed that there’s a whole arcade that someone curates from “Show HN” threads! Delightful.

The HN Arcade is a community-driven directory of games discovered from Hacker News Show HN posts.

Source: The HN Arcade

Violently boiling water in some monstrous kettle

Auto-generated description: A complex pattern of abstract, black symbols and shapes is scattered across a white background.

What I like about this website is that it’s not just “art” but art with a purpose. The project’s subtitle is Experimental Notation in Music, Art, Poetry, and Dance, 1950–1975, and it covers artists I’ve heard of, such as John Cage, and many I haven’t.

What they share is an ability to rethink the way in which their art is denoted. For example:

The pointillism of Morton Feldman’s Intersection 3 is an early example of experimental musical notation. One of many pieces in the 1950s that Feldman wrote on graph paper, the work features a metronomic tempo while inviting its performer, the pianist David Tudor, to decide what pitches to play, prescribing only the number of notes and the general pitch range. The sounds that resulted evoked associations of combat and even brutality among critics, an aesthetic that Feldman himself described as “violently boiling water in some monstrous kettle.”

Ambiguity often gets a bad rap, but it’s something that fascinates me – and is, I believe, at the heart of creativity. You can see what I mean by looking at the overlapping circles diagram in this paper I wrote with my thesis supervisor 15 years ago.

TL;DR: words and symbols both denote and connote things, and it’s at the overlap of this denotation and connotation that interesting things happen.

Source: The Scores Project

You made this?

Auto-generated description: A bird expresses amazement at a birdhouse another bird has made, to which the creator proudly responds, yes.

Source: they can talk

Lemme finish this sentence...

Auto-generated description: A man lies on the ground writing in a notebook while seemingly being swallowed by the open mouth of a large alligator or crocodile.

Obviously staged, but I love this photo of the “artist and bon vivant” Peter Beard.

Source: Are.na

Earthrise, Take 2

Auto-generated description: A view of Earth rising over the rugged, cratered surface of the Moon.

This is my favourite photo of those released by NASA from the Artemis II Lunar Flyby. There are also a lot more, including of the crew and ground control on the NASA Johnson Flickr account.

The first flyby images of the Moon captured by NASA’s Artemis II astronauts during their historic test flight reveal regions no human has ever seen before—including a rare in-space solar eclipse. Released Tuesday, April 7, 2026, the photos were taken on April 6 during the crew’s seven‑hour pass over the lunar far side, marking humanity’s return to the Moon’s vicinity.

Source: NASA | Artemis II Lunar Flyby

Commonplace

Auto-generated description: Commonplace is described as a platform for curating and sharing links across the open social web, with options like Mastodon and RSS, and no account required.

I’m building and experimenting with a new Thought Shrapnel-adjacent thing called Commonplace. It’s a federated collection manager for the open social web.

You can try it out and/or install your own version. I haven’t given up on the #MoodleNet idea of communities curating collections. It’s currently links-only, but once I get the right protections in place, I plan to allow resource uploads, as well as the option to require sign-in to view collections.

Commonplace is a link collection manager for the open social web. Organise links by topic, invite collaborators, and share collections with people on Mastodon, Bluesky, and RSS — without asking them to sign up anywhere new.

Source: Commonplace

🐣 Happy Easter!

Auto-generated description: A brick wall with two circular windows and an arched doorway resembles a face.

No Thought Shrapnel this week. Enjoy celebrating/not celebrating Easter however you do (or don’t do) it! 🙂

A Victorian-era LLM

Statue of Queen Victoria in Croydon, London

If you scratch away the surface, I’m still a History teacher underneath, so I love this idea of training an LLM on Victorian-era texts! It’s pretty slow, but fun.

Mr. Chatterbox is a language model trained entirely from scratch on a corpus of over 28,000 Victorian-era British texts published between 1837 and 1899, drawn from a dataset made available by the British Library. He is not a modern AI putting on an accent — his vocabulary, ideas, and worldview are formed exclusively from nineteenth-century literature.

He excels at discussions of Victorian life, literature, science, philosophy, manners, and the great questions of the age. Ask him about the railways, the Crystal Palace, Mr. Darwin’s theories, or the proper conduct of a gentleman. As the model is still in beta, some responses may be a little wonky. If this happens, click on an answer to regenerate it.

Source: Hugging Face

Image: Kristin Snippe

2026 is about 'Aspirational Humanity' – amongst other things

Auto-generated description: A large bubble floats in the sky above pink clouds with the text What's Anu 2026 Macrotrend Report Public Preview overlaying the image.

The key themes in this slide deck are interesting, especially as I like to be able to name things that I’m seeing/sensing:

  • Aspirational Humanity – “As artificial intelligence hyper-flattens mass culture, anything denoting evidence of humanity becomes exceptionally desirable.”
  • Sensorial Potency – “The drive to over-optimize everything has left us in a sensory void.”
  • Subversive Sincerity – “The performance of ironic detachment is growing tired, and the fantasy of regressive nostalgia is no longer meeting expectations.”
  • Algorithmic Evasion – “Exasperation with social media isn’t new, but chaos overload and sloppification has pushed annoyance to a threshold for action.”
  • Subtle Sustainability – “Politicization has quieted brand environmental efforts over the past year, while the public succumbs to eco-fatigue upon realizing the relatively miniscule impact of individual action.”

Source: WHAT’S ANU (more here)

Clippy sez: Just Do It

Auto-generated description: A cartoon paperclip character humorously questions if someone is waiting for ideal conditions that don't exist, offering sarcastic responses I'm aware and Wow, rude.

Wow, rude.

Source: Are.na