On the gendered nature of (types of) hobbies

Auto-generated description: A matrix categorizes hobbies into four types: relational, sovereign, domestic, and restorative, based on their social orientation and adaptability to life or hobby.

This is an interesting look at the gendered nature of hobbies, how they’re coded, and how people treat them as provisional or non-negotiable. I’ve never been a woman, and never been in a long-term relationship with anyone other than my wife, so I don’t know how this works for other people.

What I do know is that there are at least three forces at play here: gender norms and differences, peer pressure (real or imagined), and expectations of self. The important thing is to talk about them, and I appreciate this post for opening up space to do that.

On one axis is a simple question: who does the hobby primarily serve? Some hobbies benefit others or the household — they produce something useful or supportive, something that flows outward. Others revolve around relationships and social connection. And some hobbies exist purely for the person doing them, serving no one but herself.

On the other axis is a less obvious question: who adapts to whom? Does the hobby fit itself around life, squeezing into whatever time is available? Or does life rearrange itself around the hobby, moving other things aside to make room?

[…]

What’s happening is not simply that men choose different hobbies. It is that men treat their hobbies — whatever they are — as non-negotiables, while women treat theirs as provisional. Men bring an entitlement to leisure that operates almost independently of what the activity is. Women bring a posture of permission-seeking and when they push back, as I did, they are made to feel they are asking for something extraordinary rather than basic consideration. The result over time is a gravitational pull: men drift upward toward the fourth sovereign quadrant regardless of what their hobby is. Women’s participation in sovereign hobbies gets pulled downward towards the elective.

Source: Astrid

Digital literacies involve layers of abstraction

Auto-generated description: A digital artwork showcases a whimsical arrangement of geometric shapes and structures resembling files and data stores.

On the one hand, yes I feel this. On the other hand, things change! There are layers of abstraction, especially with computing.

I was having a conversation with someone recently who’s senior in an educational computing organisation. We both agreed that the equivalent of Mozilla Webmaker these days wouldn’t be teaching kids HTML, CSS, and JavaScript; we’d be teaching them how to understand and use AI tools to achieve their ends. This is still making.

There’s a certain kind of person who’s becoming extinct. You’ve probably met one. Maybe you are one. Someone who actually understood the tools they used. Someone who could sit down at an unfamiliar system, poke at it for twenty minutes, and have a working mental model of what it was doing and why. Someone who read error messages instead of dismissing them. Someone who, when something broke, treated it as a puzzle rather than a betrayal.

That person is dying off. And nobody in the industry seems to care. In fact, most of them are actively celebrating the funeral while billing it as progress.

This isn’t an accident. This is the result of two decades of deliberate, calculated effort by the largest technology companies on earth to turn users into consumers, instruments into appliances, and technical literacy into a niche hobby for weirdos. They succeeded beyond their wildest expectations. Congratulations to everyone involved. You’ve built a generation that can’t extract a zip file without a dedicated app and calls it innovation.
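For what it’s worth, “extracting a zip file” needs nothing beyond a few lines of standard-library Python — a minimal sketch (function name is mine) of just how small the dedicated app really is:

```python
import zipfile

def extract_zip(archive_path: str, dest_dir: str) -> list:
    """Extract every member of a .zip archive using only the standard library,
    and return the list of member names for inspection."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()
```
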

[…]

The concept of a filesystem — of hierarchical storage that you own, that lives on hardware you control, that persists independently of any company’s servers — is genuinely alien to them. Not because it’s complicated. A child can understand that files live in folders. But they’ve never had to understand it because the platforms they grew up on hid it from them. iOS shipped without a user-accessible filesystem for over a decade. Google Drive abstracts away the folder metaphor entirely if you let it. iCloud will “optimize” your local storage, which is a polite way of saying it will silently move your files to Apple’s servers and give you a ghost of them on your own machine, and most users have no idea this is happening or what it means.
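The “child can understand it” claim holds up: the entire hierarchical-storage idea fits in a few lines. A rough sketch (my own helper, not anything platform-specific) that walks folders containing files containing folders:

```python
import os

def tree(root: str, indent: int = 0) -> list:
    """Return a directory hierarchy as indented lines: each folder's
    contents listed beneath it, nested folders recursed into."""
    lines = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        lines.append("  " * indent + entry)
        if os.path.isdir(path):
            lines.extend(tree(path, indent + 1))
    return lines
```
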

[…]

The users who grew up on these platforms don’t know what they’re missing. They’ve never used a system where they were genuinely in control. The idea that you should be able to run arbitrary code on hardware you paid for is foreign to them — not rejected, but simply absent as a concept. They’ll defend the restrictions without prompting because they’ve internalized the vendor’s framing so thoroughly that they experience the cage as comfortable. “I don’t want to root my phone, that sounds scary.” Cool. You’ve successfully trained yourself to be afraid of ownership. The platform vendors are proud of you.

[…]

The obituary for the power user is being written right now. The people writing it are the same ones who sold you the phone, designed the app store, wrote the terms of service you didn’t read, and built the algorithm that decided you didn’t need to see this.

Source: fireborn

Image: Creative Minds Factory

How power structures and relationships really work

Auto-generated description: A humorous organizational chart depicts a tangled web of personal relationships and hidden connections, including affairs, secrets, and rivalries among cartoon faces.

I’m not sure what I enjoyed more, the org chart showing how power structures and relationships really work, or the LinkedIn comment that said:

Very interesting how the dealer sells to his coworkers, and yet they’re still sad. A lack of clearly defined KPIs and regular milestone celebrations can make it difficult to maintain alignment and momentum with stakeholders. Would be insightful to create an internal customer feedback loop here.

Source: LinkedIn

Time as an instrument?

Auto-generated description: A website page features a product called interval for macOS, displaying a digital timer interface and a purchase button for $21.99.

I’m fascinated by this. Not fascinated enough to pay $21.99 to use it on just one of my devices, but I think it’s a really interesting example of reducing functionality, working hard on the aesthetic, and making something simple to use.

I can, and do, use Toggl, which is much more fully-featured, but there’s something to be said for things being nice to use. Perhaps I need to create my own cross-platform version, rather than an Apple-only one, as I did with Stream.
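The core of an interval timer really is a small thing to build — a minimal cross-platform sketch (function names are mine, nothing to do with interval’s actual code):

```python
import time

def fmt_remaining(seconds: int) -> str:
    """Format a remaining-seconds count as MM:SS."""
    mins, secs = divmod(seconds, 60)
    return f"{mins:02d}:{secs:02d}"

def run_interval(minutes: float, tick=time.sleep) -> None:
    """Count down an interval, reprinting the remaining time once per second.
    `tick` is injectable so the loop can be exercised without sleeping."""
    total = int(minutes * 60)
    for remaining in range(total, 0, -1):
        print(f"\r{fmt_remaining(remaining)}", end="", flush=True)
        tick(1)
    print("\rDone!   ")
```

Everything beyond that is, of course, the aesthetic — which is the part interval actually charges for.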

Source: interval for macOS

On originality

Auto-generated description: A quote about finding inspiration and individuality is attached to a surface using red tape and paperclips.

100% agree.

Source: Are.na

Quite the week

Image CC BY-ND Visual Thinkery for WAO: A group of people discusses starting a co-op, and their journey is depicted as climbing a mountain over ten years.

No Thought Shrapnel this Sunday. It’s been quite the week.

– Doug

The concentration of power in AI labs is now one of the defining political questions of the decade

Auto-generated description: A vibrant, multicolored pattern of computer circuit boards repeats across the image.

This is an excellent post which talks lucidly about what it means for power to be decentralised in the world of AI. The author, Alex Chalmers, argues that decentralisation is not automatically good; it only works when embedded in a framework that can coordinate local actors, define boundaries, and step in when things go wrong.

Chalmers draws on historical thinkers and different traditions, ultimately arguing that if we care about pluralism and autonomy, we should design bounded decentralisation with explicit constitutional guardrails. In other words, we shouldn’t just assume “more nodes” or “more voices” automatically means more freedom.

In today’s world, a handful of companies control the compute, data, and frontier models that are restructuring how billions of people interact with the world. Existing institutions are struggling to keep up. The concentration of power in AI labs is now one of the defining political questions of the decade.

Many are unhappy about this development, with groups like the AI Now Institute, the Distributed AI Research Institute (DAIR), and the Algorithmic Justice League arguing that AI development as currently constituted is irredeemably centralizing. They believe that we need to relocate authority away from corporations and regulators towards the communities most affected by these systems. When policymakers look for alternatives to the status quo of corporate self-governance and light-touch regulation, these groups are frequently in the room.

Ideas around participatory AI governance draw on a deep intellectual tradition that integrates technology and power, dating back to nineteenth century anarchism and running through twentieth century American social theory. While elements of the diagnosis have force, both the analysis and the prescriptions suffer from fatal flaws that become even more acute in the AI age.

[…]

If your starting premise is that human flourishing is what happens when the megamachine gets out of the way, you don’t need to weigh the goods it produces, because they aren’t really goods. You don’t need a theory of when expertise is legitimate, because expertise is a symptom of the problem. You don’t need mechanisms against capture, because capture is what happens under the current system and will dissolve along with it.

The intellectual apparatus is structured to avoid the questions that a functional governance regime has to answer. What looks like a program for radical democracy turns out to be a refusal of the conditions under which democratic decisions about complex systems can be made at all.

[…]

[G]overnance must go where the knowledge is. This could be professional bodies, academic institutions, or open-source communities. They would each govern usage within the domain where their members have the requisite competence and stakes. Fortunately, most of these institutions already exist. They do not need to be designed from first principles or assembled by the participation industry.

While the vast majority of governance questions are deployment problems where domain-specific institutions have the advantage, there are a handful of bigger challenges that sit above this. Problems like the security of frontier model weights or thresholds for certain dangerous capabilities sit at a higher layer that requires a degree of either state or interstate coordination.

[…]

No quantity of nested enterprises can resolve the production-side concentration of frontier AI. A handful of labs control the most powerful models, and no amount of deployment-side checks and balances can change that.

But a thick ecosystem of intermediary institutions on the deployment side creates countervailing power. The labs must satisfy many masters rather than capturing one regulator, or, as the anarchist model would have it, being replaced by a constellation of community-run alternatives that will never match their capabilities.

[…]

Freedom has never depended on power being small. It has depended on power being answerable to more than one authority at a time, and on citizens belonging to institutions that can push back on their own terms. The task ahead of us is building that intermediary layer.

Source: Cosmos Institute

Image: Deborah Lupton

Grand ambitions vs reality

Auto-generated description: A confident character is depicted with grand ambitions during a walk, contrasted with a subdued reaction shortly after returning home.

It’s not just walking, but solo travel of any sort that does this kind of thing to me. I’ve spent too long at home recently.

Source: Are.na

Renewable energy: 98% of days in Britain are either windy, sunny, or both

Auto-generated description: A scatter plot and heat maps illustrate how wind and solar energy complement each other in Britain, highlighting patterns of energy generation through 2024 and early 2025.

I discovered this particular one via LinkedIn, but the original creator of this infographic, Ember Energy, has loads of them on its website – mainly focused on the EU.

Source: LinkedIn

Having a system built on context puts the power in the people's hands

Auto-generated description: A vibrant network of interconnected nodes of various colors, including pink, yellow, and blue, is displayed against a dark background.

This post is by a journalist, talking about journalism. But it’s not a huge conceptual leap to think about this in terms of education.

The people who are stuck in the “AI = chatbot” mindset are getting it all wrong. Interacting with an AI is an amazing way of connecting together things you care about in an order that suits you and the way you learn. It’s not just about sitting kids in front of a computer, but about finding ways of exploring human knowledge in ways that go beyond the limited experience of the people who happen to be available for guidance.

I was talking with Justin Spooner today about “multiplayer AI” which is really the problem that needs solving next. Learning isn’t a solo activity.

Even when the web granted us unlimited space, we stuck to [the] old formats. We gave everyone the same packaged product because that’s all we were able to do, and most of the primary material went nowhere.

Those constraints are gone now. Artificial intelligence and large language models mean we can make sense of all that source material that was previously left on the cutting room floor. That data is actually hugely valuable as unique, new information. And at the same time, the limits on how we create stories are gone. Generative AI can craft a story tailored to a single person and produce millions of those stories at once.

Given this change, making “content” the driving force and output for publishing systems no longer makes sense. What we need now is a new form of software for media companies to orient around: the Context Management System.

[…]

Having a system built on context puts the power in the people’s hands. The context becomes the raw material that AI can use to create a unique piece of media just for that person at that time. It can be text, audio or video based on what the user prefers, shaped by their preferences, location and time of day. The story can be customized based on where a person lives, their educational level, or what they know based on past consumption patterns.
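The kind of per-person context record the quote describes might look something like this — the field names are hypothetical, simply drawn from the attributes mentioned above (preferred medium, location, time of day, past consumption):

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Hypothetical per-person record for a 'Context Management System'."""
    preferred_medium: str                # e.g. "text", "audio", "video"
    location: str                        # where the person lives
    local_hour: int                      # 0-23, their time of day
    reading_level: str                   # e.g. "general", "expert"
    seen_stories: set = field(default_factory=set)  # past consumption

    def already_knows(self, story_id: str) -> bool:
        """Has this person already consumed a given story?"""
        return story_id in self.seen_stories
```
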

Source: Hacks/Hackers

Image: BoliviaInteligente

THE FUTURE IS OFFLINE

Auto-generated description: A red sticker on a gray surface reads THE FUTURE IS OFFLINE.

Source: maique

Why I adore the night

I have noticed that when all the lights are on, people tend to talk about what they are doing – their outer lives. Sitting round in candlelight or firelight, people start to talk about how they are feeling – their inner lives. They speak subjectively, they argue less, there are longer pauses. To sit alone without any electric light is curiously creative. I have my best ideas at dawn or at nightfall, but not if I switch on the lights – then I start thinking about projects, deadlines, demands, and the shadows and shapes of the house become objects, not suggestions, things that need to be done, not a background to thought.

Why I adore the night, by Jeanette Winterson

Life advice

Auto-generated description: A slice of toast has the words YOU'LL SOON BE COMPOST written on it with alphabet pasta.

Some people are fond of sharing the saccharine quote by Mary Oliver: “Tell me, what is it you plan to do with your one wild and precious life?” I, on the other hand, am more fond of much more down-to-earth reminders that we will soon be dust – or, as this piece of toast suggests, compost.

Source: Are.na

The AI Adoption Spiral

Auto-generated description: A spiral diagram labeled The AI Adoption Spiral humorously illustrates the stages of understanding AI, from initial confusion to eventual capability and innovation.

I love this from Liz Fosslien, as the outer part of it shows the journey that people go on with any new tool. The inner part, however, is more philosophical. And for some people it’s emancipatory and for others it gets really dark, quickly.

Source: Liz Fosslien

The "U-shaped curve" of cognitive offloading to AI tools

Auto-generated description: A visual framework illustrates the Cognitive Offloading Paradox, showing how the depth of learning changes with varying degrees of AI offloading, from doing it all oneself to committing to AI delegation.

Almost a year ago, I responded here on Thought Shrapnel to what I thought was a terrible paper which claimed to show, via brain scans, that using LLMs was bad for students' cognitive development.

As Philippa Hardman notes in this article, the academic literature has begun to catch up with what people actually using these tools already know:

The theoretical picture sharpened in 2025–26. Favero et al. (2025) warned that cognitive offloading undermines learning outcomes unless the mental effort that’s freed up gets redirected towards other meaningful tasks.

Then, in March 2026, Lodge & Loble went a step further, arguing that cognitive offloading isn’t inherently harmful to learners — what matters is whether it’s beneficial or detrimental, and the difference depends entirely on what happens with the freed-up cognitive capacity.

So over the course of 2025 and into 2026, the field was starting to move beyond “AI is bad for learning” toward a harder question: when is it bad, and when might it actually help? But the empirical evidence to answer that question — across a large sample, across cultures, with a clear mechanism — didn’t exist yet.

It turns out that context matters, as does your mental model of what you can/should do with a tool. The scientific detail is in Hardman’s post, but I like her summary:

Zone 1 — No offloading. The learner does everything manually. AI isn’t part of the process. They carry the full cognitive load: reading every source, writing every draft, organising every dataset. Learning happens, but it’s slow and capacity-constrained. There’s no freed-up bandwidth for higher-order reflection because every minute is spent on execution.

Zone 2 — Scattered, half-hearted offloading. The learner uses AI for a bit here and there — fixing a sentence, checking a fact, tidying a paragraph. This is where most current AI use in learning sits, and it’s the worst zone. The learner is still carrying almost all of the cognitive load, but now they’ve added the friction of managing the AI: deciding what to ask, evaluating whether the output is useful, switching between their own work and the tool. More effort, no meaningful benefit. This is what the negative studies measured.

Zone 3 — Committed, strategic offloading. The learner delegates entire categories of substantive work to AI: all the source summarisation, the full first-pass literature review, the complete data organisation. The cognitive savings are large enough to genuinely free capacity — and that freed capacity gets invested in the work AI can’t do: critiquing frameworks, questioning assumptions, constructing original arguments, making judgement calls. This is where the paradox kicks in. This is where transformative learning lives.

So, essentially, there’s a “U-shaped curve” of adaptation, as anyone familiar with Charles Handy’s Sigmoid Curve will be aware. It’s definitely worth clicking through to read Hardman’s “Cheat Sheet” which helps reframe learning activities.

Source: Dr Phil’s Newsletter

Image: Claude Opus 4.7

"We have two Microsoft Outlooks and neither one is working"

Auto-generated description: A comparison of Earth images from Apollo 17 in 1972 and Artemis II in 2026 is humorously captioned with quotes from Neil Armstrong and Reid Wiseman.

So disappointing. I mean, who chooses Microsoft for anything mission-critical?

Source: That kafka Joke

The Journey Home

Auto-generated description: A small boat floats on a stylized, colorful sea under a radiant orange sun with birds flying nearby.

I came across this via Are.na, and then wanted to find out more about the “beautiful, melancholy genius of Matthew Wong”. This image is part of a triptych that you can view on the Christie’s auction house website.

Source: Are.na

'Folk software' - not 'vibe coding'

Auto-generated description: A serene landscape depicts small houses with smoking chimneys, a path, rolling green hills, and snow-capped mountains under a bright sun.

I’m thankful to Pete Cohen for sharing this article with me, which enables me to put aside the awful term ‘vibe coding’ once and for all. I’ve created a bunch of software over the past few months, which you can see here: dynamicskillset.com/tools.

The bit of software I use the most, though, isn’t on that list. You can absolutely have a look at the source code but, really, I made Overflow just for me. It’s an app for my Mac which plays music from my Plex music collection on my home server. I tinker with it quite a bit, and now the app is 90% how I want it.

So, yes, folk software. I like it.

I’ve… made a CRM that works the way I work, a serendipity engine, a hype decay tracker, a draft graveyard and heaps more things.

I’ve started calling this folk software.

The songs of folk music emerged from communities rather than studios, passed around and adapted, never focus-grouped. Folk music didn’t disappear when recorded music arrived. It just stopped being the only option. For a while, if you wanted music, you either made it yourself or knew someone who could.

Software has been in its “recorded music” era for decades. If you wanted a tool, you either bought what the industry offered or you didn’t have it. The threshold for creation was high enough that almost everything had to be commercially justified. Will enough people pay? Can we get budget for this?

That threshold just collapsed.

Tools like Claude Code, Cursor, Replit mean the calculus is now simply: do I want this? The answer can be yes for an audience of one. And sometimes that audience will turn out to be more than just you.

Source: Move37

Image: Claude

Our communication currently often takes place via platforms over which we have no control

Auto-generated description: A crumpled piece of paper on the ground displays the words WhatsApp respects and protects your privacy.

I’ve never used WhatsApp, and the only Meta account I’ve ever had was for Facebook when it first came out. Which makes me a bit of an outlier, I know.

But it seems that other people are cottoning on to the fact that US Big Tech companies do not have the best interests of European users at heart. I use and recommend Signal, but even that, while not ‘Big Tech’, is US-based.

This article talks about how European governments are switching to encrypted apps under their control. I applaud the move! For more like this, see the first TechFreedom Dispatch.

Governments in France, Germany, Poland, the Netherlands, Luxembourg and Belgium have started rolling out in-house messaging services for officials to exchange sensitive information, in an effort to stop staff from using popular encrypted apps and switch to local alternatives they can control. Defense alliance NATO also has its own messenger, and the European Commission plans to make the switch by the end of the year.

The move toward government-controlled messaging apps is part of Europe’s search for alternatives to American technology, sparked by fears of being strategically dependent on Washington. WhatsApp is owned by U.S. tech giant Meta, while Signal is run by a U.S.-based non-profit and managed by a large community of open-source software enthusiasts.

The effort to unplug from American companies also reflects growing recognition among governments of the vulnerabilities of mainstream messaging apps for sharing sensitive information between politicians.

“Our communication currently often takes place via platforms over which we have no control,” Willemijn Aerdts, the Netherlands’ digital minister, told POLITICO in a statement. “In a world where technology is increasingly being used as a tool of power, that poses a risk.”

Source: Politico

Image: Tushar Mahajan

Note to self

Auto-generated description: A contemplative message encourages letting go by highlighting the absence of an audience, approval, or roles to play.

Source: Are.na