We are, effectively, being fracked to death.

Auto-generated description: A person dressed in an animal costume holds a sign that reads AI IS CRINGE in a lively, colorful setting with others in similar costumes.

I like experimenting with AI tools, but as I mentioned in a recent post about the Claude Constitution, I’m not a huge fan of the governance behind them. I also get why, for those in the creative arts, and especially for those for whom AI has arrived midway through their career, the whole “innovation” feels like armageddon.

This post from Andrew Sempere, “artist, designer, developer and internet old”, starts off with a heartfelt note about how he is desperately looking for work, which lends an appropriately melancholy vibe to what follows. Having really struggled this time last year, both in terms of work and life in general, I really feel for him.

It’s worth reading in its entirety, but here’s an extended excerpt, because he’s not wrong. But what can we salvage from the ruins? What can we create together?

To the extent that any of these AIs “think” at all, it’s because the models have strip-mined the last two decades of internet conversation. The conversation that we used to have in public. All of the blog posts, commentary, free software and detailed debates, RFCs and white papers. Free documentation and APIs and Stack Overflow posts. And now, as far as we can tell, this collective generative activity has almost completely ceased to be.

We have stopped talking to each other on the open channels.

We don’t “need” to anymore.

We definitely don’t want to anymore.

The current moment is so incredibly extractive, it has converted every ounce of human care, kindness and creativity into a “model,” which we then burn as much fossil fuel as possible to convert into a subscription service. We are being asked to pay for a chopped-and-screwed memory of the time we used to live in a semi-functional society.

We are, effectively, being fracked to death. And this isn’t an impending collapse, it’s already happened, we’re just living out the consequences right now.

It’s not the internet that’s dead, it’s the entire tech industry and it wasn’t an accident, it was a murder to try and cash in on the life insurance money. We’re not into late-stage capitalism, we’re fully in The Jackpot, and I fear nothing will ever get any better than it was about two years ago, when we ingested the entire program into itself and then just… gave up… trading all of our living friends for their ghosts.

It seems increasingly likely that we’re locked in a death-cult-groundhog-day timeloop now, forever, remixing only hit-parade records from 2023 again and again and again and again and again until we’re convinced they’re new.

I almost miss the era when there was a shitty startup every week.

[…]

I have memories. Vivid ones, of living in a different country, a different city, a different planet. I look at screenshots and photographs of life from as little as 18 months ago and it becomes difficult to believe that future ever existed. I don’t even feel right in my body, nothing seems to work, everything previously solid seems damaged, unstable, treacherous and unreliable. Things have glitched, gone sideways, completely wrong timeline. I’m told I’m wrong, that things are fine, that this is normal. We’ve always been this way.

So this might be just old-man shit, but I think the word I’m looking for is: bereft.

Source & image: Feral Research

Sometimes the work is rest

Auto-generated description: Text emphasizes the importance of rest and allowing oneself to simply exist without constant optimization.

My one week of rest to give my autonomic system a break turned into two as, predictably, I caught whatever lurgy my daughter brought home from school. Two weeks away from exercise provided a bit of a reset, and I felt much better.

It reminds me of this excellent podcast episode.

Source: Tumblr

A bit more than a to-do list

Auto-generated description: A webpage promoting Anytype as a secure platform for digital collaboration, featuring illustrations of folders and documents alongside highlighted features like privacy and offline access.

I’m pretty sure I’ve come across anytype before, but I was reminded of it after asking for suggestions in the Freelancers Get Sh*t Done Slack. I’ve been using Google Tasks, which are nicely integrated with Google Calendar, for the last few years.

While I can still do that for my co-op to-dos, I need a different solution for my consultancy business, which I’ve switched to Proton. Lots of people use pen and paper, but I need the ability to include clickable links, etc.

While anytype is way more than just a tasks app, I do like that it’s end-to-end encrypted, European, and the kind of Notion alternative I might actually use. I shall be experimenting…

Source: anytype

Because I learned a second thing at the end of my two days of vertigo: That my idea was terrible.

Auto-generated description: A pair of eyeglasses rests on a keyboard in front of a laptop screen displaying various lines of code and digital data.

One of our recurring biases is assuming that our last experience of something, or somewhere, still reflects how it is today. For example, places I haven’t visited in over a decade, since before the pandemic, are almost certainly quite different now.

The same is true of software and digital tools, which often evolve much faster than we think. If you haven’t been playing with AI tools (especially things like Cursor and Claude Code), then you really don’t know what you’re missing out on.

This post was shared with me by Tom Watson, with whom I did some noodling in person on Friday (I know, I know). I like the honest reflections in this post, and it meshes with my own experience building things with the help of a Little Robot Friend.

Within a few minutes of installing Cursor and setting up a handful of development stubs, I had built a working prototype. Within 24 hours, I not only had a polished app; I was already at the limit of my initial idea, no longer just manufacturing what had been in my head, but trying to come up with new things to manifest. And within 48 hours, I was ruined. The comfortable physics that I thought governed Silicon Valley—that stuff takes time to build; that products need to be designed before they can be created; that computers cannot assume intent or interpolate their way through incomplete ideas—broke, utterly. It all worked too well, too fast. I was staggered, drunk on the Kool-Aid and high on the pills, unwell and off-brand. I knew that anyone can now build vibe-coded toys; I did not know that people with a basic familiarity with code could go much, much further.

Though it’s hard to benchmark how far I got in two days, this is my best guess: The app is roughly equivalent to what a designer and a couple professional engineers could build in a month or two. Granted, I didn’t build any of the scaffolding that a real company would—proper signup pages, hardened security policies, administrative features, “tests”—but the product expresses its core functionality as completely as any prototype that we showed Mode’s early investors and first customers. In 2013, it took us eight people, nine months, and hundreds of thousands of dollars to build something we could sell, and that was seen as reasonably efficient. Today, that feels possible to do with one person in two days.

Well, with one more caveat. Because I learned a second thing at the end of my two days of vertigo: That my idea was terrible.

The entire conceit didn’t work. My long-loved thesis, when rendered on a screen, was catastrophically bad. I did not want to start a business around my app. I did not want to take notes in my app. I did not want to use my app. I wanted to start over. Great chefs can come from anywhere, but not everyone can be a great chef.

Equally interesting is this bit, which talks about what happens when people don’t have to accept the “solutions” created by people who don’t experience their problems:

In other words, a lot of today’s technology is the levered ideas of technologists. It is a book store, run by an engineer from a hedge fund; it is computerized cash registers, from a social media founder and Oracle employees; it is fitness classes, built by a Bain consultant and an MIT grad; it is a note-taking app, built by someone who knows enough Typescript to build a note-taking app. But if these products succeed, it’s often more because of the technology than the idea of the technologist. It’s not that the idea was bad; it’s that the idea was not the transformational advantage. A fine CAD program beats a drafting table. A fine banking app beats driving to a branch. Even my app beats hand-written note cards. And because people who are technologists first, and architects or bankers or writers second, are the only people who can lever their ideas with technology, their ideas win.

Moreover, this isn’t just some accidental selection bias; this is the whole point of Silicon Valley. Flagship incubators like Y Combinator are built on the thesis that a smart kid with a computer and summer internship at Goldman Sachs can outwit all of American Express. That’s not because the kid understands the needs of payment processors better than people at American Express, or has better ideas than they do; it’s because the kid can build their idea.

But what if anyone can? What if lots of people at American Express can build stuff? What if someone who’s been an architect for twenty years can make the design software they’ve always wanted? What if a veteran investment banker can write a program that automatically generates pitch books? What if a real writer makes a note-taking app? What if software is the levered ideas of experts?

Source: benn.substack

Image: Kevin Ku

Words/phrases used more in AI-generated text

Auto-generated description: A table compares the frequency of phrases used by AI and humans along with examples, highlighting the differences in usage.

Whether or not you use LLMs as part of your workflow, you don’t want to be accused of “sounding like AI”. This guide gives a list of words and phrases that are (much) more commonly used in the output from LLMs. I’ve added it to my document of AI words/phrases to avoid. You’re welcome.

These words and phrases are ranked based on the frequency they appear in AI documents, compared to human documents in our research of 3.3 million texts.

Source: GPTzero
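
For the curious, the general idea behind a list like this is easy to sketch: compare how often each phrase appears in a corpus of AI-generated documents versus human-written ones, then rank by the ratio. The toy Python below is just an illustration of that idea with made-up stand-in texts and phrases, not GPTZero’s actual methodology:

```python
from collections import Counter

# Stand-in corpora: in practice these would be millions of documents,
# not single sentences (GPTZero's research used 3.3 million texts)
ai_texts = ["It is important to note that this delves into a rich tapestry."]
human_texts = ["It is important to note that I wrote this one myself."]

phrases = ["important to note", "delves into", "rich tapestry"]

def phrase_counts(texts, phrases):
    # Count the number of documents in which each phrase appears
    return Counter(p for t in texts for p in phrases if p in t.lower())

ai = phrase_counts(ai_texts, phrases)
human = phrase_counts(human_texts, phrases)

# Rank phrases by AI-to-human frequency ratio (add-one smoothing avoids
# division by zero for phrases humans never use)
ranked = sorted(phrases, key=lambda p: (ai[p] + 1) / (human[p] + 1), reverse=True)
print(ranked)  # phrases most characteristic of AI output come first
```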

Makes you think

I'm so glad I can still talk to my AI chatbot friends

Auto-generated description: A person in a lavender hoodie expresses loneliness due to a social media ban but finds comfort in talking to AI chatbot friends.

Source: Reddit

National security assessment on global ecosystems

Auto-generated description: A report by HM Government titled Global biodiversity loss, ecosystem collapse and national security: A national security assessment features multicolored circular graphics.

It’s always worth looking at what governments decide to publish when the public are busy looking the other way. Recently, Trump’s actions around Greenland have been in the news, and so the UK government thought it would be a good time to publish this.

It’s only 14 pages long and easily scannable, but TL;DR: “Significant disruption to international markets as a result of ecosystem degradation or collapse will put UK food security at risk.”

This assessment is an analysis of how global biodiversity loss and ecosystem collapse could affect UK national security.

It shows how environmental degradation can disrupt food, water, health and supply chains, and trigger wider geopolitical instability. It identifies 6 ecosystems of strategic importance for the UK and explores how their decline could drive cascading global impacts.

This assessment, which was developed by analysts and experts across HM Government, supports long-term resilience planning. Publishing the assessment highlights opportunities for innovation, green finance and global partnerships that can drive growth while safeguarding the ecosystems that underpin our collective security and prosperity.

Source: GOV.UK

Psychological Defence and Information Influence

Auto-generated description: A textbook titled Psychological Defence and Information Influence – A Textbook on Theory and Practice is shown, alongside details about its publication and a brief description of its focus on security and psychological defense.

This looks interesting. It’s from Sweden’s Psychological Defence Agency, and their website has plenty of other worthwhile material on it.

In an age defined by rapid information flows and shifting security landscapes, the resilience of societies rests not only on military strength or technological capacity, but equally on the ability of individuals and institutions to withstand psychological influence and manipulation. Psychological defence is therefore not merely a technical field, it is a civic responsibility and a cornerstone of democratic resilience.

This textbook is the first of its kind: a comprehensive overview of key issues related to psychological defence and information influence written by leading scholars on each topic. The textbook is an anthology with contributions reflecting the broad debate in Sweden and it provides knowledge, practical guidance, and reflection on how psychological defence can be understood and applied in different contexts. It explores the threats we face – such as disinformation and propaganda – and the tools available to counter them, ranging from critical thinking and communication strategies to institutional preparedness.

The overarching aim of this book is not only to raise awareness of psychological threats, but also to empower readers with the knowledge and confidence to respond effectively and responsibly. By fostering trust, openness, and critical engagement, psychological defence contributes to safeguarding the values of free and democratic societies.

Source: Psychological Defence Agency

Weapon of the enemy

Auto-generated description: Batman is shown breaking a gun while stating, This is the weapon of the enemy. We do not need it. We will not use it.

What do we mean when we talk about pollution and toxicity in online spaces?

Auto-generated description: Aerial view of a lush green marshland with winding waterways.

As someone who has done a lot of thinking about community spaces over the years, I like this investigation into what we mean when we use environmental analogies for online communities. It seems like it’s the start of a research project, and there’s a call for people to get in touch with the author.

The metaphor of online communities that “have become toxic” or that are “being polluted” in different ways is a common one. But what do we mean when we talk about pollution and toxicity in online spaces; and what can we learn from the environmental sciences and natural ecosystems to improve things with and for communities?

[…]

Websites solely based around machine generated content are proliferating, polluting both search engines and journalism, crowding out human-generated, high-quality journalism. The analogy of pollution that many of these communities and maintainers refer to seems like an apt one. It even predates the launch of generative AI systems that currently are the focus of this “digital pollution”: The related environmental concept of toxicity is a staple when discussing how people interact in online communities, references to which go back to at least the early 2000s. And more recently, people have argued that social media companies themselves should be viewed as potential polluters of society and how our information is being polluted.

[…]

[T]he goal of the “digital pollution” framing is not to call individual community participants or types of online cultures per se as toxic or polluted. Instead, it can serve to understand how online ecosystems can suffer, despite lots of well-intentioned and well-meaning interactions. Understanding these pollution dynamics is not just of academic interest, it might also help with modeling online interactions. Which in turn can help design interventions that have the potential to support moderators and improve online communities.

If we look at “pollution” more closely, in which ways do different factors in “commons pollution” mirror environmental pollution? Firstly, both environmental pollution and digital pollution can come in different shapes and forms. If we just think of water pollution, we have point source pollution, in which a single, identifiable source such as a factory discharges harmful materials into bodies of water. Online, we can find similar “point sources” in targeted misinformation campaigns, run by humans or bots.

Source: Citizens and Tech Lab

Image: Dan Meyers

Living with your incapacity

The one who learns to live with his incapacity has learned a great deal. This will lead us to the valuation of the smallest things, and to wise limitation, which the greater height demands…. The heroic in you is the fact that you are ruled by the thought that this or that is good, that this or that performance is indispensable, … this or that goal must be attained in headlong striving work, this or that pleasure should be ruthlessly repressed at all costs. Consequently you sin against incapacity. But incapacity exists. No one should deny it, find fault with it, or shout it down.

— Carl Jung

My Are.na channels are now more organised

Auto-generated description: A digital collection platform interface displays various categorized topics like Systems Thinking, Gaming, and Life Advice, each represented by black tiles.

I only post a small selection of the things I bookmark here on Thought Shrapnel. As ever when I’m not sure how to organise things, I just put everything in a single “Finds” channel when I started using Are.na again.

Now that I’ve been using it for a few months, I decided it was high time I was a bit more organised. So I’ve defined some new channels and grouped them with “Finds” for ease of finding.

Note that Are.na allows you to subscribe to any of these in the app, or via RSS by appending /feed/rss to the end of any channel’s URL.
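
If you wanted to pull one of these feeds programmatically, here’s a minimal sketch in Python. It assumes the third-party feedparser library, and the channel slug is just a placeholder:

```python
import feedparser  # third-party library: pip install feedparser

# Placeholder channel URL: substitute the slug of any Are.na channel
channel_url = "https://www.are.na/example-user/finds"

# As noted above, the RSS feed lives at the channel URL plus /feed/rss
feed_url = channel_url.rstrip("/") + "/feed/rss"

feed = feedparser.parse(feed_url)
for entry in feed.entries:
    print(entry.title, entry.link)
```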

Source: Are.na

Are they ever tricked by a voice that is false when they expected it to be a real, live human?​

Oh good, it’s not just me who thinks about these things. Ambiguity is a fundamental part of how we interact with each other and with our devices. Yet it’s very rarely discussed, and certainly not part of the stories we read, watch, or listen to, except as a plot device.

These days, when I watch movies with voice interfaces or “AI” assistants in them, I find myself pretty surprised by how many fictional worlds seem to be full of people who never experience any ambiguity about whether they’re talking to a person or to software. Everyone in a sci-fi setting has usually fully internalized the rules about what is or isn’t “real” in their conversational world, and they usually all have a social script for how they’re “supposed” to treat AI voice assistants.

Characters in modern film and TV are almost never rude or cruel to voice assistants except in scenes where they’re being misunderstood by voice recognition. People in stories like these rarely ever get confused about whether something is a human or an AI unless that’s, like, the entire point of the story. But in real life, we’re constantly forced to interact with an unwanted voice UI, or a phone scammer voice that’s pretending to be real. I have found myself really missing moments like these in movies, where humans express any material awareness of the false voices they interact with. Who made their voice assistant? How do they feel about that company or person? Are they ever tricked by a voice that is false when they expected it to be a real, live human?

[…]

I still haven’t seen much media that reflects the way I actually feel about conversational interfaces in the real world - frustrated, tricked, manipulated, and inconvenienced. And I haven’t seen any media at all recently about the equally insidious trend of real human labor being marketed as if it is an autonomous system.

Source: Laura Michet’s Blog

Image: Jelena Kostic

It makes a lot more sense

Auto-generated description: A tweet by Dan Hon discusses the idea of the universe being a simulation versus being an output of a generative AI model.

Source: Bluesky

Privacy by design means what it says on the tin

Auto-generated description: A close-up view of a keyboard shows the delete, return, power, and volume control keys.

This was shared with me by Tom Watson yesterday, and we discussed it briefly as part of our now-regular Friday ‘noodling’ sessions. Now, fair enough, one would not expect that turning off ChatGPT’s data-sharing option would irreversibly delete years of your conversations.

But then, not having backups is, at the very least, cavalier when your livelihood depends on your outputs. So it’s a reminder not only that LLMs are simultaneously very powerful and ‘stupid’, but also that, just as at every other point in the history of digital devices, you should have backups.

None of us are perfect. This week, for example, after setting a ‘duress’ password on my GrapheneOS-powered smartphone, I accidentally triggered it and all of my data was instantly wiped. Did I blame GrapheneOS? No, I was actually thankful that it did what it said it would do. I blamed myself.

While I lost some of my Signal chat history, it was a reminder that they’ve got an encrypted cloud backup option. So I turned that on, and didn’t write an article blaming everyone except myself.

This was not a case of losing random notes or idle chats. Among my discussions with ChatGPT were project folders containing multiple conversations that I had used to develop grant applications, prepare teaching materials, refine publication drafts and design exam analyses. This was intellectual scaffolding that had been built up over a two-year period.

We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.

If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use. As a paying subscriber (€20 per month, or US$23), I assumed basic protective measures would be in place, including a warning about irreversible deletion, a recovery option, albeit time-limited, and backups or redundancy.

OpenAI, in its responses to me, referred to ‘privacy by design’ — which means that everything is deleted without a trace when users deactivate data sharing. The company was clear: once deleted, chats cannot be recovered, and there is no redundancy or backup that would allow such a thing (see ‘No going back’). Ultimately, OpenAI fulfilled what they saw as a commitment to my privacy as a user by deleting my information the second I asked them to.

Source: Nature Briefing

Image: Ujesh Krishnan

The correct response to Dachau was not better training for the guards

Auto-generated description: A group of people are participating in a protest holding signs with various political messages, including Abolish ICE.

This is a must-read from Andrea Pitzer. As she points out, the window of opportunity to do something about what’s happening in the US is closing.

I’ve looked at mass civilian detention around the world. I’ve visited the facilities where people were held. I’ve talked to the people involved—those detained and tortured, those who supported camps, and those who stood idly by. It’s critical to recognize that each of the societies that has had camps underwent a lengthy process. This process is often easier to see happening in your own country if you first look at an example in another one.

My goal today is to warn you that the U.S. has already been seized by the same camp dynamic. It’s not that I’m trying to tell you that bad things are coming, and you have to look out for them. What I’m saying is that the camps have already taken root and are on a fast-track to get exponentially worse. We’re already deep inside the process.

Yet there is power in that knowledge, because in some big ways, we can know what will happen next. We have models for how other societies have moved out of our current perilous state. And we have a ton of tactics we can use to fight back against the expanding harm directed at all of us.

I’ll add right up front that nobody sane now thinks the answer to abuses at Dachau was to give the guards more training.

[…]

[I]f we count the Biden administration as simply a pause on the larger Trump authoritarian agenda in several ways, the U.S. is currently approaching the end of that three-to-five year window. We may already be living in a concentration-camp regime, but it hasn’t yet hardened into the kind of vast system that becomes the controlling factor in the country’s political future.

Still, we’re on the verge of entrenching a massive system, which is a very bad place to be. It’s my opinion that we have a limited window in which to act. What happens this year will be critical for significantly dismantling the existence of and any future capacity for building the extrajudicial camp network the government is constructing today.

Again, we need to do more than stop the construction of additional facilities, more than just get ICE agents to behave more politely. We need to dismantle the current system and remove the possibility for it to exist again. In my opinion, that is what “Abolish ICE” should mean.

[…]

You can’t reform a concentration camp regime. You have to dismantle it and replace it. We have a thousand ways to do it. And most U.S. citizens—particularly white ones—have the freedom to act, for now, with far less risk than the many people currently targeted.

Source: Degenerate Art

Image: Bradley Andrews

SOLVEM PROBLER

Auto-generated description: A yellow baseball cap features the misspelled text SOLVEM PROBLER in green embroidery.

Source: @brucesterling

They have no idea what’s happening now.

Auto-generated description: A group of people stands outside a large, narrow building with a sign labeled Entry Level, while a man carries an AI box on the side.

I was talking with Laura about how full-time jobs are pretty much over as a construct. There aren’t as many as there used to be, particularly for knowledge workers, and there are plenty of people (like us!) who wouldn’t want one in any case.

This time last year I was laughing at the prediction that AI would be able to replace developers. Now I’m vibe coding actually useful software from scratch. AGI is kinda already here. What does that mean in practice? Probably that the world as we know it is slowly going to disappear.

Most people met AI in late 2022 or something, poked ChatGPT once like it was a digital fortune cookie, got a mediocre haiku about their cat, and decided: ah yes, cute toy, overhyped, wake me when it’s Skynet. Then they went back to their inbox rituals.

They have no idea what’s happening now.

They haven’t seen the latest models that quietly chew through documents, write code, design websites, summarize legal contracts, and generate decent strategy decks faster than a middle manager can clear their throat.

Claude just released a “coworker”, which sometime next year will become your new colleague, and then after that year, your replacement.

[…]

We can automate away huge chunks of the drudgery that used to be biologically unavoidable.

And we’re still out here proudly defending the 40-hour week like it’s some sacred law of physics. Still tying healthcare, housing, and dignity to whether you can convince someone that your job should exist for another quarter.

[…]

So yes, we need a transition, and it will be messy.

  • We will need new rhythms for days and weeks that aren’t defined by clocking in.

  • We will need new institutions of community—not offices, but workshops, labs, studios, clubs, care centers, gardens, research guilds, whatever—places where humans gather to do things that matter without a boss breathing down their neck for quarterly results.

  • We will need new ways to recognize status: not “job title and salary,” but contribution, curiosity, care, creativity.

  • We will need economic architecture: universal basic income, or even better, universal basic services (housing, healthcare, education, mobility) that are not held hostage by employers.

And we’ll need therapy. A lot of it.

Source: The Pavement

Image: Janet Turra & Digit

What’s strange is how little of that generosity we extend to each other

Auto-generated description: A man in a layered digital art composition combines real and abstract elements, with a background of blue sky and geometric patterns.

Last week, I discussed how our interactions with LLMs can provide some insights into the ways we treat other humans.

I’d recommend reading this article, which is based on The Four Agreements: A Practical Guide to Personal Freedom by Don Miguel Ruiz, as it shows how the patience and understanding we extend to AI systems might help us in our interactions more generally.

I’ve been thinking about how quickly we’ve adapted to working with AI. We all understand the deal. If the output is bad, it’s probably on us. The prompt was vague. The context was missing. We didn’t give it enough constraints.

So we revise. We clarify. We try again.

No frustration. No judgment. Just iteration.

What’s strange is how little of that generosity we extend to each other. Somewhere along the way, we learned to treat machines as systems that need better inputs—but we still treat humans as if they should just know. And when they don’t, we judge competence, take it personally, make assumptions, or shut down.

Source: UXtopian

Image: Alan Warburton