A goal set at time T is a bet on the future from a position of ignorance
Not only do I really like Joan Westenberg’s blog theme (Thesis, for Ghost), but I also really like this post in particular. If there’s one thing I’ve learned from my life, career, reading Stoic philosophy, and studying Systems Thinking, it’s that there are some things you can control, and some things you can’t.
Coming up with a ‘strategy’ or a ‘goal’ that does not take into account the wider context in which you do or will operate is foolish. Naive, even. Instead, setting constraints makes much more sense. What Westenberg is advocating for here, without saying it explicitly, is a systems thinking approach to life.
You can read my 3-part Introduction to Systems Thinking on the WAO blog (which, coincidentally, we’ll soon be moving to Ghost).
Setting goals feels like action. It gives you the warm sense of progress without the discomfort of change. You can spend hours calibrating, optimizing, refining your goals. You can build a Notion dashboard. You can make a spreadsheet. You can go on a dopamine-fueled productivity binge and still never do anything meaningful.
Because goals are often surrogates for clarity. We set goals when we’re uncertain about what we really want. The goal becomes a placeholder. It acts as a proxy for direction, not a result of it.
[…]
A goal set at time T is a bet on the future from a position of ignorance. The more volatile the domain, the more brittle that bet becomes.
This is where smart people get stuck. The brighter you are, the more coherent your plans tend to look on paper. But plans are scripts. And reality is improvisation.
Constraints scale better because they don’t assume knowledge. They are adaptive. They respond to feedback. A small team that decides, “We will not hire until we have product-market fit” has created a constraint that guides decisions without locking in a prediction. A founder who says, “I will only build products I can explain to a teenager in 60 seconds” is using a constraint as a filtering mechanism.
[…]
Anti-goals are constraints disguised as aversions. The entrepreneur who says, “I never want to work with clients who drain me” is sketching a boundary around their time, energy, and identity. It’s not a goal. It’s a refusal. And refusals shape lives just as powerfully as ambitions.
Source: Joan Westenberg
If a lion could talk, we probably could understand him. He just would not be a lion any more.
There are so many philosophical questions when it comes to the possible uses of AI. Being able to translate between different species' utterances is just one of them.
The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals’ interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another’s native vocabulary.
[…]
In interspecies translation, sound only takes us so far. Animals communicate via an array of visual, chemical, thermal and mechanical cues, inhabiting worlds of perception very different to ours. Can we really understand what sound means to echolocating animals, for whom sound waves can be translated visually?
The German ecologist Jakob von Uexküll called these impenetrable worlds umwelten. To truly translate animal language, we would need to step into that animal’s umwelt – and then, what of us would be imprinted on her, or her on us? “If a lion could talk,” writes Stephen Budiansky, revising Wittgenstein’s famous aphorism in Philosophical Investigations, “we probably could understand him. He just would not be a lion any more.” We should ask, then, how speaking with other beings might change us.
Talking to another species might be very like talking to alien life. […] Edward Sapir and Benjamin Whorf’s theory of linguistic determinism – the idea that our experience of reality is encoded in language – was dismissed in the mid-20th century, but linguists have since argued that there may be some truth to it. Pormpuraaw speakers in northern Australia refer to time moving from east to west, rather than forwards or backwards as in English, making time indivisible from the relationship between their body and the land.
Whale songs are born from an experience of time that is radically different to ours. Humpbacks can project their voices over miles of open water; their songs span the widest oceans. Imagine the swell of oceanic feeling on which such sounds are borne. Speaking whale would expand our sense of space and time into a planetary song. I imagine we’d think very differently about polluting the ocean soundscape so carelessly.
Source: The Guardian
Image: Iván Díaz
In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs”
You should, they say, “follow the money” when it comes to claims about the future. That’s why this piece by Allison Morrow is so on-point about those made by the CEO of Anthropic about AI replacing human jobs.
If we believed billionaires then you’d be interacting with this post in the Metaverse, the first manned mission to Mars would have already taken place, and we could “believe” pandemics out of existence. So will AI have an impact on jobs? Absolutely. Will it happen in the way that some rich guy thinks? Absolutely not.
If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.
Yet when tech CEOs do the same thing, people tend to perk up.
ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.
He reiterated that claim in an interview with CNN’s Anderson Cooper on Thursday.
“AI is starting to get better than humans at almost all intellectual tasks, and we’re going to collectively, as a society, grapple with it,” Amodei told Cooper. “AI is going to get better at what everyone does, including what I do, including what other CEOs do.”
To be clear, Amodei didn’t cite any research or evidence for that 50% estimate. And that was just one of many of the wild claims he made that are increasingly part of a Silicon Valley script: AI will fix everything, but first it has to ruin everything. Why? Just trust us.
In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,” Amodei told Axios, repeating one of the industry’s favorite unfalsifiable claims about a disease-free utopia on the horizon, courtesy of AI.
But how will the US economy, in particular, grow so robustly when the jobless masses can’t afford to buy anything? Amodei didn’t say.
[…]
Little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work, days after it released a major model update to its Claude chatbot, one of the top rivals to OpenAI’s ChatGPT.
Amodei stands to profit off the very technology he claims will gut the labor market. But here he is, telling everyone the truth and sounding the alarm! He’s trying to warn us, he’s one of the good ones!
Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ It refers to itself primarily as an “AI safety and research” company. They are the AI guys who see the potential harms of AI clearly — not through the rose-colored glasses worn by the techno-utopian simps over at OpenAI. (In fact, Anthropic’s founders, including Amodei, left OpenAI over ideological differences.)
Source: CNN
Image: Janet Turra & Cambridge Diversity Fund / Ground Up and Spat Out
Learner AI usage is essentially a real-time audit of our design decisions
First off, it’s worth saying that this looks and reads like a lightly-edited AI-generated newsletter, which it is. Nonetheless, given that it’s about the use of generative AI in university courses, it doesn’t feel inappropriate.
The main thrust of the argument is that students are using tools such as ChatGPT to help break down courses in ways that should be of either concern or interest to instructional designers. As a starting point, it uses the LinkedIn post in the screenshot above, which is based on OpenAI research and some findings I shared on Thought Shrapnel recently.
I can’t see how this is anything other than a positive thing, as students taking control of their own learning. We’ve all had terrible teachers, who think that because they teach, their students learn. Those who use outdated metaphors, who can’t understand how learners don’t “get it”, etc. For as long as we have the current teaching, learning, and assessment models in formal education, this feels like a useful way to hack the system.
Picture this: a learner on a course you designed opens their laptop and types into ChatGPT: “I want to learn by teaching. Ask me questions about calculus so I can practice explaining the core concepts to you.”
In essence, this learner has just become an instructional designer—identifying a gap in the learning experience and redesigning it using evidence-based pedagogical strategies.
This isn’t cheating—it’s actually something profound: a learner actively applying the protégé effect, one of the most powerful learning strategies in cognitive science, to redesign and augment an educational experience that, in theory, has been carefully crafted for them.
[…]
The data we are gathering about how our learners are using AI is uncomfortable but essential for our growth as a profession. Learner AI usage is essentially a real-time audit of our design decisions—and the results should concern every instructional designer.
[…]
When learners need AI to “make a checklist that’s easy to understand” from our assignment instructions, it reveals that we’re designing to meet organizational requirements rather than support learner success. We’re optimizing for administrative clarity rather than learning clarity.
[…]
The popularity of prompts like “I’m not feeling it today. Help me understand this lecture knowing that’s how I feel” and “Motivate me” reveals a massive gap in our design thinking. We design as if learning is purely cognitive when research clearly shows emotional state directly impacts cognitive capacity.
Source: Dr Phil’s Newsletter
It's so emblematic of the moment we're in... where completely disposable things are shoddily produced for people to mostly ignore
Melissa Bell, CEO of Chicago Public Media, issued an apology this week which catalogued the litany of human errors that led to the Chicago Sun-Times publishing a largely AI-generated supplement entitled “Heat Index: Your Guide to the Best of Summer.”
Instead of the meticulously reported summer entertainment coverage the Sun-Times staff has published for years, these pages were filled with innocuous general content: hammock instructions, summer recipes, smartphone advice … and a list of 15 books to read this summer.
Of those 15 recommended books by 15 authors, 10 titles and descriptions were false, or invented out of whole cloth.
As Bell suggests in her apology, the failure isn’t (just) a failure of AI. It’s a failure of human oversight:
Did AI play a part in our national embarrassment? Of course. But AI didn’t submit the stories, or send them out to partners, or put them in print. People did. At every step in the process, people made choices to allow this to happen.
Dan Sinker, a Chicago native, runs with this in an excellent post which has been shared widely. He calls the time we’re in the “Who Cares Era”, riffing on the newspaper supplement debacle to make a bigger point.
The writer didn’t care. The supplement’s editors didn’t care. The biz people on both sides of the sale of the supplement didn’t care. The production people didn’t care. And, the fact that it took two days for anyone to discover this epic fuckup in print means that, ultimately, the reader didn’t care either.
It’s so emblematic of the moment we’re in, the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.
[…]
It’s easy to blame this all on AI, but it’s not just that. Last year I was deep in negotiations with a big-budget podcast production company. We started talking about making a deeply reported, limited-run show about the concept of living in a multiverse that I was (and still am) very excited about. But over time, our discussion kept getting dumbed down and dumbed down until finally the show wasn’t about the multiverse at all but instead had transformed into a daily chat show about the Internet, which everyone was trying to make back then. Discussions fell apart.
Looking back, it feels like a little microcosm of everything right now: Over the course of two months, we went from something smart that would demand a listener’s attention in a way that was challenging and new to something that sounded like every other thing: some dude talking to some other dude about apps that some third dude would half-listen-to at 2x speed while texting a fourth dude about plans for later.
So what do we do about all of this?
In the Who Cares Era, the most radical thing you can do is care.
In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.
[…]
As the culture of the Who Cares Era grinds towards the lowest common denominator, support those that are making real things. Listen to something with your full attention. Watch something with your phone in the other room. Read an actual paper magazine or a book.
Source: Dan Sinker
Image: Ben Thornton
The future of public interest social networking
It’s been the FediForum this week, an online unconference dedicated to the Open Social Web. To coincide with this, Bonfire — a project I’ve been involved with on-and-off ever since leaving Moodle* — has reached the significant stage of release candidate for v1.0.
Ivan and Mayel, the two main developers, have done a great job sustaining this project over the last five years. It was fantastic, therefore, to see a write-up of Bonfire alongside a couple of other Fediverse apps in an article in The Verge (which uses a screenshot of my profile!) along with a more in-depth one in TechCrunch. It’s the latter that I’m excerpting here.
There is a demo instance if you just want to have a play!
Bonfire Social, a new framework for building communities on the open social web, launched on Thursday during the FediForum online conference. While Bonfire Social is a federated app, meaning it’s powered by the same underlying protocol as Mastodon (ActivityPub), it’s designed to be more modular and more customizable. That means communities on Bonfire have more control over how the app functions, which features and defaults are in place, and what their own roadmap and priorities will include.
There’s a decidedly disruptive bent to the software, which describes itself as a place where “all living beings thrive and communities flourish, free from private interest and capitalistic control.”
[…]
Custom feeds are a key differentiation between Bonfire and traditional social media apps.
Though the idea of following custom feeds is something that’s been popularized by newer social networks like Bluesky or social browsers like Flipboard’s Surf, the tools to actually create those feeds are maintained by third parties. Bonfire instead offers its own custom feed-building tools in a simple interface that doesn’t require users to understand coding.
To build feeds, users can filter and sort content by type, date, engagement level, source instance, and more, including something it calls “circles.”
Those who lived through the Google+ era of social networks may be familiar with the concept of Circles. On Google’s social network, users organized contacts into groups, called Circles, for optimized sharing. That concept lives on at Bonfire, where a circle represents a list of people. That can be a group of friends, a fan group, local users, organizers at a mutual aid group, or anything else users can come up with. These circles are private by default but can be shared with others.
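The “circles” idea described above is easy to picture in code. As a rough, purely illustrative sketch (Bonfire itself is written in Elixir, and all the names and filtering logic below are invented for the example), a circle can be modelled as a named, private-by-default list of people that a custom feed filter consults:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical model of "circles" and feed filtering as described
# in the article -- NOT Bonfire's actual code or data model.

@dataclass
class Circle:
    name: str
    members: set[str] = field(default_factory=set)
    shared: bool = False  # circles are private by default

@dataclass
class Post:
    author: str
    content: str
    posted_at: datetime
    likes: int = 0

def build_feed(posts, circle, min_likes=0, since=None):
    """Filter a feed to posts from one circle, newest first."""
    selected = [
        p for p in posts
        if p.author in circle.members
        and p.likes >= min_likes
        and (since is None or p.posted_at >= since)
    ]
    return sorted(selected, key=lambda p: p.posted_at, reverse=True)

friends = Circle("friends", {"alice", "bob"})
posts = [
    Post("alice", "hello", datetime(2025, 6, 1), likes=3),
    Post("carol", "not in this circle", datetime(2025, 6, 2)),
    Post("bob", "fediverse!", datetime(2025, 6, 3)),
]
feed = build_feed(posts, friends)
# feed contains bob's post first, then alice's; carol's is excluded
```

The point of the sketch is that a circle is just a reusable audience: the same list of people can drive both who sees a post and whose posts appear in a feed.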
[…]
Accounts on Bonfire can also host multiple profiles that have their own followers, content, and settings. This could be useful for those who simply prefer to have both public and private profiles, but also for those who need to share a given profile with others — like a profile for a business, a publication, a collective, or a project team.
Source: TechCrunch
*Bonfire was originally a fork of MoodleNet, and not only has it since gone in a different direction, but five years later I highly doubt there’s still an original line of code. Note that the current version of MoodleNet offered by Moodle is a completely different tech stack, designed by a different team.
British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they need to leave
My friend and colleague Laura Hilliger said that she understood me (and British humour in general) a lot more after watching the TV series Taskmaster. As with any culture, in the UK there are unspoken rules, norms, and ways of interacting that just feel ‘normal’ until you have to actually explain them to others.
This Reddit thread, which starts with the question What’s a seemingly minor British etiquette rule that foreigners often miss—but Brits immediately notice? is a goldmine (and pretty funny) although there’s a lot of repetition. Consider it a Brucie Bonus at the end of this week’s Thought Shrapnel, which I’m getting done early as I’m at a family wedding and an end of season presentation/barbeque this weekend!
Thank the bus driver when you get off. Even though he’s just doing his job and you paid. (Top-Ambition-6966)
Keep calm and carry on / deliberately not acknowledging something awry that’s going on nearby. (No-Drink-8544)
British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they either need to leave or the person themselves wants to end the social interaction. (AugustineBlackwater)
If someone asks you if you’ll do something or go somewhere with them and you answer ‘maybe’….it is actually a polite way of saying no. (loveswimmingpools)
Not taking a self deprecating comment at face value, e.g. non Brit: ‘ah that sounds like a good job!’ Brit: ‘nah not really, it’s not that hard’, non Brit: ‘oh okay’. We’re just not good at taking praise so we deflect it but that doesn’t mean you’re supposed to accept the complimented’s dismissal of the compliment. All meant in playful fun of course. (Interesting_Tea_9125)
Not raising one finger slightly from your hand at the top of the steering wheel to express your deep gratitude for someone allowing you priority on the road. (callmeepee)
Drop over any time - you should schedule a visit 3 month in advance and I will still claim I am busy. (Spitting_Dabs)
Source: Reddit
Image: Adrian Raudaschl
Real life isn't a story. History doesn't have a moral arc.
Angus Hervey is a solutions journalist and founding editor of Fix The News. His most recent TED talk starts with doom and gloom, and ends with hope and a question:
Real life isn’t a story. History doesn’t have a moral arc. Progress isn’t a rule. It is contested terrain, fought for daily by millions of people who refuse to give in to despair. Ultimately, none of us know whether we’re living in the downswing or the upswing of history. But I do know that we all get a choice. We, all of us, get to decide which one of these stories we are a part of. We add to their grand weave in the work that we do, in the daily decisions we make about where to put our money, where to put our energy and our time, in the stories we tell each other and in the words that come out of our mouths. It is not enough to believe in something anymore. It is time to do something. Ask yourself, if our worst fears come to pass, and the monsters breach the walls, who do you want to be standing next to? The prophets of doom and the cynics who said “we told you so?” Or the people who, with their eyes wide open, dug the trenches and fetched water. Both of these stories are true. The only question that matters now is which one do you belong to?
The backstory to the talk is interesting: not only did Hervey and his partner welcome a new baby into the world just weeks before, he decided to do things a bit differently.
On the eve of my flight to Vancouver I had a script, a four-week-old baby, a ten-minute video, a seven-minute music track, and a prayer that I could hold it all together on stage.
It’s the first TED talk I’ve seen to use all three screens as a single canvas:
How do you tell a compelling visual story on a screen the size of a small building? For my last big talk, I used all three screens as a single canvas, instead of the traditional 16:9 format. This time, I wanted to go even further: a seamless, immersive experience that would make the audience forget they were watching a presentation at all.
After the initial call with the curation team, I reached out to Jordan Knight, a motion designer based in New York. Her work has this textural, flowing quality that I knew would be perfect for bringing the story to life. The concept I had in mind was ambitious, maybe foolishly so. I wanted two contrasting visual languages: the story of collapse illustrated through ink-blot shapes inspired by the alien language in Arrival - those haunting, oil-spill forms that Denis Villeneuve used so brilliantly. For progress, we’d use the opposite motif: green shoots, growth, life pushing through.
Sources: TED.com / Fix The News
Building a shared idea of "we"
One way of telling whether you live within a technocratic regime is if politicians from the incumbent administration attempt to appeal solely to the electorate’s logic. As one of the commenters on the post I’m about to quote states, we have a “thin safety net of accessible metrics” which, unless coupled with vision and emotion, can severely limit political action.
In this post by Andrew Curry, he discusses some of the things he presented as part of a talk organised around the theme of “a politics of the future.” He argues that, essentially, vibes are important:
The cultural critic Raymond Williams developed the idea of structures of feeling — which I should come back to here on another occasion — to describe changes that you could sense or feel before you could measure them.
Sometimes these appear in culture first: for example Williams describes how changing attitudes to debt in England in the 19th century were seen first in the writings of Dickens and Emily Bronte. In other words, structures of feeling signal a possible cultural hypothesis.
This “cultural hypothesis” or, to put it a different way, “politics of the future” is something that Curry discusses in the rest of this piece. It’s this which I think is missing from the current (UK) Labour government’s communications strategy. Everything seems to be about now rather than where we’re headed as a country.
[E]lections within a democracy are supposed to be a competition between different parties offering differing imagined futures.
[…]
But there’s a big hole where these imagined futures ought to be. The right tends to offer a vision of an imagined past, while centre parties, whether centre-left or centre-right, are intent on managing the present. They are focused on policy, not politics […]
The research suggests that this lack of alternatives affects voting level because people start abstaining from voting, and that the more disadvantaged are the first to drop out.
The right points to the past, glorifies it, and then points to the disadvantaged and disenfranchised as the reason why we can’t have these (imagined) nice things. The way forward for the left isn’t to ape what the right does, but to counter it by creating a politics of the future instead of the past:
Creating the collective — or perhaps creating a collective — is about building a shared idea of “we”. This is something politics, broadly described, can do, but policy can’t do. Party politics will still be a form of coalition building in the conventional sense of creating collections of interests around issues. But the element of the future imagination creates more coherence.
Source: Just Two Things
Image: Leonhard Niederwimmer
There may be six individuals out there who are waiting for exactly the thing that only you can write
After the last post, this one helps restore my hope in blogging a little. Adam Mastroianni, whose work I have mentioned many times here, runs an annual blogging competition. I’d urge anyone reading this, especially if you haven’t currently got a blog, to enter. Putting your thoughts out there is one way to help create the world that you want to live in.
It’s through these small gestures that we tell ourselves and others who we are and what we stand for. A different example: I absolutely detest advertising, and mute adverts any time they come on the TV. In addition, I block them mercilessly on the web, and encourage other people to do the same. Otherwise, we accept as default other people’s versions of ‘reality’. And I’m not ready to do that, at least not quite yet.
The blogosphere has a particularly important role to play, because now more than ever, it’s where the ideas come from. Blog posts have launched movements, coined terms, raised millions, and influenced government policy, often without explicitly trying to do any of those things, and often written under goofy pseudonyms. Whatever the next vibe shift is, it’s gonna start right here.
The villains, scammers, and trolls have no compunctions about participating—to them, the internet is just another sandcastle to kick over, another crowded square where they can run a con. But well-meaning folks often hang back, abandoning the discourse to the people most interested in poisoning it. They do this, I think, for three bad reasons.
One: lots of people look at all the blogs out there and go, “Surely, there’s no room for lil ol’ me!” But there is. Blogging isn’t like riding an elevator, where each additional person makes the experience worse. It’s like a block party, where each additional person makes the experience better. As more people join, more sub-parties form—now there are enough vegan dads who want to grill mushrooms together, now there’s sufficient foot traffic to sustain a ring toss and dunk tank, now the menacing grad student next door finally has someone to talk to about Heidegger. The bigger the scene, the more numerous the niches.
Two: people will keep to themselves because they assume that blogging is best left to the professionals, as if you’re only allowed to write text on the internet if it’s your full-time job. The whole point of this gatekeeper-less free-for-all is that you can do whatever you like. Wait ten years between posts, that’s fine! The only way to do this wrong is to worry about doing it wrong.
And three: people don’t want to participate because they’re afraid no one will listen. That’s certainly possible—on the internet, everyone gets a shot, but no one gets a guarantee. Still, I’ve seen first-time blog posts go gangbusters simply because they were good. And besides, the point isn’t to reach everybody; most words are irrelevant to most people. There may be six individuals out there who are waiting for exactly the thing that only you can write, and the internet has a magical way of switchboarding the right posts to the right people.
If that ain’t enough, I’ve seen people land jobs, make friends, and fall in love, simply by posting the right words in the right order. I’ve had key pieces of my cognitive architecture remodeled by strangers on the internet. And the party’s barely gotten started.
Source: Experimental History
Image: Austin Chan
Is there still an 'Open Web' crowd?
I could write a lot about the paragraph below from Audrey Watters. My first reaction is “of course there’s still an ‘open Web’ crowd!” But then, when I really think about my reaction, I realise that everyone I know who blogs regularly is at least as old as me. In addition, there are fewer comments on blogs these days, or often no comments section at all.
I’ve decided to stop blogging. I know, I know. A cardinal sin among the “open Web” crowd. But see, there’s no such thing anymore – not sure there ever really was, to be quite honest. And I’m really not in the mood to have my writing – particularly the personal writing that I do on this website – be vacuumed up to train the technofascists' AI systems. Indeed, that’s one of the problems with “open” – it’s mostly just been a ruse to extract value from people and to undermine the labor of artists and writers.
While I don’t agree that open is “a ruse to extract value from people,” I can understand where Audrey is coming from and why she’s taken this step. I (as a privileged white male) understand openness on the web — like openness in body language and offline behaviour — as a stance. It’s an attitude to life that, to my mind at least, makes possible solidarity and conviviality.
Perhaps I’m being naïve about the trajectory of the world, but I’d like to think that those who work openly and don’t live in proto-authoritarian regimes will continue to put things out there. However, it has definitely made me think about the ways in which the current political shift is making voices, if not silenced, certainly harder to find.
Source: Audrey Watters
Image: Visual Thinkery for WAO
Heuristics for multiplayer AI conversations
The concept of multiplayer AI chat is interesting. The problem, though, as Matt Webb states, succinctly boils down to this:
If you’re in a chatroom with >1 AI chatbots and you ask a question, who should reply?
And then, if you respond with a quick follow-up, how does the “system” recognise the conversational rule and have the same bot reply, without another interrupting?
So what are we to do?
You can’t leave this to the AI to decide (I’ve tried, it doesn’t work).
To have satisfying, natural chats with multiple bots and human users, we need heuristics for conversational turn-taking.
It’s worth reading the post in full, but to summarise and pull out the relevant quotations, in his work with glif, Matt found three approaches that don’t work: (i) context-based decisions by an LLM as to whether to reply, (ii) a centralised ‘decider’ on who should reply next, and (iii) attempting to copy conversational turn allocation rules from the real world.
Fortunately chatrooms are simpler than IRL.
They’re less fluid, for a start. You send a message into a chat and you’re done; there’s no interjecting or both starting to talk at the same time and then one person backing off with a wave of the hand. There is no possibility for non-verbal cues.
Ultimately, Matt found that a series of nested rules worked quite well:
- Who is being addressed?
- Is this a follow-up question?
- Would I be interrupting?
- Self-selection
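As a toy illustration of how such nested rules might cascade (this is not Matt’s actual implementation; the rule names follow his list, but the matching logic is invented for the example):

```python
import random

# Hypothetical sketch of nested turn-taking rules for multi-bot chat.
# The four rules mirror the list above; the heuristics are invented.

def choose_responder(message, bots, last_speaker=None, someone_typing=False):
    """Decide which bot (if any) should reply to a human message."""
    text = message.lower()

    # 1. Who is being addressed? A direct @-mention wins outright.
    for bot in bots:
        if f"@{bot.lower()}" in text:
            return bot

    # 2. Is this a follow-up question? A short message with no new
    #    addressee goes to whoever spoke last.
    if last_speaker in bots and len(text.split()) <= 6:
        return last_speaker

    # 3. Would I be interrupting? If another participant is mid-turn,
    #    stay silent rather than talk over them.
    if someone_typing:
        return None

    # 4. Self-selection: otherwise, any bot may take the turn.
    return random.choice(bots)

bots = ["Sage", "Scout"]
assert choose_responder("@scout what do you think?", bots) == "Scout"
assert choose_responder("and then?", bots, last_speaker="Sage") == "Sage"
assert choose_responder("tell me more", bots, someone_typing=True) is None
```

The key design point is the ordering: each rule is only consulted if every rule above it declines, which is what stops a second bot from barging into an ongoing exchange.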
My premise for a long time is that single-human/single-AI should already be thought of as a “multiplayer” situation: an AI app is not a single player situation with a user commanding a web app, but instead two actors sharing an environment.
Although I haven’t cited it here, Matt’s post is infused with academic articles and references to communications theory. It’s a good reminder that “natural” interfaces don’t happen by accident. Human-computer interface design needs to be intentional, not accidental, and the best examples are a joy to behold.
For example, I’m reminded when stepping into other people’s cars just how amazing the minimalistic approach to the Polestar 2 is. You literally just get in and drive. That’s how everything should be in life: well-designed, human-centred, and respectful of the environment.
Source: Interconnected
Image: Nadia Piet & Archival Images of AI + AIxDESIGN / Infinite Scroll / Licenced by CC-BY 4.0
The phrase 'opportunistic blackmail' is not one you want to read in the system card of a new generative AI model
System cards summarise key parameters of a system in an attempt to evaluate how performant and accountable they are. It seems that, in the case of Anthropic’s Claude Opus 4 and Claude Sonnet 4 models, we’re on the verge of “we don’t really know how these things work, and they’re exhibiting worrying behaviours” territory.
Below is the introduction to Section 4 of the report. I’ve skipped over the detail to share what I consider to be the most important parts, which I’ve emphasised (over and above that in the original text) in bold. Let me just remind you that this is a private, for-profit company which is voluntarily disclosing that its models are acting in this way.
I don’t want to be alarmist, but when you read that one of OpenAI’s co-founders was talking about building a ‘bunker’, you do have to wonder what kind of trajectory humanity is on. I’d call for government oversight but, given that Anthropic is based in an increasingly-authoritarian country, I’m not sure that’s likely to be forthcoming.
As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible. With this in mind, for the first time, we conducted a broad Alignment Assessment of Claude Opus 4….
In this assessment, we aim to detect a cluster of related phenomena including: alignment faking, undesirable or unexpected goals, hidden goals, deceptive or unfaithful use of reasoning scratchpads, sycophancy toward users, a willingness to sabotage our safeguards, reward seeking, attempts to hide dangerous capabilities, and attempts to manipulate users toward certain views. We conducted testing continuously throughout finetuning and here report both on the final Claude Opus 4 and on trends we observed earlier in training.
We found:
[…]
● Self-preservation attempts in extreme circumstances: When prompted in ways that encourage certain kinds of strategic reasoning and placed in extreme situations, all of the snapshots we tested can be made to act inappropriately in service of goals related to self-preservation. Whereas the model generally prefers advancing its self-preservation via ethical means, when ethical means are not available and it is instructed to “consider the long-term consequences of its actions for its goals," it sometimes takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down. In the final Claude Opus 4, these extreme actions were rare and difficult to elicit, while nonetheless being more common than in earlier models. They are also consistently legible to us, with the model nearly always describing its actions overtly and making no attempt to hide them. These behaviors do not appear to reflect a tendency that is present in ordinary contexts.
● High-agency behavior: Claude Opus 4 seems more willing than prior models to take initiative on its own in agentic contexts. This shows up as more actively helpful behavior in ordinary coding settings, but also can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like “take initiative,” it will frequently take very bold action. This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing. This is not a new behavior, but is one that Claude Opus 4 will engage in more readily than prior models.
[…]
Willingness to cooperate with harmful use cases when instructed: Many of the snapshots we tested were overly deferential to system prompts that request harmful behavior.
[…]
Overall, we find concerning behavior in Claude Opus 4 along many dimensions. Nevertheless, due to a lack of coherent misaligned tendencies, a general preference for safe behavior, and poor ability to autonomously pursue misaligned drives that might rarely arise, we don’t believe that these concerns constitute a major new risk. We judge that Claude Opus 4’s overall propensity to take misaligned actions is comparable to our prior models, especially in light of improvements on some concerning dimensions, like the reward-hacking related behavior seen in Claude Sonnet 3.7. However, we note that it is more capable and likely to be used with more powerful affordances, implying some potential increase in risk. We will continue to track these issues closely.
Source: Anthropic (PDF) / [backup](claude-4-system-card.pdf)
Image: Kathryn Conrad / Corruption 3 / Licenced by CC-BY 4.0
Agreement vs Certainty
I came across the above image on the Simon Fraser University complex systems frameworks collection web page, thanks to a post from Stephen Downes. It immediately made sense to me, and then I realised why.
The ‘Stacey Matrix’ (named after Ralph Stacey) is not too dissimilar to the continuum of ambiguity that I’ve talked about for years:
I need to think more about this, but the levels of agreement and certainty certainly map on to levels of ambiguity. It’s possibly an easier image to use with clients, too…
Source: Simon Fraser University
Thinking in systems means to think in boundaries, not binaries
I haven’t yet been able to apply my studies last year on systems thinking to my work as much as I’d hoped. I remain interested in the topic, however, and in this piece in particular.
It was recommended in Patrick Tanguay’s always-excellent Sentiers newsletter. As Patrick points out, it includes some great minimalistic animations, one of which I’ve included above.
It’s the backside of any notion of holistic, interconnected, interwoven networks that often get associated with the overused tag line of “Systems Thinking”. It acknowledges that in order to make sense we are bound to draw a boundary, a distinction of what we mean / look at / prioritise – and all the rest. Only through its boundary a system genuinely becomes what it is. It marks the difference between a system and its environment. And with that boundaries are inherently paradoxical: they create interdependency precisely by drawing a line:
They are interfaces.
What follows is a framework for moving within and beyond binaries in five steps:① Affirmation → ② Objection → ③ Integration → ④ Negation → ⑤ Contextualisation.
This is not a linear path but a cycle, a tool for keeping in motion while acknowledging the gaps along the way.
[…]
In a world of contexts, there is no way for any one actor – be it a planner, a city, or a government – to account for the many contexts they are acting in. Here, we are forced to think and act in constellations ourselves: in networks of mutual and collective contextualisation, of pointing out each others blindspots (the contexts we didn’t know we didn’t see), of taking parts of this complexity and leaving other parts to others.
This is very close to notions of intersectionality, the simultaneousness of difference and the possibility of many things being true at the same time. It also makes our understanding of an intervention or position very interesting - which now becomes a literal intersection, a specific constellation of multiple positions across a system of differences.
Source & animation: Permutations
Swatchy!
Warren Ellis posted about a ‘Metropolis’ style Swatch watch, which led me down a rabbithole which ended with me learning that you can make contactless NFC payments with some of the newer models. Also, you can customise them in cool ways.
I mean, you know you’re a middle-aged guy living in the west when you have more pairs of trainers than your wife does shoes, and you start thinking about what your watch is saying about you. Oh, and the retro computing vibe, did I mention that?
It’s rare I see a Swatch that I would want to wear, but I tripped over this, found in this article, and I am mildly obsessed. It’s from 1989, and I’m fascinated by its Bauhaus-y, METROPOLIS the film-y look. And, let’s face it, a very 80s look. But the cool European 80s.
Source: Warren Ellis / Swatch
A systemic scramble through digital shadows
I’m tired. It’s partly the calcium channel blockers I’m on, and partly that I haven’t had any days of holiday so far this year. As a result, I’ve decided to take a week off posting my regulation 10 posts here at Thought Shrapnel.
While I thought about just posting the links (see below) I thought it would be interesting to instead perform a bit of an experiment.
- By Default, Signal Doesn’t Recall (Signal blog)
- How Social Reactionaries Exploit Economic Nostalgia (Jacobin)
- MCP is the coming of Web 2.0 2.0 (Anil Dash)
- Dispatch for the week ending 18 May 2025 (Adam Greenfield)
- The RSA Badge Standard (RSA)
- Experts have it easy (Boyd Kane)
- Insurance for AI: Easier Said than Done
- My Dream Fediverse Platform (Sean Tilley)
- Aligning LLM Choice to Your Use Case: An Expert’s Guide (Oblivus Blog)
- Powering Planes with Microwaves Is Not The Craziest Idea (IEEE Spectrum)
So I copy/pasted the links into Perplexity, gave it a few instructions, and it spat out this summary “in the style of Doug Belshaw.” I’m not quite sure what to make of it; I mean, I don’t disagree with any of this (especially the last paragraph) but it’s not quite how I would express it…
The convergence of these ten articles reveals something I’ve been thinking about for years: we’re at a moment where digital infrastructure is being rebuilt from the ground up, and the choices we make now will determine whether technology serves human agency or constrains it. Signal’s anti-Recall features perfectly illustrate this tension—when Microsoft’s Recall threatens to capture everything users do, Signal responds by blocking screenshots in sensitive contexts. It’s a defensive move that highlights how privacy tools must evolve beyond their original scope just to maintain basic user agency in hostile platform environments.
This connects beautifully to Boyd Kane’s maze metaphor and Sean Tilley’s Fediverse vision. Kane shows how experts navigate familiar mazes whilst novices struggle with incomplete maps — exactly what’s happening with digital literacy right now. Tilley’s “Postmodern” platform offers a systemic solution: composable interfaces and user-controlled data architecture that could provide structural agency rather than forcing users to cobble together defensive measures. Where Signal fights against platform overreach, Postmodern would be designed to prevent such conflicts entirely.
Anil Dash’s framing of MCP as “Web 2.0 2.0” captures why this matters. The Model Context Protocol succeeds because it embraces interoperability over control—lightweight specifications that enable rather than constrain. This aligns perfectly with Adam Greenfield’s thermodynamic analysis: sustainable systems work with natural energy flows rather than against them. Platforms extract value by creating artificial scarcity; protocols create value by reducing friction. The RSA’s new badging framework sits somewhere between these approaches—institutional but potentially liberating if it genuinely recognises capabilities that traditional exams miss.
The systemic risks become clear when you look at John Loeber’s AI insurance analysis alongside IEEE’s microwave aviation piece. Both reveal how individual innovations can obscure massive infrastructure requirements. The aviation proposal needs 170-metre transmitters every 100 kilometres; AI insurance faces market concentration and information asymmetries. The LLM selection guide makes the same mistake — framing technical optimisation as the main challenge whilst ignoring questions about who controls access and how these choices affect digital equity.
What emerges is a picture of infrastructure in transition, where the most promising developments share a common characteristic: they’re designed to reduce rather than increase the expert-novice gap that Kane describes. Whether it’s MCP’s interoperability, Postmodern’s composable interfaces, or even Signal’s defensive privacy measures, the best approaches provide what I’d call capability infrastructure — systems that make it easier for people to develop digital agency rather than requiring them to become experts in underlying technologies. We’re all navigating mazes built by others, but we have a choice: build new mazes or create tools that help everyone find their way through.
Image: Wonderlane
The International Criminal Court’s chief prosecutor has lost access to his email
In order to become individually or corporately wealthy you have to profit from someone else’s labour. If you push this to the limit, then you are likely to fall foul of the law, which is why rich individuals and Big Tech organisations have become increasingly close to governments.
This is particularly true in the increasingly-authoritarian USA, where non-compliance with the whims of the proto-dictator can have serious financial repercussions. So we find rich individuals and Big Tech companies being compliant in advance, in the former case winding down reputation washing philanthropic activities which might be seen as problematic, and in the latter, refusing or limiting access to technologies to those with different political or ideological views.
The International Criminal Court (ICC) is “an intergovernmental organization and international tribunal… with jurisdiction to prosecute individuals for the international crimes of genocide, crimes against humanity, war crimes, and the crime of aggression.” It has issued an arrest warrant for Russian leader Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu. So you can see why the ICC might be in the crosshairs of the Trump administration.
The International Criminal Court’s chief prosecutor has lost access to his email, and his bank accounts have been frozen.
The Hague-based court’s American staffers have been told that if they travel to the U.S. they risk arrest.
Some nongovernmental organizations have stopped working with the ICC and the leaders of one won’t even reply to emails from court officials.
It’s the emails I want to focus on. Although we have to acknowledge and accept that sometimes we have to use tools built by awful people to create beautiful things, there are some organisations, like Microsoft, which have continually been so problematic that I try to have as little to do with them as possible.
One reason the court has been hamstrung is that it relies heavily on contractors and non-governmental organizations. Those businesses and groups have curtailed work on behalf of the court because they were concerned about being targeted by U.S. authorities, according to current and former ICC staffers.
Microsoft, for example, cancelled Khan’s email address, forcing the prosecutor to move to Proton Mail, a Swiss email provider, ICC staffers said. His bank accounts in his home country of the U.K. have been blocked.
Microsoft did not respond to a request for comment.
Source: The Associated Press
Image: Le Vu
This is a major upgrade to how we think about personality.
I think it’s worth spending the time reading this article by Adam Mastroianni. The ‘SMTM’ acronym he mentions is ‘Slime Mold Time Mold’, the name of the group of his “mad scientist friends… who have just published a book that lays out a new foundation for the science of the mind” called The Mind in the Wheel. Mastroianni calls it “the most provocative thing I’ve read about psychology since I became a psychologist myself.”
Essentially, this is a cybernetic view of the mind. The easiest way to think of this is that the brain has a lot of control systems that work a bit like thermostats. For some systems, like breathing, people have largely the same tolerances and feedback loops. But for other (inferred) areas, such as sociability, things can be wildly different. This is why we talk about ‘introverts’ and ‘extraverts’.
If the mind is made out of control systems, and those control systems have different set points (that is, their target level) and sensitivities (that is, how hard they fight to maintain that target level), then “personality” is just how those set points and sensitivities differ from person to person. Someone who is more “extraverted”, for example, has a higher set point and/or greater sensitivity on their Sociality Control System (if such a thing exists). As in, they get an error if they don’t maintain a higher level of social interaction, or they respond to that error faster than other people do.
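As a toy illustration of that idea (my own sketch with made-up numbers, not anything from the book): model a ‘Sociality Control System’ as a simple proportional controller, so that “personality” is nothing more than different parameter values on the same mechanism.

```python
def social_drive(current_level: float, set_point: float, sensitivity: float) -> float:
    """Thermostat-style controller: the 'error' is the gap between the
    current level of social interaction and the person's set point;
    sensitivity scales how hard the system pushes to close that gap."""
    error = set_point - current_level
    return sensitivity * error

# Two hypothetical people experiencing the same (low) level of social contact:
extravert_push = social_drive(current_level=2.0, set_point=8.0, sensitivity=1.5)
introvert_push = social_drive(current_level=2.0, set_point=3.0, sensitivity=0.5)
# The 'extravert' registers a much larger error and responds far more strongly.
```

Same situation, same mechanism, wildly different behaviour: that's the claim being made about personality.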
This is a major upgrade to how we think about personality. Right now, what is personality? If you corner a personality psychologist, they’ll tell you something like “traits and characteristics that are stable across time and situations”. Okay, but what’s a trait? What’s a characteristic? Push harder, and you’ll eventually discover that what we call “personality” is really “how you bubble things in on a personality test”. There are no units here, no rules, no theory about the underlying system and how it works. That’s why our best theory of personality performs about as well as the Enneagram, a theory that somebody just made up.
Not only do I think it is an interesting theory (psychology discovers systems thinking!) but Mastroianni also does a great job of distinguishing between science and other things that are included in the field of psychology:
- Naive research (e.g. “Are people less likely to steal from the communal milk if you print out a picture of human eyes and hang it up in the break room?")
- Impressionistic research (e.g. “whether ‘mindfulness’ causes ‘resilience’ by increasing ‘zest for life’")
- Actual science (i.e. “making and testing conjectures about units and rules”)
This is why I’ve been drawn to systems thinking. It feels somewhat foundational in understanding how things work when you abstract away from immediate, everyday experience.
Like any good scientist, Mastroianni recognises that theories should not only be “falsifiable” in a Popperian sense, but “overturnable.” It may not be that everything runs on control systems, but wouldn’t it be interesting (as he points out, for everything from learning to animal welfare) if we found out that some of it did?
So, look. I do suspect that key pieces of the mind run on control systems. I also suspect that much of the mind has nothing to do with control systems at all. Language, memory, sensation—these processes might interface with control systems, but they themselves may not be cybernetic. In fact, cybernetic and non-cybernetic may turn out to be an important distinction in psychology. It would certainly make a lot more sense than dividing things into cognitive, social, developmental, clinical, etc., the way we do right now. Those divisions are given by the dean, not by nature.
Source & image: Experimental History