Misinformation and disinformation don’t actually need to convince anyone of anything to have an impact. They just need to make you question what you’re seeing.

A red wall displays a quote by Pierre Bourdieu about habitus being both a structuring structure and a structured structure.

Ryan Broderick is spot on in this piece for Garbage Day about misinformation and disinformation. I do wonder why you’d want to continue using a service where you’re not quite sure what or who to believe. But then, I guess when most people are getting their news from social media, that is their information environment and questioning it might feel like questioning reality itself.

We live at a time when extremely high-resolution and extraordinarily detailed fake news can be generated almost instantly. But also, the threshold is (and always has been) extremely low for getting people to believe things — as the recent post about prompt injecting reality showed. When people spend so long online and don’t curate their own information environment, the habitus that guides their social actions can be actively dangerous.

You’d think the imminent breakdown of the global order would be worrying people more, but it’s hard to pay attention when you’re busy using AI to channel spirits and have ChatGPT-induced psychotic episodes. According to TikTokers, ChatGPT can “lift the veil between dimensions.” There’s also a guy on X who’s struggling to change the temperature of his AI-powered bed at the moment. The verified X user currently painting their roof blue to protect themselves from “direct energy weapons,” however, did not get the idea from an AI. They’re just the normal kind of internet insane.

[…]

It doesn’t matter if anyone believes the unreality of what they’re seeing online. Misinformation and disinformation don’t actually need to convince anyone of anything to have an impact. They just need to make you question what you’re seeing. The Big Lie and the millions of small ones online, whatever they happen to be wherever you’re living right now, just have to cause division. To wear you down. To provide an opening for those in power, who now have both too much of it and too few concerns about how to wield it. The populist demagogues and ravenous oligarchs the internet gave birth to in the 2010s are now firmly at the helm of the global order and, also, hooked up to the same chaotic, emotionally-gratifying global information networks that we all are, both social and, now, AI-generated. And, also like us, they are being heavily influenced by them in ways we can’t totally see or predict. Which is how we’ve ended up in a place where missiles are flying, planes are dropping out of the sky, and vulnerable people are being thrown in gulags, all while our leaders are shitposting about their big, beautiful plans for more extrajudicial arrests and genocidal territorial expansion. Assured by mindless AI chatbots that their dreams of world domination and self-enrichment are valid and noble and righteous. And there is no off ramp there. Everyone, even the folks with the nuclear codes, is entertaining themselves online as the world burns. Posting through it and monitoring the situation until it finally reaches their doorstep and forces them to look up from their phone and log off.

Source: Garbage Day

Image: Andrea De Santis

GPQA is difficult enough to be useful for scalable oversight research on future models significantly more capable than the best existing public models

A graph illustrates the shifting frontier of AI model performance and cost over time, comparing various models' GPQA Diamond Scores and costs per million tokens.

GPQA is “a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry”, where even domain experts reach only around 65% accuracy and skilled non-experts with unrestricted web access manage just 34%. It’s a benchmark used to rate generative AI models and, as Ethan Mollick notes in the chart he created above, they’re getting better at GPQA even while the cost is coming down.

I used MiniMax Agent today, a new agentic AI webapp based on MiniMax-M1, “the world’s first open-source, large-scale, hybrid-attention reasoning model” according to the press release. It was impressive, both in terms of capability and flexibility of output. The kind of chain-of-reasoning it uses is going to be very useful to knowledge workers and researchers like me.

MiniMax-M1 is probably on a par with the ChatGPT o3 model, but of course it’s both Chinese and open source, so a direct competitor to OpenAI. I stopped using OpenAI’s products in January when it became clear that using them involved about the same level of associated cringe as driving a Tesla in 2025.

The questions are reasonably objective: experts achieve 65% accuracy, and many of their errors arise not from disagreement over the correct answer to the question, but mistakes due to the question’s sheer difficulty (when accounting for this conservatively, expert agreement is 74%). In contrast, our non-experts achieve only 34% accuracy, and GPT-4 with few-shot chain-of-thought prompting achieves 39%, where 25% accuracy is random chance. This confirms that GPQA is difficult enough to be useful for scalable oversight research on future models significantly more capable than the best existing public models.
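To make figures like “39%, where 25% is random chance” concrete, here’s a minimal, hypothetical sketch of how accuracy on a four-option multiple-choice benchmark is typically scored. This is not the official GPQA evaluation harness; the questions, answer keys, and scoring loop below are invented purely for illustration:

```python
import random

def accuracy(predictions, gold):
    """Fraction of questions where the predicted choice matches the gold answer."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Pretend we have 448 four-option questions (A-D), as in GPQA.
num_questions, choices = 448, "ABCD"
gold = [random.choice(choices) for _ in range(num_questions)]

# A model that guesses uniformly at random lands near the 25% chance baseline.
random_guesses = [random.choice(choices) for _ in range(num_questions)]
print(f"Random baseline: {accuracy(random_guesses, gold):.0%}")

# A real evaluation would substitute parsed model outputs here (e.g. the letter
# extracted from a chain-of-thought response) and compare them against
# expert-validated answer keys.
```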

Sources: arXiv & Ethan Mollick | LinkedIn

Our society is in the thrall of dumb management, and functions as such

Two signs stating Business As Usual are mounted on a weathered wall near a doorway.

It’s not easy to summarise this 13,000-word article by Ed Zitron, nor to decide which parts to pull out and highlight. The main gist is that our economy is dominated by managers who lack real understanding of their businesses and customers. Their poor decisions are fuelled by decades of neoliberal thinking, which promotes short-term gains over meaningful contributions. Zitron calls these managers “Business Idiots”: people who thrive on alienation and avoid accountability.

I think he’s using this term because ranting about rich people in an unequal society is pointless; most are desperately looking upwards trying to copy behaviours which might pull them out of the mire. Also, talking about “Big Tech” is meaningless, because it’s difficult for people to understand structures and systems. So, to personify things, Zitron uses “Business Idiots” to make his points. I don’t disagree with him, but it is an argument which lacks nuance, despite the number of words used and links sprinkled liberally amongst the paragraphs. What he’s really talking about, as he tends to, is generative AI.

Perhaps it’s easier to take some of the highlights I made of the article and rearrange them to make a bit more sense. I’m not saying that Zitron doesn’t make sense, just that, if I presented them in the order in which I highlighted them, they wouldn’t benefit from the structure of the entire article.

Let’s start here:

On some level, modern corporate power structures are a giant game of telephone where vibes beget further vibes, where managers only kind-of-sort-of understand what’s going on, and the more vague one’s understanding is, the more likely you are to lean toward what’s good, or easy, or makes you feel warm and fuzzy inside.

Zitron has an issue with managers within large, hierarchical, for-profit businesses. He talks about hiring being broken (something I’ve talked about a lot) but in a way which situates it with the “vibe-based structure” outlined above:

We live in a symbolic economy where we apply for jobs, writing CVs and cover letters to resemble a certain kind of hire, with our resume read by someone who doesn’t do or understand our job, but yet is responsible for determining whether we’re worthy of going to the next step of the hiring process. All this so that we might get an interview with a manager or executive who will decide whether they think we can do it. We are managed by people whose job is implicitly not to do work, but oversee it. We are, as children (and young adults), encouraged to aspire to become a manager or executive, to “own our own business,” to “have people that work for us,” and the terms of our society are, by default, that management is not a role you work at, so much as a position you hold — a figurehead that passes the buck and makes far more of them than you do.

[…]

It’s about “managing people,” and that can mean just about anything, but often means “who do I take credit from or pass blame to,” because modern management has been stripped of all meaning other than continually reinforcing power structures for the next manager up.

I don’t think this is a modern phenomenon. I think that someone reading this in, say, the 1960s, would recognise this problem. The issue is hierarchy. The issue is capitalism.

The difference is that we now live within a neoliberal world order. But, again, Zitron isn’t really saying anything new when, later in the article, he talks about us living in a “symbolic society.” The Situationists, such as Guy Debord, were talking about this decades ago. It has long been thus.

I believe this process has created a symbolic society — one where people are elevated not by any actual ability to do something or knowledge they may have, but by their ability to make the right noises and look the right way to get ahead. The power structures of modern society are run by business idiots — people that have learned enough to impress the people above them, because the business idiots have had power for decades. They have bred out true meritocracy or achievement or value-creation in favor of symbolic growth and superficial intelligence, because real work is hard, and there are so many of them in power they’ve all found a way to work together.

What has changed — and this is why I prefer reading someone measured and insightful like Cory Doctorow — is the policy environment. This has enabled and encouraged what Zitron calls the “business idiot” to flourish.

⁠Big companies build products sold by specious executives or managers to other specious executives, and thus the products themselves stop resembling things that solve problems so much as they resemble a solution. After all, the person buying it — at least at the scale of a public company — isn’t necessarily the recipient of the final product, so they too are trained (and selected) to make calls based on vibes.

[…]

Our society is in the thrall of dumb management, and functions as such. Every government, the top quarter of every org chart, features little Neros who, instead of battling the fire engulfing Rome, are sat in their palaces strumming an off-key version of “Wonderwall” on the lyre and grumbling about how the firefighters need to work harder, and maybe we could replace them with an LLM and a smart sprinkler system.

Executives can move between the top echelons of society even after serial failure because of regulatory capture and the resultant lack of punishment for white-collar crime. If we rinse-and-repeat this kind of behaviour enough, we end up with money moving to the top of society at the expense of the rest of us. Governments, frightened of the elites, impose austerity policies, enter “public-private partnerships” and otherwise indemnify rich people from the downsides of their speculation.

Our economy in the west is therefore one where the only real game in town is to create products and services for individuals and businesses with money. And because of the regulatory environment, these are not, by and large, good companies that exist to promote human flourishing:

The Business Idiot’s economy is one built for other Business Idiots. They can only make things that sell to companies that must always be in flux — which is the preferred environment of the Business Idiot, because if they’re not perpetually starting new initiatives and jumping on new “innovations,” they’d actually have to interact with the underlying production of the company.

As these men – and it’s almost always men – gain more political power, this situation is only likely to get worse. “You should believe people when they tell you who they are,” is advice I’ve been given before. You should also believe people when they tell you what their version of a utopian future looks like. I’m not sure the general population’s vision is in line with that of tech billionaires:

These people are antithetical to what’s good in the world, and their power deprives us of happiness, the ability to thrive, and honestly any true innovation. The Business Idiot thrives on alienation — on distancing themselves from the customer and the thing they consume, and in many ways from society itself. Mark Zuckerberg wants us to have fake friends, Sam Altman wants us to have fake colleagues, and an increasingly loud group of executives salivate at the idea of replacing us with a fake version of us that will make a shittier version of what we make for a customer that said executive doesn’t fucking care about.

They’re building products for other people that don’t interact with the real world. We are no longer their customers, and so, we’re worth even less than before — which, as is the case in a world dominated by shareholder supremacy, not all that much.

They do not exist to make us better — the Business Idiot doesn’t really care about the real world, or what you do, or who you are, or anything other than your contribution to their power and wealth. This is why so many squealing little middle managers look up to the Musks and Altmans of the world, because they see in them the same kind of specious corporate authoritarian, someone above work, and thinking, and knowledge.

[…]

These people don’t want to automate work, they want to automate existence. They fantasize about hitting a button and something happening, because experiencing — living! — is beneath them, or at least your lives and your wants and your joy are. They don’t want to plan their kids’ birthday parties. They don’t want to research things. They don’t value culture or art or beauty. They want to skip to the end, hit fast-forward on anything, because human struggle is for the poor or unworthy.

Meanwhile, of course, young people – and especially young men – are spending hours each day on social media platforms owned by these tech billionaires. Their algorithms valorise topics and ideas which promote various forms of alienation. I’m not particularly hopeful for the future, especially after reading articles like this.

But the thing is, I think that writers such as Zitron have a duty to spell out the kind of utopia they think we should be striving for. As with other techno-critics, it’s all very well pointing out how terrible things and people are, but if this is what you are doing, you need to be explicit about your position. What do you stand for? It’s very easy to point at one thing after another saying “this is terrible,” “that person is awful,” “this is broken,” etc. What’s much harder is to argue and fight for a world where the things you dislike are fixed.

Source: Where’s Your Ed At

Image: Hoyoun Lee

Prompt injecting reality

It’s easy to think that people who fall for misinformation are somehow stupid. However, a lot of what counts as ‘plausible’ information depends on the context in which it’s presented. Sending out a million fake ‘DHL has got your parcel and needs extra payment’ messages works for the scammer if even 1% of recipients are expecting such a parcel. If just 0.01% of the overall group click on the link, that’s still 100 people scammed.

You may or may not have seen that there has been some ‘backlash’ about the design changes in iOS 26. The approach, named “Liquid Glass”, has been criticised by accessibility and usability experts, which leads to this plausible-looking tweet:

A satirical tweet about being fired by Apple is featured amid various news logos.

Several news outlets reported on this as fact, meaning that Google News ended up looking like this (screenshot by Georg Zoeller):

Various news headlines about Apple firing a lead designer of the new iOS 26 Liquid Glass UI are displayed on a smartphone screen.

Fake news, but with real consequences. Is Yongfook what he says he is? Of course not! (screenshot again by Georg Zoeller):

A person shares two contrasting tweets, one claiming to be a 42-year-old running a $550,000/year business, and another claiming to be a 17-year-old with a $10 million/month business.

As I said many moons ago, our information environment is crucial to a flourishing democracy and civil society.

Source: [The Quint](https://www.thequint.com/news/webqoof/satirical-post-about-fired-from-apple-jon-yongfook-viral-as-real#read-more)

Sandwich bags for cheese, blister plasters, and a 'bubble of pain'

Auto-generated description: A spiral notebook and a pencil are placed on a red and green background.

There was a time, in the BuzzFeed era, when ‘listicles’ were everywhere. It seemed like everything was a list, and you couldn’t escape them. A decade or so later, we’re seeing more of a balance in the force, and so lists are useful rather than egregious.

This list in The Guardian is entitled ‘52 tiny annoying problems, solved!’ and I’d like to share a few of the suggestions which caught my eye.

I have a sandwich bag in my fridge of all the odds and ends of cheese; they keep for ages. I would always freeze feta, though, as it doesn’t last long. Likewise, keep any last little bits of carrot, onion or other veg in a bag and next time you are making a ragu or soup, chuck them in. If you buy a pot of cream for a recipe and use only a small amount, freeze the rest in an ice cube tray. Do the same with wine. GH

One idea I’ve found useful for dealing with irritating interruptions when you’re trying to concentrate is: be careful not to define more things than necessary as “interruptions”. If you’re the kind of person who tries to schedule your whole day very strictly, you’re pretty much asking to feel annoyed when reality collides with your rigid plan. If you have autonomy over your schedule, a better idea is to try to safeguard three or four hours at most for total focus – this is, it turns out, the maximum countless authors, scientists and artists have managed in an uninterrupted fashion anyway. If I’m working at home on a day when it’s not my turn for school pickup, and my son bursts in to tell me excitedly about something he’s done, it’s a shame if I feel annoyed by the intrusion rather than delighted by the serendipitous interaction, solely because I’ve defined that period as time for deep focus. OB

I discovered this by accident, but unsolicited door-knockers are eager to conclude their business and go away if you open the door while holding some kind of large electric gardening implement. I just happened to be carrying a hedge trimmer when the bell rang, but a chainsaw would be even better. You could leave it on a hook by the door. TD

Sooner or later, if you are running you will get a big bastard blister on your heel, and there is no point using anything other than one of those expensive padded blister plasters. Normal plasters won’t get you home without pain, or let you run again next day. PD

When someone has a minor injury, such as stubbing their toe, give them a full minute to themselves so they can enter, then exit, their “bubble of pain”. This is what we do in our family and I swear it helps get rid of pain much faster. We don’t ask, “What happened?” or, “Are you OK?” until the injured person speaks first. A hand on their shoulder or a respectful bowing of the head to the Gods of Minor Pain is sufficient at this time. Anonymous

May I just +1 the advice about blister plasters? If you’ve never used them, I don’t think you can possibly understand how much better they are than regular plasters. Next time you’re stocking up your first aid kit, consider buying some!

Source: The Guardian

Image: Seaview N.

Minimum Viable Organisations: low emotional labour, low technical labour, zero cost

Auto-generated description: A vibrant abstract pattern features swirling, multicolored lines with a dynamic flow.

I can’t believe it’s been 12 years since I published a series of posts entitled Minimum Viable Bureaucracy, based on the work of Laura Thomson, who worked for the Mozilla Corporation (while I was at the Foundation).

So what’s it about? What is ‘Minimum Viable Bureaucracy’ (MVB)? Well, as Laura rather succinctly explains, it’s the difference between ‘getting your ducks in a row’ and ‘having self-organising ducks’. MVB is a way of having just enough process to make things work, but not so much as to make it cumbersome. It’s named after Eric Ries’ idea of a Minimum Viable Product which, “has just those features that allow the product to be deployed, and no more.”

The contents of Laura’s talk include:

  • Basics of chaordic systems
  • Building trust and preserving autonomy
  • Effective communication practices
  • Problem solving in a less-structured environment
  • Goals, scheduling, and anti-estimation
  • Shipping and managing scope creep and perfectionism
  • How to lead instead of merely managing
  • Emergent process and how to iterate

I truly believe that MVB is an approach that can be used in whole or in part in any kind of organisation. Obviously, a technology company with a talented, tech-savvy, distributed workforce is going to be an ideal testbed, but there’s much in here that can be adopted by even the most reactionary, stuffy institution.

I’ve spent nine of the last ten years since leaving Mozilla as part of a worker-owned cooperative, and part of a couple of networks of co-ops. I’ve learned many, many things, including that hierarchy is just a lazy default, ways to deal with conflict, and (perhaps most importantly) consent-based decision making.

Which brings me to this post, which talks about ‘Minimum Viable Organisations’. The author is Dr Kim Foale, of the excellent GFSC. They call it a work in progress, and start with the following:

Basic principle: It should be easy (low emotional labour, low technical labour, zero cost) to start a project with a small group of people with shared goals.

The list of reasons Kim gives as to why groups ‘fail’ seems familiar to me, as it might do to you:

  • Lack of care of people in the group
  • Over-reliance on attendance at organising meetings as a prerequisite for being in the group
  • As groups grow in numbers, making any kind of decision becomes more and more difficult
  • Trying to fix every problem / having too broad a remit
  • Poor record keeping and attention to process
  • Misunderstanding and misuse of consensus processes

It’s worth noting on the last point that ‘consensus’ and ‘consent’ sound very similar but are very different approaches. With the former you’re trying to get full agreement, while with the latter you’re trying to achieve alignment: nobody has to love a decision, they just need to have no strong objection to it.

What Kim suggests is all very sensible. Things like a written constitution, a code of conduct, a minimum commitment requirement, and a process by which members can change things. It’s an unfinished post, so I’m assuming they’re coming back to finish it off.

For me, the combination of having a stated aim, code of conduct, and working openly usually leads to good results. The minimum commitment requirement is an interesting addition, though, and one I’ll noodle on.

Source: kim.town

Image: Tomáš Petz


(I did a bit of digging and it looks like Kim’s using Quartz to power their site, probably linked to Obsidian. The idea of turning either my personal blog or Thought Shrapnel into a digital garden is quite appealing. More info on options for this here.)

Drowning in culture, we skim, we rush, we skip over.

Auto-generated description: A black Sony camera is placed on a bright yellow background.

For some reason, an article from 2012 about “Bliss” — the name given to the famous Windows XP background of a grassy hill and blue sky — was near the top of Hacker News earlier this week. The photographer, Charles O’Rear, explains how it was all very serendipitous:

For such a famous photograph, O’Rear says it was almost embarrassingly easy to make. ‘Photographers like to become famous for pictures they created,’ he told the Napa Valley Register in an interview in 2010. ‘I didn’t “create” this. I just happened to be there at the right moment and documented it.

‘If you are Ansel Adams and you take a particular picture of Half Dome [in Yosemite National Park] and want to light it in a certain way, you manipulate the light. He was famous for going into the darkroom and burning and dodging. Well, this is none of that.’

Which brings me to the post I actually want to talk about, by Lee A Johnson, who is also a professional photographer. It’s effectively a 10-year retrospective on his career, which takes in changes in his field, technology, and his own personal development. I absolutely loved reading it, and encourage you to take the time to do so.

I’m just going to excerpt some parts that (hopefully) don’t require the narrative of the rest of the post to make sense. (I wasn’t sure about casually using a photograph taken by a professional without an explicit license to illustrate this blog, so I’ve used another.)

I started writing this post on my iPhone during an overnight stay on Prince Edward Island (PEI) sometime in 2015. One stop on a short road trip in Canada. The photo I uploaded to Instagram at the time confirms that was indeed exactly a decade ago. Since then I’ve completed a few long-term projects, visited numerous portfolio reviews, been to several countries, photographic retreats, galleries, exhibitions, book festivals, talks. All in pursuit of understanding what I’m doing with the photography I am taking.

What I’m trying to say is that the scope of the post crept over those ten years. It’s all a bit of a mess.

[…]

Photography finds itself in an interesting place. So common it’s like breathing. Everyone has a camera in their pocket, and we’re collectively producing more photographs every week than were taken in the entire 20th century. Soon that will be every day. Then every hour. Most won’t survive the next phone upgrade, let alone be seen by human eyes.

And photography is now easy, really. Easier than ever. The technicalities can be picked up by anyone in five minutes. It’s much harder and takes much longer to figure out what you want to say, to develop a visual language that’s truly your own. To create something that stands out. Something outstanding.

[…]

Ephemeral photos, long-term projects? Most of what I photograph will never matter to anyone but me. Of the tens of thousands of frames I’ve shot, perhaps a few dozen will outlive me, and even fewer will be seen by strangers a century from now. So why bother with long-term projects, with work that takes years to complete, when the cultural landscape shifts so rapidly that by the time you’re finished the conversation has moved on? Because the long-term projects, the works with depth and commitment behind them, are the ones that have any chance of lasting impact.

[…]

Drowning in culture, we skim, we rush, we skip over. At the same time we favourite too much, follow too much, the signal to noise ratio is larger than ever. Our attention has become a commodity, harvested and sold by platforms that profit from our endless scrolling. We open tabs for articles we’ll never read, save posts we’ll never revisit, follow accounts whose content blurs together into an indistinguishable stream.

[…]

Really we’re all on one big curve, an exponential curve to nowhere. Inevitable, given an exponential curve is not sustainable. The democratization of photography, the explosion of content, the fragmentation of audience - it’s all happening at a pace that makes it hard to find stable ground. We’re constantly racing to catch up, feeling behind, trying to make sense of a landscape that transforms even as we observe it.

[…]

Almost gone are the days of a human looking at work and deciding what is worth looking at, now replaced with machine learning and algorithms to tell us instead. But what does a computer know about art? Because of that you can be sure what i’m seeing is not the same as what anyone else is seeing. A shared culture disjoint to keep you on the platform. Keep you scrolling. Keep you viewing ads.

If you jump out of your own petri dish you will always find the culture much different, this has always been the case. Now we have the web throwing all the samples in a bucket and saying have a bit of everything. If you don’t like it then the next sample is only a click away.

Does that mean the culture’s impact is diluted? Probably. Does it matter? Probably not, but it follows that if we are defined by our cultural interests then a larger variety of parts leads to a far more interesting variety of wholes. There will be fewer parts in common, if that is the case then perhaps that means we should have more to talk about. Tell me about the things I don’t know, or haven’t seen.

Fill in the gaps.

Source: leejo.github.io

Image: C D-X

6 AI use case primitives

Auto-generated description: Six ways to use AI are illustrated, focusing on automating tasks, generating ideas, analyzing data, creating content, discovering insights, and developing tools.

It’s not often I link directly to a LinkedIn post. However, the author of this, Ben Cohen, doesn’t seem to have posted it elsewhere, so needs must. Cohen also doesn’t cite the original source of the analysis he references, but it looks like it comes from an OpenAI report entitled Identifying and scaling AI use cases in which the “six use case primitives” are:

  1. Content creation
  2. Research
  3. Coding
  4. Data analysis
  5. Ideation/strategy
  6. Automation

Cohen has renamed these using much snappier “stuff” language, along with a graphic (see above) which looks like the OpenAI logo. I like this framing; it resonates.

I do wonder how many of these six use cases vehemently anti-AI critics have actually used. I can confidently say I’ve used them all, and probably hit four of the six categories most working days at the moment.

Turns out there are only six ways to use AI well.

OpenAI looked at 600+ of the most successful GenAI use cases.

Every single one fell into just 6 categories (which I’ve taken the liberty to rename):

Create stuff → Content, policies, presentations, images, emails, contracts

Find stuff → Insights, research, competitor analysis, trends

Build stuff → Tools, websites, apps

Make sense of stuff → Data analysis, dashboards, performance reports

Think stuff through → Idea generation, strategies, decision-making

Do stuff automatically → Workflows, email automation, customer chatbots

That’s it.

Source & image: Ben Cohen | LinkedIn

The workload fairy tale

Auto-generated description: A garden gnome with a red hat and white beard sits in a meditative position surrounded by colourful flowers and lush greenery.

Most people are very surprised when I say that I work around 20-25 hours per week. I then clarify that this is paid work, so not things like blogging, doing lots of reading, looking for business development leads, giving free advice, etc.

Still, it means that I have a life where I can exercise every day, be around for my kids, and manage my stress/anxiety levels. While not everyone runs their own business, most knowledge workers do have a fair amount of freedom. As Cal Newport points out in this article, the 4-day workweek is a way of pushing back against the expectation that a company owns all of your time.

So, I’d say that the 4-day workweek is more of a mindset change, especially if you’re getting the same amount done as before. I’d definitely try it! When I worked at Moodle, I did a 4-day week, and being able to say “I won’t be able to, as I don’t work Fridays” or similar is as much a story you tell yourself as one you tell other people.

Another thing we try to do at WAO is to co-work on projects, and not to switch between multiple projects within one day. So, if we’ve got three projects on the go, we’ll try and dedicate either a whole day to one, or a morning to one, an afternoon to another, and leave the third until the next day. Of course, it doesn’t always work out like that, but collaborating with others (not just having meetings with them!) and allocating time to different projects makes them not only manageable, but… maybe even enjoyable?

Most knowledge workers are granted substantial autonomy to control their workload. It’s technically up to them when to say “yes” and when to say “no” to requests, and there’s no direct supervision of their current load of tasks and projects, nor is there any guidance about what this load should ideally be.

Many workers deal with the complexity of this reality by telling themselves what I sometimes call the workload fairy tale, which is the idea that their current commitments and obligations represent the exact amount of work they need to be doing to succeed in their position.

The results of the 4-day work week experiment, however, undermine this belief. The key work – the efforts that really matter – turned out to require less than forty hours a week of effort, so even with a reduced schedule, the participants could still fit it all in. Contrary to the workload fairytale, much of our weekly work might be, from a strict value production perspective, optional.

So why is everyone always so busy? Because in modern knowledge work we associate activity with usefulness (a concept I call “pseudo-productivity” in my book), so we keep saying “yes,” or inventing frenetic digital chores, until we’ve filled in every last minute of our workweek with action. We don’t realize we’re doing this, but instead grasp onto the workload fairy tale’s insistence that our full schedule represents exactly what we need to be doing, and any less would be an abdication of our professional duties.

The results from the 4-day work week not only push back against this fairy tale, but also provide us with a hint about how we could make work better. If we treated workload management seriously, and were transparent about how much each person is doing, and what load is optimal for their position; if we were willing to experiment with different possible configurations of these loads, and strategies for keeping them sustainable, we might move closer to a productive knowledge sector (in a traditional economic sense) free of the exhausting busy freneticism that describes our current moment. A world of work with breathing room and margin, where key stuff gets the attention it deserves, but not every day is reduced to a jittery jumble.

Source: Cal Newport

Image: Dorota Dylka

The question remains, though, what will be left to browse.

Auto-generated description: A laptop screen displays a webpage with a search box titled What do you want to know? above a keyboard.

Let’s say that, as often happens, I half-remember an article that I’ve been reading. It’s not in my Reader saves, so what am I going to do? Even this time last year, I would have typed what I could remember into my browser address bar, which would then take me to my default search engine: DuckDuckGo.

Over the last few months, however, for anything more complex than just quickly looking something up, I’ve been using Perplexity, which allows you to search the web (default), as well as academic and social sites such as Reddit. Unlike other LLMs, it’s not sycophantic, and it always shows its sources.

Casey Newton discusses the advent of the AI-first browser, which uses ‘agents’ to go searching on your behalf. I’m kind of already doing this. And before you judge me, let’s just reflect on the fact that almost 40% of people click on the first result in Google, and fewer than 0.5% go past the first page of search engine results. So even an LLM that goes out, reads 20 links and presents back the most salient results is already doing a better job.

[T]he decline of the web has been met with a surprising counter-phenomenon: a huge investment in new web browsers.

On Wednesday, Opera — the Norwegian company whose namesake browser commands about 2 percent market share worldwide — announced that it is building a new browser.

Two days earlier, the Browser Company said it plans to open source its Arc browser and turn its efforts fully to a new one.

The moves came a few months after “answer engine” company Perplexity teased a new browser of its own called Comet. And while the company has not confirmed it, OpenAI has reportedly been working on a browser for more than six months.

It has been a long time since the internet saw a proper browser war. The first, in the earliest days of the web, saw Microsoft’s Internet Explorer defeat Netscape Navigator decisively. (Though not before a bruising antitrust trial.) In the second, which ran from roughly 2004 to 2017, new browsers from Mozilla (Firefox) and Google (Chrome) emerged to challenge Internet Explorer and eventually kill it. Today the majority of web users use Chrome.

[…]

“Traditional browsers were built to load webpages,” said Josh Miller, the Browser Company’s CEO, in a post announcing its forthcoming Dia browser. “But increasingly, webpages — apps, articles, and files — will become tool calls with AI chat interfaces. In many ways, chat interfaces are already acting like browsers: they search, read, generate, respond. They interact with APIs, LLMs, databases. And people are spending hours a day in them. If you’re skeptical, call a cousin in high school or college — natural language interfaces, which abstract away the tedium of old computing paradigms, are here to stay.”

Perplexity is one of my pinned tabs both on my desktop and laptop. I use it multiple times every day in both professional and personal contexts, for example when researching information that is helping me make a decision about car leasing. This year, I’ve also used it to help decipher medical records, pull out information from extremely dense reports, and synthesise information from multiple sources.

It does feel a bit like a superpower when you use these things well. But, as Newton points out, as the business model for putting content on the web fails, where are AI browsers going to get their information from?

[I]t’s easy to imagine the possibilities for an AI browser. It could function as a research assistant, exploring topics on your behalf and keeping tabs on new developments automatically. It could take your to-do list and attempt to complete tasks for you while you’re away. It could serve as a companion for you while you browse, identifying factual errors and suggesting further reading.

[…]

The question remains, though, what will be left to browse. The entire structure of the web — from journalism to e-commerce and beyond — is built on the idea that webpages are being viewed by people. When it’s mostly code that is doing the looking, a lot of basic assumptions are going to get broken.

Source: Platformer

Image: almoya

Maximum fines have never before been applied simultaneously, but some might say these scoundrels have earned it.

Auto-generated description: A diagram illustrates the interaction between Meta and Yandex systems, detailing data tracking, user identity handling, and potential risks through mobile browsers and apps.

Technical things don’t interest most people. I’m definitely at the edges of my understanding with this one, but the implications are pretty huge. Essentially, Meta (the organisation behind Facebook, Instagram, and WhatsApp) and Yandex have been caught covertly tracking users on Android devices via a novel method.

On the day that this disclosure was made public, Meta “mysteriously” stopped using this technique. But, by that point, they’d been using it for well over six months, and it appears that Yandex (a Russian tech company) has been using it for EIGHT YEARS.

The website dedicated to the disclosure is, as you’d expect, pretty technical. But it does say this:

This novel tracking method exploits unrestricted access to localhost sockets on the Android platforms, including most Android browsers. As we show, these trackers perform this practice without user awareness, as current privacy controls (e.g., sandboxing approaches, mobile platform and browser permissions, web consent models, incognito modes, resetting mobile advertising IDs, or clearing cookies) are insufficient to control and mitigate it.

We note that localhost communications may be used for legitimate purposes such as web development. However, the research community has raised concerns about localhost sockets becoming a potential vector for data leakage and persistent tracking. To the best of our knowledge, however, no evidence of real-world abuse for persistent user tracking across platforms has been reported until our disclosure.

A Spanish site called Zero Party Data, which also posts in English, explains what’s going on in an easier-to-understand way:

Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.

[…]

Meta faces simultaneous liability under the following regulations, listed from least to most severe: GDPR, DSA, and DMA (I’m not even including the ePrivacy Directive because it’s laughable).

GDPR, DMA, and DSA protect different legal interests, so the penalties under each can be imposed cumulatively.

The combined theoretical maximum risk amounts to approximately €32 billion (4% + 6% + 10% of Meta’s global annual revenue, which surpassed €164 billion in 2024).

Maximum fines have never before been applied simultaneously, but some might say these scoundrels have earned it.

Briefly, here’s how it works, according to the above website (a rough conceptual sketch follows the steps):

  • Step 1: The app installs a hidden “intercom”
  • Step 2: You think, “hmm, nice day to check out my guilty pleasure website in incognito mode.”
  • Step 3: The web pixel talks to the Facebook/Instagram app using WebRTC
  • Step 4: The same pixel on your favorite website, without hesitation, sends your alphanumeric sausage over the internet to Meta’s servers
  • Step 5: The app receives the message and links it to your real identity
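To make the shape of that a bit more concrete, here’s a purely illustrative Python sketch of the general “localhost bridge” idea: a script standing in for the web pixel hands a browsing identifier to an app that is already listening on the loopback interface. The port number, payload, and use of plain TCP sockets are all invented for this example; the technique reported in the disclosure involved Meta’s pixel and its Android apps communicating via WebRTC, not this code.

```python
import socket
import threading

PORT = 12387  # hypothetical port the "native app" listens on

def native_app_listener(server: socket.socket) -> None:
    """Stands in for a native app that already knows your real, logged-in identity."""
    conn, _ = server.accept()
    with conn:
        browsing_id = conn.recv(1024).decode()
        # The app can now join the "anonymous" web identifier to the account
        # it already holds, defeating incognito mode and cleared cookies.
        print(f"App linked web visit {browsing_id!r} to a real identity")

# "Native app" side: bind and listen on localhost before any page loads.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", PORT))
server.listen(1)
listener = threading.Thread(target=native_app_listener, args=(server,))
listener.start()

# "Web pixel" side: a page script hands its identifier to whatever is listening locally.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", PORT))
    client.sendall(b"_fbp=incognito-session-123")  # made-up identifier for illustration

listener.join()
server.close()
```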

This is why I don’t use apps from Meta, and why I use a security-hardened version of Android called GrapheneOS.

Sources: Local Mess / Zero Party Data

Image: Local Mess

Delightful Fediverse apps

Screenshot of 'Social Verifiable Credentials' section of website

I’m sharing this list of “delightful fediverse apps” as it includes a couple of things (Bonfire, BadgeFed) in which I’m particularly interested. It’s also a great example of how many different types of service can be created via a protocols-based approach.

A curated list of fediverse software that offer decentralized social networking services based on the W3C ActivityPub family of related protocols.

Source: delightful.coding.social

Expert-in-the-loop vs. layperson-in-the-loop

Auto-generated description: A comparison between stacks of annotated, crumpled documents and a cleaner, structured digital map is shown, highlighting efficient digitization.

This is on a Google blog, so it foregrounds the use of their Gemini AI model in a new tool called Extract, built by the UK Government’s AI Incubator team. I should imagine that they’d be able to switch Gemini out for any model that has “advanced visual reasoning and multi-modal capabilities.” At least, I’d hope so.

So long as there is some kind of expert-in-the-loop, I think this is a great use of AI in public services. Planning in the UK, as I should imagine it is in most countries, is outdated, awkward, and slow. Speeding things up, especially if it allows multiple factors to be considered automatically, is a great idea.

A couple of years ago, I pored over technical documents I didn’t understand, feeding them into LLMs to try and figure out whether or not to buy a house by a river that had previously flooded, but now had flood defences. I was not an “expert-in-the-loop” but instead a “layperson-in-the-loop.” There’s a difference.

Traditional planning applications often require complex, paper-based documents. Comparing applications with local planning restrictions and approvals is a time-consuming task. Extract helps councils to quickly convert their mountains of planning documents into digital structured data, drastically reducing the barriers to adopting modern digital planning systems, and the need to manually check around 350,000 planning applications in England every year.

Once councils start using Extract, they will be able to provide more efficient planning services with simpler processes and democratised information, reducing council workload and speeding up planning processes for the public. However, converting a single planning document currently takes up to 2 hours for a planning professional – and there are hundreds of thousands of documents sitting in filing cabinets across the country. Extract can remove this bottleneck by accelerating the conversion with AI.

As the UK Government highlights, “The new generative AI tool will turn old planning documents—including blurry maps and handwritten notes—into clear, digital data in just 40 seconds – drastically reducing the time it takes planners.”

Using modern data and software, councils will be able to make informed decisions faster, which could lead to quicker application processing times for things like home improvements, and more time freed up for council staff to focus on strategic planning. Extract is being tested with planning officials at four Councils around the country including Hillingdon Council, Westminster City Council, Nuneaton and Bedworth Council and Exeter City Council and will be made available to all councils by Spring 2026.
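For a sense of what turning a scanned planning document into “clear, digital data” might look like, here’s a hypothetical sketch of the kind of structured record a tool like Extract could produce. The field names, the extract_fields() stub, and the example values are all assumptions for illustration; this is not the actual Extract tool or the Gemini API.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class PlanningApplication:
    reference: str
    site_address: str
    proposal: str
    constraints: List[str] = field(default_factory=list)  # e.g. conservation area, flood zone
    decision: Optional[str] = None                         # granted / refused / pending

def extract_fields(document_text: str) -> PlanningApplication:
    """Stand-in for a call to a multimodal model that reads a scanned document
    (maps, handwritten notes, typed forms) and returns structured fields."""
    # In a real pipeline this would send page images/text to a model with
    # visual reasoning capabilities and parse its JSON response.
    return PlanningApplication(
        reference="APP/2025/0042",
        site_address="1 Example Street, Exeter",
        proposal="Single-storey rear extension",
        constraints=["Conservation area"],
    )

record = extract_fields("…scanned document contents…")
print(json.dumps(asdict(record), indent=2))
```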

Source: The Keyword

A goal set at time T is a bet on the future from a position of ignorance

Screenshot of Joan Westenberg's blog with a 3-column theme

Not only do I really like Joan Westenberg’s blog theme (Thesis, for Ghost) but this post in particular. If there’s one thing I’ve learned from my life, career, reading Stoic philosophy, and studying Systems Thinking, it’s that there are some things you can control, and some things you can’t.

Coming up with a ‘strategy’ or a ‘goal’ that does not take into account the wider context in which you do or will operate is foolish. Naive, even. Instead, setting constraints makes much more sense. What Westenberg is advocating for here, without saying it explicitly, is a systems thinking approach to life.

You can read my 3-part Introduction to Systems Thinking on the WAO blog (which, coincidentally, we’ll soon be moving to Ghost).

Setting goals feels like action. It gives you the warm sense of progress without the discomfort of change. You can spend hours calibrating, optimizing, refining your goals. You can build a Notion dashboard. You can make a spreadsheet. You can go on a dopamine-fueled productivity binge and still never do anything meaningful.

Because goals are often surrogates for clarity. We set goals when we’re uncertain about what we really want. The goal becomes a placeholder. It acts as a proxy for direction, not a result of it.

[…]

A goal set at time T is a bet on the future from a position of ignorance. The more volatile the domain, the more brittle that bet becomes.

This is where smart people get stuck. The brighter you are, the more coherent your plans tend to look on paper. But plans are scripts. And reality is improvisation.

Constraints scale better because they don’t assume knowledge. They are adaptive. They respond to feedback. A small team that decides, “We will not hire until we have product-market fit” has created a constraint that guides decisions without locking in a prediction. A founder who says, “I will only build products I can explain to a teenager in 60 seconds” is using a constraint as a filtering mechanism.

[…]

Anti-goals are constraints disguised as aversions. The entrepreneur who says, “I never want to work with clients who drain me” is sketching a boundary around their time, energy, and identity. It’s not a goal. It’s a refusal. And refusals shape lives just as powerfully as ambitions.

Source: Joan Westenberg

If a lion could talk, we probably could understand him. He just would not be a lion any more.

Auto-generated description: A silhouette of a lion stands majestically on a hill against a sunrise or sunset.

There are so many philosophical questions when it comes to the possible uses of AI. Being able to translate between different species' utterances is just one of them.

The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals’ interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another’s native vocabulary.

[…]

In interspecies translation, sound only takes us so far. Animals communicate via an array of visual, chemical, thermal and mechanical cues, inhabiting worlds of perception very different to ours. Can we really understand what sound means to echolocating animals, for whom sound waves can be translated visually?

The German ecologist Jakob von Uexküll called these impenetrable worlds umwelten. To truly translate animal language, we would need to step into that animal’s umwelt – and then, what of us would be imprinted on her, or her on us? “If a lion could talk,” writes Stephen Budiansky, revising Wittgenstein’s famous aphorism in Philosophical Investigations, “we probably could understand him. He just would not be a lion any more.” We should ask, then, how speaking with other beings might change us.

Talking to another species might be very like talking to alien life. […] Edward Sapir and Benjamin Whorf’s theory of linguistic determinism – the idea that our experience of reality is encoded in language – was dismissed in the mid-20th century, but linguists have since argued that there may be some truth to it. Pormpuraaw speakers in northern Australia refer to time moving from east to west, rather than forwards or backwards as in English, making time indivisible from the relationship between their body and the land.

Whale songs are born from an experience of time that is radically different to ours. Humpbacks can project their voices over miles of open water; their songs span the widest oceans. Imagine the swell of oceanic feeling on which such sounds are borne. Speaking whale would expand our sense of space and time into a planetary song. I imagine we’d think very differently about polluting the ocean soundscape so carelessly.

Source: The Guardian

Image: Iván Díaz

In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs”

The image features a silver meat grinder. Going into the grinder at the top are various culturally symbolic, historical, and fun icons – such as emojis, old statues, a computer, newspapers, an aeroplane. At the other end of the meat grinder, coming out is a sea of blue and grey icons representing chat bot responses like 'Let me know if this aligns with your vision' in a grey chat bot message symbol.

You should, they say, “follow the money” when it comes to claims about the future. That’s why this piece by Allison Morrow is so on-point about the claims made by the CEO of Anthropic about AI replacing human jobs.

If we believed billionaires then you’d be interacting with this post in the Metaverse, the first manned mission to Mars would have already taken place, and we could “believe” pandemics out of existence. So will AI have an impact on jobs? Absolutely. Will it happen in the way that some rich guy thinks? Absolutely not.

If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.

Yet when tech CEOs do the same thing, people tend to perk up.

ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.

He reiterated that claim in an interview with CNN’s Anderson Cooper on Thursday.

“AI is starting to get better than humans at almost all intellectual tasks, and we’re going to collectively, as a society, grapple with it,” Amodei told Cooper. “AI is going to get better at what everyone does, including what I do, including what other CEOs do.”

To be clear, Amodei didn’t cite any research or evidence for that 50% estimate. And that was just one of many of the wild claims he made that are increasingly part of a Silicon Valley script: AI will fix everything, but first it has to ruin everything. Why? Just trust us.

In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,” Amodei told Axios, repeating one of the industry’s favorite unfalsifiable claims about a disease-free utopia on the horizon, courtesy of AI.

But how will the US economy, in particular, grow so robustly when the jobless masses can’t afford to buy anything? Amodei didn’t say.

[…]

Little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work, days after it released a major model update to its Claude chatbot, one of the top rivals to OpenAI’s ChatGPT.

Amodei stands to profit off the very technology he claims will gut the labor market. But here he is, telling everyone the truth and sounding the alarm! He’s trying to warn us, he’s one of the good ones!

Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ It refers to itself primarily as an “AI safety and research” company. They are the AI guys who see the potential harms of AI clearly — not through the rose-colored glasses worn by the techno-utopian simps over at OpenAI. (In fact, Anthropic’s founders, including Amodei, left OpenAI over ideological differences.)

Source: CNN

Image: Janet Turra & Cambridge Diversity Fund / Ground Up and Spat Out

Learner AI usage is essentially a real-time audit of our design decisions

Auto-generated description: A LinkedIn post from Leah Belsky shares a list of Top 20 chats for finals that students use for study assistance, highlighting diverse ways to leverage ChatGPT for academic purposes.

First off, it’s worth saying that this looks and reads like a lightly-edited AI-generated newsletter, which it is. Nonetheless, given that it’s about the use of generative AI in university courses, it doesn’t feel inappropriate.

The main thrust of the argument is that students are using tools such as ChatGPT to help break down courses in ways that should be of either concern or interest to instructional designers. As a starting point, it uses the LinkedIn post in the screenshot above, which is based on OpenAI research and some findings I shared on Thought Shrapnel recently.

I can’t see how this is anything other than a positive thing: students taking control of their own learning. We’ve all had terrible teachers, who think that because they teach, their students learn. Those who use outdated metaphors, who can’t understand how learners don’t “get it”, etc. For as long as we have the current teaching, learning, and assessment models in formal education, this feels like a useful way to hack the system.

Picture this: a learner on a course you designed opens their laptop and types into ChatGPT: “I want to learn by teaching. Ask me questions about calculus so I can practice explaining the core concepts to you.”

In essence, this learner has just become an instructional designer—identifying a gap in the learning experience and redesigning it using evidence-based pedagogical strategies.

This isn’t cheating—it’s actually something profound: a learner actively applying the protégé effect, one of the most powerful learning strategies in cognitive science, to redesign and augment an educational experience that, in theory, has been carefully crafted for them.

[…]

The data we are gathering about how our learners are using AI is uncomfortable but essential for our growth as a profession. Learner AI usage is essentially a real-time audit of our design decisions—and the results should concern every instructional designer.

[…]

When learners need AI to “make a checklist that’s easy to understand” from our assignment instructions, it reveals that we’re designing to meet organizational requirements rather than support learner success. We’re optimizing for administrative clarity rather than learning clarity.

[…]

The popularity of prompts like “I’m not feeling it today. Help me understand this lecture knowing that’s how I feel” and “Motivate me” reveals a massive gap in our design thinking. We design as if learning is purely cognitive when research clearly shows emotional state directly impacts cognitive capacity.

Source: Dr Phil’s Newsletter

It's so emblematic of the moment we're in... where completely disposable things are shoddily produced for people to mostly ignore

Auto-generated description: Neon text on a dark background reads SOMETIMES I THINK SOMETIMES I DON'T.

Melissa Bell, CEO of Chicago Public Media, issued an apology this week which catalogued the litany of human errors that led to the Chicago Sun-Times publishing a largely AI-generated supplement entitled “Heat Index: Your Guide to the Best of Summer.”

Instead of the meticulously reported summer entertainment coverage the Sun-Times staff has published for years, these pages were filled with innocuous general content: hammock instructions, summer recipes, smartphone advice … and a list of 15 books to read this summer.

Of those 15 recommended books by 15 authors, 10 titles and descriptions were false, or invented out of whole cloth.

As Bell suggests in her apology, the failure isn’t (just) a failure of AI. It’s a failure of human oversight:

Did AI play a part in our national embarrassment? Of course. But AI didn’t submit the stories, or send them out to partners, or put them in print. People did. At every step in the process, people made choices to allow this to happen.

Dan Sinker, a Chicago native, runs with this in an excellent post which has been shared widely. He calls the time we’re in the “Who Cares Era”, riffing on the newspaper supplement debacle to make a bigger point.

The writer didn’t care. The supplement’s editors didn’t care. The biz people on both sides of the sale of the supplement didn’t care. The production people didn’t care. And, the fact that it took two days for anyone to discover this epic fuckup in print means that, ultimately, the reader didn’t care either.

It’s so emblematic of the moment we’re in, the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.

[…]

It’s easy to blame this all on AI, but it’s not just that. Last year I was deep in negotiations with a big-budget podcast production company. We started talking about making a deeply reported, limited-run show about the concept of living in a multiverse that I was (and still am) very excited about. But over time, our discussion kept getting dumbed down and dumbed down until finally the show wasn’t about the multiverse at all but instead had transformed into a daily chat show about the Internet, which everyone was trying to make back then. Discussions fell apart.

Looking back, it feels like a little microcosm of everything right now: Over the course of two months, we went from something smart that would demand a listener’s attention in a way that was challenging and new to something that sounded like every other thing: some dude talking to some other dude about apps that some third dude would half-listen-to at 2x speed while texting a fourth dude about plans for later.

So what do we do about all of this?

In the Who Cares Era, the most radical thing you can do is care.

In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.

[…]

As the culture of the Who Cares Era grinds towards the lowest common denominator, support those that are making real things. Listen to something with your full attention. Watch something with your phone in the other room. Read an actual paper magazine or a book.

Source: Dan Sinker

Image: Ben Thornton

The future of public interest social networking

Auto-generated description: A desktop view of a social media platform shows a user's profile with a dark-themed interface, featuring a profile picture, user information, and a list of trending topics.

It’s been the FediForum this week, an online unconference dedicated to the Open Social Web. To coincide with this, Bonfire — a project I’ve been involved with on and off ever since leaving Moodle* — has reached the significant milestone of a v1.0 release candidate.

Ivan and Mayel, the two main developers, have done a great job sustaining this project over the last five years. It was fantastic, therefore, to see a write-up of Bonfire alongside a couple of other Fediverse apps in an article in The Verge (which uses a screenshot of my profile!), along with a more in-depth piece in TechCrunch. It’s the latter that I’m excerpting here.

There is a demo instance if you just want to have a play!

Bonfire Social, a new framework for building communities on the open social web, launched on Thursday during the FediForum online conference. While Bonfire Social is a federated app, meaning it’s powered by the same underlying protocol as Mastodon (ActivityPub), it’s designed to be more modular and more customizable. That means communities on Bonfire have more control over how the app functions, which features and defaults are in place, and what their own roadmap and priorities will include.

There’s a decidedly disruptive bent to the software, which describes itself as a place where “all living beings thrive and communities flourish, free from private interest and capitalistic control.”

[…]

Custom feeds are a key differentiation between Bonfire and traditional social media apps.

Though the idea of following custom feeds is something that’s been popularized by newer social networks like Bluesky or social browsers like Flipboard’s Surf, the tools to actually create those feeds are maintained by third parties. Bonfire instead offers its own custom feed-building tools in a simple interface that doesn’t require users to understand coding.

To build feeds, users can filter and sort content by type, date, engagement level, source instance, and more, including something it calls “circles.”

Those who lived through the Google+ era of social networks may be familiar with the concept of Circles. On Google’s social network, users organized contacts into groups, called Circles, for optimized sharing. That concept lives on at Bonfire, where a circle represents a list of people. That can be a group of friends, a fan group, local users, organizers at a mutual aid group, or anything else users can come up with. These circles are private by default but can be shared with others.

[…]

Accounts on Bonfire can also host multiple profiles that have their own followers, content, and settings. This could be useful for those who simply prefer to have both public and private profiles, but also for those who need to share a given profile with others — like a profile for a business, a publication, a collective, or a project team.

Source: TechCrunch


*Bonfire was originally a fork of MoodleNet; not only has it since gone in a different direction, but five years later I highly doubt there’s a single line of the original code left. Note that the current version of MoodleNet offered by Moodle is a completely different tech stack, designed by a different team.
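
To make the “circles and custom feeds” idea from the excerpt above a little more concrete, here’s a minimal sketch in TypeScript of how a circle (a private-by-default list of people) could drive a filtered, sorted feed. The type names and the buildFeed function are my own illustration under assumed names, not Bonfire’s actual API or data model.

```typescript
// Illustrative only: hypothetical types, not Bonfire's actual schema or API.

type Profile = { id: string; handle: string };

type Post = {
  id: string;
  author: Profile;
  createdAt: Date;
  kind: "note" | "article" | "image";
  likes: number;
};

// A circle is just a named, private-by-default list of people.
type Circle = {
  name: string;
  members: Profile[];
  visibility: "private" | "shared";
};

// A custom feed is a saved filter plus a sort over incoming posts.
type FeedDefinition = {
  circle?: Circle;            // only posts from people in this circle
  kinds?: Post["kind"][];     // only these content types
  since?: Date;               // only posts after this date
  minLikes?: number;          // simple engagement threshold
  sortBy: "createdAt" | "likes";
};

function buildFeed(posts: Post[], feed: FeedDefinition): Post[] {
  return posts
    .filter((p) => !feed.circle || feed.circle.members.some((m) => m.id === p.author.id))
    .filter((p) => !feed.kinds || feed.kinds.includes(p.kind))
    .filter((p) => !feed.since || p.createdAt >= feed.since)
    .filter((p) => feed.minLikes === undefined || p.likes >= feed.minLikes)
    .sort((a, b) =>
      feed.sortBy === "likes"
        ? b.likes - a.likes
        : b.createdAt.getTime() - a.createdAt.getTime()
    );
}

// Example: a private circle of organisers, and a feed of their recent articles.
const organisers: Circle = {
  name: "Mutual aid organisers",
  members: [{ id: "1", handle: "@sam@example.social" }],
  visibility: "private",
};

const timeline: Post[] = []; // would come from the instance's incoming posts
const myFeed = buildFeed(timeline, { circle: organisers, kinds: ["article"], sortBy: "createdAt" });
```

The real thing is federated over ActivityPub and far richer than this, of course, but the core idea is that a feed becomes a saved query over people and content you’ve grouped yourself, rather than something an algorithm decides for you.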

British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they need to leave

Auto-generated description: A small Union Jack flag is attached to a pedestrian crosswalk button on a rainy day, with people holding umbrellas in the background.

My friend and colleague Laura Hilliger said that she understood me (and British humour in general) a lot more after watching the TV series Taskmaster. As with any culture, in the UK there are unspoken rules, norms, and ways of interacting that just feel ‘normal’ until you have to actually explain them to others.

This Reddit thread, which starts with the question “What’s a seemingly minor British etiquette rule that foreigners often miss—but Brits immediately notice?”, is a goldmine (and pretty funny), although there’s a lot of repetition. Consider it a Brucie Bonus at the end of this week’s Thought Shrapnel, which I’m getting done early as I’m at a family wedding and an end-of-season presentation/barbecue this weekend!

Thank the bus driver when you get off. Even though he’s just doing his job and you paid. (Top-Ambition-6966)

Keep calm and carry on / deliberately not acknowledging something awry that’s going on nearby. (No-Drink-8544)

British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they either need to leave or the person themselves wants to end the social interaction. (AugustineBlackwater)

If someone asks you if you’ll do something or go somewhere with them and you answer ‘maybe’….it is actually a polite way of saying no. (loveswimmingpools)

Not taking a self deprecating comment at face value, e.g. non Brit: ‘ah that sounds like a good job!’ Brit: ‘nah not really, it’s not that hard’, non Brit: ‘oh okay’. We’re just not good at taking praise so we deflect it but that doesn’t mean you’re supposed to accept the complimented’s dismissal of the compliment. All meant in playful fun of course. (Interesting_Tea_9125)

Not raising one finger slightly from your hand at the top of the steering wheel to express your deep gratitude for someone allowing you priority on the road. (callmeepee)

Drop over any time - you should schedule a visit 3 month in advance and I will still claim I am busy. (Spitting_Dabs)

Source: Reddit

Image: Adrian Raudaschl