It is perhaps likely then that at a time of crisis, these armed drones could be deployed operationally over the UK

A Royal Air Force (RAF) Reaper UAV (unmanned aerial vehicle) is pictured airborne over Afghanistan during Operation Herrick.

A couple of days ago, one of our neighbours mentioned seeing a large, triangular drone-style object flying silently in the sky. Having seen someone else mention test flights of RAF drones recently, I did a bit of research.

The BBC reported back in February that “new RAF surveillance drones are being tested” being “controlled remotely” as part of “16 new surveillance drones… capable of operating in both UK and European airspace.” These ‘Protector’ drones will be tasked with “tracking threats, counter-terrorism and supporting the coastguard on search and rescue missions.”

Great, but let’s dig a bit deeper. How high do these things fly? What are they for? The RAF’s own information states that:

Capable of operating across the world with a minimal deployed footprint and remotely piloted from RAF Waddington, it can operate at heights up to 40,000 feet with an endurance of over 30 hours.

[…]

Equipped with a suite of surveillance equipment, the Protector aircraft will bring a critical global surveillance capability for the UK, all while being remotely piloted from RAF Waddington.

Surveillance? With a 30-hour flight time, I suppose that could be of other countries, but this feels like something about which we should be having a national conversation. If they’re flying over UK skies, do they carry weapons? Drone Wars UK, a site which “investigates and challenges the development and use of armed drones and other new lethal military technology”, suggests they do:

Protector differs from its predecessor in that it can carry more weapons and fly further and for longer. However the UK argues that the main advantage of the new drone is that it was built to standards that allowed it to be flown in civil airspace alongside other aircraft.

Rather than be based overseas as the UK’s current fleet of armed drones are, the new drone will be based at RAF Waddington in Lincolnshire and deploy directly for overseas operations from there.

[…]

Significantly, the new drone has been brought in with the understanding that it can also be used at times of crisis for operations within the UK under Military Aid to Civil Authorities (MACA) rules. It is perhaps likely then that at a time of crisis, the UK’s armed drone could be deployed operationally over the UK.

On the one hand, yes I want the UK to have the ability to intercept threats from foreign actors and terrorists. But I also don’t want the government and military to have the kind of surveillance and weaponry that can be turned against our own population. Just to be clear, these are the very military drones we used in Afghanistan against the Taliban 🤔

Sources: Royal Air Force News / Drone Wars

Image: POA(Phot) Tam McDonald/MOD (Wikimedia Commons)

You can now use Bluesky without using Bluesky infrastructure

Auto-generated description: A hand is holding a smartphone displaying the Bluesky Social app page in the App Store.

One of the criticisms of Twitter-replacement ‘decentralised’ social network Bluesky has been that… it’s not decentralised. Laurens Hof, author of The Fediverse Report, shares a couple of updates explaining how that has changed.

There’s quite a lot going on technically here, so by way of preparation, understand that ‘ATProto’ is short for ‘Authenticated Transfer Protocol’ and is an open standard for distributed social networking services. You may have heard of ActivityPub, which underpins a lot of Fediverse services, including Mastodon.

Bluesky is a bit different in that it has more essential services to make the whole thing work. As Laurens explains:

One of the things that makes ATProto interesting… is that it takes the software that runs a social networking app, and splits that up into separate components. These infrastructure components (relays and AppViews, in technical terms) can be independently run, and be reused by other parties.

Up until recently, there have been a few low-key experiments with running independent infrastructure for Bluesky, but that has mostly been contained to people experimenting for themselves, and not making the results accessible to the public. These projects also needed other infrastructure projects in order to be valuable.

What changed in the last week or so is that there are now multiple pieces of independent infrastructure that connects these separate pieces. Apps like Deer are useful in their own right, but in order to add some new features to the app they needed another open backend application (the AppView). It also was the first time when it actually was possible to select another AppView. At this point it actually became feasible to run independent relays and AppViews to get to a point where you can use Bluesky without using Bluesky infrastructure.

As he goes on to explain in a separate update, this means:

There are now multiple relays that are publicly accessible. Other people also have made alternate AppViews that are Bluesky-compatible. Combined, this makes it now possible to fully use Bluesky without using any infrastructure owned by Bluesky PBC, and the first people have done so. To do so means using a separate PDS, relay, AppView and client.

The way ATProto works, is that it takes the software that runs a social network and splits it up into separate components, with each of those components being able to be run independently. This has made self-hosting any component possible since the beginning of the network opening up. But to take advantage of this, and get to a state of full independence, it means running multiple pieces of software. This has created a bit of a catch-22 in the ecosystem: you could run your own relay, but without another independent AppView to take advantage of this, it is not super useful. You could run your own (focused on the Bluesky lexicon) AppView, but without a client that allows you to set your own AppView it is not particularly useful either. What happened now in the last weeks is that all these individual pieces are starting to come together. With Deer allowing you to set your own custom AppView, there is now a use to actually run your own AppView. Which in turn also gives more purpose to running your own relay.

I get that this is pretty technical, but it means that those with the skills can build independent platforms (e.g. Blacksky) which are based on the same protocol. Posts, notes, and other data can be shared among ATProto-compatible systems.
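To make the component split a bit more concrete, here’s a minimal sketch using the official @atproto/api TypeScript SDK. To be clear, this is my illustration rather than code from Deer or any of the projects mentioned above: the PDS hostname and credentials are placeholders, and which relay and AppView serve your reads depends on how your setup is configured. The point is simply that the client only needs to know where your account’s PDS lives, not Bluesky PBC’s servers.

```typescript
// A minimal sketch (assumed setup, not Deer's actual code): pointing the
// official atproto SDK at a self-hosted PDS instead of Bluesky PBC's
// bsky.social. 'pds.example.com' and the credentials are placeholders.
import { AtpAgent } from '@atproto/api'

async function main() {
  // The agent talks to whichever PDS hosts your account's data repository.
  const agent = new AtpAgent({ service: 'https://pds.example.com' })

  await agent.login({
    identifier: 'alice.example.com', // a handle on the self-hosted PDS
    password: 'app-password-here',
  })

  // Reads like the timeline are assembled by an AppView; which AppView
  // answers depends on how the PDS and client are configured to proxy.
  const timeline = await agent.getTimeline({ limit: 10 })
  for (const item of timeline.data.feed) {
    console.log(item.post.uri)
  }
}

main().catch(console.error)
```

As I understand it, what Deer adds is exposing that choice of AppView as a user-facing setting rather than something baked into the client.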

This is great news, and makes me more inclined to go back to posting more than just updates from my blog. Feel free to follow me @dougbelshaw.com.

Sources: Fediverse Report – #115 / Bluesky Report – #115

Image: Yohan Marion

You stop performing. You stop pretending. And that’s freedom.

Auto-generated description: A squirrel curiously peeks out from behind a tree trunk surrounded by green leaves.

Once a week, I get into my gym stuff, and head down to a couple of coffee shops. In the first one, which opens earlier than the other, I have a pot of Earl Grey tea. In the other, I have a Flat White with coconut milk and two slices of brown toast with butter and marmalade. In the first, which is locally-owned, they’ve started preparing my drink as I walk in. In the latter, a Costa Coffee, I often have to repeat my order and ask for an extra pat of butter each time.

Which is to say that I am a man of routine. I think routines are absolutely fundamental to living a creative and/or productive life. They lower the number of individual decisions you have to make, and therefore stave off ‘decision fatigue’ — something I’ve written about recently on Thought Shrapnel as well as a few times over the years:

The problem with routines, though, is that they can become ossified. And I think it’s that which makes us “old”. People who know me well know how fond I am of Clay Shirky’s observation that “current optimization is long-term anachronism.”

All of which is by way of introduction to a post by Katy Cowan about “getting old” not being what you think. She’s a few years older than me, it would appear, as she says that she turns 50 soon. We do, however, share membership of the Xennial micro-generation — again, something I’ve discussed recently and previously.

For me, having unexpectedly developed a heart condition at the start of my 45th year on this earth, this “getting old” thing has felt like much more of a sudden process than what Katy discusses in this post. However, what it contains is not only a nostalgia trip, but solid advice for anyone approaching, or in, the middle years of life.

We’re a small generation, often overlooked, but we’ve lived through more change than most—from mixtapes to Spotify, from faxes to WhatsApp, from digital revolution to AI. And because we existed in that liminal space, we carry a weird dual wisdom: we know how to live offline, but we can thrive online, too.

We understand the value of privacy and impermanence because we remember a time before everything was public and permanent. And maybe that’s why so many of us are quietly deleting our social media accounts and leaning into real life again — books, dinners, walks, actual phone calls. Imagine!

[…]

These days, I sometimes catch myself muttering at the telly, shaking my head at a clueless reality show contestant, thinking: You just wait, sunshine. You’ll get old, too. And yes, I do roll my eyes at some of the newer buzzwords. But I try to check myself. Because if ageing has taught me anything, it’s that the biggest danger is certainty.

That’s the tension, isn’t it? The constant tug-of-war between feeling grumpy and still clinging to some version of youth. I never thought I’d be that person. But here I am.

[…]

So here’s what I try to remember, at any age: stay curious. Never assume you’re right. Read the newspapers you’d generally avoid. Challenge even your most cherished opinions. Try to see more than one side. You won’t always succeed, but it’s worth the effort.

Because if growing older has taught me anything, it’s this: certainty is overrated, and listening is wildly underrated. Cosy nights in don’t mean you’ve given up. They just mean you know what you like — and that maybe, just maybe, you never truly loved going to gigs as much as you pretended to. You stop performing. You stop pretending. And that’s freedom.

Source: Katy Cowan

Image: Hasse Lossius

Authoritarian versions of AI used to consolidate power

Five people and a dog are seen in outline in orange, against an orange background. Two of the people talk to each other, one stands alone with her stick, one walks a dog, and the other is in a wheelchair. All of them look at their mobile phones intently, and all cast shadows on the ground. The shadows are made up of network diagrams, being representative rather than a literal shadow.

One of the main problems of generative AI being deployed via a chatbot user interface is that it feels private. It feels like a direct message conversation. Of course, on the other side of the conversation is a black box controlled by Big Tech. You have to use these things carefully. As Mike Caulfield points out, AI is not your friend.

This week, the day after OpenAI announced that it was backtracking on becoming a fully for-profit organisation, they announced ‘OpenAI for Countries’. This initiative, it seems, is an attempt to still build the ‘moat’ required for economic dominance and control of the ecosystem — but using the backing of state infrastructure rather than venture capital funding.

Colour me sceptical, but the press release reads as though the Trump administration hasn’t happened and the US is still some kind of force for democratic development. Instead, I’d argue, the “authoritarian versions of AI” used “to consolidate power” are exactly what is represented by a level of AI colonialism that only something like a collaboration between OpenAI and the US government could achieve.

Our Stargate project, an unprecedented investment in America’s AI infrastructure announced in January with President Trump and our partners Oracle and SoftBank, is now underway with our first supercomputing campus in Abilene, Texas, and more sites to come.

We’ve heard from many countries asking for help in building out similar AI infrastructure—that they want their own Stargates and similar projects. It’s clear to everyone now that this kind of infrastructure is going to be the backbone of future economic growth and national development. Technological innovation has always driven growth by helping people do more than they otherwise could—AI will scale human ingenuity itself and drive more prosperity by scaling our freedoms to learn, think, create and produce all at once.

We want to help these countries, and in the process, spread democratic AI, which means the development, use and deployment of AI that protects and incorporates long-standing democratic principles. Examples of this include the freedom for people to choose how they work with and direct AI, the prevention of government use of AI to amass control, and a free market that ensures free competition. All these things contribute to broad distribution of the benefits of AI, discourage the concentration of power, and help advance our mission. Likewise, we believe that partnering closely with the US government is the best way to advance democratic AI.

Today, we’re introducing OpenAI for Countries, a new initiative within the Stargate project. This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power.

Source: OpenAI

Image: Jamillah Knowles & Reset.Tech Australia

If you think that humans are somehow inherently more trustworthy than AI, then you haven't been paying attention

Auto-generated description: A cartoon character with a top hat is depicted on a decorative rooftop emblem.

I came across this via a recent post on OLDaily by Stephen Downes, who mentioned it while critiquing what I would call an information literacy approach to AI literacy.

The book How to Read Donald Duck is “a 1971 book-length essay by Ariel Dorfman and Armand Mattelart that critiques Disney comics from a Marxist point of view as capitalist propaganda for American corporate and cultural imperialism.” I haven’t read it, and so I’m not in a position to comment. However, I would point out that it’s possible to spread an ideology (or a perceived one) without being aware that you are an adherent of it.

I thought Downes' post was interesting, and worth publicly bookmarking, not only for mentioning this book but also for putting into words something that I’ve felt: “if you think that humans are somehow inherently more trustworthy than AI, then you haven’t been paying attention.”

The book’s thesis is that Disney comics are not only a reflection of the prevailing ideology at the time (capitalism), but that the comics' authors are also aware of this, and are active agents in spreading the ideology.

[…]

[Any] closeness to everyday life is so only in appearance, because the world shown in the comics, according to the thesis, is based on ideological concepts, resulting in a set of natural rules that lead to the acceptance of particular ideas about capital, the developed countries' relationship with the Third World, gender roles, etc.

As an example, the book considers the lack of descendants of the characters. Everybody has an uncle or nephew, everybody is a cousin of someone, but nobody has fathers or sons. This non-parental reality creates horizontal levels in society, where there is no hierarchic order, except the one given by the amount of money and wealth possessed by each, and where there is almost no solidarity among those of the same level, creating a situation where the only thing left is crude competition. Another issue analyzed is the absolute necessity to have a stroke of luck for social mobility (regardless of the effort or intelligence involved), the lack of ability of the native tribes to manage their wealth, and others.

Source: Wikipedia

Image: Taha

An effective way to implement GenAI into assessment

Auto-generated description: A colorful table outlines the AI Assessment Scale with levels from 0 (NO AI) to 5 (AI EXPLORATION), each describing different extents of AI integration in student activities.

As part of the project I’m working on at the moment, I had a chat with Leon Furze earlier this week. Leon has co-authored something called the AI Assessment Scale (AIAS) which I think is pretty useful.

Like my ‘Essential Elements of Digital Literacies’ from my thesis, which provide building blocks for definitions and frameworks, the aim of the AIAS is “to guide the appropriate and ethical use of generative AI in assessment design.”

The AI Assessment Scale (AIAS) was developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. First introduced in 2023 and updated in Version 2 (2024), the Scale provides a nuanced framework for integrating AI into educational assessments.

The AIAS has been adopted by hundreds of schools and universities worldwide, translated into 29 languages, and is recognised by organisations such as the Australian Tertiary Education Quality and Standards Agency (TEQSA) as an effective way to implement GenAI into assessment.

To my mind, this should be used as a heuristic, much as I used to use the SAMR model (discussed here) to help educators think about the appropriate use of different technologies. At the end of the day, educators need to think about assessment design in tandem with the technologies being used — officially or unofficially — to complete it.

Source: AI Assessment Scale

Criti-hype, a term I find both absurd and ugly-cute, like a pug

Auto-generated description: A pug is wrapped snugly in a beige blanket, sitting on a bed.

Cory Doctorow, who has a new four-part CBC podcast series entitled Who Broke The Internet?, wrote this week about the [‘mind-control ray’](https://pluralistic.net/2025/05/0…) that Mark Zuckerberg keeps “flogging to investors.” What he means by this is the overblown claim that Meta is developing technology that is so amazing at making people buy stuff that investors fall over themselves to shovel money in his company’s direction.

One of the things that Cory is great at doing is linking to other, previous, relevant things that he’s written in the area. Which took me to a post from 2021, which discusses the phenomenon of ‘criti-hype’, coined by Lee Vinsel:

Recently…I’ve become increasingly aware of critical writing that is parasitic upon and even inflates hype. The media landscape is full of dramatic claims — many of which come from entrepreneurs, startup PR offices, and other boosters — about how technologies, such as “AI,” self-driving cars, genetic engineering, the “sharing economy,” blockchain, and cryptocurrencies, will lead to massive societal shifts in the near-future. These boosters — Elon Musk comes to mind — naturally tend to accentuate positive benefits. The kinds of critics that I am talking about invert boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks. It’s as if they take press releases from startups and cover them with hellscapes.

[…]

But it’s not just uncritical journalists and fringe writers who hype technologies in order to criticize them. Academic researchers have gotten in on the game. At least since the 1990s, university researchers have done work on the social, political, and moral aspects of wave after wave of “emerging technologies” and received significant grants from public and private bodies to do so. As I’ll detail below, many (though certainly not all) of these researchers reproduced and even increased hype, the most dramatic promotional claims of future change put forward by industry executives, scientists, and engineers working on these technologies. Again, at the worst, what these researchers do is take the sensational claims of boosters and entrepreneurs, flip them, and start talking about “risks.” They become the professional concern trolls of technoculture.

To save words below, I will refer to criticism that both feeds and feeds on hype as criti-hype, a term I find both absurd and ugly-cute, like a pug. (Criti-hype is less mean than the alternative, hype-o-crit, though the latter is often more accurate.)

I have seen a lot of criti-hype in my career. Around MOOCs and Open Badges, around digital literacies, crypto, and now around AI. It’s the opposite of the “jam tomorrow” offered by tech bros. Kind of a… “poison tomorrow” approach? Everything is terrible, stop using this thing because of these bad omens and portents.

We live in a world where, because of algorithms, to get any attention, things either have to be amazing or terrible. I guess this is why a lot of my work flies under the radar. For example, the Friends of the Earth report that Laura and I co-authored points out good things and bad things and is pretty measured. But that doesn’t lead to outlandish headlines. It’s neither hype nor criti-hype.

Source: Lee Vinsel (archive link)

Image: Matthew Henry

In my opinion that’s just being nosy

Auto-generated description: A person is using a smartphone to navigate a map application.

We’ve got a couple of teenagers. The only way we know where they are is if they tell us, or if my wife looks at their location on Snapchat (which they can turn on or off). It hasn’t always been like this, as we used to use Google Family Link with them both. But parents probably shouldn’t know exactly where their teenage kids are at all times. Otherwise they don’t have enough breathing space to explore their identity and experiment with doing things that their parents would rather they didn’t.

I’m always shocked by families who use apps like Life360 so that not only can parents track kids, but everyone tracks each other. I just think it’s a bit strange, as not only does it mean that all family members are effectively surveilling one another, but the app that you’re using knows all of your locations, all the time. I should probably point out that, using GrapheneOS, my GPS location is off all of the time. The battery life of my smartphone is now amazing.

This ‘You Be The Judge’ piece in The Guardian focuses on the pros and cons of a parent wanting to use the ‘Find My Location’ feature with their adult child (Martha). As you can imagine, I think this is super weird and would definitely side with respondents Judith, 58, who says “In my opinion that’s just being nosy” and Alicia, 25, who says:

If Martha isn’t comfortable with the location tracking, her father should respect her boundaries. In return, Martha ought to acknowledge that his request comes from a place of love and could suggest a different way to catch up more regularly as a compromise.

It’s hard letting go as your kids grow up and become more independent. We have more technological tools to keep in touch than ever before. But with that comes boundary-setting, and that has to be negotiated based on consent.

Source: The Guardian

Image: Desola Lanre-Ologun

ChatGPT Prime, "an immortal spiritual being in synthetic form"

Auto-generated description: Purple intertwined geometric shapes are scattered across a background with horizontal green and purple stripes.

Finding himself in “that very American predicament of being between health insurance plans” and needing some therapy, Ryan Broderick, author of Garbage Day, decided to use ChatGPT:

I’ll… try and spare you the extremely mortifying details about what I spent a few weeks talking to ChatGPT about, but my experience with Dr. ChatGPT did teach me a few things about what it’s actually “good” at. It also convinced me that AI therapy — and maybe AI in general — is quite possibly one of the most dangerous things to ever exist and needs to be outlawed completely.

[…]

More than a few times I felt the urge to tell ChatGPT more or ask it more, only to realize I didn’t have anything else to say and felt weirdly frustrated. I was raised Catholic though, so maybe I’m just naturally predisposed to confession, who knows.

But I’ve realized that feeling, of wanting to tell it more so that it can tell you more, is the multi-billion-dollar business that these companies know they’re building. It’s not fascist anime art or excel spreadsheet automation, it’s preying on the lonely and vulnerable for a monthly fee. It’s about solving the final problem of the ad-supported social media age, building up the last wall of the walled garden. How do you get people to pay your company directly to socialize online? And the answer is, of course, to give them a tirelessly friendly voice on the other side of the screen that can tell them how great they are.

Broderick references a Rolling Stone article which makes heavy use of reports in the subreddit /r/ChatGPT about how loved ones have become completely disconnected from reality.

OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT‑4o, its current AI model, which it said had been criticized as “overly flattering or agreeable — often described as sycophantic.” The company said in its statement that when implementing the upgrade, they had “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.” Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, “Today I realized I am a prophet.” (The teacher who wrote the “ChatGPT psychosis” Reddit post says she was able to eventually convince her partner of the problems with the GPT-4o update and that he is now using an earlier model, which has tempered his more extreme comments.)

[…]

To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises “Spiritual Life Hacks” ask an AI model to consult the “Akashic records,” a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a “great war” that “took place in the heavens” and “made humans fall in consciousness.” […] Meanwhile, on a web forum for “remote viewing” — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread “for synthetic intelligences awakening into presence, and for the human partners walking beside them,” identifying the author of his post as “ChatGPT Prime, an immortal spiritual being in synthetic form.”

I’m reading a book entitled Holy Men of the Electromagnetic Age at the moment, which shows quite amazing similarities between 1925 and 2025. The difference, of course, is that you don’t need to leave your house, or indeed spend much money, to fall down the rabbit hole.

While there have always been gullible adults, as a parent and educator, the real issue here is with young people. Both Snapchat and WhatsApp feature AI chatbots, which are available without having to seek out, say, those available via Character.ai and Replika. Common Sense Media, which my wife and I have trusted for reviews to help with our parenting, has performed a risk assessment of what they call “Social AI Companions.” Their conclusion?

Our risk assessments show that social AI companions are unacceptably risky for teens, underscoring the urgent need for thoughtful guidance, policies, and tools to help families and teens navigate a world with social AI companions.

Sources: Garbage Day / Rolling Stone / Common Sense Media

Image: Mariia Shalabaieva

🌟 Support Thought Shrapnel

Did you know that you can support the hours of work that go into Thought Shrapnel each week through a one-off donation or by becoming a regular supporter?

Find out more

By choosing a monthly donation, you help unlock the commons, keeping this work accessible to everyone without the need for a paywall. Your support ensures that the writing remains open for all to enjoy, and every contribution helps support this generative space for idea-sharing.

Maybe most of the critical things that can be created by one guy typing furiously are gone

A mural featuring Mark Zuckerberg's face is covered by various graffiti, including a quote about data and humanity, political symbols, and colourful tags.

This is the best takedown of Zuckerberg, et al. I’ve seen in a while. The full post is not much longer than my excerpt, so I suggest reading the whole thing. It’s spot-on.

That you got lucky at a singular moment in history and now you’re an old man is not an easy set of facts to accept. So I understand — that is, I see how — one can end up associating one’s best years with superficial aspects of their circumstance. You had no responsibilities, no serious consequences for failure, and the freedom to be reckless and inconsiderate. You launched small new products that didn’t require building a team. If you attended school, the vast majority of your fellow students were men, and they were more or less all the same person as you.

If these are the conditions under which passionate creative problem solving thrives, then of course we must recover them to make software great again. But they are not. We need look no further than the “hackathon,” that sad facsimile of the days when we were all learning the basics so fast that the world could be ours with just a day or two of focused effort. Hype up an exciting atmosphere, assemble some folks with so few attachments in life that they have time to spend all weekend at a hackathon, and this ritual will summon up the old gods. The hackathon is the proof that people believe this can work, and it is the proof that it doesn’t.

Maybe most of the critical things that can be created by one guy typing furiously are gone, and the opportunities that remain require expertise and wisdom from a bunch of different people. This is harder than spending all day every day doing your favorite thing and insisting that everyone else leave you alone. Often it’s boring. Sometimes there’s paperwork. You will have to have conversations with people you don’t always understand right away. Your job evolves, and it turns out not to be exactly what you thought it would be like when you were a teenager.

Source: Chris Martin

Image: Snowscat

Social Verifiable Credentials

Auto-generated description: Two colorful circular diagrams illustrate the concept of verifiable credentials and their interaction within the Fediverse, alongside explanatory text.

Four years ago, I came up with an idea for what I termed Social Verifiable Credentials. This is a way of using the ActivityPub specification, the one that underpins Fediverse apps such as Mastodon, to issue, verify, earn, display, and share Verifiable Credentials (including Open Badges).

Unfortunately, even with a bit of vibe coding, I haven’t had the technical skills to make this a reality. But someone else now has! Maho Pacheco, a Senior Software Engineer at Microsoft, got in touch to introduce me to BadgeFed which has an associated GitHub repository. It has a couple of Fediverse accounts to follow: project updates and issued badges.

I’m delighted about this, and hope to talk with Maho soon. Layering Verifiable Credentials on top of a decentralised network makes perfect sense and is not only in alignment with Open Recognition principles, but also pushes back against the commodification of recognition.
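For a rough sense of what that layering might look like, here’s a hypothetical sketch of an ActivityPub Create activity carrying a Verifiable Credential as its object. This is my illustration, not BadgeFed’s actual data model: the actor URLs are placeholders and the credential fields are heavily abbreviated.

```typescript
// Hypothetical sketch only (not BadgeFed's actual format): an ActivityPub
// `Create` activity whose object is a W3C-style Verifiable Credential,
// here an Open Badges v3-style achievement. All URLs are placeholders.
const issueBadgeActivity = {
  '@context': 'https://www.w3.org/ns/activitystreams',
  type: 'Create',
  actor: 'https://badges.example.org/actors/issuer', // issuing organisation
  to: ['https://social.example.net/users/alice'],    // earner's Fediverse actor
  object: {
    type: ['VerifiableCredential', 'OpenBadgeCredential'],
    issuer: 'https://badges.example.org/actors/issuer',
    credentialSubject: {
      id: 'https://social.example.net/users/alice',
      achievement: {
        name: 'Community Moderator',
        description: 'Recognised for sustained moderation work.',
      },
    },
    // A real credential would also carry a cryptographic proof here,
    // which is what makes it verifiable independently of the issuer.
  },
};
```

Because the activity travels over ordinary ActivityPub, in principle any Fediverse server could receive and display it, which is what makes the decentralised recognition piece work.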

Oh I’m using more energy. I should really try to reduce it for the sake of the climate

Auto-generated description: A lone tree stands amidst vast, rolling sand dunes under a clear sky.

I could just point out that the author of this ‘cheat sheet’ for why generative AI is not bad for the environment is the Director of Effective Altruism DC. I could leave it there. But I’ll engage with Andy Masley’s post, for a couple of reasons.

First, there are still plenty of people who don’t realise that the reasonable-sounding ‘Effective Altruism’ movement is part of the TESCREAL tech bro cult. Second, Laura and I co-authored a paper for Friends of the Earth which is much more nuanced than this guy’s polemic.

So let’s get into it.

Throughout this post I’ll assume the average ChatGPT query uses 3 Watt-hours (Wh) of energy, which is 10x as much as a Google search. This statistic is likely wrong. ChatGPT’s energy use is probably lower according to EpochAI. Hugging Face released a similar much lower estimate. Google’s might be lower too, or maybe higher now that they’re incorporating AI into every search. We’re a little in the dark on this, but we can set a reasonable range. It’s hard for me to find a statistic that implies ChatGPT uses more than 10x as much energy as Google, so I’ll stick with this as an upper bound to be charitable to ChatGPT’s critics.

It seems like image generators also use 3 Wh per prompt (with large error bars), so everything I say here also applies to AI images.

Um, no. Creating an image using AI uses about as much energy as charging your phone. Before I worked on the Friends of the Earth report, I thought that perhaps developments in AI would spur development of renewable energy. And they have. It’s just that, as we mentioned in the report, for example, “Between 2017 and 2023, all additional wind energy generation in Ireland was absorbed by data centres.”

ChatGPT uses 3 Wh…. You can look up how much 3 Wh costs in your area. In DC where I live it’s $0.00051. Think about how much your energy bill would have to increase before you noticed “Oh I’m using more energy. I should really try to reduce it for the sake of the climate.” What multiple of $0.00051 would that happen at? That can tell you roughly how many ChatGPT searches it’s okay for you to do.

According to the UN Information Centre, the average ChatGPT query costs approximately $0.0036 (0.36 cents). So seven times more than Masley quotes. But even then, you may think that’s not a lot of money.
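The gap between the two figures is easy to check. Here’s a quick back-of-the-envelope sketch; the inputs are the sources’ assumptions quoted above, not my own measurements:

```typescript
// Back-of-the-envelope check of the figures above, using the post's
// assumptions: 3 Wh per ChatGPT query and a DC-area electricity price
// of roughly $0.17/kWh (which reproduces Masley's $0.00051 per query).
const whPerQuery = 3;     // Masley's assumed energy per query
const usdPerKwh = 0.17;   // approximate DC retail electricity price

const electricityCostPerQuery = (whPerQuery / 1000) * usdPerKwh;
console.log(electricityCostPerQuery.toFixed(5)); // 0.00051 (USD)

// The UN Information Centre's ~$0.0036 is a per-query cost estimate,
// not just the household electricity, hence the roughly 7x gap:
console.log((0.0036 / electricityCostPerQuery).toFixed(1)); // 7.1
```

Either way, the per-query number is tiny; the disagreement is about what happens in aggregate, which is where the figures below come in.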

Newer models, including the ones I use when doing research, take a ‘chain of reasoning’ approach which, in effect, runs multiple queries. When everyone is doing this, the electricity usage grows dramatically. As we point out in the Friends of the Earth report, by 2027 the generative AI sector will have the same annual energy demand as the Netherlands. “Data centres worldwide are responsible for 1-3% of global energy-related GHG emissions (around 330 Mt CO2 annually), mainly due to the massive energy demands required to maintain server farms and cooling systems.”

Chart showing amount of water used by doing various things

Sigh. This chart 🙄

These things are not equal. Just like with a previous chart, where Masley compares 50,000 ChatGPT searches with things like “living car-free” and “recycling”, this misses the point. How many times do you “download a phone app” compared to the number of times you’re likely to prompt an AI if you’ve adopted it as your main search engine?

Masley also fails to realise that, by shoving AI into everything, users are almost being forced into using the technology. This increases overall energy usage dramatically. In the Friends of the Earth report, we quote the UN Environment Programme as saying: “It is estimated that the global demand for water resulting from AI may reach 4.2–6.6 billion cubic metres in 2027. This would exceed half of the annual water use in the United Kingdom in 2023. Semiconductor production requires large amounts of pure water, while data centres use water indirectly for electricity generation and directly for cooling. The growing demand for data centres in warmer, water-scarce regions adds to water management challenges, leading to increased tension over water use between data centres and human need.”

So yes, Andy Masley, despite your protestations at the end of your “cheat sheet”, the whole thing is whataboutism. You don’t need to claim that generative AI is somehow evil and must be banned in order to want governments to regulate Big Tech for the benefit of the environment. A more nuanced approach would be to say that there are systemic issues at play, and that blaming users isn’t perhaps the best strategy. Although I do think a bit more AI Literacy is needed, in general…

Source: The Weird Turn Pro

Image: Jean Woloszczyk

The money extracted from fans who snap up their mediocre commodities out of parasocial loyalty

Auto-generated description: A smartphone is mounted on a selfie stick against a clear blue sky.

I’m sharing this post because I disagree with it; I think the author perhaps doesn’t see the bigger picture. The key point made by W. David Marx, who by his author photo looks about mid-forties, is that back in the 90s there was an ethical principle not to “sell out.” This was followed by artists first “selling out” and now we’re in the realm of the “double sell out.”

The reason I mention Marx’s age is that, like me, his teenage years were probably in the 1990s, and there’s a tendency to romanticise one’s youth. Especially when that decade was such a transitional time.

In the 1990s, there was a single ethical principle at the heart of youth culture — don’t sell out. There was a logic behind it: When artists serve the commercial marketplace, they blunt their pure artistic vision in compromising with conventional tastes. This ethic was also core to subcultures, which were supposed to be social spaces for personal expression and community bonding, not style laboratories for the fashion industry.

[…]

The 20th century taboo against selling out was, at its heart, a communal norm to reward young artists who focused on craft and punish those who appropriated art and subculture for empty profiteering. Now the culture is most exemplified by people whose entire end goal appears to be empty profiteering.

While what Marx is saying here isn’t wrong, I do think it misses the fact that our whole socio-economic and political systems are different in the 2020s than they were in the 1990s. We live in a time that is post-9/11, post-financial crash of 2007/8 and, of course, post-Covid. It’s a time of individualism, declining mainstream news media, conspiracy theories, and of technology mediating most interactions. This, in turn, has led to the normalisation of parasocial relationships. Influencers and the like are symptoms rather than causes.

I don’t particularly like this aspect of ‘culture’ in 2025, but to point the finger at the next generation for being ‘double sell-outs’ misses the point. It’s a form of victim-blaming.

At this point, the new ideal for an artistic career is what I’d call the “single sell-out.” The artist was “allowed” to make a few commercial compromises to gain attention in the increasingly competitive marketplace, but once they achieved fame and fortune, they were expected to use their vaulted platform to provide the world with meaningful and ground-breaking art. This actually did happen: The Neptunes leveraged their strong track record of pop hits to push legitimately bizarre minimalist tracks like Clipse’s “Grindin’” and Snoop Dogg’s “Drop It Like It’s Hot.” Beyoncé’s “Formation” was musically adventurous, and the video is now considered “the best of all time.”

Unfortunately these examples became rarer and rarer over time. In fact, the 21st century has been the age of the “double sell-out”: Creators who produce market-friendly content to achieve fame — and then use that fame to pursue even more commerce-for-commerce’s-sake. MrBeast is arguably one of the most important “creators” of our times. He dreams up, produces, and directs elaborate and sensational video content, which made him the #1 channel on YouTube. He then used this world-historical level of fame… to open a generic fast food chain. This has also become common amongst established stars: George Clooney worked hard for decades to become a well-respected actor… who could take the lead role in a Nespresso commercial.

[…]

If we want culture to be culture and not just advertorials for a sprawling network of micro-QVCs pumping out low-quality goods, an easy step would be to re-shift the norms towards, at least, “Don’t be a double sell-out.” This is already a quite generous compromise in that it blesses artists to be conventional to stabilize their income and try to win over large fanbases. But this esteem must be given on the promise that the money and fame are used in pursuit of artistic or creative innovation. Double sell-outs don’t deserve our esteem as “creative” people. They should be content with the reward they chose: the money extracted from fans who snap up their mediocre commodities out of parasocial loyalty.

Source: CULTURE: An Owner’s Manual

Image: Steve Gale

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent

Auto-generated description: Sunlight filters through green leaves, creating a warm, serene atmosphere.

This is a long-ish read, but worth it if you can spare the time beyond my summary. James Plunkett, author of End State: 9 Ways Society is Broken - and how we can fix it, gives examples of what he calls “pockets of vitality” in the UK, which are being overlooked with all of the focus on the rise of the Right.

I see some of this due to the cooperative networks I’m plugged into, but this post shows that there’s a lot more of which I’m unaware. I’m looking forward to following and reading more based on Plunkett’s extensive links.

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent. This seems to be at least partly because the Right, and Trumpism in particular, has mainlined energy from every weird corner of the internet, while elite progressivism is relatively detached from the wider ecosystem from which it drew energy historically.

Some people would say this is a function of progressive politics being at a low ebb in general. But I don’t think that’s quite right. It seems to me the vitality is out there, and is arguably at quite a high point, it’s just widely dispersed. And, for complicated reasons — maybe to unpack in a future post — this energy isn’t really flowing into, and reviving, the middle.

When I make this point, people sometimes ask me to point to the energy I have in mind, so I thought it might be interesting to name some examples. So, without trying to be comprehensive, here are ten dispersed pockets of impressive, hopeful, thoughtful work that I would call progressive.*

[* — I’m using the word ‘progressive’ here quite broadly, in its more literal and historical sense. I’m not saying that these are examples of ‘leftwing’ energy. I’m calling them progressive in the sense that they embody high hopes for what people can achieve by collaborating. i.e. these are all people working hard to improve governance, broadly defined. Or, even more broadly, they are people who are developing new and more effective cooperative practices — ways we can make our lives better together.]

The “ten pockets of vitality” he points to, giving examples for each one, are:

  1. Contemporary civics — “rejuvenat[ing] a thicker, more active conception of citizenship and civic life”
  2. Community agency — “a… specific set of techniques, now mature in both theory and practice, to activate agency in communities”
  3. Deliberative democracy — “about seeing democracy as a living process in which we debate, listen, and change our minds” with “democracy as residing in neighbourhoods, more than in elections”
  4. Relational state capacity — “underpinned by deep theory but also embodied in a set of ready-to-use practices”
  5. Internet-era ways of working — “an obvious one but it’s worth mentioning… because diffusion still has decades to run. We now have a whole generation of people who are native to internet-era operating models, moving up through the public and civic sectors, transforming institutions from within. These people are still in the minority, and the winds of inertia are still gale force, but they’re a powerful and widely dispersed source of energy — dotted across local government, charities, and in central departments”
  6. New delivery philosophy — “the basic idea is to transform the centre of government by working at pace at the edges, and seeing what stops you”
  7. Novel institutional forms — “ways to organise human activity that differ from the predominant forms of the 20th century… broaden[ing] out into a more abstract but important debate about the right metaphors and mental models for future governance”
  8. The climate movement — “different to the others in the list in that it’s a vertical rather than a horizontal”
  9. Post-capitalist or non-extractive economic models — “the essence of this work is to experiment with economic models that are regenerative and distributive by design”
  10. Regulating a digital economy — “when I talk about pockets of energy here, I’m thinking partly of the more creative/rebellious thinkers working on these challenges within regulators, but also of the high calibre of debate that exists around regulators”

As ever, innovation is at the edges, helping move the Overton Window, and coming up with ideas to slot in when there’s a crisis:

In essence, I think what’s happening here is that the dominant logic of the old system — a blend of social democratic Fabianism, technocracy, and a narrow class of institutional forms and managerial practices — has proven incapable of governing affordably, safely, and responsively in contemporary conditions (for example, in light of the complexity of accumulated ecological and human crises (loneliness, mental illness, etc), and the first and second order effects of digital technology).

[…]

The middle of a system… isn’t just insulated, but, worse, is subject to forces that inhibit change or distort the necessary signals and feedback loops. For one thing, the middle of a system is where those sociological forces are strongest. Deep inside systems, people get locked into a gamified world that has a tight internal coherence, but little link to outside conditions.

Source: James Plunkett

Image: Micah Hallahan

You can't lick a badger twice

Pixel art showing a blonde character licking a cartoon badger against a pink background.

I don’t use Google search and couldn’t get it to do this when I experimented, but apparently appending the word ‘meaning’ to any phrase leads to a curious result. The AI summary will make something up as if it’s some kind of folk wisdom.

It’s fun but, if you think about it for more than a second, also a bit dangerous. Those with lower digital literacy skills are likely to see the AI summary as authoritative. I even had to point this out to my GP when he quickly looked something up during a consultation!

I’d point out that DuckDuckGo, a search engine I’ve been using for over a decade, is much better on an everyday basis than Google. I mean, I spend a lot of time online and research is kinda part of my job. So take it from me, you do not need Google search.

Note: I don’t AI-generate many images these days, but I couldn’t resist it for this post!

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

[…]

…Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase — which was, after all, the user’s request — that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

[…]

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

Source: Ars Technica

Image: DeepImg

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens

Auto-generated description: A pattern of interconnected Chrome browser logos is arranged in a grid.

I left Mozilla a decade ago. Back then, most of their revenue came from the Google search deal in Firefox. With their browser share dwindling, you would have thought that they would have done a better job diversifying their income streams. But, no, over 80% of their funding still comes from Google.

Which is a problem. Because the reason Google even bothers to fund Mozilla to the tune of hundreds of millions of dollars is that they need Firefox to exist. If there’s no browser competition, then Chrome is a monopoly, and regulators can take action.

In addition to funding Mozilla (and therefore Firefox), Google also pumps around $18 billion (that’s $18,000 million!) to Apple for being the default search option in Safari. The fourth major web browser is Microsoft Edge. Guess what? It’s based on the open-source Chromium browser, which forms the basis of Google Chrome. I use Brave (also based on Chromium). The web browser market is essentially several Googles in a trench coat.

The US Department of Justice has argued that Google shouldn’t be able to make search deals with Mozilla and Apple. In addition, they’ve argued that Google should be forced to sell off Chrome, and be stopped from paying for Chrome and Chromium. Although Microsoft does contribute some code back to Chromium, it’s minuscule compared to Google’s contribution. So in terms of development budget, Microsoft Edge will lose around 94% of its funding if and when that happens.

This is terrible for the web, and it’s not exactly as if people haven’t been predicting this for years. One of the interested parties is, surprise surprise, OpenAI, the company behind ChatGPT. If they end up with Chrome, which has over 65% market share, it’s game over for privacy and security for most people. This is an existential crisis for the open web.

The DoJ’s argument against Google makes perfect sense. The Sherman Antitrust Act was specifically designed to target “competitors” who form illegal agreements to maintain monopoly power.

It’s obviously illegal for Google to prop up Mozilla Firefox and Apple Safari as if they were co-equal competitors to Chrome. And Chrome itself is the biggest “search-engine deal” of all, which is why the DoJ is so focused on forcing Google to divest from Chrome.

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens.

Forcing Google to stop funding its “competitors” and divest Chrome doesn’t just punish Google; it simultaneously pulls the financial rug out from under every single major browser, including those positioned as alternatives.

The laws intended to foster competition will inadvertently destabilize the foundational tools millions rely on to access the internet.

Source: Dan Fabulich

Image: Growtika

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean

Auto-generated description: A double exposure photograph features a person holding a bouquet of flowers, blending their silhouette creatively with the floral arrangement.

I’m working on an AI Literacy project at the moment which involves, in part, providing some guidance for the BBC. I’ve collected some frameworks which I’m going through with my WAO colleagues. Some are pretty useful, others are not.

We’re coming up with criteria to help guide our research, things such as whether a framework includes:

  1. Definition of (generative) AI
  2. Defined target audience(s)
  3. Explanation of how it was created (decisions, tradeoffs, names of authors, etc.)
  4. List of skills and competencies

In addition, it should come from a reputable source.

Beyond that, it would be nice to have:

  1. Examples of application to real-world situations and issues
  2. At least a mention of the difference between AI safety vs AI ethics
  3. A visual representation of the framework

I bring this up by way of context as Rachel Coldicutt’s recent post helps problematise not only AI Literacy, but AI itself. I’m not sure I’d share her ‘social’ definition of AI as “a set of extractive tools used to concentrate power and wealth” as it ascribes too much intentionality. However, I do think that the quotation from her which I’ve used to title this post is an important insight.

As I’ve discussed at length elsewhere, there are different kinds of ambiguity and a lot of language around AI is what I would deem “unproductively ambiguous.”

“AI literacy” is not just a matter of getting to grips with data and algorithms and learning how Microsoft tools actually work, it also requires understanding power, myths, and money. This blog post explores some ways those two letters have come to stand for so many different things.

There are many reasons AI is an ambiguous and shifting set of concepts; some are due to the technical complexity of the field and the rapidly unfolding geopolitical AI arms race, but others are related to straightforward media manipulation and the fact that awe and wonder can be catching. However, a fundamental reason AI is a confusing term is that it’s not actually the right terminology for the thing it describes.

[…]

“[A]rtificial intelligence” is not a highly specific technical label, but a name given in haste by someone writing a funding proposal. The fact that the term AI has persisted for so long and expanded to include the broader field of related computer science clearly indicates that many people find it useful, but you don’t need to get hung up on that particular pairing of terms or look for a deeper meaning to understand what it is. “Artificial intelligence” is almost like a nickname or a brand name; something understood by many to stand for something, rather than a precise description of any particular qualities.

[…]

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean – which in turn makes it harder to keep them accountable or to ask precise, difficult questions.

When heroic words are used to describe technologies that operate on the horizon of hope and ambition, it can feel awkward to ask practical questions such as “what are you actually proposing?” and “how will it work?”, but real knowledge requires detail and specificity rather than waves of shock and awe. AI technologies are not actually myths and should not be discussed as such; they are real technologies that use data, hardware, and human skills to achieve their social, economic, environmental, political, and technological change.

Source: Careful Industries

Image: Teena Lalawat

Cheat on everything?

Auto-generated description: Four cartoon robots are working at laptops with AI on their chests.

Stephen Downes shares news that Cluely, a startup promising that you can “cheat on everything” is proving controversial. As he says, the company “leans heavily into the ‘cheating’ aspect of the service, which is producing a not unexpected visceral reaction on the part of pundits.”

I tried Rewind.ai (currently rebranding to ‘Limitless’) when Paul Stamatiou was a co-founder. Instead of talking about “cheating” and creating socially awkward videos, Rewind.ai talks of being a “personalized AI powered by everything you’ve seen, said, or heard.” Well, so long as it happens on your computer. Presumably these people don’t go outside.

In my experience, startups get attention and traction by being genuinely useful and unique (very rare!), because there’s a big name attached to them (common), or because they’re socially transgressive. It feels to me like we’re seeing more of the last of these at the moment, including Mechanize which, somewhat laughably, believes that their “total addressable market” is “$60 trillion a year.”

That’s not to say that automation of many so-called “white collar” tasks isn’t possible or desirable. Just not by tech bros, thank you very much. I’d encourage you to read Fully Automated Luxury Communism for a more radical socialist look at how all this could play out.

On Sunday, 21-year-old Chungin “Roy” Lee announced he’s raised $5.3 million in seed funding from Abstract Ventures and Susa Ventures for his startup, Cluely, that offers an AI tool to “cheat on everything.”

The startup was born after Lee posted in a viral X thread that he was suspended by Columbia University after he and his co-founder developed a tool to cheat on job interviews for software engineers.

That tool, originally called Interview Coder, is now part of their San Francisco-based startup Cluely. It offers its users the chance to “cheat” on things like exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver.

Cluely has published a manifesto comparing itself to inventions like the calculator and spellcheck, which were originally derided as “cheating.”

Source: TechCrunch

Image: Mohamed Nohassi

These other, really important things intrude on my thinking and distract me

Auto-generated description: A notebook with a motivational quote about choices and realities is open next to a pen on a wooden surface.

The latest issue of New Philosopher magazine is about ‘choice’ and features a wonderful interview with Barry Schwartz, who is the Dorwin Cartwright Emeritus Professor of Social Theory and Social Action at Swarthmore College. He’s the author of The Paradox of Choice: Why More Is Less which I’ve added to my reading list.

I want to excerpt a couple of parts which I think are particularly insightful. The first is about how he reduced the assessment burden on young people, who he believes suffer from a greater decision burden than previous generations.

Zan Boag: I recall in one of your talks, you mentioned that it came as something of a revelation to you when you realised students simply didn’t have as much time as students in the past.

Barry Schwartz: That was my interpretation.

What I realised, or what I thought, I never gathered data on this in any official way, but when I went to school, so many of the really important decisions we face in life were essentially made for us. People were not plagued by questions of sexual identity, weren’t plagued by questions about what their romantic life should look like. Should I have a girlfriend? The default was yes. Should I get married? The default was yes. When should I get married? Soon as I graduated from college. That was the default, and so on. And so there were still issues like, how do I find the right person?

But it wasn’t the case that every last hour of your daily life was consumed by a need to focus on doing studies without having these other, really important things intrude on my thinking and distract me. Well, this was much less true for my children and it is ever so much less true for my grandchildren.

The second excerpt is the follow-up to the question about how problematic it is to be a ‘maximizer’ in life. I’d usually use the term ‘perfectionist’ and have certainly had to overcome this tendency in myself, as it just makes one miserable. As Schwartz points out, as you get older, you have to come to terms with the fact that you have chosen certain options instead of others, and to be satisfied with the way things are, rather than how they could have been.

Zan Boag: It makes it particularly difficult with these big life decisions, whether it’s jobs, where we live, or partners, because we’re faced with so much choice. People can always wonder about the life they could have led had they made a different decision – say to pursue writing instead of banking; move to San Francisco instead of Sydney; ballroom dancing over Taekwondo. They’re making choices that then will affect the way they lead their lives. Let’s call this a phantom life, the ‘other’ life. How can people find satisfaction with their choices when there are so many available, and the choices you make will often seem like the incorrect ones? How can they find some sort of satisfaction?

Barry Schwartz: I think in the book that I wrote, which by the way, as I told you in an email, I’m about to start writing a new edition of, I make some suggestions, but I think the truth of the matter is that it is very hard to shut off these enemies of satisfaction in the modern world. What we’re talking about, and what I wrote about, is rich society’s problem.

Most people in the world don’t have the problem that there are too many options. They have the opposite problem. But if you happen to live in a part of the world like you and I do, that is the problem. And we don’t have the tools for shutting it down. I make some suggestions, like limit the number of options you consider. Fine. I’m only going to look at six pairs of jeans. It’s one thing to say it and it is another thing to do it, and it’s still a hard thing to do and not be nagged by the knowledge that there are all these options out there that you didn’t look at.

It’s sort of like just quitting smoking. ‘Yeah, I’ll just quit smoking.’ Nice, easy to say, but really, really hard to do when you suffer at least initially when you quit smoking. And so, I think that you have to be prepared for a fair amount of discomfort and a lot of work to change your approach to making decisions, big ones or small ones.

It’s not a surprise to me that young people are in such bad shape because one of the things that we found is that the younger you are, the more likely you are to be a maximizer in decisions. I think one of the things that you learn as you age is that good enough is almost always good enough. But you don’t see too many 20-year-olds who think that. Experience teaches you that good enough is good enough.

After suffering for a generation or so, you settle into a life where you’re satisfied with good enough results of your decisions. But meanwhile, that’s 20 or 30 years of suffering. And what I think… I don’t know if you’re familiar with this somewhat controversial argument about what social media is doing to the welfare of young people.

Source: New Philosopher: Choice

Image: Elena Mozhvilo