Someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much

Bin bag and slippers next to a door

These days, I spread my social media attention between Mastodon, Bluesky, and LinkedIn. I’m not entirely sure what I’m doing on either of the latter two, if I’m perfectly honest.

LinkedIn is a horrible place that makes me feel bad. But it’s the only real connection I’ve got to some semblance of a ‘professional’ network. Bluesky, on the other hand, just seems like a pale imitation of what Twitter used to be. I’m spending less time there recently.

As Warren Ellis suggests in the following, if you’re going to jump ship from somewhere to somewhere else, it’s probably a good idea to check that you’re going to be treated well, long-term.

Seeing a lot of people in my RSS announcing they’ve deleted various social media products. Usually to announce they’re on BlueSky or Substack Notes or whatever today’s one is. I am not on any of the new ones and just left the old ones by the side of the road. Some say these accounts should be deleted so you’re not part of the overall user count, but I honestly don’t care that much. And doing all that just to state you’re signing up someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much… well, good luck.

Source: Warren Ellis

Image: Sven Brandsma

The rise of mass social platforms has been at the cost of a truly independent, truly open internet

Stormtroopers from Star Wars

Some wise words from Dan Sinker about how we need to reclaim the internet — and why.

I’ve been thinking recently about how anti-fascist writing circulated in Germany after Hitler’s rise. Called tarnschriften, or “hidden writings,” these pocket-sized essays, news updates, and how-tos were hidden inside the covers of mundane, everyday materials.

Get a few pages in to Der Kanarienvogel, “a practical handbook on the natural history, care, and breeding of the canary” and you’re no longer reading about how “the canary is one of the loveliest creatures on earth,” but instead getting the latest updates on the anti-Nazi resistance efforts of the German Communist Party.

[…]

We need to build new things in new ways independent of the oligarchs that now control the government after already controlling much of our lives.

That means moving away from the platforms that have dominated the way we’ve connected, collaborated, and disseminated information for the last couple decades. The rise of mass social platforms has been at the cost of a truly independent, truly open internet. But it’s still there. You can still build anything on it, free of platforms and the overreach of monopolists and oligarchs.

It also means reacquainting ourselves with offline connections. We’ve built for scale for so long (in our software and in our focus on swelling our own follower counts) that we’ve forgotten the power of a handful of people around a table. It’s time to stop chasing scale and start chasing the right people. Spread information table to table, person to person. 1:1 is everything right now.

And while we’re talking offline, let’s talk about making physical media again: music that can’t be taken away with a keystroke, movies that don’t involve a subscription, and news, writing, art and more that can be copied and printed and handed person-to-person—inside seed packets or not.

We have to become the media that has collapsed. Pick up the pieces and build anew. Build robust. Build independent.

Source: Dan Sinker

Image: Bryan McGowan

No breathless whispering of Marc Andreessen across some gilded dinner table

Screenshot of Readwise Reader app

I received Craig Mod’s most recent newsletter, in which he referenced a previous issue from last year. In that prior issue, he talked about ‘digital reading in 2024’; at the time, I mostly focused on his discussion of the mobile phone-sized BOOX Palma e-ink tablet.

However, he also talked about a company called Readwise which he advises. They’ve got a product, “a fabulous long form reading, meta-data-editing, article-organizing platform called Reader” which I’ve been experimenting with today. My workflow is usually based on Pocket, but it feels a bit disorganised and out-of-date in 2025. Reader has features such as the ability to highlight and import anything from the web, automatic article summaries, PDF import, all while also acting as a feed reader and somewhere you can send newsletters.

I have no affiliation, but it’s impressed me today. While Craig Mod likes his BOOX Palma, I prefer my full-size BOOX Note Air 2 e-ink tablet and Google Pixel Fold. Both are Android-based, and so both will be perfect for the Reader app. I’ll perhaps follow up when I’ve got my workflow more set up. (It’s $9.99/month once my month-long free trial finishes, but I should be able to get 50% off as a student!)

The Readwise Reader app imports long form articles with aplomb. Parses them almost always perfectly, and paginates fabulously. It also OCRs non-insanely-typeset PDFs into device-sized typographic goodness.

[…]

Some things I adore about Readwise Reader: Solid typography, excellent pagination (seriously, I love how they paginate articles — vertically, sensibly, for easy highlighting across page boundaries), being able to double-tap on a paragraph to highlight the whole thing (much easier than fiddling with sentence highlights, and often you want paragraph context anyway), and built in “ghost reader” functions which provide LLM-based summaries (useful to quickly remember why you saved a particular article) and also LLM-based dictionary / encyclopedia definitions (which have so far been pretty good? although I’d love to be able to load my own dictionaries into the system). I also love that Reader’s web app feels like a kind of “control center” that allows for easy editing of article metadata and more. Install the Obsidian plugin, and you have a full repository of reading history and notes, in Markdown, on your local machine. Reader also has Chrome / Safari plugins that make for one-tap adding to your article Inbox. If you copy a URL and open the Reader App, it’ll automagically ask if you want to add that article to your queue. Lots of nice affordances.

[…]

Readwise, too, is an interesting company. Bootstrapped. No breathless whispering of Marc Andreessen across some gilded dinner table. Just a real company making real money by selling useful services around reading. What a thing!

Source: Roden

It’s possible that OpenAI may some day be seen as the WeWork of AI

Chat interface with a question about the yellow umbrella protests and a detailed response about China's governance.

My LinkedIn and Bluesky accounts have been full of pretty much two things today: the 80th International Holocaust Remembrance Day, and a new Chinese AI model called DeepSeek r1.

There have been many, many hot takes about the latter. I’m not here to do anything other than point out how awesome it is that this runs offline, is Open Source, and has been trained for 100x less than the equivalent models provided by American companies such as OpenAI, Meta, et al. I also included the image at the top because how much this has to conform to official Chinese government ideology is, of course, one of the first things that any self-respecting techie will want to test.

As usual, if you’re going to read someone’s opinion about all of this, Ryan Broderick is your guy. Here’s part of what he said in his newsletter Garbage Day which, if you’re not subscribing to at this point, I’m not sure what you’re doing with your life.

Now, we don’t yet know how the American AI industry will react to DeepSeek, but OpenAI’s Sam Altman announced on Saturday that free ChatGPT users are getting access to a more advanced model. Likely as a way to quickly respond to the DeepSeek hype. Meta are also frantically beefing up their own AI tools. But it’s hard to imagine how American AI companies can compete after they spent the last four years insisting that they need infinite money to buy infinite computing power to accomplish what is now open source. DeepSeek r1 can even run without an internet connection. So it’s possible that OpenAI, the biggest money sink of all, may, as cognitive scientist and AI critic Gary Marcus wrote today, “some day be seen as the WeWork of AI.” And that some day might be sooner than you think. The mood is changing fast. El Salvador’s hustle bro millennial dictator Nayib Bukele posted on X over the weekend, “So, [more than] 95% of the cost of developing new AI models is purely overhead?”

But, like TikTok, it’s doubtful that American tech oligarchs are actually capable of accepting how screwed they are because AI is not just a massive pyramid scheme to them. It has ballooned out into a pseudo-religion. And Andreessen has spent the last week frantically posting through it, doing his best impression of a doomsday evangelist trying to convince his flock that, yes, he knew that the roadmap was changing and that, yes, the promised revelation is still coming.

“A world in which human wages crash from AI — logically, necessarily — is a world in which productivity growth goes through the roof, and prices for goods and services crash to near zero,” he wrote on X, quivering in his shell. “Everything you need and want for pennies.” Everything, it seems, also includes AI.

Source: Garbage Day

Image: Alexios Mantzarlis

The jobs of the future will involve cleaning up environmental and political and epistemological disaster

THERE ARE NO JOBS ON A DEAD PLANET. Global climate change strike - No Planet B - Global Climate Strike 09-20-2019

I saw something recently which suggested that, in the US at least, the number of jobs for software developers peaked in 2019 and has been going down ever since. Good job everyone didn’t retrain as programmers, then.

There are any number of think tanks and policy outlets which tell you what they think the future of work, society, economy, etc. will be like. Of course, none of these organisations is neutral and, at the end of the day, all have a worldview to foist upon the rest of us. The World Economic Forum is one of these bodies and, as Audrey Watters discusses in her latest missive, it predicts the most ridiculous things.

I remember reading Fully Automated Luxury Communism by Aaron Bastani when it came out, pre-pandemic. I was optimistic about the role of technology, including AI, as a way of providing everyone’s needs. But the way that it’s actually being rolled-out, especially post-pandemic, when the hypercapitalists and neo-fascists have removed their masks, has left me somewhat more fearful.

It’s a broad generalisation, but you’ve essentially got two options in your working life: you can be part of the problem, or you can be part of the solution. Sadly, there’s a lot of money to be made in being part of the problem.

Reports issued by the World Economic Forum and the like are a kind of “futurology” – speculation, predictive modeling, planning for the future. “Futurology” and its version of “futurism” emerged in the mid-twentieth century as an attempt to control (and transform) the Cold War world through new kinds of knowledge production and social engineering, new technologies of knowledge production and social engineering to be more precise. (This futurism is different than the Marinetti version, the fascist version. Different-ish.) As Jenny Andersson writes in her history of post-war “future studies,” The Future of the World, these “predictive techniques rarely sought to produce objective representation of a probable future; they tried, rather, to find potential levers with which to influence human action.” These techniques, such as the Delphi method popularized by RAND, are highly technocratic — maybe even “cybernetic”? — and are deeply, deeply intertwined with not just economic forecasting, but with military scenario planning.

[…]

Futurology has always tried to sell a shiny, exciting vision for tomorrow — that is, as I argued above, what it was designed to do. But all this — all this — feels remarkably grim, despite the happy drumbeat. Without a radical adjustment to these plans for energy usage and for knowledge automation, jobs of the future seem likely to entail things much less glamorous (or highly paid) than the invented work that gets touted in headlines (and here again, the call for this “masculine energy” sort of shit invoked the explicitly fascist elements of futurism).

[…]

The jobs of the future will involve cleaning up environmental and political and epistemological disaster. They will involve care, for the human and more-than-human world. Of course, that’s always been the work. That’s always been the consequence, always the fallout — the caretakers of the world already know.

Source: Second Breakfast (paywalled for members)

Image: Markus Spiske

Making and remaking the instruments of our own domination

View of Refik Anadol’s Large Nature Model: Coral at the United Nations Headquarters, New York, September 21, 2024.

In this searing essay by R.H. Lossin, the first of an eventual two-parter, she takes aim at the absurdity of using generative AI for anything other than propping up the existing, dominant culture. Citing Raymond Williams' Culture and Materialism, Lossin explains that AI is the perfect tool for continually remaking cultural hegemony, for creating a normative ‘vibe’ which prevents reflection on what is really going on underneath the surface.

This is the first time, I think, that I’ve come across e-flux, which “spans numerous strains of critical discourse in art, architecture, film, and theory, and connects many of the most significant art institutions with audiences around the world.” Suffice to say, I’ve subscribed, so there will be more from this outlet featured on Thought Shrapnel over the coming weeks and months.

“Hegemony,” wrote Raymond Williams, “is the continual making and remaking of an effective dominant culture.” The concept of hegemony was used by Williams as a way to rescue culture from a reductive and one-way formulation of base and superstructure, where the base—Fordist manufacturing for example—is the cause of the superstructure or all things “merely cultural.” Rather, hegemony places literature, paintings, films, dance, television, music, and so on at the center of how a dominant culture rules or how a ruling class dominates. This is not to assert that art is propaganda for capitalism (although sometimes it is). Nor is it to revert to theories of “art for art’s sake” and the normative metaphysics of liberal cultural criticism (Art’s social value is its independence from politics. What about “beauty”? etc.). According to Williams’s theory of hegemony, art is one way of enlisting our desire in the “making and remaking” our own domination. But desire is unstable and, as an important part of maintaining a dominant culture, art is also, potentially, a means of its unmaking.

Hegemony, it should be noted, is not non-violent. It is always backed up by force, but it allows power to maintain itself without constant recourse to the police or justice system. Within the boundaries of an imperial power at least, hegemony allows ruling classes to govern with the enthusiastic consent and participation of subjects who assume that, for all of its problems, this social order is worth preserving in some form. Hegemony is most effective when it is experienced as sentiment (this movie is “fun to watch,” that immersive experience is “cool”) and understood as common sense (technology is not the problem, it is just used badly by capitalists).

[…]

As datasets continue to increase quantitatively, their fascist exclusions are concealed by the extent of their extraction, but they are no more universal than the universalism of, say, the European Enlightenment. The repetitive, homogenous output of image generators and their non-relation to distinct inputs, even the uneasy intuition that you’ve seen it somewhere already, demonstrates the extent of this exclusion. In a structure that mimics the extractive devastation required to power these screen dreams, the more data it collects the more thoroughly decimated the informational landscape becomes. Rather than the adage “garbage in, garbage out,” favored by computer scientists and statisticians, AI’s transformation of inputs into visual objects is a matter of “value in, garbage out.” Art collection in, garbage out; literature in, garbage out; apples in, garbage out; human subject in, garbage out; Indigenous lifeways in, garbage out.

We are aware of the capacity of capitalism to co-opt oppositional cultural practices. However, not everything is equally visible to the dominant gaze. Because “the internal structures” of hegemony—such as artistic production and institutional promotion—“have continually to be renewed, recreated, and defended,” writes Williams, “they can be continually challenged and in certain respects modified.” The dominant culture will always overlook certain “sources of actual human practice,” and this leaves us with what Williams calls residual and emergent practices. Practices that have escaped, momentarily, or been forgotten by this oppressive selection process; fugitive practices that offer some extant, counterhegemonic possibilities. This is precisely why the “democratic” tendency of ever-expanding datasets is disturbing rather than comforting. It is also why a defense against the oppressive expansion of generative AI needs to be sought outside of a neural network in actual social relationships.

Source: e-flux

Image: Loey Felipe (taken from the article)

Attribute substitution and human decision-making

Scrabble letters spelling out the word 'SUBSTITUTE' with the letter 'E' replaced by a blank

A few years ago, on one of my much-neglected ‘other’ blogs, I exhorted readers to sit with ambiguity for longer than they normally would. In that post, I focused on innovation projects. But our lack of tolerance for ambiguity is everywhere.

In this article, Adam Mastroianni discusses ‘attribute substitution’. It’s a heuristic, a shorthand way our brains work so that we can answer easier questions rather than harder ones. Although it can lend us a bias towards action, it’s kind of the opposite of living a reflective life influenced by historical insight and philosophical analysis.

The cool thing about attribute substitution is that it makes all of human decision making possible. If someone asks you whether you would like an all-expenses-paid two-week trip to Bali, you can spend a millisecond imagining yourself sipping a mai tai on a jet ski, and go “Yes please.” Without attribute substitution, you’d have to spend two weeks picturing every moment of the trip in real time (“Hold on, I’ve only made it to the continental breakfast”). That’s why humans are the only animals who get to ride jet skis, with a few notable exceptions.

The uncool thing about attribute substitution is that it’s the main source of human folly and misery. The mind doesn’t warn you that it’s replacing a hard question with an easy one by, say, ringing a little bell; if it did, you’d hear nothing but ding-a-ling from the moment you wake up to the moment you fall back asleep. Instead, the swapping happens subconsciously, and when it goes wrong—which it often does—it leaves no trace and no explanation. It’s like magically pulling a rabbit out of a hat, except 10% of the time, the rabbit is a tarantula instead.

I think a lot of us are walking around with undiagnosed cases of attribute substitution gone awry. We routinely outsource important questions to the brain’s intern, who spends like three seconds Googling, types a few words into ChatGPT (the free version) and then is like, “Here’s that report you wanted.”

[…]

Confusion, like every emotion, is a signal: it’s the ding-a-ling that tells you to think harder because things aren’t adding up. That’s why, as soon as we unlock the ability to feel confused, we also start learning all sorts of tricks for avoiding it in the first place, lest we ding-a-ling ourselves to death. That’s what every heuristic is—a way of short-circuiting our uncertainty, of decreasing the time spent scratching our heads so we can get back to what really matters (putting car keys in our mouths).

I think it’s cool that my mind can do all these tricks, but I’m trying to get comfortable scratching my head a little longer. Being alive is strange and mysterious, and I’d like to spend some time with that fact while I’ve got the chance, to visit the jagged shoreline where the bit that I know meets the infinite that I don’t know, and to be at peace sitting there a while, accompanied by nothing but the ring of my own confusion and the crunch of delicious car keys.

Source: Experimental History

Image: Brett Jordan

Every billionaire really is a policy failure

A closeup of a US hundred dollar bill (Benjamin Franklin side).

I don’t really understand people who look at billionaires as anything other than an aberration of the system. They are not, in any way, people to be looked up to, imitated, or praised.

What probably makes it easier for me is that I see pretty much every form of hierarchical organisation-for-profit as something to be avoided. The CEO who exerts downward pressure on wages, resists unionisation, and enjoys the fruits of other people’s labour, is merely different in terms of scale.

If multi-millionaires exist out of the normal cycle of everyday life, billionaires certainly do. That alone makes them spectacularly unfit to be anywhere near the levers of power, to dictate economic policy, or to make pronouncements that anyone in their right mind should listen to.

It’s a mind-bogglingly large sum of money, so let’s try to make it meaningful in day-to-day terms. If someone gave you $1,000 every single day and you didn’t spend a cent, it would take you three years to save up a million dollars. If you wanted to save a billion, you’d be waiting around 2,740 years… All this shows how the personal wealth of billionaires cannot be made through hard work alone. The accumulation of extreme wealth depends on other systems, such as exploitative labor practices, tax breaks, and loopholes that are beyond the reach of most ordinary people.
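The arithmetic in that excerpt is easy to check for yourself. Here’s a quick sketch (my own, not from the article) assuming a 365-day year and saving every cent of $1,000 a day:

```python
# Sanity-check the "saving $1,000 a day" arithmetic from the excerpt.
DOLLARS_PER_DAY = 1_000
DAYS_PER_YEAR = 365

years_to_million = 1_000_000 / DOLLARS_PER_DAY / DAYS_PER_YEAR
years_to_billion = 1_000_000_000 / DOLLARS_PER_DAY / DAYS_PER_YEAR

print(f"A million takes about {years_to_million:.1f} years")   # roughly 2.7
print(f"A billion takes about {years_to_billion:,.0f} years")  # roughly 2,740
```

The article rounds the first figure up to three years; the second figure, around 2,740 years, is exactly the one quoted.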

[…]

The notion that a billionaire has worked hard for every penny of their wealth is simply fanciful. The median U.S. salary is $34,612, but even if you tripled that and saved every penny for a lifetime, you still wouldn’t accumulate anywhere close to a billion dollars. Here, it’s also worth looking at Oxfam’s extensive study on extreme wealth, which found that approximately one-third of global billionaire fortunes were inherited. It’s not about working harder, smarter, or better. There are many factors built into our economic system that help extreme wealth to multiply fast. It’s a matter of being well-placed to benefit from the structures that favor capital and produce a profit off the back of exploitation.

[…]

Jeff Bezos could give every single one of his 876,000 employees a $105,000 bonus and he’d still be as rich as he was at the start of the pandemic.

[…]

It’s true that the billionaire class creates jobs and that wages have the potential to drive the economy, but that argument falters when workers barely have enough to survive. The potential to generate tax dollars from billion-dollar profits is enormous. Oxfam found that if the world’s richest 1% paid just 0.5% more in tax, we could educate all 262 million children who are currently out of school and provide health care to save the lives of 3.3 million. But given generous tax cuts and easily exploitable loopholes like the ability to register wealth in offshore tax havens, this rarely comes to pass.

[…]

Some favor the adoption of universal social security measures, paid for via progressive taxes. It’s been argued that Universal Basic Income, Guaranteed Minimum Income, and Universal Basic Services could aid prosperity in a world grappling with growing populations, societal aging, and climate breakdown. Piecemeal proposals are not enough to remedy a crisis of poverty in the midst of plenty. And a fair world would not further the acceleration of either.

Source: Teen Vogue

Image: Adam Nir

When everything is automated in an information vacuum, conspiracies abound

Man sitting on wall wearing a face mask with his arm resting on an Uber Eats delivery bag

I think it’s important to pay attention to what’s happening in the so-called “gig economy” as it’s effectively what capitalists would do to all of us if they could get away with it. In this case, The Guardian looks at couriers working for apps such as Uber Eats, Just Eat and Deliveroo.

Sure enough, the couriers have no real idea what’s going on in terms of allocation of work. So they turn to workarounds and conspiracy theories. I can’t imagine this being good for anyone’s mental health.

The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?

“We can never work out the algorithm,” one of the drivers says, requesting anonymity for fear of losing work. They wonder if the app ignores them if they’ve done a few jobs already that hour, and experiment with standing inside the restaurant, on the pavement or in the car park to see if subtle shifts in geolocation matter.

“It’s an absolute nightmare,” says the driver, adding that they permanently lost access to one of the platforms over a matter of a “max five minutes” wait in getting to a restaurant while he finished another job for a different app. Sometimes he gets logged out for a couple of hours because his beard has grown, confusing the facial recognition software.

“It’s not at all like being an employee,” he says. He is regularly frustrated by having to challenge what appeared to be a shortfall in pay per job – sometimes just 10p, but at other times a few pounds. “There’s nobody you can talk to. Everything is automated.”

[…]

“Every worker should understand the basis on which they are paid,” said [James] Farrar [who has a lot of experience with gig economy apps]. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people.

“You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

Source: The Guardian

Image: Sargis Chilingaryan

Monetising our own attention

Stock price chart

It has been A Week. So I’ve only just caught up with Jay Springett’s weeknote from last week, in which he talks about the $TRUMP memecoin. Money hasn’t had any intrinsic value since the major currencies left the gold standard decades ago. Memecoins are like cryptocurrencies on steroids.

TRUMP Coin has sort of got lost in the noise in UK media due to the TikTok shut down. But it’s way way way more insane, and way more significant news. In the last 48 hours Trump’s net worth increased by FIFTY BILLION (ILLIQUID) DOLLARS. Just days before he becomes president.

Jay quotes himself about real time attention markets and ‘economic entertainment’. It’s fascinating, especially if you read books like Clay Shirky’s Here Comes Everybody back in the day:

The rise of real time attention markets, economic entertainment, prediction markets (and the coming era of Power Fandoms) are a kind of revenge of late 90’s early 00’s Utopianism. The idea of cognitive surplus. We’re starting to see the kinds of swarm/group intelligences predicted by Shirky / Tapscott – but distorted through contemporary capitalism’s relentless logic. It took super liquid markets and meme coins for them to emerge.

He and some others have been discussing what all this means, which led to a post by RM that channels the TV show Black Mirror. Even reading about this kind of stuff makes me feel about a million years old:

Personally, seeing your “value” as a volatile ticker must be truly psychologically draining. Imagine scaling that to a presidency. One day, your market cap soars; the next, an unpopular move collapses the coin. It’s like living in a Black Mirror episode where “market cap” equals self-worth and “24h volume” measures relevance.

[…]

Is this the world’s most ingenious social experiment, rewriting power, brand, and money dynamics? Or an accidental time bomb threatening presidential credibility? Unlike stocks reacting to politics, this directly monetizes an individual’s persona, allowing real-time buying and selling of reputation.

What does all of this mean in practice? I have no idea.

Source: thejaymo

Image: Maxim Hopman

Action stopping short of introducing compulsory national ID cards

Person holding black phone

It sounds like the UK government is preparing to bring in a dedicated app, initially for digital driving licences — as is happening elsewhere in the world — but eventually for everything from tax payment to benefit claims and reminding people what their National Insurance number is.

This is a fascinating area for me, for a couple of reasons. First, the technology mentioned (“allowing users to hide their addresses in certain situations”) makes me think this is very likely to be based on the Verifiable Credentials standard. This is the one that Open Badges, which I’ve been working on now for 14 years, is based on.

Second, there’s a huge resistance in this country to the idea of ID cards. That means initiatives such as this can aim for the kind of utility which ID cards would provide, but have to be presented in a way that is not ‘ID card-like’. Perhaps an app that focuses on providing immediate value in several areas will help with this.

Third, and finally, I’m delighted that the GOV.UK team behind this seems to have decided not to go with a solution based on Google/Apple wallets. It would have been a terrible decision to do that, akin to handing over the keys to the digital kingdom to non-state actors.

The virtual wallet is understood to have security measures similar to many banking apps, and only owners of respective licences will be able to access it through inbuilt security features in smartphones, such as biometrics and multi-factor authentication.

The voluntary digital option is to be introduced later this year, according to the Times. Possible features include allowing users to hide their addresses in certain situations, such as in bars or shops, and using virtual licences for age verification at supermarket self-checkouts.

The government is said to be considering integrating other services into the app, such as tax payments, benefits claims and other forms of identification such as national insurance numbers, but will stop short of introducing compulsory national ID cards, which were pushed for by former prime minister Tony Blair and William Hague.

Source: The Guardian

Image: Robin Worrall

At least until we’re dead, education’s purpose is to help us survive and thrive, not just get a job

A glass sphere on a log

Next time someone even suggests that education is merely the means of eventually finding ‘employment’ I’m just going to 301 redirect them to this magnificent rant by my extraordinarily talented colleague, Laura Hilliger.

I will be brief because some of my readers are not here for educational philosophy. For decades many in my network have championed actual education, the long-stretch goal of which is essentially self-actualisation. This is a term popularised by Maslow, but even Aristotle was pontificating about our human states of becoming. Education is, briefly, not only acquiring skills but realising our free will, potential and unique unicorn properties so that we can survive the shitshow that is existence. At least until we’re dead, education’s purpose is to help us survive and thrive, not just get a job.

In society, education is both contrasted and conflated with other terms like learning, training or skill development. The field is semantically messy, and at the end of the day many don’t care about actual education. For society writ-large, the purpose of education is not self-actualisation, but rather compliance, conformance and control. I’m not talking about educators, you fluffy, beautiful bandits of resistance leaders, I’m talking about the systems around and through which people have access to education. Learning to learn, being intellectually curious, bravely looking the human condition in the face – these are not economically responsible endeavours. Thus, they have traditionally been reserved for the privileged (and the possessed).

Source: Freshly Brewed Thoughts

Image: Look Up Look Down Photography

The time to prepare is now

Repeating image of four skulls with increasing doubling, blurring, ghosting, pixelation, and horizontal glitching.

Matt Webb thinks that countries need to be thinking about building a ‘strategic fact reserve’. It’s an interesting proposition but also… how has it come to this?!

[I]f I were to rank AI (not today’s AI but once it is fully developed and integrated) I’d say it’s probably not as critical infrastructure or capacity as energy, food or an education system.

But it’s probably on par with GPS. Which underpins everything from logistics to automating train announcements to retail.

[…]

I think we’re all assuming that the Internet Archive will remain available as raw feedstock, that Wikipedia will remain as a trusted source of facts to steer it; that there won’t be a shift in copyright law that makes it impossible to mulch books into matrices, and that governments will allow all of this data to cross borders once AI becomes part of national security.

Everything I’ve said is super low likelihood, but the difficulty with training data is that you can’t spend your way out of the problem in the future. The time to prepare is now.

[…]

Probably the best way to start is to take a snapshot of the internet and keep it somewhere really safe. We can sift through it later; the world’s data will never be more available or less contaminated than it is today. Like when GitHub stored all public code in an Arctic vault (02/02/2020): a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. Or the Svalbard Global Seed Vault.

But actually I think this is a job for librarians and archivists.

Source: Interconnected

Image: Kathryn Conrad

A vector for deciding who is disposable

A bird sitting on top of a dirt hill

I grew up under a government led by Margaret Thatcher. Thatcherism was a rejection of solidarity, the welfare state, and unions, and an embrace of neoliberalism, austerity, and British nationalism. It was an absolute breath of fresh air, therefore, when in 1997, as a 16 year old, I witnessed ‘New’ Labour sweeping to victory in the General Election.

What followed was revolutionary, at least in the place I grew up: Sure Start centres, investment in public services, and a real sense of togetherness throughout society. They lost power 15 years ago, and the period of Tory rule up to the middle of last year introduced Austerity 2.0, the polarisation of society, and chronic underfunding of the NHS and other essential services.

It’s surprising, therefore, that the first six months of Keir Starmer’s Labour government haven’t felt like much of a change from the Tory status quo. Perhaps the most obvious example of this is the recent announcement that AI will be ‘mainlined into the veins’ of the UK, using rhetoric one would expect from the right wing of politics. As I saw one person on social media say, this would have been very different had Starmer and co been seeking the support of the TUC and the Joseph Rowntree Foundation.

I’ve been listening to Helen Beetham’s new podcast in which she interviews Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. It’s not one of those episodes where you can be casually doing something else and half-listening, which is why I haven’t finished it yet. It has, however, prompted me to explore Dan’s blog, which is where I came across this post on ‘AI as Algorithmic Thatcherism’, written in late 2023.

It’s extraordinarily disingenuous for the government to say that the proposed move is going to ‘create jobs’, as the explicit goal of ‘efficiency’ is to remove bottlenecks. Those are usually human-shaped. Maybe we should stop speedrunning towards dystopia? We need to prepare for post-capitalism; it’s just a shame that our government is doubling down on hypercapitalism.

One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI’s shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn’t to “support” teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.

Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn’t provide insights as it’s just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

[…]

Shouldn’t we be resisting this gigantic, carbon emitting version of automated Thatcherism before it’s allowed to trash our remaining public services? It might be tempting to wait for a Labour victory at the next election; after all, they claim to back workplace protections and the social contract. Unfortunately they aren’t likely to restrain AI; if anything, the opposite. Under the malign influence of true believers like the Tony Blair Institute, whose vision for AI is a kind of global technocratic regime change, Labour is putting its weight behind AI as an engine of regeneration. It looks like stopping the megamachine is going to be down to ordinary workers and communities. Where is Ned Ludd when you need him?

Source: danmcquillan.org

Image: Mike Newbry

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures

Traffic cone in long grass

You may have noticed that nostalgia is, well, a vibe at the moment. Why is that? Because the present kinda sucks. Why does it suck? Because we live in completely unequal societies, increasingly ruled by demagogues.

Umair Haque, who was omnipresent on Medium pre-pandemic, seems now to have his own Ghost-powered publication, and has written about post-capitalism. It’s long, with short paragraphs and lots of italicising. But he knows what he’s talking about.

I’ve excerpted the key points, but I’d recommend clicking through and looking at the bullet-point list of things he suggests reorientating one’s life and career towards. It was pretty reaffirming for me, after a January without really enough work on, to know that getting a corporate job isn’t really a long-term solution.

The idea of late capitalism means all that. It means that people are immiserated, exploited, ruined, left desperate. That inequality soars. That there’s no future. That societies lose hope. But instead of coming together and having some kind of constructive revolution, and here we don’t have to agree with Marx, they have a fascist meltdown, which I think we can all agree is a Bad Thing.

People turn on one another. Societies shut down. Companies turn ultra-predatory. Cronyism runs rampant. Economies slide into depression. And instead of some form of positive collective action, the answer to all this tends to be conflict, and maybe even World War.

That’s late capitalism. It’s not just “this is dystopia” or “everything sucks” or even “I’m exploited to the bone.” It has that historical meaning, the very specific one: instead of doing anything positive, making wise decisions, people turn regressive, lose their thinking minds, turn on each other, and instead of the sort of class war Marx envisioned, turn to demagogues who end up starting very real ones instead.

[…]

If you’re middle aged, I’d bet that the above is already beginning to happen to you. You’re being forced out, at least if you’re in a corporate career. Every mistake isn’t just “I could lose the promotion,” it turned into “I could lose this job,” and now it’s, “that’s the end of my career, because I’ll never find another one.”

Understand that and face it. It is true. This trend of forcing middle aged people out—no matter what their accomplishments are—is here to stay now. It is never going away. This is what the “job market” is and will be for the rest of our lives, and probably beyond, because what did we learn earlier? Late capitalism recurs. It isn’t even a “stage,” as Marx’s descendants thought, but something more like a chronic condition. And we, unfortunately, have it.

[…]

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures. They might not know it yet. Their despair and bewilderment is a reflection of how little this guiding principle is discussed, understood, or talked about. That doesn’t mean that they all have to go out and be activists or revolutionaries, lol, not at all, we just discussed how being a creator is something that’s post-capitalist.

[…]

What does it mean to “be a post-capitalist”? Many of us are starting to find out. It means running a network, community, organization, thingie, maybe a business, in certain dimensions but not along strictly profit-maximizing capitalist lines, but more humanistic ones, in a sense, and that’s not a bad thing, when you think about it.

Source: the issue.

Image: Kevin Jarrett

One of the most disconnecting forces is our expectations of how others should be

A man sitting at a table talking to a woman

Years ago, I read The Art of Travel by Alain de Botton. It was long enough ago that it was my first introduction to Seneca’s observation that you can travel, but you can’t escape yourself.

This article by Philippa Perry — whose books How to Stay Sane and The Book You Wish Your Parents Had Read (and Your Children Will Be Glad That You Did) I’d highly recommend — points out that many of our problems stem from (how we conceptualise) our relationships with others.

Often, we believe the solution to our problems lies outside ourselves, believing that if we leave the job, the relationship, everything will be fine. Of course, that can sometimes be true and it’s important to be alert to situations which are truly damaging. But the path towards feeling more connected to others usually starts from within. We must examine how we talk to ourselves, uncover the covert beliefs we live by, and confront the darker aspects of our psyche. One of the most disconnecting forces is our expectations of how others should be – but learning to accept people and things we cannot change can help us become more sanguine.

Source: The Guardian

A certain brand of artistic criticism and commentary has become surprisingly rare

A skeleton, presumably representing Death, lifting his cloak to show some people a rainbow on a screen

Good stuff from Erik Hoel about, effectively, the need for more cultural criticism around the use of technology in society. Any article that appropriately quotes Neil Postman is alright by me, and the art (included here) from Alexander Naughton which accompanies the article? Wow.

[L]ately some decisions have been explicitly boundary-pushing in a shameless “Let’s speedrun to a bad outcome” way. I think most people would share the worry that a world where social media reactivity stems mainly from bots represents a step toward dystopia, a last severing of a social life that has already moved online. So news of these sorts of plans has come across to me about as sympathetically as someone putting on their monocle and practicing their Dr. Evil laugh in public.

Why the change? Why, especially, the brazenness?

Admittedly, any answer to this question will ignore some set of contributing causal factors. Here in the early days of the AI revolution, we suddenly have a bunch of new dimensions along which to move toward a dystopia, which means people are already fiddling with the sliders. That alone accounts for some of it.

But I think a major contributing cause is a more nebulous cultural reason, one outside tech itself, in that a certain brand of artistic criticism and commentary has become surprisingly rare. In the 20th century a mainstay of satire was skewering greedy corporate overreach, a theme that cropped up across different media and genres, from film to fiction. Many older examples are, well, obvious.

Source: The Intrinsic Perspective

The feedback has to be orders of magnitude faster than the situation being controlled

A roller coaster lit up at night with red lights

Tom Watson wrote up a workshop he ran on organisational resilience recently, quoting and linking to one of Roger Swannell’s weeknotes about feedback loops. The full quotation from Swannell, taken from his blog, reads:

One of the insights I found interesting is that for feedback loops to work effectively, the feedback has to be orders of magnitude faster than the situation being controlled. So, if we’re shipping fortnightly, then the feedback would have to be hourly in order for us to have any sense of what effect we’re having. In practice, it’s usually the other way round and feedback is much slower than the situation.
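Swannell’s rule of thumb can be illustrated with a toy simulation (my own sketch, not from either post — the drift, gain, and interval values are arbitrary): a process drifts away from its target every step, while a simple proportional controller corrects it using the most recent measurement, which only arrives every `feedback_interval` steps. When feedback lags the situation, the controller acts on stale readings.

```python
def run_loop(steps, feedback_interval, drift=1.0, gain=0.5):
    """Average absolute error of a drifting process controlled by a
    proportional controller that only receives fresh measurements
    every `feedback_interval` steps."""
    x = 0.0           # process state; the target is 0
    last_seen = 0.0   # most recent measurement the controller holds
    total_error = 0.0
    for t in range(steps):
        x += drift                      # the situation keeps changing
        if t % feedback_interval == 0:  # fresh feedback arrives
            last_seen = x
        x -= gain * last_seen           # correct using possibly stale data
        total_error += abs(x)
    return total_error / steps

# Fast feedback keeps the error bounded; slow feedback lets it explode.
fast = run_loop(200, feedback_interval=1)
slow = run_loop(200, feedback_interval=10)
```

With these made-up parameters the fast loop settles to a small steady error, while the ten-step loop over-corrects on stale readings and oscillates ever more wildly — the flavour of Swannell’s point that feedback needs to be much faster than the thing being controlled.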

Watson goes on to discuss this in terms of organisational resilience, mapping single-loop (Are we doing things right?), double-loop (Are we doing the right things?), and triple-loop learning (How do we decide what’s right?) onto the “Anticipate, Prepare, Respond, Adapt” approach.

Interestingly, the three things he suggests to help build organisational resilience (continual monitoring, open working, monthly reflections) are things central to our co-op:

[H]ere’s a couple of ideas to try that can help move our approach to learning forward.

  1. Make sure your monitoring and metrics allow you to answer the question “Are we doing things right?” in a timely manner. Short timeframes are generally better. Align to any decisions you need to make.

  2. Embrace open working - devote 20-30 minutes a week to allow you and your team to reflect on what is going well, what isn’t, what is challenging, what people are seeing.

  3. Put in monthly/quarterly sessions - maybe an hour where you explore the question “Why do we do it this way?” on a specific topic as a team. Use the weeknotes to start the culture of open reflection, use them to identify common topics that might be coming up.

Doing these 3 things will move you from being only in the Response phase, into anticipate and prepare phases. Or if you prefer from single to double loop learning.

Sources: Tomcw.xyz / Roger Swannell

Image: Aleksandr Popov

Who wants to have to speak the language of search engines to find what you need?

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.

It’s about a decade since I gave up on Google search. While I use Google services extensively for work and other areas of my life, search and personal email are not two of them. Instead, I use DuckDuckGo and, more recently, Perplexity Pro.

The latter is excellent, bypassing advertising and paid placements, acting as a natural language search agent for synthesising information. I tend to use it for information that would take several searches. Yesterday, for example, I gave it the following query: “I need a tool that can automatically take screenshots of a web page and then stitch them together. It should then make an animated GIF, scrolling through the page from top to bottom. The website requires a login, so ideally it should be a Chrome browser extension.” It gave me several options, approaching my request from multiple angles as there wasn’t a solution that did exactly what I needed.

Although this article in MIT Technology Review mentions Perplexity, it weirdly focuses mainly on Google and OpenAI. There’s no mention that you can choose between LLMs in Perplexity (I use Claude 3.5 Haiku) and the two issues it raises are copyright and hallucinations, rather than sustainability and privacy. Claude 3.5 Haiku is one of the lighter weight models when it comes to environmental impact, but it still consumes a lot more energy (and water, to cool the data centres) than a single DuckDuckGo search.

And then, when it comes to privacy, while it’s great that an LLM can personalise results based on what it already knows about you, there’s an amount of trust there that I’m increasingly wary of giving to companies like OpenAI. I cancelled and then resubscribed to ChatGPT last week. I’m not sure how long I can stomach the Sam Altman circus.

Ultimately, agentic search, where you ask a question in natural language and it shows you the sources it used to synthesise the answer, is the future. Perplexity seems pretty fair in this regard, pulling in my colleague Laura’s post as part of a response about the way that technology has shifted power over the last century. For me, this kind of thing is even more of a reason to work in the open.

There’s a critical digital literacies issue here, one that’s hinted at in the last paragraph of the article (included below) and discussed in Helen Beetham’s podcast episode with Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. When “the answer” is presented to you, there’s less incentive to do your own work in finding your own interpretation. I think that is definitely a risk. Although, given that the internet is a giant justification machine already, I’m not entirely sure it will necessarily make things worse — just perhaps make people a bit lazier.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

[…]

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

Source: MIT Technology Review

Image: Kathryn Conrad

AI slop as engagement bait

gray bucket on wooden table

A couple of months ago, I wrote a short post trying to define ‘AI slop’. It was the kind of post I write so that I can, myself, link back to something in passing as I write about related issues. It made me smile, therefore, that the (self-proclaimed) “world’s only lovable tech journalist” Mike Elgan included a link to it in a recent Computerworld article.

I’m surprised he didn’t link to the Wikipedia article on the subject, but then the reason I felt that I needed to write my post was that I didn’t feel the definitions there sufficed. I could have edited the article, but Wikipedia doesn’t include original content, and so I would have had to find a better definition instead of writing my own.

The interesting thing now is that I could potentially edit the Wikipedia article and include my definition because it’s been cited in Computerworld. But although I’ve got editing various pages of the world’s largest online encyclopedia on my long-list of things to do, the reality is that I can’t be doing with the politics. Especially at the moment.

According to Meta, the future of human connection is basically humans connecting with AI.

[…]

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed.

[…]

All this intentional AI fakery takes place on platforms where the biggest and most harmful quality is arguably bottomless pools of spammy AI slop generated by users without content-creation help from Meta.

The genre uses bad AI-generated, often-bizarre images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example).

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell.

Source: Computerworld

Image: pepe nero