Strava for Stoics?

Matt Webb is, like me, over 40 years of age. Although some would argue differently, it’s a time when you realise that your fastest days are behind you. So apps like Strava reminding you that you’re not quite as fast as you were a few years ago aren’t… particularly helpful.
There’s definitely a gap in the market for fitness apps for people who are no longer spring chickens and, although they like to challenge themselves occasionally, aren’t trying to smash it every time they go out for a run, cycle, or to the gym. I also appreciate Webb’s related point that there comes a time when reminders about things in life are just a bit painful. The opportunity not to be reminded about things would be nice.
Part of getting older is finding that my PBs each time I train up – personal bests – are not as quick as they were before. […]
I’m currently trying to increase my baseline endurance and went out for a 17 mi run a few days ago. Paused to take photos of the Thames Barrier and a rainbow, no stress. Beautiful. Felt ok when I finished – hey I made it back! Wild!
Then Strava showed me the almost identical run from exactly 5 years ago, I’d forgotten: a super steady pace, a whole minute per mile faster than this week’s tough muddle through. […]
Our quantified self apps (Strava is one) are built by people with time on their side and capacity to improve. A past achievement will almost certainly remind you of a happy day that can be visited again or, in the rear view mirror, has been joyfully surpassed. But for older people… And I’m not even that old, just past my physical peak… […]
I’m not asking Strava to hide these reminders. I’ve found peace (not as completely as I thought it turns out). But I don’t want to avoid those memories. Reminded, I do remember that time from 2020! It is a happy memory! I like to remember what I could do, even if it is slightly bittersweet! […]
And I’ll bet we’re all having that feeling a little bit, deep down, unnamed and unlocated and unconscious quite often, amplified by using these apps. So many of us are using apps that draw from the quantified self movement (Wikipedia) of the 2010s, in one way or another, and that movement was by young people. Perhaps there were considerations unaccounted for – getting older for one. There will be consequences for that, however subtle.
(Another blindspot of the under 40s: it is the most heartbreaking thing to see birthday notifications for people who are no longer with us. Please, please, surely there is a more humane way to handle this?)
So I can’t help viewing some of the present day’s fierce obsession with personal health and longevity or even brain uploads not as a healthy desire for progress but, amplified by poor design work, as an attempt to outrun death.
Source: Interconnected
Image: Tim Foster
How to Raise Your Artificial Intelligence

This is an absolutely incredible interview with Alison Gopnik (AG) and Melanie Mitchell (MM). Gopnik is a professor of psychology and philosophy who studies children’s learning and development, while Mitchell is a professor of computer science and complexity focusing on conceptual abstraction and analogy-making in AI systems.
There’s so much insight in here, so you’ll have to forgive me quoting it at length. I urge you to go and read the whole thing. What really stood out for me were Gopnik’s philosophical insights, grounded in her experience of child development. Fascinating.
AG: There is an implicit intuitive model that everyday people (including very smart people in the tech world) have about how intelligence works: there’s this mysterious substance called intelligence, and as you have more of it, you gain power and authority. But that’s just not the picture coming out of cognitive science. Rather, there’s this very wide array of different kinds of cognitive capacities, many of which trade off against each other. So being really good at one thing actually makes you worse at something else. To echo Melanie, one of the really interesting things we’re learning about LLMs is that things like grammar, which we might have thought required an independent-model-building kind of intelligence, you can get from extracting statistical patterns in data. LLMs provide a test case for asking, What can you learn just from transmission, just from extracting information from the people around you? And what requires independent exploration and being in the world?
[…]
MM: I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language. But a big part of what makes us intelligent is our ability to reflect on our own state. We have a sense for how confident we are about our own knowledge. This has been a big problem for LLMs. They have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language. Without some extra ability to ground what they’re saying in the world, they can’t really know if something they’re saying is true or false.
[…]
AG: Some things that seem very intuitive and emotional, like love or caring for children, are really important parts of our intelligence. Take the famous alignment problem in computer science: How do you make sure that AI has the same goals we do? Humans have had that problem since we evolved, right? We need to get a new generation of humans to have the right kinds of goals. And we know that other humans are going to be in different environments. The niche in which we evolved was a niche where everything was changing. What do you do when you know that the environment is going to change but you want to have other members of your species that are reasonably well aligned? Caregiving is one of the things that we do to make that happen. Every time we raise a new generation of children, we’re faced with this difficulty of here are these intelligences, they’re new, they’re different, they’re in a different environment, what can we do to make sure that they have the right kinds of goals? Caregiving might actually be a really powerful metaphor for thinking about our relationship with AIs as they develop. […]
Now, it’s not like we’re in the ballpark of raising AIs as if they were humans. But thinking about that possibility gives us a way of understanding what our relationship to artificial systems might be. Often the picture is that they’re either going to be our slaves or our masters, but that doesn’t seem like the right way of thinking about it. We often ask, Are they intelligent in the way we are? There’s this kind of competition between us and the AIs. But a more sensible way of thinking about AIs is as a technological complement. It’s funny because no one is perturbed by the fact that we all have little pocket calculators that can solve problems instantly. We don’t feel threatened by that. What we typically think is, With my calculator, I’m just better at math. […]
But we still have to put a lot of work into developing norms and regulations to deal with AI systems. An example I like to give is, imagine that it was 1880 and someone said, all right, we have this thing, electricity, that we know burns things down, and I think what we should do is put it in everybody’s houses. That would have seemed like a terribly dangerous idea. And it’s true—it is a really dangerous thing. And it only works because we have a very elaborate system of regulation. There’s no question that we’ve had to do that with cultural technologies as well. When print first appeared, it was open season. There was tons of misinformation and libel and problematic things that were printed. We gradually developed ideas like newspapers and editors. I think the same thing is going to be true with AI. At the moment, AI is just generating lots of text and pictures in a pretty random way. And if we’re going to be able to use it effectively, we’re going to have to develop the kinds of norms and regulations that we developed for other technologies. But saying that it’s not the robot that’s going to come and supplant us is not to say we don’t have anything to worry about. […]
Often the metaphor for an intelligent system is one that is trying to get the most power and the most resources. So if we had an intelligent AI, that’s what it would do. But from an evolutionary point of view, that’s not what happens at all. What you see among the more intelligent systems is that they’re more cooperative, they have more social bonds. That’s what comes with having a large brain: they have a longer period of childhood and more people taking care of children. Very often, a better way of thinking about what an intelligent system does is that it tries to maintain homeostasis. It tries to keep things in a stable place where it can survive, rather than trying to get as many resources as it possibly can. Even the little brine shrimp is trying to get enough food to live and avoid predators. It’s not thinking, Can I get all of the krill in the entire ocean? That model of an intelligent system doesn’t fit with what we know about how intelligent systems work.
Source: LA Review of Books
Building a quantum computer that can run reliable calculations is extremely difficult

Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science at Anglia Ruskin University, explains the difference between classical computing and quantum computing. The latter uses ‘qubits’ instead of bits, so instead of just being in the binary state of 0 or 1, they can be in either or both simultaneously.
Vicinanza gives the example of optimising flight paths for the 45,000+ flights, organised by 500+ airlines, using 4,000+ airports. With classical computing, this optimisation would be attempted sequentially using algorithms. It would take too long. In quantum computing, every permutation can be tried at the same time.
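To get a feel for why the sequential approach is hopeless, here’s a rough back-of-the-envelope sketch in Python. The figures are my own illustrative assumptions (routes with a given number of stops, a classical machine checking a billion orderings per second), not numbers from the article:

```python
import math

# Illustrative assumption: a classical machine that can check one
# billion candidate route orderings every second.
CHECKS_PER_SECOND = 1e9

for stops in (10, 15, 20, 25):
    orderings = math.factorial(stops)  # n! possible orderings of n stops
    years = orderings / CHECKS_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"{stops:>2} stops: {orderings:.2e} orderings, ~{years:.2e} years")
```

Even at a billion checks per second, a route with 25 stops already takes hundreds of millions of years to enumerate, and real airline networks are vastly bigger than that.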
Quantum computing deals in probabilities rather than certainties, so classical computing isn’t going away anytime soon. In fact, reading this article reminded me of using LLMs. They’re very useful, but you have to know how to use them — and you can’t necessarily take a single response at face value.
Quantum computers are incredibly powerful for solving specific problems – such as simulating the interactions between different molecules, finding the best solution from many options or dealing with encryption and decryption. However, they are not suited to every type of task.
Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.
There are at least two reasons for that. The first one is practical. Building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.
The second reason lies in the inherent uncertainty in dealing with qubits. Because qubits are in superposition (are neither a 0 nor a 1) they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer might return a different solution each time.
To address this uncertainty, quantum algorithms are typically run multiple times. The results are then analysed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from the inherently probabilistic quantum computations.
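That ‘run it many times and pick the most common answer’ procedure is easy to mimic classically. Here’s a toy Python simulation (no real quantum computation involved; the 60% per-shot success rate and the eight possible outcomes are arbitrary assumptions of mine):

```python
import random
from collections import Counter

def noisy_run(p_correct=0.6, n_outcomes=8, correct=3):
    """Toy stand-in for one 'shot' of a quantum algorithm: it returns
    the right answer with probability p_correct, otherwise noise."""
    if random.random() < p_correct:
        return correct
    return random.randrange(n_outcomes)

# Run the same 'algorithm' many times, then analyse the results
# statistically to find the most likely solution.
shots = [noisy_run() for _ in range(1000)]
answer, count = Counter(shots).most_common(1)[0]
print(f"Most likely solution: {answer} ({count / len(shots):.0%} of shots)")
```

Any single shot is unreliable, but across a thousand shots the correct answer dominates the tally.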
Source: The Conversation
Image: Sigmund
Playing stenographer in your little folding chair

It’s hard to avoid the drama unfolding at the start of the second Trump presidential term. I don’t even know, really, what’s going on — other than a lot of confusion and emotional violence. I guess the cruelty is the point.
Anyway, Ryan Broderick, of much-quoted Garbage Day fame, has some advice for journalists which is advice for us all, really: the world has changed, so it’s time to adapt. That doesn’t mean “sell out” or “abandon your ethics.” Quite the opposite.
Welcome to 2025. No one reads your website or watches your TV show. Subscription revenue will never truly replace ad revenue and ad revenue is never coming back. All of your influence is now determined by algorithms owned by tech oligarchs that stole your ad revenue and they not only hate you, personally, but have aligned themselves with a president that also hates you, personally. The information vacuum you created by selling yourself out for likes and shares and Facebook-funded pivot-to-video initiatives in the 2010s has been filled in by random “news influencers,” some of which are literally using ChatGPT to write their posts. While many others are just making shit up to go viral. And the people taking over the country currently have spent the last decade, in public, I might add, crafting a playbook — one you dismissed — that, if successful, means the end of everything that resembles America. And that includes our free and open and lazy mainstream media. And they’re pretty confident it’ll succeed because, unlike you, they know how broken the internet is now and are happy to take advantage of it. While I’m sure it feels very professional to continue playing stenographer in your little folding chair at the White House, they’re literally replacing you with podcasters as we speak. So this is it. Adapt or die. Or at the very least, die with some dignity.
Source: Garbage Day
Image: wu yi
3-column blog themes

This is more of a bookmark than a post, but I’ve only just discovered the blog of Garry Newman (who some might know from Garry’s Mod). Looking at the HTML, it doesn’t look like he’s using a particular generator (e.g. WordPress) or a theme, so he must have custom-built it.
What I love about it is the logical unfolding from the meta-level navigation on the left, through the column of densely-listed posts in the middle, to the display of the individual post on the right.
If you’re reading this and know of a similar blog theme, on any platform, could you let me know?
Source: garry.net
Those who find the texture of your mind boring or offensive can close the tab

In his most recent Monday Memo, Dave Gray explained how he channeled Brad Diderickson by composing his newsletter verbally while walking. That conversational style and approach is interesting and engaging, and a good way to not sound ‘stilted’ when writing. Another way is to teach yourself to touch-type, so that the words that are coming out of your brain appear on the computer screen quickly.
I’m thinking about this due to a post that I saw via Hacker News where Henrik Karlsson gives some advice for a friend who wants to start a blog. There are 19 pieces of advice, and #4 is:
People tend to sound more like themselves in chat messages than in blog posts. So perhaps write in the chat, rapidly, to a friend.
And #6:
One reason chat messages are unusually lively is that the format encourages you to write from emotion. You are talking to someone you like and you want to resonate with them, you want to make them laugh. This creates a surge in the writing. It is lovely. When you write from your head, your style sinks back under the waves.
But the best bit of advice in the list is, I think, #18:
In real life, you can’t go on and on about your obsessions; you have to tame yourself to not ruin the day for others. This is a good thing. Otherwise, we’d be ripping each other’s arms off like chimpanzees. But a blog is a tiny internet house where you decide the norms. And since there are already countless places where you can’t be yourself, there is no need to build another one of those. The law of the land is that everything you think is funny is funny. Those who find the texture of your mind boring or offensive can close the tab—no need to worry about them. It is good for the soul to have a place where being just the way you are is normal. And it is a service to others, too. You’ll be surprised how many people are laughably similar to you and who wish there was a place where they felt normal. You can build that.
If you’re reading this and don’t put your words out on the internet on a regular basis, why not change that?
Source: Escaping Flatland
Image: Justin Morgan
Prices and wages are a political matter, not an 'economic' one

Cory Doctorow is such an amazing writer and speaker. He explains reasonably complex things so concisely and straightforwardly. This is one such explainer, where he discusses how issues which are usually described as being to do with the ‘economy’ are actually to do with power.
This kind of reframing is really useful, especially for people who, like the proverbial fish swimming in water, haven’t really thought about what it means to live within capitalism.
The cost and price of a good or service is the tangible expression of power. It is a matter of politics, not economics. If consumer protection agencies demand that companies provide safe, well-manufactured goods, if there are prohibitions on price-fixing and profiteering, then value shifts from the corporation to its customers.
But if labor and consumer groups act in solidarity, then they can operate as a bloc and bosses and investors have to eat shit. Back in 2017, the pilots' union for American Airlines forced their bosses into a raise. Wall Street freaked out and tanked AA’s stock. Analysts for big banks were outraged. Citi’s Kevin Crissey summed up the situation perfectly, in a fuming memo: “This is frustrating. Labor is being paid first again. Shareholders get leftovers.”
Limiting the wealth of the investor class also limits their power, because money translates pretty directly into political power. This sets up a virtuous cycle: the less money the investor class has to spend on political projects, the more space there is for consumer- and labor-protection laws to be enacted and enforced. As labor and consumer law gets more stringent, the share of the national income going to people who make things, and people who use the things they make, goes up – and the share going to people who own things goes down.
Seen this way, it’s obvious that prices and wages are a political matter, not an “economic” one. Orthodox economists maintain the pretense that they practice a kind of physics of money, discovering the “natural,” “empirical” way that prices and wages move. They dress this up with mumbo-jumbo like the “efficient market hypothesis,” “price discovery,” “public choice,” and that old favorite, “trickle-down theory.” Strip away the doublespeak and it boils down to this: “Actually, your boss is right. He does deserve more of the value than you do.”
Even if you’ve been suckered by the lie that bosses have a legal “fiduciary duty” to maximize shareholder returns (this is a myth, by the way – no such law exists), it doesn’t follow that customers or workers share that fiduciary duty. As a customer, you are not legally obliged to arrange your affairs to maximize the dividends paid to investors in your corporate landlord or the merchants you patronize. As a worker, you are under no legal obligation to consider shareholders' interests when you bargain for wages, benefits and working conditions.
The “fiduciary duty” lie is another instance of politics masquerading as economics: even if bosses bargain for as big a slice of the pie as they can get, the size of that slice is determined by the relative power of bosses, customers and workers.
Source: Pluralistic
Image: Angèle Kamp
It seems there have been better times to be alive

Marina Hyde reflects, in her inimitable way, on the children’s commissioner’s report into why children became involved in last summer’s riots. Apparently, at least 147 were arrested and 84 charged, and almost all were boys. As she points out, it’s not exactly a great time to be a kid in the UK, is it?
Having been a teacher, and as a parent of two teenagers who survived the pandemic lockdowns, I can attest that the world looks pretty grim when they raise their heads from their phones and games controllers. Why would they bother? And what are we, as adults, doing about it?
Children might sometimes do very bad and stupid things, but they are not so stupid that they can’t see they live in a country where the gulf in opportunities is quite staggering. It’s droll to think that two months after the riots, we’d be listening to Keir Starmer’s blithe defence of his decision to take up the freebie loan of an £18m penthouse so his son could study for his GCSEs in peace and quiet. “Any parent would have made the same decision,” explained the prime minister. Any parent, if you please. I do wonder what on earth the parents of the rioting youngsters were doing making the choices they did. I would simply have let my teens spend the afternoon in an £18m penthouse instead. Anyway, speaking of guillotine-beckoning comments, perhaps it isn’t the most enormous surprise that the Channel 4 study found 47% of gen Z agreeing that “the entire way our society is organised must be radically changed through revolution”.
Again, it’s easy to dismiss, but if they believe these things, surely it’s on those of our generations who failed to make the status quo seem remotely appealing? Many of the behaviours of today’s teens and young adults are not simply thick / snowflakey / lazy, but rational responses to a world created by their elders, if not always betters. The childhood experience has deteriorated completely in the past 15 years or so. We have addicted children to – and depressed them with – smartphones, and done next to nothing about this no matter how much evidence of the most toxic harms mounts up. Children in the US are expected to tidy their rooms by generations who also expect them to rehearse active-shooter drills. We require young people to show gratitude for living in an iteration of capitalism in which they have not only no stake, but no obvious hope of getting a stake. It seems to them that there have been better times to be alive.
Source: The Guardian
Image: Wikimedia Commons
This is not the dystopia we were promised

Discovered via John Naughton’s Memex 1.1, this article by Henry Farrell explains how we’re not living in an Orwellian or Huxleyan dystopia, but one that resembles the writing of Philip K. Dick.
The reason for this analysis? Dick “was interested in seeing how people react when their reality starts to break down” and, if we’re honest, we can see this happening everywhere. It’s easy to point to Trump and other political examples, but it’s everywhere. People exist in their own little bubbles, interacting with other people who may or may not be real, via algorithms they do not control.
This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.
Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants leads to further complexity. It is becoming ever harder for companies to distinguish the behavior which they want to analyze from their own and others’ manipulations.
This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.
Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s.
[…]
In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore. Rumors that are spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and whether this or that trend is being driven by malign algorithms rather than real human beings.
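As an aside, the warring-bots anecdote in the passage above has a simple mechanism behind it. As I remember it from Michael Eisen’s write-up of the incident, one seller’s bot priced its copy slightly below its rival’s, while the rival’s bot priced its copy well above the first, so every automated reprice compounded the last one. A toy Python sketch, with multipliers and starting prices that are illustrative assumptions rather than verified figures:

```python
# Toy reconstruction of two repricing bots feeding off each other.
# The multipliers are from memory of reports about the incident;
# treat them, and the starting prices, as illustrative assumptions.
price_a, price_b = 100.00, 120.00
rounds = 0

while price_b < 23_698_655.93:
    price_a = round(0.9983 * price_b, 2)    # bot A: undercut rival slightly
    price_b = round(1.270589 * price_a, 2)  # bot B: price well above rival
    rounds += 1

print(f"${price_b:,.2f} after {rounds} rounds of automated repricing")
```

Because the two multipliers together exceed 1, the price grows exponentially; it takes only around fifty repricing rounds to go from roughly $100 to over $23 million.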
Source: Programmable Mutter
Image: Denys Nevozhai
The struggle for attention as the prime moral challenge of our time

I’m posting this from Andrew Curry mainly so I don’t forget the books referenced (already added to my Literal.club queue) and so that I can remember to check back on the Strother School of Radical Attention that he mentions.
Sadly, their awesome-looking courses are either in-person (US) or online at times where it’s the early hours of the morning here in the UK.
There’s something of a history now of writing about the commodification of attention as a feature of late stage capitalism. [Rhoda] Feng [writing in The American Prospect] spotlights a few. Tim Wu’s book The Attention Merchants traces this back to its origins in the late 19th century. James Williams’ Stand Out of Our Light: Freedom and Resistance in the Attention Economy positions the struggle for attention as the prime moral challenge of our time. Williams had previously worked for Google. The more recent collection Scenes of Attention: Essays on Mind, Time, and the Senses “took a multidisciplinary approach”, exploring the interplay between attention and practices like pedagogy, Buddhist meditation, and therapy.
[…]
In her review, Feng critiques the framing here of attention as scarcity—essentially an economics version that goes back to Herbert Simon in the 1970s, who said, presciently, that “a wealth of information creates a poverty of attention.”
Frankly, it is more serious than that: as James Williams says in Stand Out of Our Light,
“the main risk information abundance poses is not that one’s attention will be occupied or used up by information … but rather that one will lose control over one’s attentional processes.” The issue, one might say, is less about scarcity than sovereignty.
There are pockets of resistance which are trying to reclaim our sovereignty. Feng points to Friends of Attention, new to me, whose Manifesto calls for the emancipation of attention.
Source: Just Two Things
Image: Markus Spiske
You’re Just a Row in an Excel Table

I’ve only been made redundant once in my career, but I could see it coming, prepared for it, and jumped straight into full-time consultancy. However, my weird brain still surfaces it sometimes in the early hours of the morning when I can’t get back to sleep, along with other ‘failures’ in life. (It wasn’t a failure; I didn’t ‘fail’.)
The thing is, though, that working for any hierarchical organisation, whether it’s for-profit, non-profit, or otherwise, means that you have very little power or say in how things operate. What I liked about this article was how well it explains the difference between how you enter and how you leave an organisation.
The Stoic philosophers tell us that in life you should prepare for death. I don’t think it’s unreasonable in our working lives to also prepare for endings, and do them on our own terms.
For those like me who’ve experienced layoffs, work has become just that—work. You do what’s assigned, and if your company squanders your potential or forces you to waste time on unnecessary projects, you simply stop caring. You collect your paycheck at the end of the month, and that’s it. This is the new modern work: no more striving to be 40% better every year.
[…]
I’ve wanted to write about this topic for a long time, but it’s been difficult to find the energy. The subject itself is a deep disappointment for me, and every time I reflect on layoffs, it makes me profoundly sad. It’s a stark reminder of how companies treat workers as disposable. Before you join, they go to great lengths to make you feel valued and excited to accept their offer. You meet multiple people, and some even offer signing bonuses. But when layoffs come, you’re reduced to a name on a list. During the exit interview, a random person from the company reads a prepared script and can’t answer your questions. The HR team that once worked to make you feel valued doesn’t even conduct an actual conversation with you. That random person becomes the last connection you have to a company you spent years at.
Source: Mert Bulan
Image: Mitchell Luo
Not being bored is why you always feel busy

Kai Brach cites Anne Helen Petersen about cultural tipping points relating to technology use. Petersen, in turn, quotes Kate Lindsay who discusses the lack of boredom in our lives — which exhausts our brains.
In my own life I’ve found that anxiety can be absolutely paralysing, stopping me from getting even the smallest tasks done. I have techniques for getting around that, some of which involve supplements, but most of which boil down to just doing stuff. That lends a kind of momentum which allows me to get things done.
However, always doing things is tiring. It takes me back to a couple of posts: Who are you without the doing? and Taking breaks to be more human.
Or perhaps, as Anne Helen Petersen suggests in her latest piece, we’ve reached a cultural tipping point:
“The amount of space these technologies take up in our lives – and their ever-diminishing utility – has brought us to a sort of cultural tipping point. [Our feeds have completed their] years-long transformation from a neighborhood populated with friends to a glossy condo development of brands.”
The spaces we once inhabited feel increasingly alien, overtaken by algorithmic ghosts and corporate voices that leave us restless, overstimulated, yet empty and disconnected.
Petersen quotes Kate Lindsay’s writing about how boredom is missing in our lives – and it’s the perfect observation:
“Boredom is when you do the things that make you feel like you have life under control. Not being bored is why you always feel busy, why you keep ‘not having time’ to take a package to the post office or work on your novel. You do have time – you just spend it on your phone. By refusing to ever let your brain rest, you are choosing to watch other people’s lives through a screen at the expense of your own.”
Source: Dense Discovery
Image: Saketh
Someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much

These days, I spread my social media attention between Mastodon, Bluesky, and LinkedIn. I’m not entirely sure what I’m doing on either of the latter two, if I’m perfectly honest.
LinkedIn is a horrible place that makes me feel bad. But it’s the only real connection I’ve got to some semblance of a ‘professional’ network. Bluesky, on the other hand, just seems like a pale imitation of what Twitter used to be. I’m spending less time there recently.
As Warren Ellis suggests in the following, if you’re going to jump ship from somewhere to somewhere else, it’s probably a good idea to check that you’ll be treated well, long-term.
Seeing a lot of people in my RSS announcing they’ve deleted various social media products. Usually to announce they’re on BlueSky or Substack Notes or whatever today’s one is. I am not on any of the new ones and just left the old ones by the side of the road. Some say these accounts should be deleted so you’re not part of the overall user count, but I honestly don’t care that much. And doing all that just to state you’re signing up someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much… well, good luck.
Source: Warren Ellis
Image: Sven Brandsma
The rise of mass social platforms has been at the cost of a truly independent, truly open internet

Some wise words from Dan Sinker about how we need to reclaim the internet — and why.
I’ve been thinking recently about how anti-fascist writing circulated in Germany after Hitler’s rise. Called tarnschriften, or “hidden writings,” these pocket-sized essays, news updates, and how-tos were hidden inside the covers of mundane, everyday materials.
Get a few pages in to Der Kanarienvogel, “a practical handbook on the natural history, care, and breeding of the canary” and you’re no longer reading about how “the canary is one of the loveliest creatures on earth,” but instead getting the latest updates on the anti-Nazi resistance efforts of the German Communist Party.
[…]
We need to build new things in new ways independent of the oligarchs that now control the government after already controlling much of our lives.
That means moving away from the platforms that have dominated the way we’ve connected, collaborated, and disseminated information for the last couple decades. The rise of mass social platforms has been at the cost of a truly independent, truly open internet. But it’s still there. You can still build anything on it, free of platforms and the overreach of monopolists and oligarchs.
It also means reacquainting ourselves with offline connections. We’ve built for scale for so long (in our software and in our focus on swelling our own follower counts) that we’ve forgotten the power of a handful of people around a table. It’s time to stop chasing scale and start chasing the right people. Spread information table to table, person to person. 1:1 is everything right now.
And while we’re talking offline, let’s talk about making physical media again: music that can’t be taken away with a keystroke, movies that don’t involve a subscription, and news, writing, art and more that can be copied and printed and handed person-to-person—inside seed packets or not.
We have to become the media that has collapsed. Pick up the pieces and build anew. Build robust. Build independent.
Source: Dan Sinker
Image: Bryan McGowan
No breathless whispering of Marc Andreessen across some gilded dinner table

I received Craig Mod’s most recent newsletter, in which he referenced a previous issue from last year. In that prior issue he talked about ‘digital reading in 2024’; at the time, I mostly focused on his discussion of the mobile-phone-sized BOOX Palma e-ink tablet.
However, he also talked about a company called Readwise, which he advises. They’ve got a product, “a fabulous long form reading, meta-data-editing, article-organizing platform called Reader”, which I’ve been experimenting with today. My workflow is usually based on Pocket, but it feels a bit disorganised and out-of-date in 2025. Reader has features such as the ability to highlight and import anything from the web, automatic article summaries, and PDF import, all while also acting as a feed reader and somewhere you can send newsletters.
I have no affiliation, but it’s impressed me today. While Craig Mod likes his BOOX Palma, I prefer my full-size BOOX Note Air 2 e-ink tablet and Google Pixel Fold. Both are Android-based, and so both will be perfect for the Reader app. I’ll perhaps follow up when I’ve got my workflow more set up. (It’s $9.99/month once my month-long free trial finishes, but I should be able to get 50% off as a student!)
The Readwise Reader app imports long form articles with aplomb. Parses them almost always perfectly, and paginates fabulously. It also OCRs non-insanely-typeset PDFs into device-sized typographic goodness.
[…]
Some things I adore about Readwise Reader: Solid typography, excellent pagination (seriously, I love how they paginate articles — vertically, sensibly, for easy highlighting across page boundaries), being able to double-tap on a paragraph to highlight the whole thing (much easier than fiddling with sentence highlights, and often you want paragraph context anyway), and built in “ghost reader” functions which provide LLM-based summaries (useful to quickly remember why you saved a particular article) and also LLM-based dictionary / encyclopedia definitions (which have so far been pretty good? although I’d love to be able to load my own dictionaries into the system). I also love that Reader’s web app feels like a kind of “control center” that allows for easy editing of article metadata and more. Install the Obsidian plugin, and you have a full repository of reading history and notes, in Markdown, on your local machine. Reader also has Chrome / Safari plugins that make for one-tap adding to your article Inbox. If you copy a URL and open the Reader App, it’ll automagically ask if you want to add that article to your queue. Lots of nice affordances.
[…]
Readwise, too, is an interesting company. Bootstrapped. No breathless whispering of Marc Andreessen across some gilded dinner table. Just a real company making real money by selling useful services around reading. What a thing!
Source: Roden
It’s possible that OpenAI may some day be seen as the WeWork of AI

My LinkedIn and Bluesky accounts have been full of pretty much two things today: the 80th International Holocaust Remembrance Day, and a new Chinese AI model called DeepSeek r1.
There have been many, many hot takes about the latter. I’m not here to do anything other than point out how awesome it is that this runs offline, is Open Source, and has been trained for 100x less than the equivalent models provided by American companies such as OpenAI, Meta, et al. I also included the image at the top because how much this has to conform to the official Chinese government ideology is, of course, one of the first things that any self-respecting techie will want to test.
As usual, if you’re going to read someone’s opinion about all of this, Ryan Broderick is your guy. Here’s part of what he said in his newsletter Garbage Day which, if you’re not subscribing to at this point, I’m not sure what you’re doing with your life.
Now, we don’t yet know how the American AI industry will react to DeepSeek, but OpenAI’s Sam Altman announced on Saturday that free ChatGPT users are getting access to a more advanced model. Likely as a way to quickly respond to the DeepSeek hype. Meta are also frantically beefing up their own AI tools. But it’s hard to imagine how American AI companies can compete after they spent the last four years insisting that they need infinite money to buy infinite computing power to accomplish what is now open source. DeepSeek r1 can even run without an internet connection. So it’s possible that OpenAI, the biggest money sink of all, may, as cognitive scientist and AI critic Gary Marcus wrote today, “some day be seen as the WeWork of AI.” And that some day might be sooner than you think. The mood is changing fast. El Salvador’s hustle bro millennial dictator Nayib Bukele posted on X over the weekend, “So, [more than] 95% of the cost of developing new AI models is purely overhead?”
But, like TikTok, it’s doubtful that American tech oligarchs are actually capable of accepting how screwed they are because AI is not just a massive pyramid scheme to them. It has ballooned out into a pseudo-religion. And Andreessen has spent the last week frantically posting through it, doing his best impression of a doomsday evangelist trying to convince his flock that, yes, he knew that the roadmap was changing and that, yes, the promised revelation is still coming.
“A world in which human wages crash from AI — logically, necessarily — is a world in which productivity growth goes through the roof, and prices for goods and services crash to near zero,” he wrote on X, quivering in his shell. “Everything you need and want for pennies.” Everything, it seems, also includes AI.
Source: Garbage Day
Image: Alexios Mantzarlis
The jobs of the future will involve cleaning up environmental and political and epistemological disaster

I saw something recently which suggested that, in the US at least, the number of jobs for software developers peaked in 2019 and has been going down ever since. Good job everyone didn’t retrain as programmers, then.
There are any number of think tanks and policy outlets which tell you what they think the future of work, society, economy, etc. will be like. Of course, none of these organisations is neutral and, at the end of the day, all have a worldview to foist upon the rest of us. The World Economic Forum is one of these bodies and, as Audrey Watters discusses in her latest missive, it predicts the most ridiculous things.
I remember reading Fully Automated Luxury Communism by Aaron Bastani when it came out, pre-pandemic. I was optimistic about the role of technology, including AI, as a way of providing for everyone’s needs. But the way that it’s actually being rolled out, especially post-pandemic, when the hypercapitalists and neo-fascists have removed their masks, has left me somewhat more fearful.
It’s a broad generalisation, but you’ve essentially got two options in your working life: you can be part of the problem, or you can be part of the solution. Sadly, there’s a lot of money to be made in being part of the problem.
Reports issued by the World Economic Forum and the like are a kind of “futurology” – speculation, predictive modeling, planning for the future. “Futurology” and its version of “futurism” emerged in the mid-twentieth century as an attempt to control (and transform) the Cold War world through new kinds of knowledge production and social engineering, new technologies of knowledge production and social engineering to be more precise. (This futurism is different than the Marinetti version, the fascist version. Different-ish.) As Jenny Andersson writes in her history of post-war “future studies,” The Future of the World, these “predictive techniques rarely sought to produce objective representation of a probable future; they tried, rather, to find potential levers with which to influence human action.” These techniques, such as the Delphi method popularized by RAND, are highly technocratic — maybe even “cybernetic”? — and are deeply, deeply intertwined with not just economic forecasting, but with military scenario planning.
[…]
Futurology has always tried to sell a shiny, exciting vision for tomorrow — that is, as I argued above, what it was designed to do. But all this — all this — feels remarkably grim, despite the happy drumbeat. Without a radical adjustment to these plans for energy usage and for knowledge automation, jobs of the future seem likely to entail things much less glamorous (or highly paid) than the invented work that gets touted in headlines (and here again, the call for this “masculine energy” sort of shit invoked the explicitly fascist elements of futurism).
[…]
The jobs of the future will involve cleaning up environmental and political and epistemological disaster. They will involve care, for the human and more-than-human world. Of course, that’s always been the work. That’s always been the consequence, always the fallout — the caretakers of the world already know.
Source: Second Breakfast (paywalled for members)
Image: Markus Spiske
Making and remaking the instruments of our own domination

In this searing essay by R.H. Lossin, the first of an eventual two-parter, she takes aim at the absurdity of using generative AI for anything other than propping up the existing, dominant culture. Citing Raymond Williams' Culture and Materialism, Lossin explains that AI is the perfect tool for continually remaking cultural hegemony, for creating a normative ‘vibe’ which prevents reflection on what is really going on underneath the surface.
This is the first time, I think, that I’ve come across e-flux, which “spans numerous strains of critical discourse in art, architecture, film, and theory, and connects many of the most significant art institutions with audiences around the world.” Suffice to say, I’ve subscribed, so there will be more from this outlet featured on Thought Shrapnel over the coming weeks and months.
“Hegemony,” wrote Raymond Williams, “is the continual making and remaking of an effective dominant culture.” The concept of hegemony was used by Williams as a way to rescue culture from a reductive and one-way formulation of base and superstructure, where the base—Fordist manufacturing for example—is the cause of the superstructure or all things “merely cultural.” Rather, hegemony places literature, paintings, films, dance, television, music, and so on at the center of how a dominant culture rules or how a ruling class dominates. This is not to assert that art is propaganda for capitalism (although sometimes it is). Nor is it to revert to theories of “art for art’s sake” and the normative metaphysics of liberal cultural criticism (Art’s social value is its independence from politics. What about “beauty”? etc.). According to Williams’s theory of hegemony, art is one way of enlisting our desire in the “making and remaking” of our own domination. But desire is unstable and, as an important part of maintaining a dominant culture, art is also, potentially, a means of its unmaking.
Hegemony, it should be noted, is not non-violent. It is always backed up by force, but it allows power to maintain itself without constant recourse to the police or justice system. Within the boundaries of an imperial power at least, hegemony allows ruling classes to govern with the enthusiastic consent and participation of subjects who assume that, for all of its problems, this social order is worth preserving in some form. Hegemony is most effective when it is experienced as sentiment (this movie is “fun to watch,” that immersive experience is “cool”) and understood as common sense (technology is not the problem, it is just used badly by capitalists).
[…]
As datasets continue to increase quantitatively, their fascist exclusions are concealed by the extent of their extraction, but they are no more universal than the universalism of, say, the European Enlightenment. The repetitive, homogenous output of image generators and their non-relation to distinct inputs, even the uneasy intuition that you’ve seen it somewhere already, demonstrates the extent of this exclusion. In a structure that mimics the extractive devastation required to power these screen dreams, the more data it collects the more thoroughly decimated the informational landscape becomes. Rather than the adage “garbage in, garbage out,” favored by computer scientists and statisticians, AI’s transformation of inputs into visual objects is a matter of “value in, garbage out.” Art collection in, garbage out; literature in, garbage out; apples in, garbage out; human subject in, garbage out; Indigenous lifeways in, garbage out.
We are aware of the capacity of capitalism to co-opt oppositional cultural practices. However, not everything is equally visible to the dominant gaze. Because “the internal structures” of hegemony—such as artistic production and institutional promotion—“have continually to be renewed, recreated, and defended,” writes Williams, “they can be continually challenged and in certain respects modified.” The dominant culture will always overlook certain “sources of actual human practice,” and this leaves us with what Williams calls residual and emergent practices. Practices that have escaped, momentarily, or been forgotten by this oppressive selection process; fugitive practices that offer some extant, counterhegemonic possibilities. This is precisely why the “democratic” tendency of ever-expanding datasets is disturbing rather than comforting. It is also why a defense against the oppressive expansion of generative AI needs to be sought outside of a neural network in actual social relationships.
Source: e-flux
Image: Loey Felipe (taken from the article)
Attribute substitution and human decision-making

A few years ago, on one of my much-neglected ‘other’ blogs, I exhorted readers to sit with ambiguity for longer than they normally would. In that post, I focused on innovation projects. But our lack of tolerance for ambiguity is everywhere.
In this article, Adam Mastroianni discusses ‘attribute substitution’. It’s a heuristic, a shorthand way that our brains work so that we can answer easier questions rather than harder ones. Although it can lend us a bias towards action, it’s kind of the opposite of living a reflective life influenced by historical insight and philosophical analysis.
The cool thing about attribute substitution is that it makes all of human decision making possible. If someone asks you whether you would like an all-expenses-paid two-week trip to Bali, you can spend a millisecond imagining yourself sipping a mai tai on a jet ski, and go “Yes please.” Without attribute substitution, you’d have to spend two weeks picturing every moment of the trip in real time (“Hold on, I’ve only made it to the continental breakfast”). That’s why humans are the only animals who get to ride jet skis, with a few notable exceptions.
The uncool thing about attribute substitution is that it’s the main source of human folly and misery. The mind doesn’t warn you that it’s replacing a hard question with an easy one by, say, ringing a little bell; if it did, you’d hear nothing but ding-a-ling from the moment you wake up to the moment you fall back asleep. Instead, the swapping happens subconsciously, and when it goes wrong—which it often does—it leaves no trace and no explanation. It’s like magically pulling a rabbit out of a hat, except 10% of the time, the rabbit is a tarantula instead.
I think a lot of us are walking around with undiagnosed cases of attribute substitution gone awry. We routinely outsource important questions to the brain’s intern, who spends like three seconds Googling, types a few words into ChatGPT (the free version) and then is like, “Here’s that report you wanted.”
[…]
Confusion, like every emotion, is a signal: it’s the ding-a-ling that tells you to think harder because things aren’t adding up. That’s why, as soon as we unlock the ability to feel confused, we also start learning all sorts of tricks for avoiding it in the first place, lest we ding-a-ling ourselves to death. That’s what every heuristic is—a way of short-circuiting our uncertainty, of decreasing the time spent scratching our heads so we can get back to what really matters (putting car keys in our mouths).
I think it’s cool that my mind can do all these tricks, but I’m trying to get comfortable scratching my head a little longer. Being alive is strange and mysterious, and I’d like to spend some time with that fact while I’ve got the chance, to visit the jagged shoreline where the bit that I know meets the infinite that I don’t know, and to be at peace sitting there a while, accompanied by nothing but the ring of my own confusion and the crunch of delicious car keys.
Source: Experimental History
Image: Brett Jordan
Every billionaire really is a policy failure

I don’t really understand people who look at billionaires as anything other than an aberration of the system. They are not, in any way, people to be looked up to, imitated, or praised.
What probably makes it easier for me is that I see pretty much every form of hierarchical organisation-for-profit as something to be avoided. The CEO who applies downward pressure on wages, resists unionisation, and enjoys the fruits of other people’s labour is merely different in terms of scale.
If multi-millionaires exist outside the normal cycle of everyday life, billionaires certainly do. That alone makes them spectacularly unfit to be anywhere near the levers of power, to dictate economic policy, or to make pronouncements that anyone in their right mind should listen to.
It’s a mind-bogglingly large sum of money, so let’s try to make it meaningful in day-to-day terms. If someone gave you $1,000 every single day and you didn’t spend a cent, it would take you three years to save up a million dollars. If you wanted to save a billion, you’d be waiting around 2,740 years… All this shows how the personal wealth of billionaires cannot be made through hard work alone. The accumulation of extreme wealth depends on other systems, such as exploitative labor practices, tax breaks, and loopholes that are beyond the reach of most ordinary people.
[…]
The notion that a billionaire has worked hard for every penny of their wealth is simply fanciful. The median U.S. salary is $34,612, but even if you tripled that and saved every penny for a lifetime, you still wouldn’t accumulate anywhere close to a billion dollars. Here, it’s also worth looking at Oxfam’s extensive study on extreme wealth, which found that approximately one-third of global billionaire fortunes were inherited. It’s not about working harder, smarter, or better. There are many factors built into our economic system that help extreme wealth to multiply fast. It’s a matter of being well-placed to benefit from the structures that favor capital and produce a profit off the back of exploitation.
[…]
Jeff Bezos could give every single one of his 876,000 employees a $105,000 bonus and he’d still be as rich as he was at the start of the pandemic.
[…]
It’s true that the billionaire class creates jobs and that wages have the potential to drive the economy, but that argument falters when workers barely have enough to survive. The potential to generate tax dollars from billion-dollar profits is enormous. Oxfam found that if the world’s richest 1% paid just 0.5% more in tax, we could educate all 262 million children who are currently out of school and provide health care to save the lives of 3.3 million. But given generous tax cuts and easily exploitable loopholes like the ability to register wealth in offshore tax havens, this rarely comes to pass.
[…]
Some favor the adoption of universal social security measures, paid for via progressive taxes. It’s been argued that Universal Basic Income, Guaranteed Minimum Income, and Universal Basic Services could aid prosperity in a world grappling with growing populations, societal aging, and climate breakdown. Piecemeal proposals are not enough to remedy a crisis of poverty in the midst of plenty. And a fair world would not further the acceleration of either.
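Since we’re dealing in big numbers, here’s a quick Python sanity check of the arithmetic quoted above (the daily amount, the Bezos figures, and the employee count all come from the article itself):

```python
# Checking the article's back-of-the-envelope numbers.
DAYS_PER_YEAR = 365

# Saving $1,000 every day without spending a cent:
print(f"$1m: ~{1_000_000 / 1_000 / DAYS_PER_YEAR:.1f} years")        # ~2.7
print(f"$1bn: ~{1_000_000_000 / 1_000 / DAYS_PER_YEAR:,.0f} years")  # ~2,740

# Bezos's hypothetical bonus: 876,000 employees x $105,000 each.
print(f"Bonus bill: ${876_000 * 105_000:,}")  # $91,980,000,000
```

The claims check out: just under three years for the million, roughly 2,740 years for the billion, and a bonus bill of about $92 billion.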
Source: Teen Vogue
Image: Adam Nir