Cheat on everything?

Stephen Downes shares news that Cluely, a startup promising that you can “cheat on everything”, is proving controversial. As he says, the company “leans heavily into the ‘cheating’ aspect of the service, which is producing a not unexpected visceral reaction on the part of pundits.”
I tried Rewind.ai (currently rebranding to ‘Limitless’) when Paul Stamatiou was a co-founder. Instead of talking about “cheating” and creating socially awkward videos, Rewind.ai talks of being a “personalized AI powered by everything you’ve seen, said, or heard.” Well, so long as it happens on your computer. Presumably these people don’t go outside.
In my experience, startups get attention and traction by being genuinely useful and unique (very rare!), by having a big name attached (common), or by being socially transgressive. It feels to me like we’re seeing more of the last of these at the moment, including Mechanize which, somewhat laughably, believes that their “total addressable market” is “$60 trillion a year.”
That’s not to say that automation of many so-called “white collar” tasks isn’t possible or desirable. Just not by tech bros, thank you very much. I’d encourage you to read Fully Automated Luxury Communism for a more radical socialist look at how all this could play out.
On Sunday, 21-year-old Chungin “Roy” Lee announced he’s raised $5.3 million in seed funding from Abstract Ventures and Susa Ventures for his startup, Cluely, that offers an AI tool to “cheat on everything.”
The startup was born after Lee posted in a viral X thread that he was suspended by Columbia University after he and his co-founder developed a tool to cheat on job interviews for software engineers.
That tool, originally called Interview Coder, is now part of their San Francisco-based startup Cluely. It offers its users the chance to “cheat” on things like exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver.
Cluely has published a manifesto comparing itself to inventions like the calculator and spellcheck, which were originally derided as “cheating.”
Source: TechCrunch
Image: Mohamed Nohassi
These other, really important things intrude on my thinking and distract me

The latest issue of New Philosopher magazine is about ‘choice’ and features a wonderful interview with Barry Schwartz, who is the Darwin Cartwright Emeritus Professor of Social Theory and Social Action at Swarthmore College. He’s the author of The Paradox of Choice: Why More Is Less which I’ve added to my reading list.
I want to excerpt a couple of parts which I think are particularly insightful. The first is about how he reduced the assessment burden on young people, who he believes suffer from a greater decision burden than previous generations.
Zan Boag: I recall in one of your talks, you mentioned that it came as something of a revelation to you when you realised students simply didn’t have as much time as students in the past.
Barry Schwartz: That was my interpretation.
What I realised, or what I thought, I never gathered data on this in any official way, but when I went to school, so many of the really important decisions we face in life were essentially made for us. People were not plagued by questions of sexual identity, weren’t plagued by questions about what their romantic life should look like. Should I have a girlfriend? The default was yes. Should I get married? The default was yes. When should I get married? Soon as I graduated from college. That was the default, and so on. And so there were still issues like, how do I find the right person?
But it wasn’t the case that every last hour of your daily life was consumed by a need to focus on doing studies without having these other, really important things intrude on my thinking and distract me. Well, this was much less true for my children and it is ever so much less true for my grandchildren.
The second excerpt is the follow-up to the question about how problematic it is to be a ‘maximizer’ in life. I’d usually use the term ‘perfectionist’ and have certainly had to overcome this tendency in myself, as it just makes one miserable. As Schwartz points out, as you get older, you have to come to terms with the fact that you have chosen certain options instead of others, and to be satisfied with the way things are, rather than how they could have been.
Zan Boag: It makes it particularly difficult with these big life decisions, whether it’s jobs, where we live, or partners, because we’re faced with so much choice. People can always wonder about the life they could have led had they made a different decision – say to pursue writing instead of banking; move to San Francisco instead of Sydney; ballroom dancing over Taekwondo. They’re making choices that then will affect the way they lead their lives. Let’s call this a phantom life, the ‘other’ life. How can people find satisfaction with their choices when there are so many available, and the choices you make will often seem like the incorrect ones? How can they find some sort of satisfaction?
Barry Schwartz: I think in the book that I wrote, which by the way, as I told you in an email, I’m about to start writing a new edition of, I make some suggestions, but I think the truth of the matter is that it is very hard to shut off these enemies of satisfaction in the modern world. What we’re talking about, and what I wrote about, is rich society’s problem.
Most people in the world don’t have the problem that there are too many options. They have the opposite problem. But if you happen to live in a part of the world like you and I do, that is the problem. And we don’t have the tools for shutting it down. I make some suggestions, like limit the number of options you consider. Fine. I’m only going to look at six pairs of jeans. It’s one thing to say it and it is another thing to do it, and it’s still a hard thing to do and not be nagged by the knowledge that there are all these options out there that you didn’t look at.
It’s sort of like just quitting smoking. ‘Yeah, I’ll just quit smoking.’ Nice, easy to say, but really, really hard to do when you suffer at least initially when you quit smoking. And so, I think that you have to be prepared for a fair amount of discomfort and a lot of work to change your approach to making decisions, big ones or small ones.
It’s not a surprise to me that young people are in such bad shape because one of the things that we found is that the younger you are, the more likely you are to be a maximizer in decisions. I think one of the things that you learn as you age is that good enough is almost always good enough. But you don’t see too many 20-year-olds who think that. Experience teaches you that good enough is good enough.
After suffering for a generation or so, you settle into a life where you’re satisfied with good enough results of your decisions. But meanwhile, that’s 20 or 30 years of suffering. And what I think… I don’t know if you’re familiar with this somewhat controversial argument about what social media is doing to the welfare of young people.
Source: New Philosopher: Choice
Image: Elena Mozhvilo
In some ways, FOMO is a philosophical insight
I’ve Laura Hilliger to thank for pointing me towards The Gray Area podcast, which takes “a philosophy-minded look at culture, technology, politics, and the world of ideas.” So it fits hand-in-glove with what I discuss here on Thought Shrapnel.

In this particular episode, host Sean Illing talks with Kieran Setiya about middle age, midlife crises, and generally takes a philosophical look at what’s going on when people reach their forties. Being the ripe old age of 44, this is absolutely in my interest zone.

What follows is my transcription (via Sonix.ai).

Source: The Gray Area – Halfway there: a philosopher’s guide to midlife crises
Image: Alejandro G.

Sean Illing (SI): One of the things about life that appears to be hard is middle age. And you wrote a book about midlife crises. How do you define a midlife crisis?
Kieran Setiya (KS): Actually, kind of like the self-help movement, midlife crisis is one of those funny cultural phenomena that has a particular date of origin. So in 1965, this Canadian psychoanalyst, Elliott Jaques, writes a paper, ‘Death and the Midlife Crisis’. And that’s the origin of the phrase. And he is looking at patients and also, in fact, the lives of creative artists who experience a kind of midlife creative crisis. So it’s people in their late thirties. I think the stereotype of the midlife crisis is that it’s a sort of paralysing sense of uncertainty and being unmoored. Nowadays, I think there’s been a kind of shift in the way people think about the midlife crisis: people’s life satisfaction takes the form of a kind of gentle U-shape, and basically, even if it’s not a crisis, people tend to be at their lowest ebb in their forties. And this is men and women, it’s true around the world to differing degrees, but it’s pretty pervasive. So I think nowadays, often when people like me talk about the midlife crisis, what they really have in mind is more like a midlife malaise. It may not reach the crisis level, but there seems to be something distinctively challenging about finding meaning and orientation in this midlife period in your forties.
SI: Well, I’m 42. I just turned 42. It sounds like I’m right in the middle of my midlife crisis.
KS: You’re, you know, not everyone has it, but you’re predicted to hit it. Yes.
SI: Yikes. Well, what is it about midlife that generates all this anxiety and disturbing reflection?
KS: I think really there are many midlife crises. It’s not just one thing. I think some of them are looking to the past. So there’s regret. There’s the sense that your options have narrowed. So whatever space of possibilities might have seemed open to you earlier, whatever choices you’ve made, you’re at a point where there are many kinds of lives that might have been really attractive to you, that it’s now clear to you in a vivid sort of material way that you can’t live. So there’s missing out. There’s also regret in the sense that things have gone wrong in your life. You’ve made mistakes, bad things have happened, and now the project is, how do I live the rest of my life in this imperfect circumstance? The dream life is off the table for most of us. And then I think there are also things that are more present-focused. So often people have a sense of the daily grind being empty, and that’s partly to do with so much of it being occupied by things that need to be done, rather than things that make life seem positively valuable. It’s just one thing after another, and then death starts to look like it’s at a distance that you can measure in terms you kind of really palpably understand. Like, you have a sense of what a decade is like, and there’s only three or four left at best.
SI: The thing about being young is the future is pure potential. Ahead of you is nothing but freedom and choices. But as you get older, life has a way of shrinking. Responsibilities pile up. You get trapped in the consequences of the decisions you’ve made, and the feeling of freedom dwindles. That’s a very difficult thing to wrestle with.
KS: I think that’s exactly right. I mean, part of what’s philosophically puzzling about it is that it’s not news: in a way, whatever your sense of the space of options was when you were, say, 20, you knew you weren’t going to get to do all of the things. So there’s a sense in which it’s kind of puzzling that, at 40, even if things go well, the fact that you didn’t get to do all of the things isn’t news. You knew that wasn’t going to happen. What it suggests, and I think this is a kind of philosophical insight, is that there is a profound difference between knowing that things might go a certain way, well or badly, and knowing in concrete detail how they went well or badly. And that’s something that I think we learn from this transition that we make in midlife: the pain of discovering the particular ways in which life isn’t everything you thought it might be, even though you knew all along that it couldn’t be everything you hoped it might be. That suggests that there’s a certain aspect of our emotional relationship to life that is missed out if you just ask, in abstract terms, what will be better or worse, what would make a good life. And so I think philosophy needs to kind of incorporate that kind of particularity, that kind of engagement with the texture of life in a way that philosophers don’t always do. I mean, I think there’s another thing philosophy can say here that’s more constructive, which is that part of the sense of missing out has to do with what philosophers call incommensurable values.
The idea that, you know, if you’re choosing between $50 and $100, you take the hundred dollars and you don’t have a moment’s regret. But if you’re choosing between going to a concert or staying home and spending time with your kid, either way, you’re going to miss out on something that is sort of irreplaceable, and that’s pretty low stakes. But one of the things we experience in midlife is all the kinds of lives we don’t get to live that are different from our life, and there’s no real compensation for that, and that can be very painful. On the other hand, I think it’s useful to see the flip side of that, which is that the only way you could avoid that kind of missing out, that sense that there’s all kinds of things in life that you’ll never get to have, is if the world was suddenly totally impoverished of variety, or you were so monomaniacal you just didn’t care about anything but money, for instance, and you don’t really want that. So there’s a way in which this sense of missing out, the sense that there’s so much in the world we’ll never be able to experience, is a manifestation of something we really shouldn’t regret and, in fact, should cherish: namely, the evaluative richness of the world, the kind of diversity of good things. And there’s a kind of consolation in that, I think.
SI: So is that to say that FOMO is always and everywhere a philosophical error, or is it actually valid?
KS: In some ways, I think it’s a philosophical insight. In a way, I think a kind of existential FOMO is part of what we have in midlife, or sometimes earlier, sometimes later. And that sense that we really are missing out on things, and that there’s no substitute for them: that’s really true. The kind of rejoinder to FOMO is, well, imagine there weren’t any parties you didn’t get to go to. That wouldn’t be good either, right? You want there to be a variety of things that are actually worth doing and attractive. We want that kind of richness in the world, even though one of the inevitable consequences of it is that we don’t get to have all of the things.
SI: One of the arguments you make is how easily we can delude ourselves when we start pining for the roads not traveled in our lives. And, you know, you think, what if I really went for it? What if I tried to become a novelist or a musician, or join that commune, or, I don’t know, pursued whatever life fantasy you had when you were younger? But if you take that seriously and consider what it really means, you might not like it because the things you value the most in your life, like, say, your children, well, they don’t exist if you had zigged instead of zagging 15 or 20 years ago. And that’s what it means to have lived that alternative life. And I guess it’s helpful to remember that sometimes, but it’s easy to forget it because you just you’re imagining what you don’t have.
KS: This is, again, about the kind of danger of abstraction: in a way, philosophy can lead us towards this kind of abstraction, but it can also tell us what’s going wrong with it. So the thought ‘I could have had a better life, things could have gone better for me’ is almost always tempting and true. But when you think through in concrete particularity what would have happened if your failed marriage had not happened, often the answer is, well, I would never have had my kid or I would never have met these people. And while you might think, yeah, but I would have had some other unspecifiable friends who would have been great and some other unspecifiable kid who would have been great, I think we rightly don’t evaluate our lives just in terms of those kinds of abstract possibilities, but in terms of attachments to particulars. And so if you just ask yourself ‘could my life have been better?’, you’re kind of throwing away one of the basic sources of consolation. A rational consolation, I think, which is attachment to the particularity of the good things, the good enough things in your own life, even if you acknowledge that they’re not perfect and that there are other things that could have been in a certain way better.
SI: This is why I always loved Nietzsche’s idea of amor fati, this notion that you have to say yes to everything you’ve done and experienced, because all the good and bad in your life is part of this chain of events. And if you alter any of those events at any point in the chain, you also alter everything else that followed in unimaginable ways.
KS: I mean, I do think there’s a profound source of affirmation there. I think my hesitation is just this: it’s not that all the mistakes that we make, or the terrible things that happen to us, are redeemed by attachment to the particulars of our lives. It’s that there’s always this counterweight. At the very worst, we’re going to end up with some kind of ambivalence. And that’s better than the situation of mere unmitigated regret. But it’s not quite the full embrace of life that a certain kind of philosophical consolation might have given us.
A sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset

Stephen Downes brought my attention to a post on the website LessWrong, which, as he points out, is a prime (and increasingly rare) case of the comments section being more interesting than the main content itself.
One of the commentators brings up the work of David Golumbia, who passed away a couple of years ago. Golumbia wrote an article questioning what gamers are doing when they’re gaming, especially with role-playing games (RPGs) and first-person shooters (FPS).
The philosopher Ludwig Wittgenstein famously pointed out how difficult it is to define what a ‘game’ is. Many things can be games or game-like, but trying to neatly categorise what makes them so is seemingly impossible. Do games have to be competitive? No. Do games have to be fun? Well… no. And so on.
There’s a lot to think about in the Golumbia article, and (for once!) I’m going to set aside the very pointed critique of the capitalist element and the power dynamic. Instead, I’ll excerpt the part about games providing “the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement.”
Personally, most of my gaming sessions are with and/or against other human players. For example, on a Sunday night, my gaming crew is enjoying Payday 3, a game about robbing, stealing, and looting. There are aims and objectives, and tasks to complete and check off. It’s satisfying. Now I know why.
If we cast aside for a moment the generic distinction according to which programs like WoW, Halo, and Half Life are games while Microsoft Excel, Microsoft Word, and Adobe Photoshop are “productivity tools,” it becomes clear that the applications have nearly as much in common as makes them distinct. Each involves a wide range of simulations of activities that can or cannot be directly carried out in physical reality; each demands absorptive, repetitive, hierarchical tasks as well as providing means for automating and systematizing them. Each provides distinct and palpable feelings of pleasure for users in any number of different ways; this pleasure is often of a type relating to some kind of algorithmic completeness, a “snapping” sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset (finishing a particular spreadsheet or document, completing a design, or finishing a quest or mission). In every context in which these activities are completed, whether that context is established by the computer or by people in the physical world, there is indeed some sense of “experience” having been gained, listed, compiled by the completion of a given task. Arguably, this is a distinctive feature of the computing infrastructure: not that tasks were not completed before computers (far from it) but rather that the digitally-certain sense of having completed a task in a closed way has become heightened and magnified by the binary nature of computers.
What emerges as a hidden truth of computer gaming — and no less, although it may be even better hidden, of other computer program use — is the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement. Again, this kind of pleasure certainly existed before computers, but it has become an overriding emotional experience for many in society only with the widespread use of computers. A great deal of the pleasure users get from WoW or Half Life, as from Excel or Photoshop, is a digital sense of task completion and measurable accomplishment, even if that accomplishment only roughly translates into what we may otherwise consider intellectual, physical, or social goal-attainment. What separates WoW or Half Life from the worker’s business world is thus not variability or “give” but rational certainty, the discreteness of goals, the general sense that such goals are well-bounded, easily attainable, and satisfying to achieve, even if the only true outcome of such attainment is the deferred pursuit of additional goals of a similar kind.
[…]
At the very least, WoW and Half Life, and their cohort are therefore not games in the sense to which we have become accustomed. It seems clear that we call these programs “games” because of the intense feelings of pleasure experienced by players when we engage with them and because they appear on the surface not to be involved in the manipulation of objects with physical-world consequences. On reflection, neither of these facts proves very much… And the fact that computer games are pleasurable cannot, by itself, furnish grounds for calling them games: after all, games constitute only a part of those activities in the world that give us pleasure.
[…]
Can there be any doubt about the potential attractiveness of an apparently human world in which we understand clearly how to attain power, what to do with it, and that the rules by which we operate do not change or change only by explicit order? The deep question such games raise is what happens when people bring expectations formed by them into the world outside.
Source: Golumbia, D. (2009). ‘Games Without Play’. New Literary History, Vol. 40, No. 1: Play (Winter 2009), pp. 179-204. Available at: https://diglit.community.uaf.edu/wp-content/uploads/sites/511/2015/01/Games_without_Play.pdf
Image: ELLA DON
A lot of strange things start to make more sense — sometimes distressingly so

I was listening to Helen Beetham talk with Audrey Watters on her imperfect offerings podcast, when Audrey mentioned a Bloomberg piece which I’ve excerpted below. Essentially the economy becomes distorted when all of the money is at the top of society and everything is being produced to fit the needs of rich people.
This chimes with what economist Gary Stevenson calls ‘The Squeeze’, which I wrote about recently. While the article is about the US, which has a more unfettered free-market economy, the same is likely to be happening, at different rates, in other western economies.
The question, of course, is what we do about it. I mean, to be blunt, we can either tax the rich or end up eating them.
Recent economic headlines do not add up to a coherent picture: Since 2020, Americans have spent lavishly on discretionary goods and services, even as the cost of necessities has soared. Consumer debt has ballooned right along with prices, and Americans are now defaulting on their credit cards at rates unseen since the Great Recession. Wage growth has been strong, but inflation has thwarted its ability to help most Americans get ahead. So who’s booking all those first-class airline seats and tables at fancy restaurants? Why are tickets for concerts and major sporting events so expensive and also so sold out?
A recent analysis of consumer spending from Moody’s Analytics, first covered in the Wall Street Journal, provides an answer: Rich people really are just firing a cash cannon into the consumer market. The wealthiest 10% of American households—those making more than $250,000 a year, roughly—are now responsible for half of all US consumer spending and at least a third of the country’s gross domestic product. If you keep that in mind, a lot of strange things start to make more sense—sometimes distressingly so.
[…]
Such a high concentration of financial resources presents a whole host of risks and complications, including general economic fragility. If the extreme spending habits of a small group of people are what’s keeping a large portion of the economy churning, then that group of people also has an outsize ability to bring everyone else down with them.
[…]
When you put a huge proportion of a nation’s total resources in a small number of hands, that distortion also plays out in the everyday economy. Consumer-facing companies want earnings growth and need ways to hold on to their profit margin if components or labor become more expensive. An easy way to do that is by going upmarket to find buyers who are spending freely. You can see how this has played out in the car market: Automakers have pushed to develop more of the big, pricey SUVs that wealthier buyers prefer and devoted fewer resources to smaller, more affordable models. That’s helped push the average sale price of new cars up more than 50% since 2014, according to a Cox Automotive analysis of data from the Bureau of Labor Statistics. The average new car in the US now costs almost $50,000. When the math on producing goods and services only pencils out when you’re selling to the rich, it doesn’t just change the availability of designer handbags or hotel suites; it affects how entire industries organize themselves.
[…]
Letting so many of the country’s economic resources accrue to so few people risks a lot more than just the economy—it eats away at social cohesion in ways that have leaked into other areas of American life and politics. It breeds distrust and recrimination among individuals and groups of people, as well as toward the systems and institutions we’re supposed to trust to make society work in ways that are at least minimally fair. The end result is a combination of economic fragility and social disaffection that eventually even high earners might not be able to buy their way out of.
Source: Bloomberg
Archive link (no paywall): Archive.is
Image: Boston Public Library
I've done this a couple of times before but this time feels slightly different

Tom Watson, who apart from doing generally awesome stuff somehow also has time to star in a documentary about ultrarunning, saw a recent Thought Shrapnel post and wrote about what tech he’s using.
I need to do my own, and actually Tom’s post has made me realise the extent to which I’m dependent upon Google and, more recently and to a lesser extent, Perplexity.
Prompted by Doug… and a couple of the colophons (new word for me!) by Matt and Steve, I thought I’d outline “my stack”. I’ve done this a couple of times before but this time feels slightly different.
[A]s someone who has been gently prompting people to not be so beholden to Big Tech, to look more at Open Source, to think more ethically, and to at least consider European Alternatives, I feel I should at least discuss where I’m trying to do this, where I’m succeeding and where I’m often failing.
[…]
…I use AI for specific things when I think they will make something more effective. I’m therefore always looking for the best model, and best use case. And things change all the time. So I purposefully build specific components that allow me to easily switch models and providers. If there is one thing I’d advise when thinking about building AI into an organisation, it’s to ensure you aren’t creating provider lock-in for yourself. Quite a few AI wrappers (tools that put some kind of front end onto a model) allow you to switch models. But not all. And if you are building yourself, there is a risk you just lock yourself into a depreciating model or a provider that just turns out to be mega shitty.
[…]
It’s not easy to avoid the big tech trap, but I think I’m doing ok. Also I’m not saying you definitely should, but I think you should at least consider what you use and what this means, and if you have principles, maybe they should cost you something.
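Tom’s advice about avoiding lock-in is easy to picture in code. Below is a minimal sketch of the kind of switchable component he describes (not his actual setup; the provider name, endpoint, and model ID are all invented) that keeps every call site ignorant of which provider sits behind it:

```typescript
// Sketch: a provider-agnostic LLM interface. Everything here is
// illustrative; only the pattern (one interface, many providers)
// is the point.

interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Many hosted models expose an OpenAI-compatible chat endpoint,
// which is part of what makes switching providers relatively painless.
class OpenAICompatibleProvider implements LLMProvider {
  constructor(
    public name: string,
    private baseUrl: string,
    private apiKey: string,
    private model: string,
  ) {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}

// Switching models or providers is now a configuration change,
// not a rewrite of every call site.
const provider: LLMProvider = new OpenAICompatibleProvider(
  "hypothetical-eu-host",      // invented provider name
  "https://api.example.eu",    // invented endpoint
  process.env.LLM_API_KEY ?? "",
  "some-model-id",             // invented model ID
);
```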
Source: Tomcw.xyz
Image: Matthias Reumann
It's much easier to go carless if your city has good public transit

I’m a vegetarian who drives an electric vehicle (EV). In a few weeks' time, we’re getting a heat pump installed so that we can remove our gas boiler. These are all climate-positive things to do, and I’m trying to do my bit.
This article by the World Resources Institute shows how important it is that there is an infrastructure that enables individual decision-making to take place. For example, I’ve been vegetarian now for eight years, and it’s much easier to remove meat from your diet these days than when I started to do so in 2017. Likewise, because of investment in EV infrastructure, these days it’s unproblematic to own or lease an EV.
It’s interesting being an early-ish adopter of air source heat pump technology in the UK. The process is not as smooth as it could be, with our driveway having to be dug up to upgrade the total electricity supply capacity entering our property. So, although we have visited a couple of heat pumps and there is a government grant, it’s still more expensive, and involves more upheaval, than just getting another combi boiler.
Coupled with active hostility in some quarters, it’s a good example of how the Overton Window can apply to technology interventions and pro-climate lifestyle choices. That’s why, as well as making such choices ourselves, we should be aware of, and be advocating for, the systems within which those choices can be made easier.
Our data shows that pro-climate behavior changes, such as driving less or eating less meat, could theoretically cancel out all the greenhouse gas (GHG) emissions an average person produces each year — specifically among high-income, high-emitting populations.
But it also reveals that efforts focused exclusively on changing behaviors, and not the overarching systems around them, only achieve about one-tenth of this emissions-reduction potential. The remaining 90% stays locked away, dependent on governments, businesses and our own collective action to make sustainable choices more accessible for everyone. (Case in point: It’s much easier to go carless if your city has good public transit.)
[…]
We found that, in theory, shifting to 11 pro-climate behaviors we analyzed in the energy, transport and food sectors could reduce individuals' GHG emissions by about 6.53 tonnes per year. This would more than cancel out what an average person currently emits (about 6.3 tonnes per year). However, our data also shows that when people attempt these changes in the real world, without supportive systems, they typically only reduce emissions by about 0.63 tonnes yearly — just 10% of what’s theoretically possible.
It’s not that individual changes don’t matter; when someone switches to an electric vehicle (EV) or avoids a flight, they make a real impact. The problem is that without supportive infrastructure, policies or incentives (such as public EV chargers or financial subsidies), these programs struggle to drive the broad-based change the world really needs.
Source & image: World Resources Institute
Workers of the future must be emboldened to eschew wages in favour of dropping into the abyss

Note: as regular readers will be aware, my habit is to quote part of the excerpt as the title of Thought Shrapnel posts. I, personally, am not advocating for abyss-directed activity.
I’m not sure who the cipher “aethn” behind this essay is, but this is a pretty standard argument dressed up in fancy (and somewhat eschatological) language. I’ve tried to excerpt the main thrust, which is: LLMs are getting better and seem to be starting to replace some lower level jobs; this will continue and cause a rupture in the fabric of society. Somehow we need to prepare for this.
While I do believe that AI is somewhat qualitatively different from previous technological inventions, I’m also a student of history and so know that disruption doesn’t happen everywhere all at once. As William Gibson is famously quoted as saying, “The future is already here — it’s just not very evenly distributed.” Just because some people, like the author (and like me), are messing around with LLMs and finding them powerful for our work, doesn’t mean that everyone else is.
Essays like this tend to miss out existing inequality in a rush to talk about future inequalities. And, it has to be said that our western economic and political structures have proven remarkably resilient to a number of shocks over the past centuries. Fredric Jameson noted that, “it is easier to imagine the end of the world than to imagine the end of capitalism.” I’d note that, sometimes, one person’s “violent revolution” is another person’s evolution of capitalism into a new form.
We are at the precipice of a revolution more violent than the Industrial Revolution. This revolution is not about the typical vulgar parochial anxieties on job security—although it is part of it—it’s about a violent upheaval of the very socio-economic fabric in the way our world is organized.
[…]
The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few. This radical automation exists without any sophisticated fine tuning or training.
[…]
The frequent naive view of the amount paid in a wage is that it’s proportional to the difficulty of the job. If this were the case then certainly all wages would substantially be decreased with the advent of LLMs and the regular economic structure would be maintained. Instead, through the normal polemics we find that’s not the case.
Wage is instead best viewed from the perspective of the profit maximizing economic agent, as in what motivates such an agent to accept a wage from another party at all. If such an agent were able to endeavor alone and capture all of the value from its enterprise it would do so. […] [T]he agent must determine if an offered wage is greater than the expected value of its solo enterprise. We then find that wage must be greater than the expected value of the opportunity cost of the uncaptured labor value incurred due to employment. For much knowledge based work, this is acute since with the same skills needed for employment one can make a competing enterprise to their employer and capture all the value. Other professions require you to have large capital to do so, so the opportunity cost is either non-existent if you cannot access that capital or further discounted by the financing cost and the risk.
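The condition the essay is describing is easier to see in symbols. As a rough formalisation (the notation is mine, not the author’s), a profit-maximising agent accepts a wage $w$ only if

$$
w \;>\; \mathbb{E}[V_{\text{solo}}] \;\approx\; \mathbb{E}[V_{\text{enterprise}}] - c_{\text{financing}} - \rho_{\text{risk}}
$$

where $V_{\text{enterprise}}$ is the value the agent could capture going it alone, and the financing cost $c_{\text{financing}}$ and risk discount $\rho_{\text{risk}}$ only bite in capital-heavy professions. For knowledge work those last two terms are close to zero, which is why the essay treats LLMs as so destabilising there: they push $\mathbb{E}[V_{\text{solo}}]$ up while the required capital stays negligible.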
[…]
Generally aligning a team on a common product requires expensive payments to each member to provide contributions. Indeed, the early-stage venture capital industry relies on this fact.
The latest LLMs make such a provision completely redundant. The proprietor of today can supervise and delegate tasks required to build a product to the latest GPTs in virtually any knowledge field. The single proprietor now has the efficacy of maybe a team of 6-10 conservatively and further is able to produce even higher quality work.
[…]
One may demur and claim there simply won’t be enough knowledge-based services in demand; however, we instead find a Jevons paradox, where demand for these services increases. The reduction in costs of producing the same services will make them more ubiquitous and bespoke, even on a per-individual basis.
[…]
Corporations whose products have no moat by economic scale or network will be forced into specializing their products as they surrender market share to the sea of companies. In the Deleuzian fashion, proprietors can build almost as quickly as they can imagine, as predicted, rattling the foundations of the economic order itself.
[…]
The social order must change drastically to a future where institutions are no longer designed to feed corporations future employees, they merely won’t harbor that demand. Many existing knowledge based workers will largely have no choice but to engage in enterprise. Educational systems must adapt and accommodate this new entrepreneurial exigency for labor in the new economic order. Workers of the future must be emboldened to eschew wages in favor of dropping into the abyss in order to have any meaningful income.
Source: aethn’s essays
Image: Maksym Mazur
I have to acknowledge and accept the fact that I use tools built by awful people to create beautiful things.

As the author of this article, Ankur Sethi, ponders: why is it that, as people interested in technology, we often don’t hold the rest of what we consume and use to the same standards as the digital world? Do we change where we buy our clothes and choose which car we drive based on similar ethical standards to those we use when we select our operating systems and digital platforms?
It’s a reminder, I guess, that there’s no ethical consumption under capitalism. But I, for one, can still try.
We’ve structured our society so that the best products and services are made by the worst people in the world. Of course you can deliver packages earlier than everyone else if you overwork your employees. Of course you can sell the fastest computers at the cheapest prices if you keep moving your manufacturing operations to countries with the worst labor and environmental laws. Of course you can build the smartest AI models if you slurp up everybody else’s intellectual property without asking for consent first.
It makes little difference to how tech businesses operate when a smattering of concerned individuals opt out of using their products and services. Things will only change when democratically elected governments across the world step in with regulation, drag Big Tech through the courts, and fine them billions of dollars.
Things will only change when being an asshole stops being a competitive advantage.
Until that day arrives, I have to learn to live in a state of tension with my tools. I have to acknowledge and accept the fact that I use tools built by awful people to create beautiful things.
Source: Ankur Sethi
Image: Sigmund
The problem is not just that the Gmail team wrote a bad system prompt. The problem is that I'm not allowed to change it.

I’ve often used the metaphor of the ‘horseless carriage’ in my work around new literacies, making the McLuhan-esque point that people tend to use existing mental models of technology to understand new forms. So, for example, if you remember the original iPad, there were plenty of ‘skeuomorphic’ touches, such as ebooks having fake pages either side of the ones you’re reading.
This article talks about generative AI, and in particular Google’s choices about how to integrate it into Gmail. The author, Pete Koomen, includes some lovely little interactive elements showing the differences between how Gemini (Google’s AI model) behaves by default, and how he would like it to behave.
The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done.
[…]
The problem is not just that the Gmail team wrote a bad system prompt. The problem is that I’m not allowed to change it.
[…]
As of April 2025 most AI apps still don’t (intentionally) expose their system prompts. Why not?
Here’s the insight, and the reason why I enjoy ‘vibe coding’ so much (i.e. creating web apps using a conversational interface):
The modern software industry is built on the assumption that we need developers to act as middlemen between us and computers. They translate our desires into code and abstract it away from us behind simple, one-size-fits-all interfaces we can understand.
The division of labor is clear: developers decide how software behaves in the general case, and users provide input that determines how it behaves in the specific case.
By splitting the prompt into System and User components, we’ve created analogs that map cleanly onto these old world domains. The System Prompt governs how the LLM behaves in the general case and the User Prompt is the input that determines how the LLM behaves in the specific case.
With this framing, it’s only natural to assume that it’s the developer’s job to write the System Prompt and the user’s job to write the User Prompt. That’s how we’ve always built software.
But in Gmail’s case, this AI assistant is supposed to represent me. These are my emails and I want them written in my voice, not the one-size-fits-all voice designed by a committee of Google product managers and lawyers.
In the old world I’d have to accept the one-size-fits-all version because the only alternative was to write my own program, and writing programs is hard.
In the new world I don’t need a middleman to tell a computer what to do anymore. I just need to be able to write my own System Prompt, and writing System Prompts is easy!
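To make the System/User split concrete, here’s a minimal sketch using an OpenAI-style chat payload. Both prompts are invented (Gmail’s actual system prompt isn’t public); the point is only who gets to write which message:

```typescript
// Sketch: the same email-drafting task under two different system
// prompts. Both prompts are invented for illustration.

type ChatMessage = { role: "system" | "user"; content: string };

// The "old world": the developer hard-codes the general case and
// the user can't touch it.
const developerSystemPrompt =
  "You are a helpful assistant. Write polite, professional emails.";

// What Koomen wants: the user writes their own general-case
// instructions once, and reuses them for every draft.
const mySystemPrompt =
  "Write emails in my voice: brief and informal, no greetings or " +
  "sign-offs unless asked for. Never apologise on my behalf.";

function buildRequest(systemPrompt: string, task: string): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt }, // general case
    { role: "user", content: task },           // specific case
  ];
}

// Same user prompt, very different drafts depending on who wrote
// the system prompt.
const task = "Tell my boss I'll be out sick tomorrow.";
console.log(buildRequest(developerSystemPrompt, task));
console.log(buildRequest(mySystemPrompt, task));
```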
Source: Pete Koomen
Image: Alan Warburton
How times change

Earlier this week, I had the pleasure of attending the Lit & Phil in Newcastle with my mother, where we spent an evening with David Haldane, the world-renowned cartoonist. He’s just started a new gig (at the age of 70!) for The Observer.
He outlined how technology had changed over the years: at the start, his cartoons would be driven to the train station by his wife and taken on the last train to London, where they would then be taken by courier to the offices of the newspaper or magazine. Then, when fax machines came in, you weren’t sure that what you were sending would actually go to the right place, so sometimes people would be looking all around the place for things he’d sent through.
Much more recently, he mentioned how, with a cut-off time of 21:30, he’d been asked 10 minutes beforehand to redo a cartoon. He obliged, sent it through digitally — and by the time he’d tidied up his stuff and gone downstairs, there was his cartoon on the front page of the “tomorrow’s newspaper front pages” section of the TV news!
How times change.
Participants remembered fake headlines more than real ones regardless of the political concordance of the news story

You Are Not So Smart (YANSS) is a great podcast, and one of the recent episodes is right up my street. Based around this paper by disinformation researchers, it introduces the notion of _dis_confirmation bias.
Essentially, they did rigorous research in the US which showed that people prefer concordance with their existing belief systems over conformance with truth. I was expecting to hear philosopher W.V. Quine referenced in terms of his metaphor of us having a ‘web of belief’. Those beliefs that are toward the periphery of the web are more easily jettisoned than those nearer the centre, which are core to our identity.
Anyway, it’s a really interesting episode, especially given that most people think the problem is ‘fake news’. That’s half the problem: the other part is getting people to prefer (and share) true news rather than random stuff that happens to cohere with their existing beliefs.
Resistance to truth and susceptibility to falsehood threaten democracies around the globe. The present research assesses the magnitude, manifestations, and predictors of these phenomena, while addressing methodological concerns in past research. We conducted a preregistered study with a split-sample design (discovery sample N = 630, validation sample N = 1,100) of U.S. Census-matched online adults. Proponents and opponents of 2020 U.S. presidential candidate Donald Trump were presented with fake and real political headlines ahead of the election. The political concordance of the headlines determined participants’ belief in and intention to share news more than the truth of the headlines. This “concordance-over-truth” bias persisted across education levels, analytic reasoning ability, and partisan groups, with some evidence of a stronger effect among Trump supporters. Resistance to true news was stronger than susceptibility to fake news. The most robust predictors of the bias were participants’ belief in the relative objectivity of their political side, extreme views about Trump, and the extent of their one-sided media consumption. Interestingly, participants stronger in analytic reasoning, measured with the Cognitive Reflection Task, were more accurate in discerning real from fake headlines when accurate conclusions aligned with their ideology. Finally, participants remembered fake headlines more than real ones regardless of the political concordance of the news story. Discussion explores why the concordance-over-truth bias observed in our study is more pronounced than previous research suggests, and examines its causes, consequences, and potential remedies. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Image: Nijwam Swargiary
These parts would end up in a landfill otherwise

‘Degrowth’ is an idea which makes perfect sense in a resource-limited world. Yet the framing remains problematic and doesn’t seem to chime with the current Overton window.
Degrowth’s main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being.
I bring this up because of an article I saw in The Verge about ‘Frankenstein’ laptops being created from salvaged parts in India. Done properly, this is degrowth in action: creating value by repurposing e-waste into functional machines. It reminds me of a scene from Star Wars: Episode IV – A New Hope where Luke Skywalker’s uncle purchases R2-D2 and C-3PO from the Jawas, who resell scrap, droids, and technology they salvage from the environment and crashed ships.
One of the reasons Trump wants a deal with Ukraine and has threatened both Canada and Greenland is due to access to minerals. In a high-tariff, protectionist world, degrowth helps us resist tyrants and create economies that are not based on endless growth. Sometimes that number has to stop going up and to the right.
Across India, in metro markets from Delhi’s Nehru Place to Mumbai’s Lamington Road, technicians like Prasad are repurposing broken and outdated laptops that many see as junk. These “Frankenstein” machines — hybrids of salvaged parts from multiple brands — are sold to students, gig workers, and small businesses, offering a lifeline to those priced out of India’s growing digital economy.
[…]
Manohar Singh, the owner of the workshop-slash-store where Prasad works, flips open a refurbished laptop while sitting on a rickety stool. The screen flickers to life, displaying a crisp image. He smiles — a sign that another machine has been successfully revived.
“We literally make them out of scrap! We also take in second-hand laptops and e-waste from countries like Dubai and China, fix them up, and sell them at half the price of a new one,” he explains.
“A college student or a freelancer can get a good machine for INR 10,000 [about $110 USD] instead of spending INR 70,000 [about $800 USD] on a brand-new one. For many, that difference means being able to work or study at all.”
[…]
[M]any repair technicians have no choice but to rely on informal supply chains, with markets like Delhi’s Seelampur — India’s largest e-waste hub — becoming a critical way to source spare parts. Seelampur processes approximately 30,000 tonnes (33,069 tons) of e-waste daily, providing employment to nearly 50,000 informal workers who extract valuable materials from it. The market is a chaotic maze of discarded electronics, where workers sift through mountains of broken circuit boards, tangled wires, and cracked screens, searching for usable parts.
Farooq Ahmed, an 18-year-old scrap dealer, has spent the last four years sourcing laptop components for technicians like Prasad. “We find working RAM sticks, motherboards with minor faults, batteries that still hold charge and sell it to different electronic workshops,” he says. “These parts would end up in a landfill otherwise.”
[…]
Despite the dangers, the demand for Frankenstein systems continues to grow. And as India’s digital economy expands, the need for such affordable technology will only increase. Many believe that integrating the repair sector into the formal economy could bring about a win-win situation, reducing e-waste, creating jobs, and making technology more accessible.
Sources: Wikipedia & The Verge
Image: Kilian Seiler
You don't fit in. And that is amazing.

A few months ago, when we had basically no work on, I grumpily applied for some jobs. I had a couple of interviews, one of which turned into some consultancy work. But I didn’t get any of them, which on the one hand isn’t very validating, but on the other is secretly very relieving.
Aristotle said that you can’t judge whether someone was ‘happy’ until after they have died. You need to see the full arc. The same is true of employment: how it all ends is an important factor in whether you were ‘successful’ or ‘enjoyed it’. I’ve only had two jobs that ended well. This is because of the mantra I have tried to instil in our two teenagers: people can only treat you the way you let them. I’m not sucking up to anyone, and I’m not changing the way that I think, work, and organise my time to fit a corporate ‘system’.
Which brings us to Mike Monteiro’s post. You should read the whole thing, as he weaves in recollections about being left-handed, neurodivergence, and his own career. The parts I’ve picked out here reflect some of my own experience over the past 22 years of (what some may call) a career. Courtesy of Dan Sinker, I have a Marginally Employed patch below my monitor to remind me I chose this because this is the way. For me, anyway.
Very early on in my “career” (LOL) I decided I wasn’t drift compatible with working in large organizations. I just didn’t enjoy it. Which isn’t to say that I can’t work with people, there are people I absolutely love working with. […] I didn’t like working in large organizations because the larger an organization is, the more likely it is to have a certain way of doing things. Which kinda makes sense, because if you have thousands of people doing things that are supposed to be interconnected, you kinda want a process that everyone can follow, or there’s complete chaos. (The fact that most organizations attempt to do this and it still results in complete chaos is also interesting, but we’re not tackling that today.) The larger an organization becomes, the more it needs everyone to work and think the same way. That way, if it loses a worker, it’s that much easier to plug in a new worker. And while that way of working might make sense for the organization, it’s important that we also ask ourselves, as workers, if it’s working for us.
[…]
Since large organizations made me miserable, I decided to spend my career in small little studios, which tend to be a bit more supportive and even gravitate more towards people who don’t spin in the same direction. Possibly because they were all started by people who don’t spin in the same direction. Or at the same speed. Or spin at all.
[…]
I try to be really careful about how I dole out advice to people. There is no system. There is no one way. There’s no guarantee that our brains will take us on the same journey. I’ll tell people about my own experience in doing something. I’ll tell you that we need to get from Point A to Point B. I’ll tell you how I’ve gotten there in the past, which you can use as a frame of reference, if that helps you, but then I want your brain to do its thing. Because your brain is mapping out a totally different landscape than mine is, and that is fascinating.
[…]
The world is full of… people who want to sell you Design Thinking™… and people who want to see everything spin the same way. They want order. They want sameness. But the only sameness they want is for you to be as miserable as they are. And they’re all miserable. They hate you because you’re a threat. You see what they don’t. You feel what they can’t. You can smell colors! You can read the stars. You see the connections that they can’t. You can paint something, with your own hands, that they have to fire up Three Mile Island to even attempt. You can change your body into what you need it to be. You can love who you love.
You don’t fit in.
And that is amazing.
Source: [Mike Monteiro’s Good News](buttondown.com/monteiro/…)
Image: Mulyadi
Obvious things are obvious if you think about them

I’m sharing two articles together here because they help reframe a couple of things which are important to me. One is about political opinions and demographics; the other is about meat-eating.
Let’s start with political opinions. The ‘received wisdom’ that older people are more conservative is based on a survivorship bias:
One of the abiding realities of our political era is a major generational split anchored on the right by disproportionately conservative seniors and on the left by disproportionately progressive millennials and post-millennials. This is often thought of as a perfectly natural, even inevitable, phenomenon: Young people are adventurous, open to new ways of thinking, and not terribly invested in the status quo, while old folks have time-tested views, assets they want to protect, and a growing fear of the unknown and unfamiliar.
[…]
But it is important to note that some generational disjunctions in political behavior are driven by demography. It’s well understood that millennials are significantly more diverse than prior generations. But there is something else driving the relative homogeneity of seniors: Poorer people are often hobbled by chronic illness, and succumb to premature death.
The other issue is around the common belief that prehistoric humans ate mainly meat. Of course, animal bones last a lot longer than plant grains, so just as we don’t have much physical evidence of wooden structures (as opposed to stone ones), we have a lot more bones than grains on which to base theories.
A new archaeological study along the Jordan River, just south of northern Israel’s Hula Valley, sheds new light on the diets of early humans and challenges long-standing assumptions about prehistoric eating habits. The research shows that ancient hunter-gatherers relied heavily on plant foods, especially starchy varieties, as a key energy source. Contrary to the popular belief that early hominids primarily consumed animal protein, the findings reveal a varied plant-based diet that included acorns, cereals, legumes, and aquatic plants.
[…]
The research contradicts the prevailing narrative that ancient human diets were primarily based on animal protein, as suggested by the popular “paleo” diet. Many of these diets are based on the interpretation of animal bones found in archaeological sites, with plant-based foods rarely preserved.
Sources: New Yorker Magazine & SciTechDaily
Image: William Felipe Seccon
I just think that people who write about technology should have a disclaimer about the tech stack they use

I sent Carole Cadwalladr’s latest TED Talk to people this week who may not otherwise understand what’s going on in the US. Big Tech companies like Google, Meta, and Apple are all in the US, and also… let’s not pretend a similar thing couldn’t happen in other countries. We should be ready.
In this post, Elena Rossini points out how “impossibly incongruous” it is that Cadwalladr uses Bluesky and Substack for her online presence, “two centralized services, owned or funded by questionable groups.”
The products and services we use matter. Not only to protect ourselves, our friends, and our families, but also in terms of resisting a dominant narrative and worldview. I saw that Matt Jukes recently added a colophon to his blog to explain not only his process of writing but the products he uses. I like that.
Carole Cadwalladr has my utmost admiration. The fiery presentation she gave at TED is not diminished by the tech stack she personally uses. I firmly believe everyone should watch her video - it’s digital literacy 101.
Still, I believe that if even Carole Cadwalladr - who recognizes the problem (the broligarchy) and speaks so eloquently against it - is ONLY using American VC-funded Big Tech platforms, her presence there is an implicit endorsement. And her audience will get the indirect message that compromises need to be made and it’s no big deal to use Broligarchs’ platforms because they may be the only solution to get one’s message out there.
[…]
When I learned about the doubling down by Substack founders - who refused to moderate or demonetize newsletters promoting hate speech - I moved away from the platform… and I unsubscribed from 40+ newsletters hosted there (including two paid newsletters). I admire Cadwalladr’s work and I would love to do a paid subscription to her blog - but I won’t as long as she’s on Substack. I am sure there are many people who feel the same way.
[…]
If I were her, I would set up a blog/newsletter on Ghost - with paid membership - and I would keep a Substack account, taking advantage of the Notes feature to share articles hosted on her hypothetical Ghost blog. The best of both worlds.
For social media, I would create an account on the Fediverse and use a tool like Buffer or Fedica to crosspost to multiple accounts.
[…]
I just think that people who write about technology should have a disclaimer about the tech stack they use - in order to see if they’re “walking the talk.” And if people who speak truth to power feel they need to be on VC-backed, centralized, for-profit social networks, sure no problem. But I believe that anyone speaking up against the broligarchy should be active on the Fediverse too - a galaxy of independent, free, open source networks that is not funded by billionaires or crypto bros.
Source: Elena Rossini
Image: Marija Zaric
This extension is the solution to becoming more European oriented

There’s a growing movement in the communities of which I’m part to move off US infrastructure and away from US-owned companies. For obvious reasons. This browser extension is a good example of how that is being facilitated, by suggesting European alternatives.
Suggests European website alternatives to non-European websites.
This extension is the solution to becoming more European oriented. The extension provides European alternatives for the most used websites and services around the world wide web.
Key features:
- Site Detection and Notifications
- Automatically recognizes websites that have European alternatives
- Badge counter shows the number of alternative sites
- Receive unobtrusive notifications about available alternatives
- Clean, modern UI with information about each alternative
- One-Click access to visit suggested sites
Source: Go European
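For the technically curious, here’s roughly how the ‘site detection’ and ‘badge counter’ features described above could be wired up in a WebExtension. To be clear, this is my own illustrative sketch rather than Go European’s actual code: the domain-to-alternatives mapping is invented, and a real extension would need the “tabs” and “notifications” permissions declared in its manifest.

```typescript
// Illustrative sketch only; not Go European's actual source.
// Assumes a Manifest V3 background service worker with the
// "tabs" and "notifications" permissions in manifest.json.

// Hypothetical mapping of non-European domains to European alternatives.
const ALTERNATIVES: Record<string, string[]> = {
  "google.com": ["qwant.com", "ecosia.org"],
  "gmail.com": ["proton.me"],
  "dropbox.com": ["kdrive.infomaniak.com"],
};

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  // Only act once the page has finished loading and a URL is available.
  if (changeInfo.status !== "complete" || !tab.url) return;

  const host = new URL(tab.url).hostname.replace(/^www\./, "");
  const suggestions = ALTERNATIVES[host];
  if (!suggestions) return;

  // Badge counter: show how many alternatives are available.
  chrome.action.setBadgeText({ tabId, text: String(suggestions.length) });

  // Unobtrusive notification listing the alternatives.
  chrome.notifications.create({
    type: "basic",
    iconUrl: "icon.png",
    title: `European alternatives to ${host}`,
    message: suggestions.join(", "),
  });
});
```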
Sprint goals suck too

Back in about 2014, I remember Matt Thompson helping to bring ‘heartbeats’ to the Mozilla Foundation. As he explained in this post for opensource.com a couple of years later, using that word instead of ‘sprint’ is useful because:
Heartbeats can create a great sense of purpose, and ebb and flow in your team. They can be set to any length—a week, two weeks, a month. It’s really just about bringing people together in a regular, predictable cycle, with a ritual and set of dance steps to ensure everyone’s on the same page, headed in the right direction, and learning and accomplishing important things together.
I was reminded of Matt’s work when I saw Steve Messer’s post about helping a GOV.UK team implement a new model for agile delivery. Similarly, he points out that you don’t need to do two-week sprints.
This is something that Laura and I have been discussing on a project we started last month with a new client. There’s an expectation these days that to work in an ‘agile’ way you have to do sprints. You can use them. But you don’t have to.
Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but those don’t work for well established or high performing teams.
For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes.
For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints.
Sprint goals suck too. It’s far too easy to push it along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing.
Source: Boring Magic
Image: Matt Collamer
You don’t have to agree with this idea to see that it represents a very different way of thinking about equality

I’ve always been a bit uneasy about the above meme (to which I’ve added a red cross). Thankfully, due to a link to a blog post shared by Rob Farrow, I’ve discovered why. In fact, it’s possibly the reason why the whole DEI thing has been so contentious.
It shouldn’t need saying, but people don’t read carefully and aren’t used to reading beyond headlines these days. So, before continuing: of course I believe in equality. The issue is with the woolly concept of ‘equity’. The article I’m citing is by Joseph Heath, a Professor of Philosophy at the University of Toronto. He writes as you’d expect such a person to write: clearly, but assuming a bit of a background in Philosophy. Happily, yours truly does have that background and is here to help 😉
The purpose of any good model is to present a simplified representation of reality, in order to accentuate crucial features and make them more analytically tractable. The question, therefore, is whether the kids on boxes provide a useful model for thinking about the sorts of distribution problems that arise in DEI contexts. Most egalitarian philosophers, I think, would say that it is a bad model.
I’ve taken the quotations out of order because the overall argument makes more sense when presented this way. So we start from the position that the kids on boxes meme isn’t particularly useful.
The contrast that is drawn in the meme, which was originally intended to illustrate the distinction between “equality of opportunity” and “equality of outcome,” captures the way that people used to think about issues of equality up until the late 1960s, before the publication of John Rawls’s A Theory of Justice in 1971. After that, pretty much everyone came to agree that the opportunity/outcome distinction was neither useful nor coherent. The really important question was not when one chose to equalize, but rather what one intended to equalize.
So we need to figure out what we’re ‘equalising’ here. Is it the number of boxes? Or the quality of view?
The most immediate problem with the meme is that it does not present an accepted definition of the term “equity,” but rather a stipulative redefinition, which does not correspond very well to how the term has historically been used… [T]he graphic was originally drawn to illustrate the contrast between equality of opportunity and equality of outcome. Later on, after it was reproduced umpteen times, someone changed the labels, and somehow the idea that “equality of outcome” should be called “equity” stuck.
To recap: we’ve got an outdated notion of ‘equality of opportunity’ vs ‘equality of outcome’ which has been made even more problematic by the meme relabelling the latter as ‘equity’. It’s not a defensible philosophical position, partly because ‘equity’ doesn’t have a universally accepted definition and is usually seen as a looser standard than strict equality.
My suspicion is that when DEI ideas were first taking shape, people gravitated toward “equity” language precisely because it had this looseness about it. Because people are different (i.e. diverse), one should not expect perfect equality, but rather just equity. And for all I know, this may have been what the person who modified the kids on boxes meme was thinking, suggesting that the allocation of boxes to kids should be responsive to the different characteristics of the kids. The unfortunate result, however, is that instead of introducing a looser standard of equality, the meme wound up saddling DEI with a commitment to an extremely strict, controversial conception of equality (i.e. equality of outcome), which no reasonable person actually endorses as a general principle. Furthermore, this was not achieved through argument, but merely through persuasive definition.
And this, dear reader, is why Philosophy is such an important subject. If you don’t get these kinds of things right, then it has downstream implications. ‘Equity’ might seem like a reasonable thing to aim for, but if you don’t know what it means, then you’re going to run into trouble.
Setting aside these terminological issues and focusing on equality of outcome, the next big problem with the meme is that it commits DEI proponents to a conception of equality that is somewhere to the left of the most left-wing view defended by left-wing philosophers. Indeed, one of the major objectives of theorists in the “equality of what?” debate was to reformulate egalitarianism in such a way as to avoid the obvious objections to the simple-minded conception of equality of outcome that used to prevail in public debates (and that is represented nicely in the meme).
The ‘obvious objections’ mentioned above are to do with choice. Intuitively, we don’t think that people who have made poor choices in life should be treated the same as those who have wound up with less because of circumstances beyond their control.
[Philosophers] took the choice/circumstance distinction and turned it into the fundamental justification for egalitarianism, arguing that our most basic reason for caring about equality is our desire to neutralize the effects of bad luck. According to this view, when we look at the kids on boxes meme and agree to take the box away from the tall guy and give it to the short kid, the reason we make this judgment is because height is an unchosen characteristic – it’s not the short kid’s fault that he’s short. The idea is not that everyone should get exactly the same outcome, but that we should not be allowing unchosen differences between persons to determine outcomes.
Framed like that, DEI would apply across the board, to people who face inequality through no fault of their own. It’s a shame that we took a meme-based approach to policy rather than a philosophical one. But then, we live in 2025 where only a small proportion of people are willing to take a nuanced view.
You don’t have to agree with this idea to see that it represents a very different way of thinking about equality. And from this perspective, the problem with the meme is that it dredges up an old, discredited view of equality, that can easily be undermined just by pointing to cases where individuals wind up with less because of choices they have made. A lot of the excitement generated by luck egalitarianism was based on the perception that we had overcome a significant error in thinking about equality, and could now move on to discussion of more defensible conceptions. And yet all it took was a single meme to turn back the clock by 50 years!
Source: In Due Course
Image: Modified from an original used in the above blog post.
I’m 100% positive people are going to talk to their cars

We live in the midst of a loneliness epidemic, especially for men. A recent Harvard Business Review article showed the difference between what people said they were using AI for this year compared with last year.
“Generating ideas” has gone from first to sixth place, while “Therapy/companionship” has moved from second to first. “Finding purpose” is a new use case, coming straight in at third. There’s a paywall on the HBR article, so you can find the report here. Note that this was, in the words of the author, Marc Zao-Sanders, “a rigorous, expert-driven curation of public discourse, sourced primarily from Reddit forums.” Beyond that, no methodology is provided.
That being said, I’m using the report by way of introduction to the following extract from an article by Jay Springett, who reckons that soon everybody will be talking to their car. I mean, I already talk to my Polestar 2 as it has Google Assistant built in, but he means talking in a deep and meaningful way.
For me, this is a case of not if, but when. It’s going to challenge notions of privacy, but also intimacy, infidelity, and loss (when providers inevitably shut down a service).
Consider the average American commuter: 60 minutes a day, mostly alone, in the car. The vehicle as liminal space. Neither home nor work. Private and intimate. I’m 100% positive people are going to talk to their cars. First for fun. Then for directions. Then about their lives. Their feelings. Their grief, their divorce.
And now that OpenAI has also introduced Memory (at least in the US) the car might remember everything you’ve ever told it. 😬
There’s a meaning crisis going on, which means there is a gaping emotional void waiting to be filled by a good listener that’s found in the safety of a car. Some people, especially men, already love their cars. What happens when the car appears to care for them back?
Her becomes a lot more plausible when the AI you fall in love with is also a car.
Source: thejaymo
Image: (shared by various people on LinkedIn)