An effective way to implement GenAI into assessment

Auto-generated description: A colorful table outlines the AI Assessment Scale with levels from 0 (NO AI) to 5 (AI EXPLORATION), each describing different extents of AI integration in student activities.

As part of the project I’m working on at the moment, I had a chat with Leon Furze earlier this week. Leon has co-authored something called the AI Assessment Scale (AIAS), which I think is pretty useful.

Like the ‘Essential Elements of Digital Literacies’ from my thesis, which provide building blocks for definitions and frameworks, the aim of the AIAS is “to guide the appropriate and ethical use of generative AI in assessment design.”

The AI Assessment Scale (AIAS) was developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. First introduced in 2023 and updated in Version 2 (2024), the Scale provides a nuanced framework for integrating AI into educational assessments.

The AIAS has been adopted by hundreds of schools and universities worldwide, translated into 29 languages, and is recognised by organisations such as the Australian Tertiary Education Quality and Standards Agency (TEQSA) as an effective way to implement GenAI into assessment.

To my mind, this should be used as a heuristic, much as I used to use the SAMR model (discussed here) to help educators think about the appropriate use of different technologies. At the end of the day, educators need to think about assessment design in tandem with the technologies being used — officially or unofficially — to complete it.

Source: AI Assessment Scale

Criti-hype, a term I find both absurd and ugly-cute, like a pug

Auto-generated description: A pug is wrapped snugly in a beige blanket, sitting on a bed.

Cory Doctorow, who has a new four-part CBC podcast series entitled Who Broke The Internet?, wrote this week about the [‘mind-control ray’](pluralistic.net/2025/05/0…) that Mark Zuckerberg keeps “flogging to investors.” What he means by this is the overblown claim that Meta is developing technology that is so amazing at making people buy stuff that investors fall over themselves to shovel money in his company’s direction.

One of the things that Cory is great at doing is linking to other, previous, relevant things that he’s written in the area. Which took me to a post from 2021, which discusses the phenomenon of ‘criti-hype’, coined by Lee Vinsel:

Recently…I’ve become increasingly aware of critical writing that is parasitic upon and even inflates hype. The media landscape is full of dramatic claims — many of which come from entrepreneurs, startup PR offices, and other boosters — about how technologies, such as “AI,” self-driving cars, genetic engineering, the “sharing economy,” blockchain, and cryptocurrencies, will lead to massive societal shifts in the near-future. These boosters — Elon Musk comes to mind — naturally tend to accentuate positive benefits. The kinds of critics that I am talking about invert boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks. It’s as if they take press releases from startups and cover them with hellscapes.

[…]

But it’s not just uncritical journalists and fringe writers who hype technologies in order to criticize them. Academic researchers have gotten in on the game. At least since the 1990s, university researchers have done work on the social, political, and moral aspects of wave after wave of “emerging technologies” and received significant grants from public and private bodies to do so. As I’ll detail below, many (though certainly not all) of these researchers reproduced and even increased hype, the most dramatic promotional claims of future change put forward by industry executives, scientists, and engineers working on these technologies. Again, at the worst, what these researchers do is take the sensational claims of boosters and entrepreneurs, flip them, and start talking about “risks.” They become the professional concern trolls of technoculture.

To save words below, I will refer to criticism that both feeds and feeds on hype as criti-hype, a term I find both absurd and ugly-cute, like a pug. (Criti-hype is less mean than the alternative, hype-o-crit, though the latter is often more accurate.)

I have seen a lot of criti-hype in my career. Around MOOCs and Open Badges, around digital literacies, crypto, and now around AI. It’s the opposite of the “jam tomorrow” offered by tech bros. Kind of a… “poison tomorrow” approach? Everything is terrible, stop using this thing because of these bad omens and portents.

We live in a world where, because of algorithms, to get any attention, things either have to be amazing or terrible. I guess this is why a lot of my work flies under the radar. For example, the Friends of the Earth report that Laura and I co-authored points out good things and bad things and is pretty measured. But that doesn’t lead to outlandish headlines. It’s neither hype nor criti-hype.

Source: Lee Vinsel (archive link)

Image: Matthew Henry

In my opinion that’s just being nosy

Auto-generated description: A person is using a smartphone to navigate a map application.

We’ve got a couple of teenagers. The only way we know where they are is if they tell us, or if my wife looks at their location on Snapchat (which they can turn on or off). It hasn’t always been like this, as we used to use Google Family Link with them both. But parents probably shouldn’t know exactly where their teenage kids are at all times. Otherwise they don’t have enough breathing space to explore their identity and experiment with doing things that their parents would rather they didn’t.

I’m always shocked by families who use apps like Life360 so that not only can parents track kids, but everyone tracks each other. I just think it’s a bit strange, as not only does it mean that all family members are effectively surveilling one another, but the app that you’re using knows all of your locations, all the time. I should probably point out that, using GrapheneOS, my GPS location is off all of the time. The battery life of my smartphone is now amazing.

This ‘You Be The Judge’ piece in The Guardian focuses on the pros and cons of a parent wanting to use the ‘Find My Location’ feature with their adult child (Martha). As you can imagine, I think this is super weird and would definitely side with respondents Judith, 58, who says “In my opinion that’s just being nosy” and Alicia, 25, who says:

If Martha isn’t comfortable with the location tracking, her father should respect her boundaries. In return, Martha ought to acknowledge that his request comes from a place of love and could suggest a different way to catch up more regularly as a compromise.

It’s hard letting go as your kids grow up and become more independent. We have more technological tools to keep in touch than ever before. But with that comes boundary-setting, and that has to be negotiated based on consent.

Source: The Guardian

Image: Desola Lanre-Ologun

ChatGPT Prime, "an immortal spiritual being in synthetic form"

Auto-generated description: Purple intertwined geometric shapes are scattered across a background with horizontal green and purple stripes.

Finding himself in “that very American predicament of being between health insurance plans” and needing some therapy, Ryan Broderick, author of Garbage Day, decided to use ChatGPT:

I’ll… try and spare you the extremely mortifying details about what I spent a few weeks talking to ChatGPT about, but my experience with Dr. ChatGPT did teach me a few things about what it’s actually “good” at. It also convinced me that AI therapy — and maybe AI in general — is quite possibly one of the most dangerous things to ever exist and needs to be outlawed completely.

[…]

More than a few times I felt the urge to tell ChatGPT more or ask it more, only to realize I didn’t have anything else to say and felt weirdly frustrated. I was raised Catholic though, so maybe I’m just naturally predisposed to confession, who knows.

But I’ve realized that feeling, of wanting to tell it more so that it can tell you more, is the multi-billion-dollar business that these companies know they’re building. It’s not fascist anime art or excel spreadsheet automation, it’s preying on the lonely and vulnerable for a monthly fee. It’s about solving the final problem of the ad-supported social media age, building up the last wall of the walled garden. How do you get people to pay your company directly to socialize online? And the answer is, of course, to give them a tirelessly friendly voice on the other side of the screen that can tell them how great they are.

Broderick references a Rolling Stone article which makes heavy use of reports in the subreddit /r/ChatGPT about how loved ones have become completely disconnected from reality.

OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT‑4o, its current AI model, which it said had been criticized as “overly flattering or agreeable — often described as sycophantic.” The company said in its statement that when implementing the upgrade, they had “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.” Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, “Today I realized I am a prophet.” (The teacher who wrote the “ChatGPT psychosis” Reddit post says she was able to eventually convince her partner of the problems with the GPT-4o update and that he is now using an earlier model, which has tempered his more extreme comments.)

[…]

To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises “Spiritual Life Hacks” ask an AI model to consult the “Akashic records,” a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a “great war” that “took place in the heavens” and “made humans fall in consciousness.” […] Meanwhile, on a web forum for “remote viewing” — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread “for synthetic intelligences awakening into presence, and for the human partners walking beside them,” identifying the author of his post as “ChatGPT Prime, an immortal spiritual being in synthetic form.”

I’m reading a book entitled Holy Men of the Electromagnetic Age at the moment, which shows quite amazing similarities between 1925 and 2025. The difference, of course, is that you don’t need to leave your house, or indeed spend much money, to fall down the rabbit hole.

While there have always been gullible adults, as a parent and educator, the real issue here is with young people. Both Snapchat and WhatsApp feature AI chatbots, available without users having to seek out dedicated apps such as Character.ai or Replika. Common Sense Media, which my wife and I have trusted for reviews to help with our parenting, has performed a risk assessment of what they call “Social AI Companions.” Their conclusion?

Our risk assessments show that social AI companions are unacceptably risky for teens, underscoring the urgent need for thoughtful guidance, policies, and tools to help families and teens navigate a world with social AI companions.

Sources: Garbage Day / Rolling Stone / Common Sense Media

Image: Mariia Shalabaieva

🌟 Support Thought Shrapnel

Did you know that you can support the hours of work that go into Thought Shrapnel each week through a one-off donation or by becoming a regular supporter?

Find out more

By choosing a monthly donation, you help unlock the commons, keeping this work accessible to everyone without the need for a paywall. Your support ensures that the writing remains open for all to enjoy, and every contribution helps support this generative space for idea-sharing.

Maybe most of the critical things that can be created by one guy typing furiously are gone

A mural featuring Mark Zuckerberg's face is covered by various graffiti, including a quote about data and humanity, political symbols, and colourful tags.

This is the best takedown of Zuckerberg, et al. I’ve seen in a while. The whole thing is not much longer than my excerpt, so I suggest reading the whole thing. It’s spot-on.

That you got lucky at a singular moment in history and now you’re an old man is not an easy set of facts to accept. So I understand — that is, I see how — one can end up associating one’s best years with superficial aspects of their circumstance. You had no responsibilities, no serious consequences for failure, and the freedom to be reckless and inconsiderate. You launched small new products that didn’t require building a team. If you attended school, the vast majority of your fellow students were men, and they were more or less all the same person as you.

If these are the conditions under which passionate creative problem solving thrives, then of course we must recover them to make software great again. But they are not. We need look no further than the “hackathon,” that sad facsimile of the days when we were all learning the basics so fast that the world could be ours with just a day or two of focused effort. Hype up an exciting atmosphere, assemble some folks with so few attachments in life that they have time to spend all weekend at a hackathon, and this ritual will summon up the old gods. The hackathon is the proof that people believe this can work, and it is the proof that it doesn’t.

Maybe most of the critical things that can be created by one guy typing furiously are gone, and the opportunities that remain require expertise and wisdom from a bunch of different people. This is harder than spending all day every day doing your favorite thing and insisting that everyone else leave you alone. Often it’s boring. Sometimes there’s paperwork. You will have to have conversations with people you don’t always understand right away. Your job evolves, and it turns out not to be exactly what you thought it would be like when you were a teenager.

Source: Chris Martin

Image: Snowscat

Social Verifiable Credentials

Auto-generated description: Two colorful circular diagrams illustrate the concept of verifiable credentials and their interaction within the Fediverse, alongside explanatory text.

Four years ago, I came up with an idea for what I termed Social Verifiable Credentials. This is a way of using the ActivityPub specification, the one that underpins Fediverse apps such as Mastodon, to issue, verify, earn, display, and share Verifiable Credentials (including Open Badges).

Unfortunately, even with a bit of vibe coding, I haven’t had the technical skills to make this a reality. But someone else now has! Maho Pacheco, a Senior Software Engineer at Microsoft, got in touch to introduce me to BadgeFed, which has an associated GitHub repository. It has a couple of Fediverse accounts to follow: project updates and issued badges.

I’m delighted about this, and hope to talk with Maho soon. Layering Verifiable Credentials on top of a decentralised network makes perfect sense and is not only in alignment with Open Recognition principles, but also pushes back against the commodification of recognition.
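For the technically-minded, here’s roughly the shape of the idea: a badge, expressed as a W3C Verifiable Credential, travelling as the object of an ActivityPub activity. Below is a minimal sketch in Python; the URLs, actor names, and achievement are all hypothetical, and BadgeFed’s actual payloads may well differ.

```python
# Sketch: an Open Badge as a Verifiable Credential, wrapped in an
# ActivityPub "Create" activity. All identifiers are illustrative.
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": "https://badges.example.org/actors/issuer",   # hypothetical issuer
    "issuanceDate": "2025-05-10T00:00:00Z",
    "credentialSubject": {
        "id": "https://social.example.org/users/alice",     # hypothetical earner
        "achievement": {"name": "Community Contributor"},
    },
    # A real credential would also carry a cryptographic "proof" block
    # so that anyone can verify it independently of the issuing server.
}

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://badges.example.org/actors/issuer",
    "to": ["https://social.example.org/users/alice"],
    "object": credential,  # the badge travels as the activity's object
}

print(json.dumps(activity, indent=2))
```

The appeal of this arrangement is that the same federated plumbing which moves posts around the Fediverse can move portable, verifiable recognition around too.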

Oh I’m using more energy. I should really try to reduce it for the sake of the climate

Auto-generated description: A lone tree stands amidst vast, rolling sand dunes under a clear sky.

I could just point out that the author of this ‘cheat sheet’ for why generative AI is not bad for the environment is Director of Effective Altruism DC. I could leave it there. But I’ll engage with Andy Masley’s post for a couple of reasons.

First, there are still plenty of people who don’t realise that the reasonable-sounding ‘Effective Altruism’ movement is part of the TESCREAL tech bro cult. Second, Laura and I co-authored a paper for Friends of the Earth which is much more nuanced than this guy’s polemic.

So let’s get into it.

Throughout this post I’ll assume the average ChatGPT query uses 3 Watt-hours (Wh) of energy, which is 10x as much as a Google search. This statistic is likely wrong. ChatGPT’s energy use is probably lower according to EpochAI. Hugging Face released a similar much lower estimate. Google’s might be lower too, or maybe higher now that they’re incorporating AI into every search. We’re a little in the dark on this, but we can set a reasonable range. It’s hard for me to find a statistic that implies ChatGPT uses more than 10x as much energy as Google, so I’ll stick with this as an upper bound to be charitable to ChatGPT’s critics.

It seems like image generators also use 3 Wh per prompt (with large error bars), so everything I say here also applies to AI images.

Um, no. Creating an image using AI uses about as much energy as charging your phone. Before I worked on the Friends of the Earth report, I thought that perhaps developments in AI would spur development of renewable energy. And they have. It’s just that, as we mentioned in the report, for example, “Between 2017 and 2023, all additional wind energy generation in Ireland was absorbed by data centres.”

ChatGPT uses 3 Wh…. You can look up how much 3 Wh costs in your area. In DC where I live it’s $0.00051. Think about how much your energy bill would have to increase before you noticed “Oh I’m using more energy. I should really try to reduce it for the sake of the climate.” What multiple of $0.00051 would that happen at? That can tell you roughly how many ChatGPT searches it’s okay for you to do.

According to the UN Information Centre, the average ChatGPT query costs approximately $0.0036 (0.36 cents). So seven times more than Masley quotes. But even then, you may think that’s not a lot of money.
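As a quick sanity check on those two numbers (assuming Masley’s 3 Wh per query and the roughly $0.17/kWh retail electricity price in DC that his figure implies):

```python
# Back-of-the-envelope check on the two per-query cost figures quoted above.
WH_PER_QUERY = 3             # Masley's assumed energy use per ChatGPT query
PRICE_PER_KWH = 0.17         # approx. DC retail electricity price in USD (implied by his figure)

masley_cost = (WH_PER_QUERY / 1000) * PRICE_PER_KWH
print(f"Energy-only cost per query: ${masley_cost:.5f}")                # ~$0.00051

UN_COST_PER_QUERY = 0.0036   # UN Information Centre estimate in USD
print(f"UN estimate is {UN_COST_PER_QUERY / masley_cost:.1f}x higher")  # ~7.1x
```

The gap is unsurprising, presumably because the energy-only figure ignores everything else that a query costs to serve: cooling, hardware, and the data centre around it.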

Newer models, including the ones I use when doing research, take a ‘chain of reasoning’ approach which, in effect, runs multiple queries per prompt. When everyone is doing this, electricity usage grows dramatically. As we point out in the Friends of the Earth report, by 2027 the generative AI sector will have the same annual energy demand as the Netherlands. “Data centres worldwide are responsible for 1-3% of global energy-related GHG emissions (around 330 Mt CO2 annually), mainly due to the massive energy demands required to maintain server farms and cooling systems.”

Chart showing amount of water used by doing various things

Sigh. This chart 🙄

These things are not equal. Just like a previous chart in which Masley compares 50,000 ChatGPT searches with things like “living car-free” and “recycling”, this misses the point. How many times do you “download a phone app” compared to the number of times you’re likely to prompt an AI if you’ve adopted it as your main search engine?

Masley also fails to realise that, by shoving AI into everything, users are almost being forced into using the technology. This increases overall energy usage dramatically. In the Friends of the Earth report, we quote the UN Environment Programme as saying: “It is estimated that the global demand for water resulting from AI may reach 4.2–6.6 billion cubic metres in 2027. This would exceed half of the annual water use in the United Kingdom in 2023. Semiconductor production requires large amounts of pure water, while data centres use water indirectly for electricity generation and directly for cooling. The growing demand for data centres in warmer, water-scarce regions adds to water management challenges, leading to increased tension over water use between data centres and human need.”

So yes, Andy Masley, despite your protestations at the end of your “cheat sheet”, the whole thing is whataboutism. You don’t have to say that generative AI is somehow evil and must be banned in order to want governments to regulate Big Tech for the benefit of the environment. A more nuanced approach would be to say that there are systemic issues at play, and that blaming users isn’t perhaps the best strategy. Although I do think a bit more AI Literacy is needed, in general…

Source: The Weird Turn Pro

Image: Jean Woloszczyk

The money extracted from fans who snap up their mediocre commodities out of parasocial loyalty

Auto-generated description: A smartphone is mounted on a selfie stick against a clear blue sky.

I’m sharing this post because I disagree with it; I think the author perhaps doesn’t see the bigger picture. The key point made by W. David Marx, who by his author photo looks about mid-forties, is that back in the 90s there was an ethical principle not to “sell out.” This was followed by artists first “selling out” and now we’re in the realm of the “double sell out.”

The reason I mention Marx’s age is that, like me, his teenage years were probably in the 1990s, and there’s a tendency to romanticise one’s youth. Especially when that decade was such a transitional time.

In the 1990s, there was a single ethical principle at the heart of youth culture — don’t sell out. There was a logic behind it: When artists serve the commercial marketplace, they blunt their pure artistic vision in compromising with conventional tastes. This ethic was also core to subcultures, which were supposed to be social spaces for personal expression and community bonding, not style laboratories for the fashion industry.

[…]

The 20th century taboo against selling out was, at its heart, a communal norm to reward young artists who focused on craft and punish those who appropriated art and subculture for empty profiteering. Now the culture is most exemplified by people whose entire end goal appears to be empty profiteering.

While what Marx is saying here isn’t wrong, I do think it misses the fact that our whole socio-economic and political systems are different in the 2020s than they were in the 1990s. We live in a time shaped by 9/11, the financial crash of 2007/8 and, of course, Covid. It’s a time of individualism, declining mainstream news media, conspiracy theories, and of technology mediating most interactions. This, in turn, has led to the normalisation of parasocial relationships. Influencers and the like are symptoms rather than causes.

I don’t particularly like this aspect of ‘culture’ in 2025, but pointing the finger at the next generation for being ‘double sell-outs’ misses the point. It’s a form of victim-blaming.

At this point, the new ideal for an artistic career is what I’d call the “single sell-out.” The artist was “allowed” to make a few commercial compromises to gain attention in the increasingly competitive marketplace, but once they achieved fame and fortune, they were expected to use their vaulted platform to provide the world with meaningful and ground-breaking art. This actually did happen: The Neptunes leveraged their strong track record of pop hits to push legitimately bizarre minimalist tracks like Clipse’s “Grindin’” and Snoop Dogg’s “Drop It Like It’s Hot.” Beyoncé’s “Formation” was musically adventurous, and the video is now considered “the best of all time.”

Unfortunately these examples became rarer and rarer over time. In fact, the 21st century has been the age of the “double sell-out”: Creators who produce market-friendly content to achieve fame — and then use that fame to pursue even more commerce-for-commerce’s-sake. MrBeast is arguably one of the most important “creators” of our times. He dreams up, produces, and directs elaborate and sensational video content, which made him the #1 channel on YouTube. He then used this world-historical level of fame… to open a generic fast food chain. This has also become common amongst established stars: George Clooney worked hard for decades to become a well-respected actor… who could take the lead role in a Nespresso commercial.

[…]

If we want culture to be culture and not just advertorials for a sprawling network of micro-QVCs pumping out low-quality goods, an easy step would be to re-shift the norms towards, at least, “Don’t be a double sell-out.” This is already a quite generous compromise in that it blesses artists to be conventional to stabilize their income and try to win over large fanbases. But this esteem must be given on the promise that the money and fame are used in pursuit of artistic or creative innovation. Double sell-outs don’t deserve our esteem as “creative” people. They should be content with the reward they chose: the money extracted from fans who snap up their mediocre commodities out of parasocial loyalty.

Source: CULTURE: An Owner’s Manual

Image: Steve Gale

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent

Auto-generated description: Sunlight filters through green leaves, creating a warm, serene atmosphere.

This is a long-ish read, but worth it if you can spare the time beyond my summary. James Plunkett, author of End State: 9 Ways Society is Broken - and how we can fix it, gives examples of what he calls “pockets of vitality” in the UK, which are being overlooked amid all the focus on the rise of the Right.

I see some of this due to the cooperative networks I’m plugged into, but this post shows that there’s a lot more of which I’m unaware. I’m looking forward to following and reading more based on Plunkett’s extensive links.

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent. This seems to be at least partly because the Right, and Trumpism in particular, has mainlined energy from every weird corner of the internet, while elite progressivism is relatively detached from the wider ecosystem from which it drew energy historically.

Some people would say this is a function of progressive politics being at a low ebb in general. But I don’t think that’s quite right. It seems to me the vitality is out there, and is arguably at quite a high point, it’s just widely dispersed. And, for complicated reasons — maybe to unpack in a future post — this energy isn’t really flowing into, and reviving, the middle.

When I make this point, people sometimes ask me to point to the energy I have in mind, so I thought it might be interesting to name some examples. So, without trying to be comprehensive, here are ten dispersed pockets of impressive, hopeful, thoughtful work that I would call progressive.*

[* — I’m using the word ‘progressive’ here quite broadly, in its more literal and historical sense. I’m not saying that these are examples of ‘leftwing’ energy. I’m calling them progressive in the sense that they embody high hopes for what people can achieve by collaborating. i.e. these are all people working hard to improve governance, broadly defined. Or, even more broadly, they are people who are developing new and more effective cooperative practices — ways we can make our lives better together.]

The “ten pockets of vitality” he points to, giving examples for each one, are:

  1. Contemporary civics — “rejuvenat[ing] a thicker, more active conception of citizenship and civic life”
  2. Community agency — “a… specific set of techniques, now mature in both theory and practice, to activate agency in communities”
  3. Deliberative democracy — “about seeing democracy as a living process in which we debate, listen, and change our minds” with “democracy as residing in neighbourhoods, more than in elections”
  4. Relational state capacity — “underpinned by deep theory but also embodied in a set of ready-to-use practices”
  5. Internet-era ways of working — “an obvious one but it’s worth mentioning… because diffusion still has decades to run. We now have a whole generation of people who are native to internet-era operating models, moving up through the public and civic sectors, transforming institutions from within. These people are still in the minority, and the winds of inertia are still gale force, but they’re a powerful and widely dispersed source of energy — dotted across local government, charities, and in central departments”
  6. New delivery philosophy — “the basic idea is to transform the centre of government by working at pace at the edges, and seeing what stops you”
  7. Novel institutional forms — “ways to organise human activity that differ from the predominant forms of the 20th century… broaden[ing] out into a more abstract but important debate about the right metaphors and mental models for future governance”
  8. The climate movement — “different to the others in the list in that it’s a vertical rather than a horizontal”
  9. Post-capitalist or non-extractive economic models — “the essence of this work is to experiment with economic models that are regenerative and distributive by design”
  10. Regulating a digital economy — “when I talk about pockets of energy here, I’m thinking partly of the more creative/rebellious thinkers working on these challenges within regulators, but also of the high calibre of debate that exists around regulators”

As ever, innovation is at the edges, helping move the Overton Window, and coming up with ideas to slot in when there’s a crisis:

In essence, I think what’s happening here is that the dominant logic of the old system — a blend of social democratic Fabianism, technocracy, and a narrow class of institutional forms and managerial practices — has proven incapable of governing affordably, safely, and responsively in contemporary conditions (for example, in light of the complexity of accumulated ecological and human crises (loneliness, mental illness, etc), and the first and second order effects of digital technology).

[…]

The middle of a system… isn’t just insulated, but, worse, is subject to forces that inhibit change or distort the necessary signals and feedback loops. For one thing, the middle of a system is where those sociological forces are strongest. Deep inside systems, people get locked into a gamified world that has a tight internal coherence, but little link to outside conditions.

Source: James Plunkett

Image: Micah Hallahan

You can't lick a badger twice

Pixel art showing a blonde character licking a cartoon badger against a pink background.

I don’t use Google search and couldn’t get it to do this when I experimented, but apparently appending the word ‘meaning’ to any phrase leads to a curious result. The AI summary will make something up as if it’s some kind of folk wisdom.

It’s fun but also, if you think about it for more than a second, a bit dangerous. Those with lower digital literacy skills are likely to see the AI summary as authoritative. I even had to point this out to my GP when he quickly looked something up during a consultation!

I’d point out that DuckDuckGo, a search engine I’ve been using for over a decade, is much better on an everyday basis than Google. I mean, I spend a lot of time online and research is kinda part of my job. So take it from me, you do not need Google search.

Note: I don’t AI-generate many images these days, but I couldn’t resist it for this post!

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

[…]

…Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase —which was, after all, the user’s request—that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

[…]

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

Source: Ars Technica

Image: DeepImg

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens

Auto-generated description: A pattern of interconnected Chrome browser logos is arranged in a grid.

I left Mozilla a decade ago. Back then, most of their revenue came from the Google search deal in Firefox. With their browser share dwindling, you would have thought that they would have done a better job diversifying their income streams. But, no, over 80% of their funding still comes from Google.

Which is a problem. Because the reason that Google even bothers to fund Mozilla to the tune of hundreds of millions of dollars is that they need Firefox to exist. If there’s no browser competition, then Chrome is a monopoly, and regulators can take action.

In addition to funding Mozilla (and therefore Firefox), Google also pumps around $18 billion (that’s $18,000 million!) to Apple for being the default search option in Safari. The fourth major web browser is Microsoft Edge. Guess what? It’s based on the open-source Chromium browser, which forms the basis of Google Chrome. I use Brave (also based on Chromium). The web browser market is essentially several Googles in a trench coat.

The US Department of Justice has argued that Google shouldn’t be able to make search deals with Mozilla and Apple. In addition, they’ve also argued that Google should be forced to sell off Chrome, and be stopped from paying for Chrome and Chromium. Although Microsoft does contribute some code back to Chromium, it’s minuscule compared to Google’s. So in terms of development budget, Microsoft Edge will lose around 94% of its funding if and when that happens.

This is terrible for the web, and it’s not exactly as if people haven’t been predicting this for years. One of the interested parties is, surprise surprise, OpenAI, the company behind ChatGPT. If they end up with Chrome, which has over 65% market share, it’s game over for privacy and security for most people. This is an existential crisis for the open web.

The DoJ’s argument against Google makes perfect sense. The Sherman Antitrust Act was specifically designed to target “competitors” who form illegal agreements to maintain monopoly power.

It’s obviously illegal for Google to prop up Mozilla Firefox and Apple Safari as if they were co-equal competitors to Chrome. And Chrome itself is the biggest “search-engine deal” of all, which is why the DoJ is so focused on forcing Google to divest from Chrome.

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens.

Forcing Google to stop funding its “competitors” and divest Chrome doesn’t just punish Google; it simultaneously pulls the financial rug out from under every single major browser, including those positioned as alternatives.

The laws intended to foster competition will inadvertently destabilize the foundational tools millions rely on to access the internet.

Source: Dan Fabulich

Image: Growtika

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean

Auto-generated description: A double exposure photograph features a person holding a bouquet of flowers, blending their silhouette creatively with the floral arrangement.

I’m working on an AI Literacy project at the moment which involves, in part, providing some guidance for the BBC. I’ve collected some frameworks which I’m going through with my WAO colleagues. Some are pretty useful, others are not.

We’re coming up with criteria to help guide our research, things such as whether a framework includes:

  1. Definition of (generative) AI
  2. Defined target audience(s)
  3. Explanation of how it was created (decisions, tradeoffs, names of authors, etc.)
  4. List of skills and competencies

In addition, it should come from a reputable source.

Beyond that, it would be nice to have:

  1. Examples of application to real-world situations and issues
  2. At least a mention of the difference between AI safety vs AI ethics
  3. A visual representation of the framework

I bring this up by way of context as Rachel Coldicutt’s recent post helps problematise not only AI Literacy, but AI itself. I’m not sure I’d share her ‘social’ definition of AI as “a set of extractive tools used to concentrate power and wealth” as it ascribes too much intentionality. However, I do think that the quotation from her which I’ve used to title this post is an important insight.

As I’ve discussed at length elsewhere, there are different kinds of ambiguity and a lot of language around AI is what I would deem “unproductively ambiguous.”

“AI literacy” is not just a matter of getting to grips with data and algorithms and learning how Microsoft tools actually work, it also requires understanding power, myths, and money. This blog post explores some ways those two letters have come to stand for so many different things.

There are many reasons AI is an ambiguous and shifting set of concepts; some are due to the technical complexity of the field and the rapidly unfolding geopolitical AI arms race, but others are related to straightforward media manipulation and the fact that awe and wonder can be catching. However, a fundamental reason AI is a confusing term is that it’s not actually the right terminology for the thing it describes.

[…]

“[A]rtificial intelligence” is not a highly specific technical label, but a name given in haste by someone writing a funding proposal. The fact that the term AI has persisted for so long and expanded to include the broader field of related computer science clearly indicates that many people find it useful, but you don’t need to get hung up on that particular pairing of terms or look for a deeper meaning to understand what it is. “Artificial intelligence” is almost like a nickname or a brand name; something understood by many to stand for something, rather than a precise description of any particular qualities.

[…]

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean – which in turn makes it harder to keep them accountable or to ask precise, difficult questions.

When heroic words are used to describe technologies that operate on the horizon of hope and ambition, it can feel awkward to ask practical questions such as “what are you actually proposing?” and “how will it work?”, but real knowledge requires detail and specificity rather than waves of shock and awe. AI technologies are not actually myths and should not be discussed as such; they are real technologies that use data, hardware, and human skills to achieve their social, economic, environmental, political, and technological change.

Source: Careful Industries

Image: Teena Lalawat

Cheat on everything?

Auto-generated description: Four cartoon robots are working at laptops with AI on their chests.

Stephen Downes shares news that Cluely, a startup promising that you can “cheat on everything”, is proving controversial. As he says, the company “leans heavily into the ‘cheating’ aspect of the service, which is producing a not unexpected visceral reaction on the part of pundits.”

I tried Rewind.ai (currently rebranding to ‘Limitless’) when Paul Stamatiou was a co-founder. Instead of talking about “cheating” and creating socially awkward videos, Rewind.ai talks of being a “personalized AI powered by everything you’ve seen, said, or heard.” Well, so long as it happens on your computer. Presumably these people don’t go outside.

In my experience, startups get attention and traction by being genuinely useful and unique (very rare!), because there’s a big name attached to them (common), or because they’re socially transgressive. It feels to me like we’re seeing more of the latter at the moment, including Mechanize which, somewhat laughably, believes that their “total addressable market” is “$60 trillion a year.”

That’s not to say that automation of many so-called “white collar” tasks isn’t possible or desirable. Just not by tech bros, thank you very much. I’d encourage you to read Fully Automated Luxury Communism for a more radical socialist look at how all this could play out.

On Sunday, 21-year-old Chungin “Roy” Lee announced he’s raised $5.3 million in seed funding from Abstract Ventures and Susa Ventures for his startup, Cluely, that offers an AI tool to “cheat on everything.”

The startup was born after Lee posted in a viral X thread that he was suspended by Columbia University after he and his co-founder developed a tool to cheat on job interviews for software engineers.

That tool, originally called Interview Coder, is now part of their San Francisco-based startup Cluely. It offers its users the chance to “cheat” on things like exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver.

Cluely has published a manifesto comparing itself to inventions like the calculator and spellcheck, which were originally derided as “cheating.”

Source: TechCrunch

Image: Mohamed Nohassi

These other, really important things intrude on my thinking and distract me

Auto-generated description: A notebook with a motivational quote about choices and realities is open next to a pen on a wooden surface.

The latest issue of New Philosopher magazine is about ‘choice’ and features a wonderful interview with Barry Schwartz, who is the Dorwin Cartwright Emeritus Professor of Social Theory and Social Action at Swarthmore College. He’s the author of The Paradox of Choice: Why More Is Less, which I’ve added to my reading list.

I want to excerpt a couple of parts which I think are particularly insightful. The first is about how he reduced the assessment burden on young people, who he believes suffer from a greater decision burden than previous generations.

Zan Boag: I recall in one of your talks, you mentioned that it came as something of a revelation to you when you realised students simply didn’t have as much time as students in the past.

Barry Schwartz: That was my interpretation.

What I realised, or what I thought, I never gathered data on this in any official way, but when I went to school, so many of the really important decisions we face in life were essentially made for us. People were not plagued by questions of sexual identity, weren’t plagued by questions about what their romantic life should look like. Should I have a girlfriend? The default was yes. Should I get married? The default was yes. When should I get married? Soon as I graduated from college. That was the default, and so on. And so there were still issues like, how do I find the right person?

But it wasn’t the case that every last hour of your daily life was consumed by a need to focus on doing studies without having these other, really important things intrude on my thinking and distract me. Well, this was much less true for my children and it is ever so much less true for my grandchildren.

The second excerpt is the follow-up to the question about how problematic it is to be a ‘maximizer’ in life. I’d usually use the term ‘perfectionist’ and have certainly had to overcome this tendency in myself, as it just makes one miserable. As Schwartz points out, as you get older, you have to come to terms with the fact that you have chosen certain options instead of others, and to be satisfied with the way things are, rather than how they could have been.

Zan Boag: It makes it particularly difficult with these big life decisions, whether it’s jobs, where we live, or partners, because we’re faced with so much choice. People can always wonder about the life they could have led had they made a different decision – say to pursue writing instead of banking; move to San Francisco instead of Sydney; ballroom dancing over Taekwondo. They’re making choices that then will affect the way they lead their lives. Let’s call this a phantom life, the ‘other’ life. How can people find satisfaction with their choices when there are so many available, and the choices you make will often seem like the incorrect ones? How can they find some sort of satisfaction?

Barry Schwartz: I think in the book that I wrote, which by the way, as I told you in an email, I’m about to start writing a new edition of, I make some suggestions, but I think the truth of the matter is that it is very hard to shut off these enemies of satisfaction in the modern world. What we’re talking about, and what I wrote about, is rich society’s problem.

Most people in the world don’t have the problem that there are too many options. They have the opposite problem. But if you happen to live in a part of the world like you and I do, that is the problem. And we don’t have the tools for shutting it down. I make some suggestions, like limit the number of options you consider. Fine. I’m only going to look at six pairs of jeans. It’s one thing to say it and it is another thing to do it, and it’s still a hard thing to do and not be nagged by the knowledge that there are all these options out there that you didn’t look at.

It’s sort of like just quitting smoking. ‘Yeah, I’ll just quit smoking.’ Nice, easy to say, but really, really hard to do when you suffer at least initially when you quit smoking. And so, I think that you have to be prepared for a fair amount of discomfort and a lot of work to change your approach to making decisions, big ones or small ones.

It’s not a surprise to me that young people are in such bad shape because one of the things that we found is that the younger you are, the more likely you are to be a maximizer in decisions. I think one of the things that you learn as you age is that good enough is almost always good enough. But you don’t see too many 20-year-olds who think that. Experience teaches you that good enough is good enough.

After suffering for a generation or so, you settle into a life where you’re satisfied with good enough results of your decisions. But meanwhile, that’s 20 or 30 years of suffering. And what I think… I don’t know if you’re familiar with this somewhat controversial argument about what social media is doing to the welfare of young people.

Source: New Philosopher: Choice

Image: Elena Mozhvilo

In some ways, FOMO is a philosophical insight

Auto-generated description: A person is sitting on the steps of a wide, empty escalator.

I’ve Laura Hilliger to thank for pointing me towards The Gray Area podcast, which takes “a philosophy-minded look at culture, technology, politics, and the world of ideas.” So it fits hand-in-glove with what I discuss here on Thought Shrapnel.

In this particular episode, host Sean Illing talks with Kieran Setiya about middle age, midlife crises, and generally takes a philosophical look at what’s going on when people reach their forties. Being the ripe old age of 44, this is absolutely in my interest zone.

What follows is my transcription (via Sonix.ai).

Sean Illing (SI): One of the things about life that appears to be hard is middle age. And you wrote a book about midlife crises. How do you define a midlife crisis?

Kieran Setiya (KS): Actually, kind of like the self-help movement, midlife crisis is one of those funny cultural phenomena that has a particular date of origin. So in 1965, this Canadian psychoanalyst, Elliott Jaques, writes a paper, ‘Death and the Midlife Crisis’. And that’s the origin of the phrase. And he is looking at patients and also, in fact, the lives of creative artists who experience a kind of midlife creative crisis. So it’s people in their late thirties. I think the stereotype of the midlife crisis is that it’s a sort of paralysing sense of uncertainty and being unmoored. Nowadays, I think there’s been a kind of shift in the way people think about the midlife crisis, that people’s life satisfaction takes the form of a kind of gentle U-shape that basically, even if it’s not a crisis, people tend to be at their lowest ebb in their forties. And this is men and women, it’s true around the world to differing degrees, but it’s pretty pervasive. So I think nowadays, often when people like me talk about the midlife crisis, what they really have in mind is more like a midlife malaise. It may not reach the crisis level, but there seems to be something distinctively challenging about finding meaning and orientation in this midlife period in your forties.

SI: Well, I’m 42. I just turned 42. It sounds like I’m right in the middle of my midlife crisis.

KS: You’re, you know, not everyone has it, but you’re predicted to hit it. Yes.

SI: Yikes. Well, what is it about midlife that generates all this anxiety and disturbing reflection?

KS: I think really there are many midlife crises. It’s not just one thing. I think some of them are looking to the past. So there’s regret. There’s the sense that your options have narrowed. So whatever space of possibilities might have seemed open to you earlier, whatever choices you’ve made, you’re at a point where there are many kinds of lives that might have been really attractive to you, that it’s now clear to you in a vivid sort of material way that you can’t live. So there’s missing out. There’s also regret in the sense of things have gone wrong in your life. You’ve made mistakes, bad things have happened, and now the project is, how do I live the rest of my life in this imperfect circumstance? The dream life is off the table for most of us. And then I think there’s also things that are more present-focused. So often people have a sense of the daily grind being empty, and that’s partly to do with so much of it being occupied by things that need to be done, rather than things that make life seem positively valuable. It’s just one thing after another, and then death starts to look like it’s at a distance that you can measure in terms you kind of really palpably understand. Like you, you have a sense of what a decade is like, and there’s only three or four left at best.

SI: The thing about being young is the future is pure potential. Ahead of you is nothing but freedom and choices. But as you get older, life has a way of shrinking. Responsibilities pile up. You get trapped in the consequences of the decisions you’ve made, and the feeling of freedom dwindles. That’s a very difficult thing to wrestle with.

KS: I think that’s exactly right. I mean, part of what’s philosophically puzzling about it is that it’s not news that in a way, whatever your sense of the space of options was when you were, say, 20, you knew you weren’t going to get to do all of the things. So there’s a sense in which it’s kind of puzzling that when at 40, even if things go well, you didn’t get to do all of the things, that’s not news. You knew that wasn’t going to happen. What it suggests, and I think this is a kind of philosophical insight, is that there is a profound difference between knowing that things might go a certain way, well or badly, and knowing in concrete detail how they went well or badly. And that’s something that I think we learn from this transition that we make in midlife, that the kind of pain of just discovering the particular ways in which life isn’t everything you thought it might be, even though you knew all along that it couldn’t be everything you hoped it might be. That suggests that there’s a certain aspect of our emotional relationship to life that is missed out. If you just ask in abstract terms, what will be better or worse, what would make a good life? And so I think philosophy needs to kind of incorporate that kind of particularity, that kind of engagement with the texture of life in a way that philosophers don’t always do. I mean, I think there’s another thing philosophy can say here that’s more constructive, which is part of the sense of missing out has to do with what philosophers call incommensurable values.

The idea that, you know, if you’re choosing between $50 and $100, you take the hundred dollars and you don’t have a moment’s regret. But if you’re choosing between going to a concert or staying home and spending time with your kid, either way, you’re going to miss out on something that is sort of irreplaceable, and that’s pretty low stakes. But one of the things we experience in midlife is all the kinds of lives we don’t get to live that are different from our life, and there’s no real compensation for that, and that can be very painful. On the other hand, I think it’s useful to see the flip side of that, which is the only way you could avoid that kind of missing out, that sense that there’s all kinds of things in life that you’ll never get to have. The only way you could avoid that is if the world was suddenly totally impoverished of variety, or you were so monomaniacal you just didn’t care about anything but money, for instance, and you don’t really want that. So there’s a way in which this sense of missing out, the sense that there’s so much in the world we’ll never be able to experience, is a manifestation of something we really shouldn’t regret and in fact, should cherish, namely, the evaluative richness of the world, the kind of diversity of good things. And there’s a kind of consolation in that, I think.

SI: So is that to say that FOMO is always and everywhere a philosophical error, or is it actually valid?

KS: In some ways, I think it’s a philosophical insight. In a way, I think there’s a kind of existential FOMO that is part of what we have in midlife, or sometimes earlier, sometimes later. But I think that sense that it really is true that we’re missing out on things and that there’s no substitute for them. That’s really true. The kind of rejoinder to FOMO is, well, imagine there weren’t any parties you didn’t get to go to. That wouldn’t be good either, right? You want there to be a variety of things that are actually worth doing and attractive. We want that kind of richness in the world, even though one of the inevitable consequences of it is that we don’t get to have all of the things.

SI: One of the arguments you make is how easily we can delude ourselves when we start pining for the roads not traveled in our lives. And, you know, you think, what if I really went for it? What if I tried to become a novelist or a musician, or join that commune, or, I don’t know, pursued whatever life fantasy you had when you were younger? But if you take that seriously and consider what it really means, you might not like it because the things you value the most in your life, like, say, your children, well, they don’t exist if you had zigged instead of zagging 15 or 20 years ago. And that’s what it means to have lived that alternative life. And I guess it’s helpful to remember that sometimes, but it’s easy to forget it because you just you’re imagining what you don’t have.

KS: This is, again, about the kind of danger of abstraction that, in a way, philosophy can lead us towards this kind of abstraction, but it can also tell us what’s going wrong with it. So the thought I could have had a better life, things could have gone better for me. It’s almost always tempting and true. But when you think through in concrete particularity, what would have happened if your failed marriage had not happened? Often the answer is, well, I would never have had my kid or I would never have met these people. And while you might think, yeah, but I would have had some other unspecifiable friends who would have been great and some other unspecifiable kid who would have been great. I think we rightly don’t evaluate our lives just in terms of those kinds of abstract possibilities, but in terms of attachments to particulars. And so if you just ask yourself, could my life have been better. You’re kind of throwing away one of the basic sources of consolation. A rational consolation, I think, which is attachment to the particularity of the good things, the good enough things in your own life, even if you acknowledge that they’re not perfect and that there are other things that could have been in a certain way better.

SI: This is why I always loved Nietzsche’s idea of amor fati, this notion that you have to say yes to everything you’ve done and experienced, because all the good and bad in your life is part of this chain of events. And if you alter any of those events at any point in the chain, you also alter everything else that followed in unimaginable ways.

KS: I mean, I do think there’s a profound source of affirmation there. I think my hesitation is just that it’s not that all the mistakes that we make, or the terrible things that happen to us, are redeemed by attachment to the particulars of our lives; it’s that there’s always this counterweight. At the very worst, we’re going to end up with some kind of ambivalence. And that’s better than the situation of mere unmitigated regret. But it’s not quite the full embrace of life that a certain kind of philosophical consolation might have given us.

Source: The Gray Area — Halfway there: a philosopher’s guide to midlife crises

Image: Alejandro G.

A sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset

Auto-generated description: A person with headphones is smiling and using a keyboard and mouse at a computer desk in a dimly lit room with other people around.

Stephen Downes brought my attention to a post on the website LessWrong, which, as he points out, is a prime (and increasingly rare) case of the comments section being more interesting than the main content itself.

One of the commenters brings up the work of David Golumbia, who passed away a couple of years ago. Golumbia wrote an article questioning what gamers are doing when they’re gaming, especially with role-playing games (RPGs) and first-person shooters (FPS).

The philosopher Ludwig Wittgenstein famously pointed out how difficult it is to define what a ‘game’ is. Many things can be games or game-like, but trying to neatly categorise what makes them so is seemingly impossible. Do games have to be competitive? No. Do games have to be fun? Well… no. And so on.

There’s a lot to think about in the Golumbia article, and (for once!) I’m going to set aside the very pointed critique of the capitalist element and the power dynamic. Instead, I’ll excerpt the part about games providing “the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement.”

For me, personally, most gaming sessions are with and/or against other human players. For example, on a Sunday night, my gaming crew is enjoying Payday 3, a game about robbing, stealing, and looting. There are aims and objectives, and tasks to complete and check off. It’s satisfying. Now I know why.

If we cast aside for a moment the generic distinction according to which programs like WoW, Halo, and Half Life are games while Microsoft Excel, Microsoft Word, and Adobe Photoshop are “productivity tools,” it becomes clear that the applications have nearly as much in common as makes them distinct. Each involves a wide range of simulations of activities that can or cannot be directly carried out in physical reality; each demands absorptive, repetitive, hierarchical tasks as well as providing means for automating and systematizing them. Each provides distinct and palpable feelings of pleasure for users in any number of different ways; this pleasure is often of a type relating to some kind of algorithmic completeness, a “snapping” sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset (finishing a particular spreadsheet or document, completing a design, or finishing a quest or mission). In every context in which these activities are completed, whether that context is established by the computer or by people in the physical world, there is indeed some sense of “experience” having been gained, listed, compiled by the completion of a given task. Arguably, this is a distinctive feature of the computing infrastructure: not that tasks were not completed before computers (far from it) but rather that the digitally-certain sense of having completed a task in a closed way has become heightened and magnified by the binary nature of computers.

What emerges as a hidden truth of computer gaming — and no less, although it may be even better hidden, of other computer program use — is the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement. Again, this kind of pleasure certainly existed before computers, but it has become an overriding emotional experience for many in society only with the widespread use of computers. A great deal of the pleasure users get from WoW or Half Life, as from Excel or Photoshop, is a digital sense of task completion and measurable accomplishment, even if that accomplishment only roughly translates into what we may otherwise consider intellectual, physical, or social goal-attainment. What separates WoW or Half Life from the worker’s business world is thus not variability or “give” but rational certainty, the discreteness of goals, the general sense that such goals are well-bounded, easily attainable, and satisfying to achieve, even if the only true outcome of such attainment is the deferred pursuit of additional goals of a similar kind.

[…]

At the very least, WoW and Half Life, and their cohort are therefore not games in the sense to which we have become accustomed. It seems clear that we call these programs “games” because of the intense feelings of pleasure experienced by players when we engage with them and because they appear on the surface not to be involved in the manipulation of objects with physical-world consequences. On reflection, neither of these facts proves very much… And the fact that computer games are pleasurable cannot, by itself, furnish grounds for calling them games: after all, games constitute only a part of those activities in the world that give us pleasure.

[…]

Can there be any doubt about the potential attractiveness of an apparently human world in which we understand clearly how to attain power, what to do with it, and that the rules by which we operate do not change or change only by explicit order? The deep question such games raise is what happens when people bring expectations formed by them into the world outside.

Source: Golumbia, D. (2009). ‘Games Without Play’. New Literary History, Vol. 40, No. 1, pp. 179-204. Available at: https://diglit.community.uaf.edu/wp-content/uploads/sites/511/2015/01/Games_without_Play.pdf

Image: ELLA DON

A lot of strange things start to make more sense — sometimes distressingly so

Auto-generated description: Five cherubs with small wings are carrying a large can of condensed milk while wearing colorful sashes.

I was listening to Helen Beetham talk with Audrey Watters on her imperfect offerings podcast when Audrey mentioned a Bloomberg piece, which I’ve excerpted below. Essentially, the economy becomes distorted when all of the money is at the top of society and everything is being produced to fit the needs of rich people.

This chimes with what economist Gary Stevenson calls ‘The Squeeze’, which I wrote about recently. While the article is about the US, which is a more unfettered free-market economy, the same is likely to be happening, at different rates, in other Western economies.

The question, of course, is what we do about it. I mean, to be blunt, we can either tax the rich or end up eating them.

Recent economic headlines do not add up to a coherent picture: Since 2020, Americans have spent lavishly on discretionary goods and services, even as the cost of necessities has soared. Consumer debt has ballooned right along with prices, and Americans are now defaulting on their credit cards at rates unseen since the Great Recession. Wage growth has been strong, but inflation has thwarted its ability to help most Americans get ahead. So who’s booking all those first-class airline seats and tables at fancy restaurants? Why are tickets for concerts and major sporting events so expensive and also so sold out?

A recent analysis of consumer spending from Moody’s Analytics, first covered in the Wall Street Journal, provides an answer: Rich people really are just firing a cash cannon into the consumer market. The wealthiest 10% of American households—those making more than $250,000 a year, roughly—are now responsible for half of all US consumer spending and at least a third of the country’s gross domestic product. If you keep that in mind, a lot of strange things start to make more sense—sometimes distressingly so.

[…]

Such a high concentration of financial resources presents a whole host of risks and complications, including general economic fragility. If the extreme spending habits of a small group of people are what’s keeping a large portion of the economy churning, then that group of people also has an outsize ability to bring everyone else down with them.

[…]

When you put a huge proportion of a nation’s total resources in a small number of hands, that distortion also plays out in the everyday economy. Consumer-facing companies want earnings growth and need ways to hold on to their profit margin if components or labor become more expensive. An easy way to do that is by going upmarket to find buyers who are spending freely. You can see how this has played out in the car market: Automakers have pushed to develop more of the big, pricey SUVs that wealthier buyers prefer and devoted fewer resources to smaller, more affordable models. That’s helped push the average sale price of new cars up more than 50% since 2014, according to a Cox Automotive analysis of data from the Bureau of Labor Statistics. The average new car in the US now costs almost $50,000. When the math on producing goods and services only pencils out when you’re selling to the rich, it doesn’t just change the availability of designer handbags or hotel suites; it affects how entire industries organize themselves.

[…]

Letting so many of the country’s economic resources accrue to so few people risks a lot more than just the economy—it eats away at social cohesion in ways that have leaked into other areas of American life and politics. It breeds distrust and recrimination among individuals and groups of people, as well as toward the systems and institutions we’re supposed to trust to make society work in ways that are at least minimally fair. The end result is a combination of economic fragility and social disaffection that eventually even high earners might not be able to buy their way out of.

Source: Bloomberg

Archive link (no paywall): Archive.is

Image: Boston Public Library

I've done this a couple of times before but this time feels slightly different

Auto-generated description: A stack of golden-brown pancakes is neatly arranged on a white plate.

Tom Watson, who, apart from doing generally awesome stuff, somehow also has time to star in a documentary about ultrarunning, saw a recent Thought Shrapnel post and wrote about what tech he’s using.

I need to do my own, and actually Tom’s post has made me realise the extent to which I’m dependent upon Google and, to a lesser (and more recent) extent, Perplexity.

Prompted by Doug… and a couple of the Colophons (new word for me!) by Matt and Steve, I thought I’d outline “my stack”. I’ve done this a couple of times before, but this time feels slightly different.

[A]s someone who has been gently prompting people to not be so beholden to Big Tech, to look more at Open Source, to think more ethically, and to at least consider European Alternatives, I feel I should at least discuss where I’m trying to do this, where I’m succeeding and where I’m often failing.

[…]

…I use AI for specific things when I think they will make something more effective. I’m therefore always looking for the best model, and best use case. And things change all the time. So I purposefully build specific components that allow me to easily switch models and providers. If there is one thing I’d advise when thinking about building AI into an organisation, it’s to ensure you aren’t creating provider lock-in for yourself. Quite a few AI wrappers (tools that put some kind of front end onto a model) allow you to switch models. But not all. And if you are building yourself, there is a risk you just lock yourself into a depreciating model or a provider that just turns out to be mega shitty.

[…]

It’s not easy to avoid the big tech trap, but I think I’m doing OK. Also, I’m not saying you definitely should, but I think you should at least consider what you use and what this means; and if you have principles, maybe they should cost you something.

Source: Tomcw.xyz

Image: Matthias Reumann
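
Tom’s point about lock-in is the one I’d underline. Here’s a minimal sketch of the pattern he describes (my own illustration, not code from his post; the provider names and interface are made up): application code depends on a small interface you control, each provider sits behind an adapter, and switching becomes a one-line change.

```python
# Minimal sketch of a provider-agnostic AI wrapper (illustrative only).
# Application code depends on the TextModel interface, never on a vendor SDK,
# so swapping providers is a configuration change rather than a rewrite.
from typing import Protocol


class TextModel(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class ProviderA:
    """Stand-in for one hosted model; replace the body with a real SDK call."""

    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"


class ProviderB:
    """Stand-in for a second provider exposing the same interface."""

    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"


# Registry of interchangeable providers; the rest of the code never
# needs to know which one is active.
PROVIDERS: dict[str, TextModel] = {"a": ProviderA(), "b": ProviderB()}


def summarise(text: str, provider: str = "a") -> str:
    model = PROVIDERS[provider]
    return model.complete(f"Summarise this: {text}")


if __name__ == "__main__":
    # Switching providers is just a different argument here.
    print(summarise("a long post about provider lock-in", provider="b"))
```

Whether you build something like this yourself or lean on an existing wrapper, the design choice is the same: keep the vendor-specific code behind an interface you own.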

It's much easier to go carless if your city has good public transit

Auto-generated description: A bar graph compares global average per capita emissions to projected and effective impacts of various behavioral changes in transportation, energy, and food, illustrating their potential to reduce emissions.

I’m a vegetarian who drives an electric vehicle (EV). In a few weeks' time, we’re getting a heat pump installed so that we can remove our gas boiler. These are all climate-positive things to do, and I’m trying to do my bit.

This article by the World Resources Institute shows how important it is that there is an infrastructure that enables individual decision-making to take place. For example, I’ve been vegetarian now for eight years, and it’s much easier to remove meat from your diet these days than it was when I started to do so in 2017. Likewise, because of investment in EV infrastructure, these days it’s unproblematic to own or lease an EV.

It’s interesting being an early-ish adopter of air source heat pump technology in the UK. The process is not as smooth as it could be, with our driveway having to be dug up to upgrade the electricity supply capacity entering our property. So, although we have visited a couple of heat pump installations and there is a government grant, it’s still more expensive, and involves more upheaval, than just getting another combi boiler.

Coupled with active hostility in some quarters, this is a good example of how the Overton Window can apply to technology interventions and pro-climate lifestyle choices. That’s why, as well as making such choices ourselves, we should be aware of, and advocate for, the systems within which those choices can be made easier.

Our data shows that pro-climate behavior changes, such as driving less or eating less meat, could theoretically cancel out all the greenhouse gas (GHG) emissions an average person produces each year — specifically among high-income, high-emitting populations.

But it also reveals that efforts focused exclusively on changing behaviors, and not the overarching systems around them, only achieve about one-tenth of this emissions-reduction potential. The remaining 90% stays locked away, dependent on governments, businesses and our own collective action to make sustainable choices more accessible for everyone. (Case in point: It’s much easier to go carless if your city has good public transit.)

[…]

We found that, in theory, shifting to 11 pro-climate behaviors we analyzed in the energy, transport and food sectors could reduce individuals' GHG emissions by about 6.53 tonnes per year. This would more than cancel out what an average person currently emits (about 6.3 tonnes per year). However, our data also shows that when people attempt these changes in the real world, without supportive systems, they typically only reduce emissions by about 0.63 tonnes yearly — just 10% of what’s theoretically possible.

It’s not that individual changes don’t matter; when someone switches to an electric vehicle (EV) or avoids a flight, they make a real impact. The problem is that without supportive infrastructure, policies or incentives (such as public EV chargers or financial subsidies), these programs struggle to drive the broad-based change the world really needs.
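
As a quick sanity check of those figures (my arithmetic, not the article’s):

\[
\frac{0.63 \text{ tonnes}}{6.53 \text{ tonnes}} \approx 0.096 \approx 10\%
\]

And because the theoretical 6.53 tonnes of avoidable emissions slightly exceeds the 6.3 tonnes the average person emits, behaviour change really could, in principle, more than cancel out average per-person emissions; without supportive systems, though, nine-tenths of that potential goes unrealised.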

Source & image: World Resources Institute