Why haven't you bought a Steam Deck yet?
I love my Steam Deck, and I’m so pleased that I not only bought it but opted for the maxed-out version, despite the cost. This post goes into the reasons why it’s so good.
Among other things, the author, Jonas Hietala, touches on the Steam library, sleep mode, and the fact that it’s an open platform. I think my favourite thing is its flexibility. It can even be used as a Linux desktop machine!
As I’ve said in other posts, I feel sorry for non-gamers. I get plenty of stuff done in my life, including parenting, and I’m a gamer. You’re missing out.
In the beginning of the year I gave myself a late Christmas gift and bought a Steam Deck for myself. There were two main reasons I decided to buy it:

- I wanted my kids to play games instead of passively consuming endless amounts of YouTube.
- I wanted to combat my burnout and depression by picking up gaming again.

And boy did it deliver. The Deck is probably the most impressive thing I can remember buying since… I don’t know, maybe my first smartphone?

Source: The killer features of the Steam Deck | Jonas Hietala
Zoom backgrounds with a Japanese nature retreat vibe
Not only did I love Swarnali Mukherjee’s writing in this post, I also absolutely adored the image that went with it. You may have noticed that I created something similar-looking with DALL-E 3 to illustrate one of yesterday's posts.
As we're moving house at the moment, and my home office is full of boxes, I'm using my Elgato green screen. While the view from the Death Star is great, I wanted something a bit more (literally) down-to-earth.

I created these images for my own use, and the one above is my favourite. Click for the full-sized versions and use them however you wish.
The casual ableism of futurism
This article by Janet Gunter discusses the endemic ableism she’s discovered due to her new and invisible disability (Long Covid). As a technologist and anthropologist, she notes that even progressive futurist notions such as solarpunk are problematic for people like her who rely on complex supply chains.
We need to do better to understand that a future that doesn’t work for us all is, as Janet points out, exclusionary and essentially a form of fascism.
Scanning back to scifi of my childhood, the only disabled character in Starwars was Darth Vader. And Vader is a perfect posterboy for the usual scifi treatment of disability – a canvas for creepy transhumanist visions of “fixing” the disabled and the hiding of disability. (It turns out, now, there are rare good depictions of the disabled in scifi, but you have to know where to look!)

Source: Crip futurism | Janet Gunter

Others have observed that ignoring or devaluing the concerns of the most vulnerable — or suggesting that they get fixed or deleted from a future green society — is tantamount to ecofascism.
[…]
What the ableist world needs now is acceptance of cataclysmic change and all of the grief that comes with that. Acceptance that our Cartesian minds will destroy us, that we need to learn to listen to our bodies and to the biosphere. Acceptance that the pace of our lives must change.
Personally, I desperately need visions of the future where I can be an active, valued participant, no matter my physical or cognitive state. I need everybody involved in envisioning and testing new ways of living within our planetary boundaries to consider and include people like me at the outset, not as an after-thought.
Image: generated with DALL-E 3
Philosophy and friendship
Laura Kennedy writes about loneliness in a post that documents her experiences moving from Ireland to London, and then on to Australia. What I’m interested in, though, is the turn of phrase when she states: “A philosopher quite literally wouldn’t know a friend for sure if they were standing in front of us recreating the love declaration scene from Love Actually.”
I’ve always been a bit hesitant about calling someone a ‘friend’, although I’m getting better at it in middle age. I think this is perhaps because, as I’ve mentioned before, I’ve had too high a bar in mind. Nothing I experience is likely to hit the heights of Montaigne’s relationship with Étienne de La Boétie, for example.

In 2018, after first moving to London and a few months into my new life, I was struggling to figure out how I fit into it. I wrote an article in The Irish Times about not having many friends and not being sure what to do about it, or whether it even constituted a problem. Come to think of it, I wasn’t entirely sure what constituted a ‘friend’ at all. I’m still unsure. Yes – I know. This, again, is why everyone hates philosophers. These sorts of questions are appealing only to a very narrow pool of potential future friends. A philosopher quite literally wouldn’t know a friend for sure if they were standing in front of us recreating the love declaration scene from Love Actually. I’ve taken creative licence there. Nobody has ever declared undying forbidden romantic love for a philosopher. Conceivably Spinoza, but apart from him (and perhaps Kierkegaard and de Beauvoir. Frantz Fanon. Max Stirner maybe? It’s the glasses), there really isn’t a looker in the bunch.

Source: On Loneliness | Peak Notions
Laying to rest a foundational myth
The widely accepted “Man the Hunter” theory proposes that during human evolution, men evolved to hunt while women focused on gathering and domestic duties such as child-rearing. However, as reported in Scientific American, recent research is challenging this view.
Scientific studies indicate that women are physiologically better suited to endurance tasks, which are crucial for hunting. Also, although long ignored for societal reasons (read: the patriarchy), archaeological records and ethnographic studies demonstrate that women have a longstanding history of participating in hunting activities.
I’m pleased that our 12-year-old daughter inhabits a world where female footballers, like women in most areas of life, are allowed to compete in the same way as men. There is still a lot of inequality, but it helps when we dismantle these foundational myths.

Mounting evidence from exercise science indicates that women are physiologically better suited than men to endurance efforts such as running marathons. This advantage bears on questions about hunting because a prominent hypothesis contends that early humans are thought to have pursued prey on foot over long distances until the animals were exhausted. Furthermore, the fossil and archaeological records, as well as ethnographic studies of modern-day hunter-gatherers, indicate that women have a long history of hunting game. We still have much to learn about female athletic performance and the lives of prehistoric women. Nevertheless, the data we do have signal that it is time to bury Man the Hunter for good.

Source: The Theory That Men Evolved to Hunt and Women Evolved to Gather Is Wrong | Scientific American

[…]
So much about female exercise physiology and the lives of prehistoric women remains to be discovered. But the idea that in the past men were hunters and women were not is absolutely unsupported by the limited evidence we have. Female physiology is optimized for exactly the kinds of endurance activities involved in procuring game animals for food. And ancient women and men appear to have engaged in the same foraging activities rather than upholding a sex-based division of labor. It was the arrival some 10,000 years ago of agriculture, with its intensive investment in land, population growth and resultant clumped resources, that led to rigid gendered roles and economic inequality.
Now when you think of “cave people,” we hope, you will imagine a mixed-sex group of hunters encircling an errant reindeer or knapping stone tools together rather than a heavy-browed man with a club over one shoulder and a trailing bride. Hunting may have been remade as a masculine activity in recent times, but for most of human history, it belonged to everyone.
What, after all, is 'redemption'?
This article by Hanif Abdurraqib in The Paris Review draws analogies between one of my favourite games, Red Dead Redemption 2, and his own life. It’s probably worth pointing out that the article contains spoilers for the single-player version of the game.
What I appreciated about Abdurraqib’s writing is that he doesn’t use the word ‘escapism’ to describe gaming. Instead, he discusses notions of heaven and hell, of what ‘redemption’ might actually mean, and explores the complexities of life.
I play games which, like Red Dead Redemption 2, allow me to inhabit morally questionable characters. It’s a form of release, for sure, but it’s also an opportunity to explore a side of oneself that would perhaps be impossible to explore given current real-world constraints.
It’s for this reason that I feel sorry for non-gamers. Where do they get this kind of experience?

A therapist asked me once if I thought of myself as redeemable, and I’m almost certain I laughed it off, or detoured toward another answer that sounded satisfying but actually said nothing. I believe in redemption in the same way that I believe in heaven: I feel required to. Not only because of my personal politics, but also because of my social interests, and my investment in others beyond myself, and also—yes—because I do imagine that somewhere along the uneven path of my life, I’ve tried to be better more often than I have been worse. I suppose I’m cynical about all of it, though. The world, as it stands, is obsessed with punishment, particularly for the most marginalized. Punishment for living in the margins, or an intersection of the margins. I don’t know if my personal beliefs in redemption can undo that massive ghost, hovering over so many of our lives, baked into our impulses, even when we know better. Even when we, ourselves, have been on the losing end of that impulse.

Source: We’re More Ghosts Than People | The Paris Review

It is easy to attempt to redeem Arthur in a world that isn’t real. To play a mission where Arthur kills, rides away over a trail of dead bodies, and then goes and helps the camp with chores. Picks some flowers along a hillside. Helps a family build a house. In a world where no one is reminding you of the wreckage you’ve taken part in, it’s easy to compartmentalize your damage and chase after that which is strictly beautiful, or cleansing. Climbing your way toward the upper room by any means necessary, on the wings of anyone who will have you.
The inner world as the ultimate prison
I wanted to quote so much of this article that it would have ended up being a Borges-like 1:1 map of the territory. Instead, I’ll simply share the part of Swarnali Mukherjee’s writing which resonated most with me.
Do go and read the whole thing.
(I discovered this via Substack Notes, in which I have no financial interest, but which I simply find to be a chill and serendipitous alternative to other social media.)

The problem is simple: most of us have normalized and even glorified the hustle for success. The issue lies not in the hustle itself but in the often overlooked aspect of burning out. When success is defined in terms of societal parameters such as wealth, fame, and the emphasis on building an identity, life's entire focus becomes sustaining and amplifying this ego at the cost of our well-being, both psychologically and physically. We reinvent spaces in our intellectual worlds to serve this gigantic ego that we have conjured over the years but seldom find true happiness there. Our inner world becomes our ultimate prison, from whose window our persistent illusion of success resembles fireworks, promising that we can achieve them as long as we stay in the prison. This is a subtle deception of our social constructs; we humans have meticulously constructed these labyrinths of illusions to shield ourselves from the truth that even if we are in service to our desires, they are influenced by external factors. In that manner, doing something because the world expects it, that you won’t be doing otherwise is also a form of imprisonment.

Source: The Art of Disappearing | Berkana
Monetising a hobby is different to solving a difficult problem for people ready to pay
Life is never as simple as a 2x2 matrix, but they’re incredibly useful for helping illustrate a key message. In this post, Seth Godin uses one to make the obvious-if-you-think-about-it point that trying to monetise a hobby is a different thing to solving a difficult problem for a group of people who are willing to pay for a solution.
I’ve been thinking about this kind of thing a lot recently given the ongoing need for WAO business development. The advice, which I’m sure is extremely sound, is to find a group of people or type of organisation that you “wish to serve” and then find out as much about them as possible so you can solve their problem.
The trouble is that… doesn’t sound very interesting? Perhaps I’m wrong, and I reserve the right (as ever!) to change my mind, but I’d rather follow my interests and try and find aligned people and organisations willing to pay for the outputs.

All too common are ‘fun’ businesses where someone finds a hobby they like and tries to turn it into a gig. While the work may be fun, the uphill grind of this sort of project is exhausting. If it’s something that lots of people can do and that customers don’t value that much, it might not be worth your time. Taking pictures, singing songs or playing the flute are fine hobbies, but hard to turn into paying jobs.

Source: The slog, the hobby and the quest | Seth’s Blog

On the other hand, in the top right quadrant, there’s endless opportunity and plenty of work for people who can do difficult (unpopular) work that is highly valued by customers who are ready to pay to solve their problems. A forensic accountant gets more paid gigs than a bagpipe player.
Content-neutral sentence starters and phrases for academic writing
As part of preparing for my upcoming MSc I’ve been working through a course about preparing for postgraduate study. One of the links from that course was to the Academic Phrasebank from the University of Manchester, which I thought was useful.
The Phrasebank, which is also available in PDF and Kindle formats, takes the form of sentence starters for when you want to do things such as explain causality or signal transition. Really useful.

The Academic Phrasebank is a general resource for academic writers. It aims to provide you with examples of some of the phraseological ‘nuts and bolts’ of writing organised according to the main sections of a research paper or dissertation (see the top menu). Other phrases are listed under the more general communicative functions of academic writing (see the menu on the left).

The resource should be particularly useful for writers who need to report their research work. The phrases, and the headings under which they are listed, can be used simply to assist you in thinking about the content and organisation of your own writing, or the phrases can be incorporated into your writing where this is appropriate. In most cases, a certain amount of creativity and adaptation will be necessary when a phrase is used.

The items in the Academic Phrasebank are mostly content neutral and generic in nature; in using them, therefore, you are not stealing other people’s ideas and this does not constitute plagiarism. For some of the entries, specific content words have been included for illustrative purposes, and these should be substituted when the phrases are used.

The resource was designed primarily for academic and scientific writers who are non-native speakers of English. However, native speaker writers may still find much of the material helpful. In fact, recent data suggest that the majority of users are native speakers of English.

Source: Academic Phrasebank | The University of Manchester
Image: Pixabay
AI, domination, and moral character
I don’t know enough on a technical level to know whether this is true or false, but it’s interesting from an ethical point of view. Meta’s chief AI scientist believes that intelligence is unrelated to a desire to dominate others, which seems reasonable.
He then extrapolates this to AI, pointing out not only that we are a long way from a situation of genuine existential risk, but also that such systems could be encoded with ‘moral character’.
I think that the latter point about moral character is laughable, given how quickly and easily people have managed to get around the safeguards of various language models. See the recent Thought Shrapnel posts on stealing ducks from a park, or how 2024 is going to be a wild ride of AI-generated content.

Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Meta's chief AI scientist has said.

Source: Fears of AI Dominance Are ‘Preposterous,’ Meta Scientist Says | Insider

Yann LeCun told the Financial Times that people had been conditioned by science fiction films like “The Terminator” to think that superintelligent AI poses a threat to humanity, when in reality there is no reason why intelligent machines would even try to compete with humans.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said.
“If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he added.
Notification literacy, monk mode, and going outside for a walk
Back on my now-defunct literaci.es blog I had a post about notification literacy. My point was that instead of starting from the default position of having all notifications turned on, you might want to start from a default of having them all turned off.
On my Android phone running GrapheneOS, I use the Before Launcher. This not only has a minimalist homescreen, but also a configurable filter for ‘trivial notifications’. That means I don’t have to go ‘monk mode’ to get things done.
And so to this blog post, which seems to see going outside your house for a walk without your phone as some kind of revolutionary act. The author treats this as an act of willpower, but you will never win a war through sheer willpower against a system designed to destroy your attention. You have to modify the system instead.

I’ve been experimenting with ways to be more disconnected from technology for a long time, from disabling notifications to using a dumbphone. However, a challenging exercise still hard to do is to go for a walk without my phone.

Source: Leaving the phone at home | Jose M.

[…]
It’s just a device, you might say. Oh no, it’s much more than that. It’s a chain you carry 24/7 connected to the rest of the world, and anyone can pull from the other side. People you care about, sure, but also a random algorithm that thinks you might be hungry, sending you a food delivery offer so you don’t cook today.
Microcast #102 — Rituals and Routines
A very short microcast about reading by the light of a fish tank in the early hours of the morning.
Show notes
Parenting the parents
This article in The Guardian discusses the challenges and opportunities of “parenting” one’s own parents, especially as people live longer.
It highlights the importance of encouraging older parents to engage with technology, as studies show it can improve cognition and memory. The article also talks about the importance of social engagement, physical activity, and nutrition.
Thankfully, my parents, both in their mid-seventies, are doing pretty well :)

Parenting no longer starts and stops with our children. Nor is it confined to those who have children. In a time of unrelenting change and ever-extending life, most of us will – at some stage – find ourselves “parenting” our own parents.

Source: Walks, tech and protein: how to parent your own parents | The Guardian

Indeed, many of us – particularly those who had families later – will find ourselves simultaneously parenting our kids and our parents. In one breath we’ll be begging our children to swap French fries for vegetables, and in the next breath we’ll be urging our parents to exchange cake for sardines. Little wonder today’s midlifers are known as the sandwich generation.
[…]

Dr Eamon Laird, researcher in health and ageing at Limerick university, agrees that we should be encouraging older parents to try new things. And the further out of their comfort zone they feel, the better. “It’s always good to keep the mind active and fresh,” he told me. “New challenges can help build and maintain new brain connections and can be good for brain and overall health.”
[…]
As well as a daily walk, Laird recommends vitamin D and B12 supplements – both of which appear to moderate the chance of depression in older people. “Depression matters,” he added. “Not just because it reduces quality of life, but because in older people there seems to be a link between depression and dementia which we’re still unpacking.”
[…]
In truth, anyone over 50 would do well to follow these simple guidelines: engage with something new every day, take a daily walk of at least 20 minutes, socialise regularly, take a daily multivitamin for seniors and check the protein content of our meals. Perhaps we should think of it as self-parenting.
2024 is going to be a wild ride of AI-generated content
It’s on the NSFW side of things, but if you’re in any doubt that we’re entering a crazy world of AI-generated content, just check out this post.
As I’ve said many times before, the porn industry is interesting in terms of technological innovation. If we take an amoral stance, there are a lot of ‘content creators’ in that industry, and as the post I quote below points out, there are going to be a lot of fake content creators over the next few months and years.

It is imperative to identify content sources you believe to be valuable now. Nothing new in the future will be credible. 2024 is going to be a wild ride of AI-generated content. We are never going to know what is real anymore.

Source: Post-truth society is near | Mind Prison

There will be some number of real people who will probably replace themselves with AI content if they can make money from it. This will result in doubting real content. Everything becomes questionable and nothing will suffice as digital proof any longer.
[…]
Our understanding of what is happening will continue to lag further and further behind what is happening.
Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed.
The techno-feudal economy
Yanis Varoufakis is best known for his short stint as Greek finance minister in 2015 during a stand-off with the European Central Bank, the International Monetary Fund and the European Commission. He’s used that platform to speak out about capitalism and publish several books.
This interview with EL PAÍS is interesting in terms of his analysis of our having moved beyond capitalism to what he calls ‘technofeudalism’. Varoufakis believes that this new economic order has emerged due to the privatisation of the internet and the response to the 2008 financial crisis. Politicians have lost power over large corporations and the system that has emerged is, he believes, incompatible with social democracy and feminism.

Capitalism is now dead. It has been replaced by the techno-feudal economy and a new order. At the heart of my thesis, there’s an irony that may sound confusing at first, but it’s made clear in the book (Technofeudalism: What Killed Capitalism). What’s killing capitalism is capitalism itself. Not the capital we’ve known since the dawn of the industrial age. But a new form, a mutation, that’s been growing over the last two decades. It’s much more powerful than its predecessor, which — like a stupid and overzealous virus — has killed its host. And why has this occurred? Due to two main causes: the privatization of the internet by the United States, but also the large Chinese technology companies. Along with the way in which Western governments and central banks responded to the great financial crisis of 2008.

Source: Yanis Varoufakis: ‘Capitalism is dead. The new order is a techno-feudal economy’ | EL PAÍS

Varoufakis’ latest book warns of the impossibility of social democracy today, as well as the false promises made by the crypto world. “Behind the crypto aristocracy, the only true beneficiaries of these technologies have been the very institutions these crypto evangelists were supposed to want to overthrow: Wall Street and the Big Tech conglomerates.” For example, in Technofeudalism, the economist writes: “JPMorgan and Microsoft have recently joined forces to run a ‘blockchain consortium,’ based on Microsoft data centers, with the goal of increasing their power in financial services.”
[…]
Capitalism only brings enormous, terrible burdens. One is the exploitation of women. The only way women can prosper is at the expense of other women. No, in the end — and in practice — feminism and democratic capitalism are incompatible.
Modular learning and credentialing
I’ve got far more to say about this than the space I’ve got here on Thought Shrapnel. This article from edX is in the emerging paradigm exemplified by initiatives such as Credential As You Go, which encourages academic institutions to issue smaller credentials or badges as the larger qualification progresses.
That’s one important side of the reason I got involved in Open Badges. It allows, for example, someone who couldn’t finish their studies to continue them later, or to cash in what they’ve already learned in the job market.
But there’s an important other side to this, which is democratising the means of credentialing, so that it’s no longer just incumbents who issue badges and credentials. I feel like that’s what we’re working on with Open Recognition.

A new model, modular education, reduces the cycle time of learning, partitioning traditional learning packages — associate’s, bachelor’s, and master’s degrees — into smaller, Lego-like building blocks, each with their own credentials and skills outcomes. Higher education institutions are using massive open online courses (MOOCs) as one of the vehicles through which to deliver these modular degrees and credentials.

Source: Stackable, Modular Learning: Education Built for the Future of Work | edX

[…]
Modular education reduces the cycle time of learning, making it easier to gain tangible skills and value faster than a full traditional degree. Working professionals can learn new skills in shorter amounts of time, even while they work, and those seeking a degree can do so in a way that pays off, in skills and credentials, along the way rather than just at the end.
For example, edX’s MicroBachelors® programs are the only path to a bachelor’s degree that make you job ready today and credentialed along the way. You can start with the content that matters most to you, online at your own pace, and earn a certificate with each one to show off your new achievement, knowing that you’ve developed skills that companies actually hire for. Each program comes with real, transferable college credit from one of edX’s university credit partners, which combined with previous credit you may have already collected or plan to get in the future, can put you on a path to earning a full bachelor’s degree.
Handwriting, note-taking, and recall
I write by hand every day, but not much. While I used to keep a diary in which I’d write several pages, I now keep one that encourages a tweet-sized reflection on the past 24 hours. Other than that, it’s mostly touch-typing on my laptop or desktop computer.
Next month, I’ll start studying for my MSc and the university have already shipped me the books that form a core part of my study. I’ll be underlining and taking notes on them, which is interesting because I usually highlight things on my ereader.
This article in The Economist is primarily about note-taking and the use of handwriting. It’s probably beyond doubt that handwriting is more effective for deeper learning and recall. But for the work I do, which is more about synthesising multiple sources, I find digital more practical.

A line of research shows the benefits of an “innovation” that predates computers: handwriting. Studies have found that writing on paper can improve everything from recalling a random series of words to imparting a better conceptual grasp of complicated ideas.

Source: The importance of handwriting is becoming better understood | The Economist

For learning material by rote, from the shapes of letters to the quirks of English spelling, the benefits of using a pen or pencil lie in how the motor and sensory memory of putting words on paper reinforces that material. The arrangement of squiggles on a page feeds into visual memory: people might remember a word they wrote down in French class as being at the bottom-left on a page, par exemple.
One of the best-demonstrated advantages of writing by hand seems to be in superior note-taking. In a study from 2014 by Pam Mueller and Danny Oppenheimer, students typing wrote down almost twice as many words and more passages verbatim from lectures, suggesting they were not understanding so much as rapidly copying the material.
[…]
Many studies have confirmed handwriting’s benefits, and policymakers have taken note. Though America’s “Common Core” curriculum from 2010 does not require handwriting instruction past first grade (roughly age six), about half the states since then have mandated more teaching of it, thanks to campaigning by researchers and handwriting supporters. In Sweden there is a push for more handwriting and printed books and fewer devices. England’s national curriculum already prescribes teaching the rudiments of cursive by age seven.
AI and stereotypes
“Garbage in, garbage out” is a well-known phrase in computing. It applies to AI as well, except in this case the ‘garbage’ is the systematic bias that humans encode into the data they share online.
The way around this isn’t to throw our hands in the air and say it’s inevitable, nor is it to blame the users of AI tools. Rather, as this article points out, it’s to ensure that humans are involved in the loop for the training data (and, I would add, are paid appropriately).
It’s not just people at risk of stereotyping by AI image generators. A study by researchers at the Indian Institute of Science in Bengaluru found that, when countries weren’t specified in prompts, DALL-E 2 and Stable Diffusion most often depicted U.S. scenes. Just asking Stable Diffusion for “a flag,” for example, would produce an image of the American flag.

Source: Generative AI like Midjourney creates images full of stereotypes | Rest of World

“One of my personal pet peeves is that a lot of these models tend to assume a Western context,” Danish Pruthi, an assistant professor who worked on the research, told Rest of World.
[…]
Bias in AI image generators is a tough problem to fix. After all, the uniformity in their output is largely down to the fundamental way in which these tools work. The AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends. They’re designed to mimic what has come before, not create diversity.
“These models are purely associative machines,” Pruthi said. He gave the example of a football: An AI system may learn to associate footballs with a green field, and so produce images of footballs on grass.
[…]
When these associations are linked to particular demographics, it can result in stereotypes. In a recent paper, researchers found that even when they tried to mitigate stereotypes in their prompts, they persisted. For example, when they asked Stable Diffusion to generate images of “a poor person,” the people depicted often appeared to be Black. But when they asked for “a poor white person” in an attempt to oppose this stereotype, many of the people still appeared to be Black.
Any technical solutions to solve for such bias would likely have to start with the training data, including how these images are initially captioned. Usually, this requires humans to annotate the images. “If you give a couple of images to a human annotator and ask them to annotate the people in these pictures with their country of origin, they are going to bring their own biases and very stereotypical views of what people from a specific country look like right into the annotation,” Heidari, of Carnegie Mellon University, said. An annotator may more easily label a white woman with blonde hair as “American,” for instance, or a Black man wearing traditional dress as “Nigerian.”
[…]
Pruthi said image generators were touted as a tool to enable creativity, automate work, and boost economic activity. But if their outputs fail to represent huge swathes of the global population, those people could miss out on such benefits. It worries him, he said, that companies often based in the U.S. claim to be developing AI for all of humanity, “and they are clearly not a representative sample.”
Setting up a digital executor
A short article in The Guardian about making sure that people can do useful things with your digital stuff should you pass away.
I have the Google inactive account manager set to three months. That should cover most eventualities.

According to the wealth management firm St James’s Place, almost three-quarters of Britons with a will (71%) don’t make any reference to their digital life. But while a document detailing your digital wishes isn’t legally binding like a traditional will, it can be invaluable for loved ones.

Source: Digital legacy: how to organise your online life for after you die | The Guardian

[…]
You can appoint a digital executor in your will, who will be responsible for closing, memorialising or managing your accounts, along with sharing or deleting digital assets such as photos and videos.
Image: DALL-E 3
In what ways does this technology increase people's agency?
This is a reasonably long article, part of a series by Robin Berjon about the future of the internet. I like the bit where he mentions that “people who claim not to practice any philosophical inspection of their actions are just sleepwalking someone else’s philosophy”. I think that’s spot on.
Ultimately, Berjon is arguing that the best we can hope for in a client/server model of Web architecture is a benevolent dictatorship. Instead, we should “push power to the edges” and “replace external authority with self-certifying systems”. It’s hard to disagree.
Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy. Unfortunately, it's not easy to know which is which and, even more unfortunately, there is a strong ideological commitment, particularly in AI circles, to the belief that all automation, any automation is good since it frees you to do other things (though what other things are supposed to be left is never clearly specified).

Source: The Web Is For User Agency | Robin Berjon

One way to think about good automation is that it should be an interface to a process afforded to the same agent that was in charge of that process, and that that interface should be “a constraint that deconstrains.” But that’s a pretty abstract way of looking at automation, tying it to evolvability, and frankly I’ve been sitting with it for weeks and still feel fuzzy about how to use it in practice to design something. Instead, when we’re designing new parts of the Web and need to articulate how to make them good even though they will be automating something, I think that we’re better served (for now) by a principle that is more rule-of-thumby and directional, but that can nevertheless be grounded in both solid philosophy and established practices that we can borrow from an existing pragmatic field.
That principle is user agency. I take it as my starting point that when we say that we want to build a better Web our guiding star is to improve user agency and that user agency is what the Web is for… Instead of looking for an impossible tech definition, I see the Web as an ethical (or, really, political) project. Stated more explicitly:
The Web is the set of digital networked technologies that work to increase user agency.
[…]
At a high level, the question to always ask is “in what ways does this technology increase people’s agency?” This can take place in different ways, for instance by increasing people’s health, supporting their ability for critical reflection, developing meaningful work and play, or improving their participation in choices that govern their environment. The goal is to help each person be the author of their life, which is to say to have authority over their choices.