Scintillating scotomas

Manuscript illumination by Hildegard of Bingen resembling the visual disturbances experienced during a migraine

It’s weird to think that I was about my son’s age (17) when I started getting migraines. It wasn’t a massive surprise: there were migraineurs on both sides of my family, including my mother and my paternal grandmother. I may have literally dodged a bullet: being susceptible to migraines disqualifies you from pretty much every role in the Royal Air Force, to which I was in the process of applying.

These days, partly through stress management, ensuring I get good sleep, avoiding dehydration, and taking some supplements I’ve found helpful, my migraines are both less frequent and less extreme. They’re still part of who I am, though, and I know to get off screens immediately and take some of my meds if my vision starts getting distorted.

How to describe a scintillating scotoma? It’s one of the most common symptoms of a migraine, but unless you’ve had one, it sounds unreal. A scintillating scotoma is like a barbed ripple in the pool of sight. It’s a skeletal Magic Eye raised up from the flatness of the world. It’s a glare on the tarmac as you drive West at sunset on a rain-slick freeway—only when you turn your head, it’s still there, so you have to pull over, close your eyes, and wait out the slow-motion firework working its way across your brain.

[…]

In the absence of an organizing mind, everything comes unglued. Faces go missing and dark holes seem to eat half the universe. Migraine sufferers can experience the uncanny sense of consciousness doubling known as déjà vu, or its cousin, jamais vu, in which the world feels newly-made. The world might feel suddenly very unreal, fracture into a mosaic, or slow to a stop-motion pace, dropping frames. The self might cleave in two in a fit of somatopsychic duality. Writing about these bizarre and horrifying perceptual phenomena, the late Oliver Sacks observed that migraines “show us how the brain-mind constructs ‘space’ and ‘time,’ by demonstrating what happens when space and time are broken, or unmade.”

[…]

According to Migraine Art: The Migraine Experience From Within, migraine auras are as old as humankind—so old, perhaps, that they may have inspired the geometric forms of Stone Age cave drawings. Which makes recent attempts to generate migraine auras using convolutional neural networks seem particularly poignant to me: what began in stone, animated by the hot flicker of firelight, continues 5,000 years later, deep in the heart of servers whose mineral components were mined from the same dark Earth.

Source: Wild Information

Image: 12th-century manuscript illumination by Hildegard of Bingen (who was a migraineur)

Career vs Job

A solitary figure reflects on the edge of a cliff at sunrise, representing the importance of contemplation in distinguishing between a job and a career. The serene landscape and color scheme emphasize reflection and the broader view of one's professional life.

This post by Tim Klapdor is definitely related to the Aeon article I quoted about carving out time for reflection.

People are surprised when I say that I do about 20-25 hours of paid work per week. Somehow that’s ‘part time’. But I live a full life: studying, writing, taking my kids here, there, and everywhere. The only thing missing? I’d like to travel more, professionally.

A career contains a multitude of jobs. Some of them are the ones you get paid for, but many of them aren’t. And that’s often where the confusion comes into play. The paid job begins to bleed into other areas, and you associate the paid job with all the other jobs. They get lumped together as a career, but they are distinct and need to be kept separate. It’s our mind that blends them together, so every so often, we need to pull focus, reevaluate and paint in the edges to make it clear what our jobs really are.

[…]

In reflection, I can say that for the last few years, I’ve paid too much attention to my paid job and not my career. I’ve allowed the job to expand beyond its parameters and edges to consume everything around it—my time, attention, and priorities. What I need to do, and what I plan to do in 2024, is to switch that.

I want to focus on my career, not my job.

Source: Tim Klapdor

Absence is not a (defect)ion

<img src="https://cdn.uploads.micro.blog/139275/2024/1922dd0f-2054-4f52-b82f-d17c513dfe80.webp" width="600" height="342" alt="This image is a digital collage that layers photographic textures with digital painting. A monochrome urban landscape in dark gray symbolizes the conventional work environment, while vibrant pockets of red, yellow, and blue form miniature worlds floating above the city. These bubbles represent 'temporary autonomous zones' where individuals can engage in purposeless action and creativity, highlighting the contrast between the daily grind and the personal sanctuaries we create for ourselves.">

I hadn’t thought of the early days of the pandemic as being akin to a general labour strike. Interesting. I could quote the entirety of this article, but I’ll just mention one thing that I haven’t included below: “It is because of its emptiness that the room is useful.” (Lao Tzu). The author of this article, David J Siegel, uses this to make the point that I’ve used as the title for this post; that absence is not defection.

The early period of the pandemic (which approximated in many respects a kind of general labour strike) gave some of us an intimation of what life lived largely off the clock can be like when much of what passes for work is suspended or slowed and we are afforded precious ‘little gaps of solitude and silence’, as the French philosopher Gilles Deleuze called them, to engage in worthy pursuits that elude us under normal circumstances. We found incomparable personal freedoms and new opportunities for enrichment and fulfilment in the cessation of many of our standard operating procedures.

Then, as everyone recalls, we were summoned back to the office. But, once we had experienced this new way of being, the prospect of returning to the old order – submitting to the control, policing and surveillance of our former workaday lives – became almost unthinkable, especially for members of a chronically insecure workforce forced to endure low pay, lack of opportunity for advancement, inflexible schedules, and a multitude of everyday insults and indignities. Perhaps the chief insult to us all is the governing assumption that we must be collocated – or collated – to do our best work, despite having demonstrated our capacity for self-directed productivity from home (or other private quarters) under the most trying circumstances.

[…]

In The Scent of Time: A Philosophical Essay on the Art of Lingering (2009), Byung-Chul Han suggests that our experience of intervals is being ‘destroyed in order to produce total proximity and simultaneity’. When everything (and everyone) is within reach at all times, we lose a sense of what it means to be in – and even to savour – transitional states of in-betweenness. As an antidote, [some authors] recommend that we ‘tarry with time’ and ‘make spaces for the play of purposeless action’.

We can, in other words, reappropriate some of the time and space being withdrawn from us. These can be reclaimed in the fugitive moments we thieve from the calendar, or they can be recovered in what the anarchist Hakim Bey in 1985 called ‘temporary autonomous zones’: undetectable underground enclaves that we carve out of the landscape of our everyday lives in order to find or free ourselves. Simultaneously, practices of disengagement might withdraw from organisations (workplaces primary among them) their extraordinary power to mediate – to dictate and direct – far too many aspects of our existence and experience. Opting to bypass certain workplace amenities and conveniences expertly designed to keep us at work – the cafeteria, the fitness centre, the dry cleaner, the onsite health clinic – might not seem like much of a tactic of rebellion, but it does its part to lessen our dependence on our employer as lifehack, helpmate or healer.

[…]

Withdrawal has an almost universally negative connotation in public life, where it is treated as the ultimate transgression and disdained as retreat or defeat – the very opposite of engagement. However, to withdraw is also, crucially, to repair – both to go to a place and to mend. From this perspective, withdrawal is not merely a defeatist tack; rather, it is, or can be, direct action for a restoration of intellectual life – the kind that is free to ask (to fully engage with) impertinent questions – in settings that have practically banished it, made it inaccessible, or are attempting to monitor and monetise it according to terms not of our choosing.

[…]

Among the questions some of us are investigating in our contemplative moments of disengagement, withdrawal, removal, retreat or escape – however we choose to designate those instances when we take our leave – are these: when, or to what extent, do our norms of organisational affiliation and attachment make us sick or otherwise compound the very problems such forms of connection are meant to solve? In what ways might our occasional absences improve our solitary and even our solidary experiences of work and of life more generally?

Source: Aeon

Claude's Prompt Library

Screenshot of Anthropic's Prompt Library

Anthropic, the organisation set up by ex-OpenAI staffers, has recently released Claude 3. This is apparently even more powerful than GPT-4, although I haven’t had a chance to play with it yet.

Alongside the release, Anthropic has also shared a Prompt Library which, I guess, is the equivalent of OpenAI’s GPTs.

Source: Anthropic Prompt Library

Sports betting and neoliberal atomisation

This image portrays a lone figure illuminated by the light of a smartphone screen, surrounded by darkness. It captures the solitary nature of sports betting through apps, contrasting the vibrant world of sports with the individual's isolation.

The only times I’ve ever bet on sports were with my father. Back when I lived at home, we’d all choose a horse in the Grand National (out of a hat) and I’d go down with him to the bookies to put the bets on. And then, when we went to a football match at Sunderland, we’d decide what bet to put on, too.

I’ve never bet on sports by myself. It’s a slippery slope, as I know what I’m like. When I was my son’s age (17) I was mildly addicted to scratchcards for a few weeks, but quit when I won enough to break even. That’s why the whole world of sports betting, which I know must be huge given that almost every Premier League football team is sponsored by a related company, is a black box to me.

Drew Austin talks about sports betting not only being the further atomisation of an activity which was at least nominally social, but also the way that it reduces a complex bundle of qualitative emotions down to a set of flat, quantitative, numbers.

As the Facebook/Google/Twitter clearnet dissolves and the internet becomes a dark forest, another relatively recent tech category offers a lens for anticipating the future of shared experience and solipsism: sports betting apps. Although largely unleashed by regulatory changes rather than technical innovation, the rise of mainstream, app-enabled sports gambling has reframed a still-powerful bulwark of mass culture as a solitary pursuit. As televised sports continue fragmenting into digital content just like everything else, sports betting creates a derivative market on top of that content, which in turn yields its own additional bounty of content. If you’ve ever bet on a game and then watched it with other people, you probably realized quickly that nobody cares about your betting angle(s) and that you have to shut up about it. You’re on your own. But if you show up at the Super Bowl party wearing a Kansas City Chiefs jersey, you are a legible entity, and everyone has something to talk to you about. To bet on sports is to share the same space (literal or figurative) with a multitude of people who have their own specific angle and only the meta-game in common. Sports gambling is even more fascinating, however, in the way it alters your brain as a spectator of the game: You exchange a complex bundle of emotional and aesthetic nuance for a purely quantitative perspective, which highlights everything that benefits you and pushes the rest to the background. It’s how it would feel to be a computer watching sports. A lot of things we do on the internet feel like that. Who needs NPCs to interact with when we all act like them anyway? We pay so much attention to how computers are learning to be human, but forget we’re also learning to act like them.

Source: Kneeling Bus

Image: DALL-E 3

Subject, Consumer, Citizen

Andrew Curry reflects on the work of Jon Alexander, author of a book called Citizens (2022). Alexander has been on a bit of a journey talking to people, and has made some discoveries.

The image features three columns, each representing different societal roles: SUBJECT, CONSUMER, and CITIZEN, each with its own background color—orange, pink, and blue, respectively. For the SUBJECT, words such as DEPENDENT, RELIGIOUS, DUTY, OBEY, RECEIVE, COMMAND, PRINT, HIERARCHY, and SUBJECTIVE are listed, set against a light orange striped background. The CONSUMER column has words like INDEPENDENT, MATERIAL, RIGHTS, DEMAND, CHOOSE, SERVE, ANALOGUE, BUREAUCRACY, and OBJECTIVE, all on a pink striped background. Lastly, the CITIZEN column lists INTERDEPENDENT, SPIRITUAL, PURPOSE, PARTICIPATE, CREATE, FACILITATE, DIGITAL, NETWORK, and DELIBERATIVE, against a light blue striped background. The text is arranged vertically in a sans-serif font, and each word is placed in a horizontal alignment with its counterparts in the other columns.

I’m mainly sharing this for the diagram, which Stowe Boyd also picked up on, and provides a better commentary than I ever could. All I’ll say is that it’s good to see things laid out so clearly, although I would have put the ‘Subject’ column to the right (where it is politically) and made it an easier-to-read colour!

[H]eaven knows we have a lot of Sensible Grown-Up Politicians around the place. Albanese in Australia, Starmer in the UK. But: because they have not yet realised, or acknowledged, that our political systems are failing, they don’t have the tools to deal with authoritarianism.

But it’s not just down to them. We can’t sit down in Restaurant Hope and wait for the menu. We need to be in the kitchen. […]

Authoritarians offer to replace this with a story about being a subject: if we put them in power, they will fix things for us (although they don’t, of course).

[…]

We need to believe in people if we, the people, are to have any hope for ourselves and for humanity.

Source: Just Two Things

A truly liberatory (digital) future for everyone

A pixelated skull graphic is centered on a black background, with horizontal bands of vibrant colors intersecting the image, simulating a visual glitch. The colors — light gray, dark gray, bright red, yellow, and blue — appear in sharp, fragmented lines that give the impression of the image being momentarily disrupted by digital interference. The pattern of gray crosses is subtly visible in the background, further adding to the glitch effect. This digital distortion suggests that the skull image is experiencing a moment of digital decay, reminiscent of static interference on an old television screen.

After giving a potted history of the internet and all of the ways it has failed to live up to its promise, Paris Marx suggests that we need to start over with the entire tech industry. It’s hard to disagree.

My internet habits are vastly different to what they were a decade ago. Back then, I was seven years into using Twitter, and had a great following and ‘personal learning network’. The world, pre-Brexit and Trump, had the seeds of the turmoil to come, but Big Tech was nowhere near as brazen as it is post-pandemic, and coked-up on AI fever dreams.

There can only be one conclusion from all of this: the digital revolution has failed. The initial promise was a deception to lay the foundation for another corporate value-creation scheme, but the benefits that emerged from it have been so deeply eroded by commercial imperatives that the drawbacks far outweigh the remaining redeeming qualities — and that only gets worse with every day generative AI tools are allowed to keep flooding the web with synthetic material.

The time for tinkering around the edges has passed, and like a phoenix rising from the ashes, the only hope to be found today is in seeking to tear down the edifice the tech industry has erected and to build new foundations for a different kind of internet that isn’t poisoned by the requirement to produce obscene and ever-increasing profits to fill the overflowing coffers of a narrow segment of the population.

There were many networks before the internet, and there can be new networks that follow it. We don’t have to be locked into the digital dystopia Silicon Valley has created in a network where there was once so much hope for something else entirely. The ongoing erosion already seems to be sending people fleeing by ditching smartphones (or at least trying to reduce how much they use them), pulling back from the mess that social media has become, and ditching the algorithmic soup of streaming services.

Personal rejection is a welcome development, but as the web declines, we need to consider what a better alternative could look like and the political project it would fit within. We also can’t fall for any attempt to cast a libertarian “declaration of independence” as a truly liberatory future for everyone.

Source: Disconnect

Image: DALL-E 3

Moderation is up to us now

A lighthouse stands tall on a rugged cliff, emitting a powerful, multi-colored beam of light that pierces through a dark, stormy sea below. The beam, in shades of light gray, dark gray, bright red, yellow, and blue, guides small boats carrying diverse internet users towards a calm, welcoming shore. This scene symbolizes the efforts of individuals and communities to navigate through the chaotic and often hostile digital ocean, seeking safe, inclusive online environments. The lighthouse serves as a beacon of hope and guidance amidst the tumultuous waters, representing the importance of creating a supportive and protective space for all users in the vast digital landscape.

I’ve curated my comfy middle-class life to such a degree that I mostly hear about the dark underbelly of the web / toxic online behaviour through publications such as Ryan Broderick’s excellent Garbage Day.

In his latest missive, Broderick gives the example of a comedian I’ve never encountered before by the name of Shane Gillis. Go and read the whole thing for the bigger context, but the main point Broderick is making I’ve bolded below. I would point out that the Fediverse is, in my experience, on the whole well-moderated. At least, better moderated than centralised social networks such as X and Instagram.

Last year, Gillis was a guest on the unwatchable “comedy” podcast Flagrant and had to tell the hosts to stop pulling up and laughing at videos of people with Down Syndrome dancing. Clips from the episode recently started making the rounds again this week on Reddit and X. It’s incredibly uncomfortable to watch.

And, sure, Gillis is not directly organizing any of this larger edgelord behavior. But he can’t be separated from it either. As I wrote above, the companies that run the internet have all but given up moderating it, so that work has to be done by us now. We have to manage our own communities and we have to look out for the most vulnerable. People with Down Syndrome and their loved ones should be able to openly share their lives online without worrying about getting turned into a meme or converted into engagement bait by some anonymous goblin. Even if that means dropping your chill bro facade and riling up the Stoolies when you tell people to stop.

Gillis has the biggest podcast on Patreon. He’s been at the top of their charts for over a year. He has a massive platform and he built it by letting every awful guy in the country project themselves on to him. And while he does genuinely seem to really want to use that fame to bring visibility to the Down Syndrome community — and I think it’s admirable that he does — he’s not willing to draw a clear line between visibility and exploitation.

Source: Garbage Day

Image: DALL-E 3

Hope vs Natality

Trigger warnings: death, persecution, suicide

The image portrays a grounded, realistic scene within that visually interprets the concept of natality, inspired by Hannah Arendt. It depicts a community gathering in a park or natural setting, actively participating in planting trees and caring for a garden. This setting symbolizes the principle of natality through the act of nurturing new life and the collaborative effort to foster growth and renewal. The scene embodies the essence of natality as the capacity for continuous human existence, highlighting practical actions that contribute to the creation of a hopeful future.

Over on my personal blog I wrote that, given the depth of the climate emergency, ‘hope’ is the wrong thing to be focusing upon. Will Richardson left a comment which pointed me towards this article by Samantha Rose Hill, a biographer of Hannah Arendt, for Aeon.

Arendt was a German-American historian and philosopher who escaped the Nazis. This article is about Arendt’s rejection in her work of the concept of ‘hope’ as being a lot less useful than action. Before getting to Arendt’s thoughts, I just want to share this quotation that is included in the article from Tadeusz Borowski, a Polish poet who wrote about the ways in which hope was used to destroy Jewish humanity. Borowski wrote the following lines while reflecting on his imprisonment in Auschwitz. He killed himself soon afterwards:

Never before in the history of mankind has hope been stronger than man, but never also has it done so much harm as it has in this war, in this concentration camp. We were never taught how to give up hope, and this is why today we perish in gas chambers.

Arendt suggests that hope is part of a desire for a happy ending, not based on the facts around us, but rather wishful thinking:

Many discussions of hope veer toward the saccharine, and speak to a desire for catharsis. Even the most jaded observers of world affairs can find it difficult not to catch their breath at the moment of suspense, hoping for good to triumph over evil and deliver a happy ending. For some, discussions of hope are attached to notions of a radical political vision for the future, while for others hope is a political slogan used to motivate the masses. Some people uphold hope as a form of liberal faith in progress, while for others still hope expresses faith in God and life after death.

Arendt breaks with these narratives. Throughout much of her work, she argues that hope is a dangerous barrier to acting courageously in dark times. She rejects notions of progress, she is despairing of representative democracy, and she is not confident that freedom can be saved in the modern world. She does not even believe in the soul, as she writes in one love letter to her husband. The political theorist George Kateb once remarked that her work is ‘offensive to a democratic soul’. When she was awarded an honorary degree at Smith College in Massachusetts in 1966, the president said: ‘Your writings challenge the mind, disturb the conscience, and depress the spirit of your readers; yet out of your wisdom and firm belief in mankind’s inner strength comes a sure hope.’

I’ve been listening to Ep.28 (‘Superhumanly Inhuman’) of Dan Carlin’s Hardcore History: Addendum which is about the Holocaust. It’s absolutely awful listening, but important stuff to know about. The article continues by talking about this dark period for Jewish and world history:

It was holding on to hope, Arendt argued, that rendered so many helpless. It was hope that destroyed humanity by turning people away from the world in front of them. It was hope that prevented people from acting courageously in dark times.

Caught between fear and ‘feverish hope’, the inmates in the ghetto were paralysed. The truth of ‘resettlement’ and the world’s silence led to a kind of fatalism. Only when they gave up hope and let go of fear, Arendt argues, did they realise that ‘armed resistance was the only moral and political way out’.

Instead, Arendt coined a new term, ‘natality’, which celebrates the miracle of birth and continued human existence:

An uncommon word, and certainly more feminine and clunkier-sounding than hope, natality possesses the ability to save humanity. Whereas hope is a passive desire for some future outcome, the faculty of action is ontologically rooted in the fact of natality. Breaking with the tradition of Western political thought, which centred death and mortality from Plato’s Republic through to Heidegger’s Being and Time (1927), Arendt turns towards new beginnings, not to make any metaphysical argument about the nature of being, but in order to save the principle of humanity itself. Natality is the condition for continued human existence, it is the miracle of birth, it is the new beginning inherent in each birth that makes action possible, it is spontaneous and it is unpredictable. Natality means we always have the ability to break with the current situation and begin something new. But what that is cannot be said.

Hill, the author of the Aeon article, argues that:

Conceptually, natality can be understood as the flipside of hope:

  • Hope is dehumanising because it turns people away from this world.
  • Hope is a desire for some predetermined future outcome.
  • Hope takes us out of the present moment.
  • Hope is passive.
  • Hope exists alongside evil.
  • Natality is the principle of humanity.
  • Natality is the promise of new beginnings.
  • Natality is present in the Now.
  • Natality is the root of action.
  • Natality is the miracle of birth.

What I love about this approach is that, as the article says, it’s kind of a “secular article of faith,” placing the responsibility for action firmly in our hands. Hope is, to some degree, the wish to be told soothing stories by an authoritative figure. It’s time for us to grow up.

Source: Aeon

Image: DALL-E 3

Post-Holocene preferable future habitats

A futuristic but natural scene showing transport, housing, and community

If someone asks me “what kind of future would you like to live in?” I’m going to just point them to this. It’s the work of Pascal Wicht, a systems thinker and strategic designer who specialises in tackling complex and ill-defined problems.

The dangers and problems with generative AI are many and well-documented. What I love about it is that all of a sudden we can quickly create things that we point to for inspiration and alternative futures. In this case, Wicht is experimenting with the Midjourney v6 Alpha, and there are many more images here.

Future Visualisations for Preferable Futures, using the MidJourney’s Generative Adversarial Networks.

I am in my third week of long Covid again. I can spend one or two hours per day on AI images and doing some writing. These images are part of what kept me motivated while mostly stuck in bed.

In this ongoing series, I continue to use the power of AI to explore a compelling question: What does a future look like where we successfully slow down and avert the looming abominations of collapse and extinction?

Source: Whispers & Giants

Image CC BY-NC Pascal Wicht

Being a good listener also means being a good talker

A child and an adult engage in a respectful conversation at eye level, seated at a dark gray table with light gray chairs. The environment, accented with elements in bright red, yellow, and blue, underscores the importance of treating children with the same respect and dignity as adults, emphasizing the value of meaningful communication.

What an absolutely fantastic read this is. I’d encourage everyone to read it in its entirety, especially if you’re a parent. The list of things that the author, Molly Brodak, suggests we try out is:

  1. Let people feel their feels.
  2. Check your own emotions.
  3. Talk to children as if they are people.
  4. Don’t give advice. Not really.
  5. Don’t relate.
  6. Ask questions.

I find #5 difficult, have gotten better at #4, and think that #3 is really, super important. I used to hate being talked to ‘differently’ as a child (compared to adults), and have noticed how much kids appreciate being talked to without being patronised.

I’m a child of a therapist. What that means is that I was expertly listened-to most of my life. And then, wow, I met the rest of the world.

It’s a good thing for our survival. It’s what makes this whole civilization thing possible, these linked minds. So why are so many people still so bad at listening?

One reason is this myth: that the good listener just listens. This egregious misunderstanding actually leads to a lot of bad listening, and I’ll tell you why: because a good listener is actually someone who is good at talking.

Source: Tomb Log

Image: DALL-E 3

AI agents as customers

A modern living room with light and dark gray decor features a bright red smart speaker connected to various smart home devices in yellow and blue by glowing blue lines, symbolizing the initial phase of technological integration governed by human rules.

I don’t often visit Medium other than when I’m writing a post for the WAO blog. When I’m there, it’s unlikely that any of the ‘recommended’ articles grab my attention. But this one did.

Although it seems ‘odd’, when you come to think of it the notion of businesses selling to machines as well as humans makes complete sense. It won’t be long until, for better or worse, many of us have AI agents acting on our behalf: not only helping us with routine tasks and giving advice, but also making purchases for us.

Obviously, the entity behind this blog post, a “next-generation professional services company”, has an interest in this becoming a reality. But it seems plausible.

Below is a timeline that encapsulates this progression, providing a roadmap for navigating the impending shifts in the landscape of consumer behavior:

  1. Bound customer (today): Here, humans set the rules, and machines follow, executing purchases for specific items. This is seen in today’s smart devices and services like automated printer ink subscriptions.

  2. Adaptable customer (by 2026): Machines will co-lead with humans, making optimized decisions from a set of choices. This will be reflected in smart home systems that can choose energy providers.

  3. Autonomous customer (by 2036): The machine will take the lead, inferring needs and making purchases based on a complex understanding of rules, content, and preferences, such as AI personal assistants managing daily tasks.
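The first stage in this timeline is already expressible as ordinary automation: the human sets an explicit rule, and the machine merely executes it. A minimal sketch of a ‘bound customer’, with the threshold, cartridge names, and ordering callback all invented for illustration:

```python
# Hypothetical "bound customer" automation: the human-set rule is
# "reorder any ink cartridge that falls below 15%". The machine has
# no discretion -- it just applies the rule.
INK_REORDER_THRESHOLD = 0.15

def check_and_reorder(ink_levels, place_order):
    """Apply the human-set rule and return which cartridges were ordered."""
    ordered = []
    for cartridge, level in ink_levels.items():
        if level < INK_REORDER_THRESHOLD:
            place_order(cartridge)  # e.g. call a supplier's API
            ordered.append(cartridge)
    return ordered

# Example: only the black cartridge is low enough to trigger an order.
placed = []
check_and_reorder({"black": 0.08, "cyan": 0.42}, placed.append)
print(placed)  # ['black']
```

The later stages in the timeline differ in who writes the rule: an ‘adaptable’ agent would choose among options itself, and an ‘autonomous’ one would infer the need in the first place.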

Source: Slalom Business

Image: DALL-E 3

Ultravioleta

'Ultravioleta' image

I’m not sure of the backstory to this drawing (‘Ultravioleta’) by Jon Juarez, but I don’t really care. It looks great, and so I’ve bought a print of it from their shop. They seem, from what I can tell, to have initially withdrawn from social media after companies such as OpenAI and Midjourney started using artists' work for their training data, but are now coming back.

Fediverse: @harriorrihar@mas.to

Shop: Lama

Language is probably less than you think it is

gapingvoid cartoon showing information, knowledge, wisdom, etc.

This is a great post by Jennifer Moore, whose main point is about using AI for software development, but who along the way provides three paragraphs which get to the nub of why tools such as ChatGPT seem somewhat magical.

As Moore points out, large language models aren’t aware. They model things based on statistical probability. To my mind, it’s not so different from when my daughter was doing phonics and learning to recognise the construction of words and the probability of how words new to her would be spelled.

ChatGPT and the like are powered by large language models. Linguistics is certainly an interesting field, and we can learn a lot about ourselves and each other by studying it. But language itself is probably less than you think it is. Language is not comprehension, for example. It’s not feeling, or intent, or awareness. It’s just a system for communication. Our common lived experiences give us lots of examples that anything which can respond to and produce common language in a sensible-enough way must be intelligent. But that’s because only other people have ever been able to do that before. It’s actually an incredible leap to assume, based on nothing else, that a machine which does the same thing is also intelligent. It’s much more reasonable to question whether the link we assume exists between language and intelligence actually exists. Certainly, we should wonder if the two are as tightly coupled as we thought.

That coupling seems even more improbable when you consider what a language model does, and—more importantly—doesn’t consist of. A language model is a statistical model of probability relationships between linguistic tokens. It’s not quite this simple, but those tokens can be thought of as words. They might also be multi-word constructs, like names or idioms. You might find “raining cats and dogs” in a large language model, for instance. But you also might not. The model might reproduce that idiom based on probability factors instead. The relationships between these tokens span a large number of parameters. In fact, that’s much of what’s being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things.

What those parameters don’t represent is anything like knowledge or understanding. That’s just not what LLMs do. The model doesn’t know what those tokens mean. I want to say it only knows how they’re used, but even that is overstating the case, because it doesn’t know things. It models how those tokens are used. When the model works on a token like “Jennifer”, there are parameters and classifications that capture what we would recognize as things like the fact that it’s a name, it has a degree of formality, it’s feminine coded, it’s common, and so on. But the model doesn’t know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.
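As a toy illustration of what “a statistical model of probability relationships between linguistic tokens” might look like at the smallest possible scale, here’s a sketch of a bigram model in Python (the corpus and function name are invented for illustration; real LLMs use learned parameters over vast contexts, not raw counts, but the point stands: there is only frequency, no understanding):

```python
from collections import defaultdict

# Toy corpus: the "model" is nothing but counts of which token follows which.
corpus = "it is raining cats and dogs it is raining hard".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(token):
    """Return the most frequent next token, with no notion of meaning."""
    following = counts[token]
    return max(following, key=following.get) if following else None

print(most_likely_next("is"))  # "raining" — chosen purely by frequency
```

The model happily continues “is” with “raining” without knowing anything about weather; scale the counts up to billions of parameters and longer contexts and you get fluent-sounding text by the same mechanism.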

Source: Jennifer++

Image: gapingvoid

Humans and AI-generated news

A surreal representation of the digital era's climax, where users are depicted as digital avatars being force-fed content by a colossal, mechanical behemoth. This machine, symbolizing Big Tech, is fueled by outrage and engagement, its machinery adorned with rising shareholder value graphs, all portrayed in an imaginative color scheme of Light Gray, Dark Gray, Bright Red, Yellow, and Blue.

The endgame of news, as far as Big Tech is concerned, is, I guess, just-in-time created content for ‘users’ (specified in terms of ad categories) who then react in particular ways. That could be purchasing a thing, but it could also be outrage, meaning more time on site, more engagement, and more shareholder value.

Like Ryan Broderick, I have some faith that humans will get sick of AI-generated content, just as they got sick of videos and list posts. But I also have this niggling doubt: the tendency is to see AI only through the lens of tools such as ChatGPT. That’s not what the AI of the future is likely to resemble, at all.

Adweek broke a story this week that Google will begin paying publications to use an unreleased generative-AI tool to produce content. The details are scarce, but it seems like Google is largely approaching small publishers and paying them an annual “five-figure sum”. Good lord, that’s low.

Adweek also notes that the publishers don’t have to publicly acknowledge they’re using AI-generated copy and the, presumably, larger news organizations the AI is scraping from won’t be notified. As tech critic Brian Merchant succinctly put it, “The nightmare begins — Google is incentivizing the production of AI-generated slop.”

Google told Engadget that the program is not meant to “replace the essential role journalists have in reporting, creating, and fact-checking their articles,” but it’s also impossible to imagine how it won’t, at the very least, create a layer of garbage above or below human-produced information surfaced by Google. Engadget also, astutely, compared it to Facebook pushing publishers towards live video in the mid-2010s.

[…]

Companies like Google or OpenAI don’t have to even offer any traffic to entice publishers to start using generative-AI. They can offer them glorified gift cards and the promise of an executive’s dream newsroom: one without any journalists in it. But the AI news wire concept won’t really work because nothing ever works. For very long, at least. The only thing harder to please than journalists are readers. And I have endless faith in an online audience’s ability to lose interest. They got sick of lists, they got sick of Facebook-powered human interest social news stories, they got sick of tweet roundups, and, soon, they will get sick of “news” entirely once AI finally strips it of all its novelty and personality. And when the next pivot happens — and it will — I, for one, am betting humans figure out how to adapt faster than machines.

Source: Garbage Day

Image: DALL-E 3

Elegant media consumption

A landscape divided into a digital stream and a creative river, with the former depicted in light gray, dark gray, and bright red, symbolizing media consumption, and the latter in yellow and blue, illustrating people engaging in creative pursuits along its banks, highlighting the balance between digital engagement and personal creativity.

Jay Springett shares some media consumption figures. It blows my mind how much time people spend consuming media rather than making stuff.

I was hanging out with a friend the other week and we were talking about our ‘little hobbies’ as we called them. All the things that we’re interested in. Our niches that we nerd out about which aren’t the sort of thing that we can talk to people about at any great length.

We got to wondering about how we spend our time, and what other people spend their time doing. We had a big conversation with our other friends at the table with us about what they do with their time. Their answers weren’t all that far away from these stats I’ve just Googled:

Did you know in the UK in January 2024, adults watched an average of 2 hours 31 minutes a day of linear TV?

Meanwhile, a Pinterest user spends about 14 minutes on the platform daily, BUT “83% of weekly Pinterest users report making a purchase based on content they encountered”

The average podcast listener spends an hour a day listening to podcasts.

Using a different metric, an average audiobook enjoyer spends an average of 2 hours 19 minutes every day.

In Q3 of 2023 the average amount of time spent on social media a day was 2 hours and 23 minutes. One in three internet minutes spent online can be attributed to social media platforms.

I like this combined, more holistic statistic.

According to a study on media consumption in the United Kingdom, the average time spent consuming traditional media is consistently decreasing while people spend more time using digital media. In 2023, it is estimated that people in the United Kingdom will spend four hours and one minute using traditional media, while the average daily time spent with digital media is predicted to reach six hours and nine minutes.

Source: thejaymo

Image: DALL-E 3

Philosophy and folklore

An ancient library transitions into an enchanted forest, where mystical creatures and philosophers exchange ideas, under a canopy of intertwined branches and glowing manuscripts, illustrating the harmonious integration of folklore and philosophy, depicted in light gray, dark gray, bright red, yellow, and blue.

I love this piece in Aeon from Abigail Tulenko, who argues that folklore and philosophy share a common purpose in challenging us to think deeply about life’s big questions. Her essay is essentially a critique of academic philosophy’s exclusivity, and she calls for a broader, more inclusive approach that embraces… folklore.

Tulenko suggests that folktales, with all of their richness and diversity, offer fresh perspectives and can invigorate philosophical discussions by incorporating a wider range of experiences and ideas. By integrating folklore into philosophical inquiry, she suggests, there is the potential to democratise the field and make it not only more accessible and engaging, but also to break down academic barriers and encourage interdisciplinary collaboration.

I’m all for it. Although it’s problematic to talk about Russian novels and culture at the moment, there are some tales from that country which are deeply philosophical in nature. I’d also include things like Dostoevsky’s Crime and Punishment as a story from which philosophers can glean insights.

The Hungarian folktale Pretty Maid Ibronka terrified and tantalised me as a child. In the story, the young Ibronka must tie herself to the devil with string in order to discover important truths. These days, as a PhD student in philosophy, I sometimes worry I’ve done the same. I still believe in philosophy’s capacity to seek truth, but I’m conscious that I’ve tethered myself to an academic heritage plagued by formidable demons.

[…]

I propose that one avenue forward is to travel backward into childhood – to stories like Ibronka’s. Folklore is an overlooked repository of philosophical thinking from voices outside the traditional canon. As such, it provides a model for new approaches that are directly responsive to the problems facing academic philosophy today. If, like Ibronka, we find ourselves tied to the devil, one way to disentangle ourselves may be to spin a tale.

Folklore originated and developed orally. It has long flourished beyond the elite, largely male, literate classes. Anyone with a story to tell and a friend, child or grandchild to listen, can originate a folktale. At the risk of stating the obvious, the ‘folk’ are the heart of folklore. Women, in particular, have historically been folklore’s primary originators and preservers. In From the Beast to the Blonde (1995), the historian Marina Warner writes that ‘the predominant pattern reveals older women of a lower status handing on the material to younger people’.

[…]

To answer that question [folklore may be inclusive, but is it philosophy?], one would need at least a loose definition of philosophy. This is daunting to provide but, if pressed, I’d turn to Aristotle, whose Metaphysics offers a hint: ‘it is owing to their wonder that men both now begin, and at first began, to philosophise.’ In my view, philosophy is a mode of wondrous engagement, a practice that can be exercised in academic papers, in theological texts, in stories, in prayer, in dinner-table conversations, in silent reflection, and in action. It is this sense of wonder that draws us to penetrate beyond face-value appearances and look at reality anew.

[…] Beyond ethics, folklore touches all the branches of philosophy. With regard to its metaphysical import, Buddhist folklore provides a striking example. When dharma – roughly, the ultimate nature of reality – ‘is internalised, it is most naturally taught in the form of folk stories: the jataka tales in classical Buddhism, the koans in Zen,’ writes the Zen teacher Robert Aitken Roshi. The philosophers Jing Huang and Jonardon Ganeri offer a fascinating philosophical analysis of a Buddhist folktale seemingly dating back to the 3rd century BCE, which they’ve translated as ‘Is This Me?’ They argue that the tale constructs a similar metaphysical dilemma to Plutarch’s ‘ship of Theseus’ thought-experiment, prompting us to question the nature of personal identity.

Source: Aeon

Image: DALL-E 3

3 issues with global mapping of micro-credentials

A fantastical battlefield where traditional educational gatekeepers, depicted as towering structures, face off against rebels wielding glowing Open Badges and alternative credentials, using them to break through barriers, highlighted in shades of gray, red, yellow, and blue.

If you’ll excuse me for a brief rant, I have three nested issues with this ‘global mapping initiative’ from Credential Engine’s Credential Transparency Initiative. The first is situating micro-credentials as “innovative, stackable credentials that incrementally document what a person knows and can do”. No, micro-credentials, with or without the hyphen, are a higher education re-invention of Open Badges, and often conflate the container (i.e. the course) with the method of assessment (i.e. the credential).

Second, the whole point of digital credentials such as Open Badges is to enable the recognition of a much wider range of things than formal education usually provides. Not to double-down on the existing gatekeepers. This was the point of the Keep Badges Weird community, which has morphed into Open Recognition is for Everybody (ORE).

Third, although I recognise the value of approaches such as the Bologna Process, initiatives which map different schemas against one another inevitably flatten and homogenise localised understandings and ways of doing things. It’s the educational equivalent of Starbucks colonising cities around the world.

So I reject the idea at the heart of this, other than to prop up higher education institutions which refuse to think outside of the very narrow corner into which they have painted themselves by capitulating to neoliberalism. Credentials aren’t “less portable” because there is no single standardised definition. That’s a non sequitur. If you want a better approach to all this, which might be less ‘efficient’ for institutions, but which is more valuable for individuals, check out Using Open Recognition to Map Real-World Skills and Attributes.

Because micro-credentials have different definitions in different places and contexts, they are less portable, because it’s harder to interpret and apply them consistently, accurately, and efficiently.

The Global Micro-Credential Schema Mapping project helps to address this issue by taking different schemas and frameworks for defining micro-credentials and lining them up against each other so that they can be compared. Schema mapping involves crosswalking the defined terms that are used in data structures. The micro-credential mapping does not involve any personally identifiable information about people or the individual credentials that are issued to them – the mapping is done across metadata structures. This project has been initially scoped to include schema terms defining the micro-credential owner or offeror, issuer, assertion, and claim.

Source: Credential Engine

Image: DALL-E 3

Perhaps stop caring about what other people think (of you)

A vibrant city street where masks lie discarded, and individuals radiate their true selves in bright, unique colors, symbolizing the liberation from pretense and the embrace of authenticity.

In this post, Mark Manson, author of _The Subtle Art of Not Giving a F*ck_, outlines ‘5 Life-Changing Levels of Not Giving a Fuck’. It’s not for those with an aversion to profanity, but, having read his book, what I like about Manson’s work is that he’s essentially applying some of the lessons of Stoic philosophy to modern life. An alternative might be Derren Brown’s book Happy: Why more or less everything is absolutely fine.

Both books are a reaction to the self-help industry, which doesn’t really deal with the root cause of suffering in the world. As the first lines of Epictetus' Enchiridion note: “Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.”

Manson’s post is essentially a riff on this, outlining five ‘levels’ of, essentially, getting over yourself. There’s a video, if you prefer, but I’m just going to pull out a couple of parts from the post which I think are actually most life-changing if you can internalise them. At the end of the day, unless you’re in a coercive relationship of some description, the only person that can stop you doing something is… yourself.

The big breakthrough for most people comes when they finally drop the performance and embrace authenticity in their relationships. When they realize no matter how well they perform, they’re eventually gonna be rejected by someone, they might as well get rejected for who they already are.

When you start approaching relationships with authenticity, by being unapologetic about who you are and living with the results, you realize you don’t have to wait around for people to choose you, you can also choose them.

[…]

Look, you and everyone you know are gonna die one day. So what the fuck are you waiting for? That goal you have, that dream you keep to yourself, that person you wanna meet. What are you letting stop you? Go do it.

Source: Mark Manson

Image: DALL-E 3

Educators should demand better than 'semi-proctored writing environments'

Screenshot showing PowerNotes tool highlighting copy/pasted text and AI-generated text

My longer rant about the whole formal education system of which this is a symptom will have to wait for another day, but this (via [Stephen Downes](https://www.downes.ca/cgi-bin/page.cgi?post=76308)) makes me despair a little. Noting that "it is essentially impossible for one machine to determine if a piece of writing was produced by another machine", one company has decided to create a "semi-proctored writing tool" to "protect academic integrity".

Generative AI is disruptive, for sure. But as I mentioned on my recent appearance on the Artificiality podcast, it’s disruptive to a way of doing assessment that makes things easier for educators. Writing essays to prove that you understand something is an approach which was invented a long time ago. We can do much better, including using technology to provide much more individualised feedback, and allowing students to use technology to apply what they’re learning much more closely to their own practice.

Update: check out the AI Pedagogy project from metaLAB at Harvard

PowerNotes built Composer in response to feedback from educators who wanted a word processor that could protect academic integrity as AI is being integrated into existing Microsoft and Google products. It is essentially impossible for one machine to determine if a piece of writing was produced by another machine, so PowerNotes takes a different approach by making it easier to use AI ethically. For example, because AI is integrated into and research content is stored in PowerNotes, copying and pasting information from another source should be very limited and will be flagged by Composer.

If a teacher or manager does suspect the inappropriate use of AI, PowerNotes+ and Composer help shift the conversation from accusation to evidence by providing a clear trail of every action a writer has taken and where things came from. Putting clear parameters on the AI-plagiarism conversation keeps the focus on the process of developing an idea into a completed paper or presentation.

Source: eSchool News