Thought Shrapnel

Feb 27, 2024 ↓

The perils of over-customising your setup

XKCD comic #1806
Hat: Can I load it up on your laptop?

Other: Sure!

Oh, just hit both shift keys to change over to qwerty.

Capslock is Control.
And Spacebar is Capslock.

And two-finger scroll moves through time instead of space.

And ---

Until about a decade ago, I used to heavily customise my digital working environment. I'd have custom keyboard shortcuts, and automations, and all kinds of things. What I learned was that a) these things take time to maintain, and b) using computers other than your own becomes that much harder. I think the turning point was reading Clay Shirky say "current optimization is long-term anachronism".

So, these days, I run macOS on my desktop with pretty much the out-of-the-box configuration. My laptop runs ChromeOS Flex. I think if I went back to Linux, I'd probably go for something like Fedora Silverblue, which, like ChromeOS, is an immutable system: the system files are read-only, which makes for an extremely stable system.

One other point, which might not work for everyone but works for me: it's been seven years since I ditched my cloud-based password manager for a deterministic one. Although my passwords don't auto-fill, it's easy for me to access them anywhere, on any device. And they're not stored anywhere, meaning there's no single point of failure.
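For anyone wondering what 'deterministic' means here: the password is re-derived on demand from a master passphrase and the site name, rather than being stored and synced. Below is a minimal Python sketch of that general idea — illustrative only, with made-up function and parameter names, and not how any particular password manager actually implements it.

```python
# Minimal sketch of deterministic password generation (illustrative only —
# not the scheme any specific password manager uses).
import base64
import hashlib


def derive_password(master_passphrase: str, site: str, username: str,
                    length: int = 20) -> str:
    """Derive a site-specific password from a master passphrase.

    The same inputs always produce the same output, so nothing has to be
    stored or synced between devices.
    """
    # The site and username act as a salt, so each site gets a different password.
    salt = f"{site}:{username}".encode("utf-8")
    key = hashlib.scrypt(master_passphrase.encode("utf-8"), salt=salt,
                         n=2**14, r=8, p=1, dklen=32)
    # Encode the derived key into printable characters and truncate.
    return base64.b85encode(key).decode("ascii")[:length]


print(derive_password("correct horse battery staple", "example.com", "doug"))
```

The trade-off is exactly the one mentioned above: no auto-fill, but also nothing to leak, lose, or sync.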

Source: xkcd

Feb 27, 2024 ↓

Educators should demand better than 'semi-proctored writing environments'

Screenshot showing PowerNotes tool highlighting copy/pasted text and AI-generated text

My longer rant about the whole formal education system of which this is a symptom will have to wait for another day, but this (via Stephen Downes) makes me despair a little. Noting that "it is essentially impossible for one machine to determine if a piece of writing was produced by another machine", one company has decided to create a "semi-proctored writing tool" to "protect academic integrity".

Generative AI is disruptive, for sure. But as I mentioned during my recent appearance on the Artificiality podcast, it's disruptive to a way of doing assessment that makes things easier for educators. Writing essays to prove that you understand something is an approach that was invented a long time ago. We can do much better, including using technology to provide much more individualised feedback, and allowing students to apply what they're learning much more closely to their own practice.

Update: check out the AI Pedagogy project from metaLAB at Harvard

PowerNotes built Composer in response to feedback from educators who wanted a word processor that could protect academic integrity as AI is being integrated into existing Microsoft and Google products. It is essentially impossible for one machine to determine if a piece of writing was produced by another machine, so PowerNotes takes a different approach by making it easier to use AI ethically. For example, because AI is integrated into and research content is stored in PowerNotes, copying and pasting information from another source should be very limited and will be flagged by Composer.

If a teacher or manager does suspect the inappropriate use of AI, PowerNotes+ and Composer help shift the conversation from accusation to evidence by providing a clear trail of every action a writer has taken and where things came from. Putting clear parameters on the AI-plagiarism conversation keeps the focus on the process of developing an idea into a completed paper or presentation.

Source: eSchool News

Feb 27, 2024 ↓

Perhaps stop caring about what other people think (of you)

A vibrant city street where masks lie discarded, and individuals radiate their true selves in bright, unique colors, symbolizing the liberation from pretense and the embrace of authenticity.

In this post, Mark Manson, author of [The Subtle Art of Not Giving a F*ck](https://en.wikipedia.org/wiki/The_Subtle_Art_of_Not_Giving_a_F*ck), outlines '5 Life-Changing Levels of Not Giving a Fuck'. It's not for those with an aversion to profanity, but having read his book, what I like about Manson's work is that he's essentially applying some of the lessons of Stoic philosophy to modern life. An alternative might be Derren Brown's book Happy: Why more or less everything is absolutely fine.

Both books are a reaction to the self-help industry, which doesn't really deal with the root cause of suffering in the world. As the first lines of Epictetus' Enchiridion note: "Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions."

Manson's post is essentially a riff on this, outlining five 'levels' of, essentially, getting over yourself. There's a video, if you prefer, but I'm just going to pull out a couple of parts from the post which I think are actually most life-changing if you can internalise them. At the end of the day, unless you're in a coercive relationship of some description, the only person that can stop you doing something is... yourself.

The big breakthrough for most people comes when they finally drop the performance and embrace authenticity in their relationships. When they realize no matter how well they perform, they’re eventually gonna be rejected by someone, they might as well get rejected for who they already are.

When you start approaching relationships with authenticity, by being unapologetic about who you are and living with the results, you realize you don’t have to wait around for people to choose you, you can also choose them.

[...]

Look, you and everyone you know are gonna die one day. So what the fuck are you waiting for? That goal you have, that dream you keep to yourself, that person you wanna meet. What are you letting stop you? Go do it.

Source: Mark Manson

Image: DALL-E 3

Feb 28, 2024 ↓

3 issues with global mapping of micro-credentials

A fantastical battlefield where traditional educational gatekeepers, depicted as towering structures, face off against rebels wielding glowing Open Badges and alternative credentials, using them to break through barriers, highlighted in shades of gray, red, yellow, and blue.

If you'll excuse me for a brief rant, I have three, nested, issues with this 'global mapping initiative' from Credential Engine's Credential Transparency Initiative. The first is situating micro-credentials as "innovative, stackable credentials that incrementally document what a person knows and can do". No, micro-credentials, with or without the hyphen, are a higher education re-invention of Open Badges, and often conflate the container (i.e. the course) with the method of assessment (i.e. the credential).

Second, the whole point of digital credentials such as Open Badges is to enable the recognition of a much wider range of things than formal education usually provides, not to double down on the existing gatekeepers. This was the point of the Keep Badges Weird community, which has morphed into Open Recognition is for Everybody (ORE).

Third, although I recognise the value of approaches such as the Bologna Process, initiatives which map different schemas against one another inevitably flatten and homogenise localised understandings and ways of doing things. It's the educational equivalent of Starbucks colonising cities around the world.

So I reject the idea at the heart of this, which serves mainly to prop up higher education institutions that refuse to think outside of the very narrow corner into which they have painted themselves by capitulating to neoliberalism. Credentials aren't "less portable" because there is no single standardised definition. That's a non sequitur. If you want a better approach to all this, which might be less 'efficient' for institutions but which is more valuable for individuals, check out Using Open Recognition to Map Real-World Skills and Attributes.

Because micro-credentials have different definitions in different places and contexts, they are less portable, because it’s harder to interpret and apply them consistently, accurately, and efficiently.

The Global Micro-Credential Schema Mapping project helps to address this issue by taking different schemas and frameworks for defining micro-credentials and lining them up against each other so that they can be compared. Schema mapping involves crosswalking the defined terms that are used in data structures. The micro-credential mapping does not involve any personally identifiable information about people or the individual credentials that are issued to them – the mapping is done across metadata structures. This project has been initially scoped to include schema terms defining the micro-credential owner or offeror, issuer, assertion, and claim.
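Whatever you make of the initiative, the quoted description of 'crosswalking the defined terms that are used in data structures' boils down to something quite mundane: renaming metadata fields from one schema to another. Here's a toy Python sketch to make that concrete — every field name below is invented for illustration and not taken from any real credential standard.

```python
# Toy illustration of a schema crosswalk: translating metadata field names
# from one (invented) micro-credential schema to another. All field names
# here are hypothetical, not drawn from a real standard.
CROSSWALK = {
    # invented "Schema A" term -> invented "Schema B" term
    "credentialOfferor":   "issuerName",
    "assertionDate":       "awardedOn",
    "claimDescription":    "achievementDescription",
}


def crosswalk_record(record: dict, mapping: dict) -> dict:
    """Rename metadata keys according to the mapping.

    Values pass through untouched; only the field names change, which is
    why this kind of mapping involves metadata structures rather than any
    personal data.
    """
    return {mapping.get(key, key): value for key, value in record.items()}


example = {
    "credentialOfferor": "Example University",
    "assertionDate": "2024-02-28",
    "claimDescription": "Can evaluate ethical issues in data collection",
}
print(crosswalk_record(example, CROSSWALK))
```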

Source: Credential Engine

Image: DALL-E 3

Feb 28, 2024 ↓

Philosophy and folklore

An ancient library transitions into an enchanted forest, where mystical creatures and philosophers exchange ideas, under a canopy of intertwined branches and glowing manuscripts, illustrating the harmonious integration of folklore and philosophy, depicted in light gray, dark gray, bright red, yellow, and blue.

I love this piece in Aeon from Abigail Tulenko, who argues that folklore and philosophy share a common purpose in challenging us to think deeply about life's big questions. Her essay is essentially a critique of academic philosophy's exclusivity and she calls for a broader, more inclusive approach that embraces... folklore.

Tulenko suggests that folktales, with all of their richness and diversity, offer fresh perspectives and can invigorate philosophical discussions by incorporating a wider range of experiences and ideas. By integrating folklore into philosophical inquiry, she argues, there is the potential to democratise the field and make it not only more accessible and engaging, but also to break down academic barriers and encourage interdisciplinary collaboration.

I'm all for it. Although it's problematic to talk about Russian novels and culture at the moment, there are some tales from that country which are deeply philosophical in nature. I'd also include things like Dostoevsky's Crime and Punishment as a story from which philosophers can glean insights.

The Hungarian folktale Pretty Maid Ibronka terrified and tantalised me as a child. In the story, the young Ibronka must tie herself to the devil with string in order to discover important truths. These days, as a PhD student in philosophy, I sometimes worry I’ve done the same. I still believe in philosophy’s capacity to seek truth, but I’m conscious that I’ve tethered myself to an academic heritage plagued by formidable demons.

[...]

I propose that one avenue forward is to travel backward into childhood – to stories like Ibronka’s. Folklore is an overlooked repository of philosophical thinking from voices outside the traditional canon. As such, it provides a model for new approaches that are directly responsive to the problems facing academic philosophy today. If, like Ibronka, we find ourselves tied to the devil, one way to disentangle ourselves may be to spin a tale.

Folklore originated and developed orally. It has long flourished beyond the elite, largely male, literate classes. Anyone with a story to tell and a friend, child or grandchild to listen, can originate a folktale. At the risk of stating the obvious, the ‘folk’ are the heart of folklore. Women, in particular, have historically been folklore’s primary originators and preservers. In From the Beast to the Blonde (1995), the historian Marina Warner writes that ‘the predominant pattern reveals older women of a lower status handing on the material to younger people’.

[...]

To answer that question [folklore may be inclusive, but is it philosophy?], one would need at least a loose definition of philosophy. This is daunting to provide but, if pressed, I’d turn to Aristotle, whose Metaphysics offers a hint: ‘it is owing to their wonder that men both now begin, and at first began, to philosophise.’ In my view, philosophy is a mode of wondrous engagement, a practice that can be exercised in academic papers, in theological texts, in stories, in prayer, in dinner-table conversations, in silent reflection, and in action. It is this sense of wonder that draws us to penetrate beyond face-value appearances and look at reality anew.

[...] Beyond ethics, folklore touches all the branches of philosophy. With regard to its metaphysical import, Buddhist folklore provides a striking example. When dharma – roughly, the ultimate nature of reality – ‘is internalised, it is most naturally taught in the form of folk stories: the jataka tales in classical Buddhism, the koans in Zen,’ writes the Zen teacher Robert Aitken Roshi. The philosophers Jing Huang and Jonardon Ganeri offer a fascinating philosophical analysis of a Buddhist folktale seemingly dating back to the 3rd century BCE, which they’ve translated as ‘Is This Me?’ They argue that the tale constructs a similar metaphysical dilemma to Plutarch’s ‘ship of Theseus’ thought-experiment, prompting us to question the nature of personal identity.

Source: Aeon

Image: DALL-E 3

Feb 28, 2024 ↓

Elegant media consumption

A landscape divided into a digital stream and a creative river, with the former depicted in light gray, dark gray, and bright red, symbolizing media consumption, and the latter in yellow and blue, illustrating people engaging in creative pursuits along its banks, highlighting the balance between digital engagement and personal creativity.

Jay Springett shares some media consumption figures. It blows my mind how much time people spend consuming media rather than making stuff.

I was hanging out with a friend the other week and we were talking about our ‘little hobbies’ as we called them. All the things that we’re interested in. Our niches that we nerd out about which aren’t the sort of thing that we can talk to people about at any great length.

We got to wondering about how we spend our time, and what other people spend their time doing. We had a big conversation with our other friends at the table with us about what they do with their time. Their answers weren't all that far away from these stats I've just Googled:

Did you know in the UK in January 2024, adults watched an average of 2 hours 31 minutes a day of linear TV?

Meanwhile, a Pinterest user spends about 14 minutes on the platform daily, BUT “83% of weekly Pinterest users report making a purchase based on content they encountered”

The average podcast listener spends an hour a day listening to podcasts.

Using a different metric, an average audiobook enjoyer spends an average of 2 hours 19 minutes every day.

In Q3 of 2023, the average amount of time spent on social media a day was 2 hours and 23 minutes. 1 in 3 internet minutes spent online can be attributed to social media platforms.

I like this combined, more holistic statistic.

According to a study on media consumption in the United Kingdom, the average time spent consuming traditional media is consistently decreasing while people spend more time using digital media. In 2023, it is estimated that people in the United Kingdom will spend four hours and one minute using traditional media, while the average daily time spent with digital media is predicted to reach six hours and nine minutes.

Source: thejaymo

Image: DALL-E 3

Mar 2, 2024 ↓

Humans and AI-generated news

A surreal representation of the digital era's climax, where users are depicted as digital avatars being force-fed content by a colossal, mechanical behemoth. This machine, symbolizing Big Tech, is fueled by outrage and engagement, its machinery adorned with rising shareholder value graphs, all portrayed in an imaginative color scheme of Light Gray, Dark Gray, Bright Red, Yellow, and Blue.

The endgame of news, as far as Big Tech is concerned, is, I guess, just-in-time created content for 'users' (specified in terms of ad categories) who then react in particular ways. That could be purchasing a thing, but it could also be outrage, meaning more time on site, more engagement, and more shareholder value.

Like Ryan Broderick, I have some faith that humans will get sick of AI-generated content, just as they got sick of videos and list posts. But I also have this niggling doubt: the tendency is to see AI only through the lens of tools such as ChatGPT. That's not what the AI of the future is likely to resemble, at all.

Adweek broke a story this week that Google will begin paying publications to use an unreleased generative-AI tool to produce content. The details are scarce, but it seems like Google is largely approaching small publishers and paying them an annual “five-figure sum”. Good lord, that’s low.

Adweek also notes that the publishers don’t have to publicly acknowledge they’re using AI-generated copy and the, presumably, larger news organizations the AI is scraping from won’t be notified. As tech critic Brian Merchant succinctly put it, “The nightmare begins — Google is incentivizing the production of AI-generated slop.”

Google told Engadget that the program is not meant to “replace the essential role journalists have in reporting, creating, and fact-checking their articles,” but it’s also impossible to imagine how it won’t, at the very least, create a layer of garbage above or below human-produced information surfaced by Google. Engadget also, astutely, compared it to Facebook pushing publishers towards live video in the mid-2010s.

[...]

Companies like Google or OpenAI don’t have to even offer any traffic to entice publishers to start using generative-AI. They can offer them glorified gift cards and the promise of an executive’s dream newsroom: one without any journalists in it. But the AI news wire concept won’t really work because nothing ever works. For very long, at least. The only thing harder to please than journalists are readers. And I have endless faith in an online audience’s ability to lose interest. They got sick of lists, they got sick of Facebook-powered human interest social news stories, they got sick of tweet roundups, and, soon, they will get sick of “news” entirely once AI finally strips it of all its novelty and personality. And when the next pivot happens — and it will — I, for one, am betting humans figure out how to adapt faster than machines.

Source: Garbage Day

Image: DALL-E 3

Mar 2, 2024 ↓

Language is probably less than you think it is

gapingvoid cartoon showing information, knowledge, wisdom, etc.

This is a great post by Jennifer Moore, whose main point is about using AI for software development, but who along the way provides three paragraphs which get to the nub of why tools such as ChatGPT seem somewhat magical.

As Moore points out, large language models aren't aware. They model things based on statistical probability. To my mind, it's not so different from when my daughter was doing phonics and learning to recognise the construction of words and the probability of how words new to her would be spelled.

ChatGPT and the like are powered by large language models. Linguistics is certainly an interesting field, and we can learn a lot about ourselves and each other by studying it. But language itself is probably less than you think it is. Language is not comprehension, for example. It's not feeling, or intent, or awareness. It's just a system for communication. Our common lived experiences give us lots of examples that anything which can respond to and produce common language in a sensible-enough way must be intelligent. But that's because only other people have ever been able to do that before. It's actually an incredible leap to assume, based on nothing else, that a machine which does the same thing is also intelligent. It's much more reasonable to question whether the link we assume exists between language and intelligence actually exists. Certainly, we should wonder if the two are as tightly coupled as we thought.

That coupling seems even more improbable when you consider what a language model does, and—more importantly—doesn't consist of. A language model is a statistical model of probability relationships between linguistic tokens. It's not quite this simple, but those tokens can be thought of as words. They might also be multi-word constructs, like names or idioms. You might find "raining cats and dogs" in a large language model, for instance. But you also might not. The model might reproduce that idiom based on probability factors instead. The relationships between these tokens span a large number of parameters. In fact, that's much of what's being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things.

What those parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is overstating the case, because it doesn't know things. It models how those tokens are used. When the model works on a token like "Jennifer", there are parameters and classifications that capture what we would recognize as things like the fact that it's a name, it has a degree of formality, it's feminine coded, it's common, and so on. But the model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.
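To make the 'statistical model of probability relationships between tokens' point concrete, here's a toy bigram model in Python — a drastically simplified stand-in for what an LLM does at enormous scale. It produces plausible-looking sequences purely from counts of which token followed which, with no comprehension anywhere in sight.

```python
# A toy bigram "language model": it counts which token follows which in a
# tiny corpus, then samples from those counts. It has no idea what any of
# the words mean — it only models how tokens have been used, which is the
# point being made above.
import random
from collections import Counter, defaultdict

corpus = "it was raining cats and dogs and cats were not pleased".split()

# For each token, count how often every other token follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1


def generate(start: str, length: int = 8) -> str:
    """Continue a sequence by repeatedly sampling the next token in
    proportion to how often it followed the current one in the corpus."""
    token, output = start, [start]
    for _ in range(length):
        options = follows.get(token)
        if not options:
            break
        token = random.choices(list(options), weights=options.values())[0]
        output.append(token)
    return " ".join(output)


print(generate("it"))
```

An LLM replaces these raw counts with billions of learned parameters and much longer contexts, but the underlying move — predicting the next token from statistical patterns of usage — is the same.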

Source: Jennifer++

Image: gapingvoid

Mar 2, 2024 ↓

Ultravioleta

'Ultravioleta' image

I'm not sure of the backstory to this drawing ('Ultravioleta') by Jon Juarez, but I don't really care. It looks great, and so I've bought a print of it from their shop. They seem, from what I can tell, to have initially withdrawn from social media after companies such as OpenAI and Midjourney started using artists' work for their training data, but are now coming back.

Fediverse: @harriorrihar@mas.to

Shop: Lama