There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents

Child in uniform using a smartphone with an open notebook and pen on the table.

I usually find abstracts on academic papers a bit rubbish, but this ‘summary’ at the top of a research study is aces. As many people in the UK will have seen in the news over the last week, a study has shown that there’s “no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents”. As a result, “the findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form.”

This, of course, does not chime with what the public (parents, politicians, etc.) want to hear, so I imagine it will be widely ignored. In fact, when this was reported in a radio news bulletin I heard, they immediately cut to a soundbite from a headteacher who had implemented a “no phones” policy, who basically said it had worked for them. There are many problems with smartphone use by teenagers in schools. But then there are many problems with schools.

Background: Poor mental health in adolescents can negatively affect sleep, physical activity and academic performance, and is attributed by some to increasing mobile phone use. Many countries have introduced policies to restrict phone use in schools to improve health and educational outcomes. The SMART Schools study evaluated the impact of school phone policies by comparing outcomes in adolescents who attended schools that restrict and permit phone use.

Methods: We conducted a cross-sectional observational study with adolescents from 30 English secondary schools, comprising 20 with restrictive (recreational phone use is not permitted) and 10 with permissive (recreational phone use is permitted) policies. The primary outcome was mental wellbeing (assessed using Warwick–Edinburgh Mental Well-Being Scale [WEMWBS]). Secondary outcomes included smartphone and social media time. Mixed effects linear regression models were used to explore associations between school phone policy and participant outcomes, and between phone and social media use time and participant outcomes. Study registration: ISRCTN77948572.

Findings: We recruited 1227 participants (age 12–15) across 30 schools. Mean WEMWBS score was 47 (SD = 9) with no evidence of a difference between groups (adjusted mean difference −0.48, 95% CI −2.05 to 1.06, p = 0.62). Adolescents attending schools with restrictive, compared to permissive policies had lower phone (adjusted mean difference −0.67 h, 95% CI −0.92 to −0.43, p = 0.00024) and social media time (adjusted mean difference −0.54 h, 95% CI −0.74 to −0.36, p = 0.00018) during school time, but there was no evidence for differences when comparing usage time on weekdays or weekends.

Interpretation: There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents. The findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form, and indicate that these policies require further development.
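For the statistically minded, the ‘mixed effects linear regression’ mentioned in the methods is a standard way of handling pupils clustered within schools. Here’s a minimal sketch of that kind of model in Python using statsmodels; the file and column names are my assumptions, not the study’s actual variables:

```python
# A minimal sketch (not the study's actual code): wellbeing scores regressed
# on school phone policy, with a random intercept per school to account for
# pupils being clustered within schools. File and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("smart_schools.csv")  # hypothetical file: one row per pupil

# Fixed effect: policy (restrictive vs permissive); random intercept: school
model = smf.mixedlm("wemwbs ~ policy + age + gender", data=df,
                    groups=df["school"])
result = model.fit()
print(result.summary())  # coefficient on policy is the adjusted mean difference
```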

Source: The Lancet

Image: True Images/Alamy (via The Guardian)

Technology is a means of spreading misinformation, not the cause of misinformation

An illuminated fifteenth-century Book of Hours (Walters Art Museum, W.168).

This article couldn’t be any more in my sweet spot if it tried: I’m a technologist and educator (former History teacher!) who wrote his doctoral thesis on digital literacies. Dr Gordon McKelvie talks about his British Academy project on misinformation, focusing on queens “because they were prominent enough figures to be spoken about and blamed for the country’s ills.”

Although only coined in 2013, Brandolini’s law has always been in full effect: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.” At least we live in a time when things can, in theory, be rebutted and debunked quickly. Back in medieval times, people would believe misinformation for years — if not their entire lives.

While a focus on the immediate problems confronting democratic states dealing with the spread of conspiracy theories is essential, we should not lose sight of the fact that misinformation was around long before the internet. If we look back into the distant past, we see the spread of conspiracy theories have been a common feature throughout human history. Technology is a means of spreading misinformation, not the cause of misinformation. […]

A key finding has been that fake news often becomes accepted historical fact. An example that illustrates this is the death of Anne Neville, wife of the infamous Richard III. We do not know the exact cause of her death, but it was probably natural causes. One contemporary source, however, claimed that the king needed to deny poisoning her in order to marry his niece. This is the only near-contemporary reference to such an event, written by the hostile Crowland chronicler. By the time Shakespeare was writing his ‘The Tragedy of Richard III’ a century later, this had become an accepted historical fact. Here, we see something that began as a piece of misinformation in the fifteenth century transformed into an accepted historical fact in the sixteenth century. […]

When we look elsewhere in medieval Europe, we see other examples of misinformation premised on existing prejudices. During the First Crusade, mistrust between the Catholic crusaders and the Greek Orthodox Byzantine Empire led to conspiracy theories that the Byzantines were colluding with Muslims against their fellow Christians. When the first wave of the Black Death hit Europe in 1348, Jews were thought to have spread the disease by poisoning wells, simply to kill Christians. In both examples, pre-existing beliefs and fears meant that misinformation and conspiracy theories flourished quickly. […]

Misinformation was a key feature of medieval politics and society. Examining the spread of fake news, or conspiracy theories, in the centuries before even the printing press, never mind the internet, helps us understand how they flourish and their appeal. […]

Historians have an important part to play in fleshing out our understanding of misinformation. We are indeed living in an age of mistrust, but certainly not the first, and almost definitely not the last.

Source: The British Academy

Image: Walters Art Museum

Once you have a 360 view, you can redirect resources to insiders and cut off the opposition

I’ve held off posting anything about what’s currently going on in the USA, as apparently it’s all very confusing even if you’re paying full attention. What did make me sit up and take notice, though, was Jason Kottke’s use of a screengrab from Mad Max: Fury Road when summarising a Bluesky thread by Abe Newman about Elon Musk’s seizure of key parts of the government’s information systems.

For those who haven’t seen the film (one of my favourites, especially the Black & Chrome Edition), it’s the perfect analogy. A character by the name of Immortan Joe, a dictator in a post-apocalyptic landscape, is revered as a god by his followers. He dominates the economy by controlling the only supply of fresh water, which he turns on from time to time, saying “Do not, my friends, become addicted to water. It will take hold of you, and you will resent its absence!” I’ve included a gif above that shows the moment from the film.

Newman links to reporting that details what Musk now controls: payment, personnel, and operations. But seeing these as part of a bigger strategy is important:

The first point is to make the connection. Reporting has seen these as independent ‘lock outs’ or access to specific IT systems. This seems much more a part of a coherent strategy to identify centralized information systems and control them from the top.

Newman continues:

So what are the risks? First, the panopticon. Made popular by Foucault, the idea is that if you let people know that they are being watched from a central position they are more likely to obey. E.g. emails demanding changes or workers will be added to lists…

The second is the chokepoint. If you have access to payments and data, you can shut opponents off from key resources. Sen Wyden sees this coming.

Divert to loyalists. Once you have a 360 view, you can redirect resources to insiders and cut off the opposition.

Source: Kottke.org

Clinical studies have indicated that creatine might have an antidepressant effect

One scoop of white creatine monohydrate powder

Along with about six different supplements, I add creatine to my protein smoothies every day I do exercise. Which, to be fair, is most days ending in a ‘y’. Too much of the white powder and I get angry but, as a male vegetarian, it’s important that I get some in my diet.

It turns out that creatine isn’t just good for building and maintaining muscle mass, though. It’s also good for mental health — and combining it with various forms of therapy is especially beneficial.

More recently, researchers have begun to look at the broader systemic effects of creatine supplementation. Of particular interest has been the relationship between creatine and brain health. Following the discovery of endogenous creatine synthesis in the human brain, research quickly moved to understand what role this compound plays in things like cognition and mood.

Most studies linking brain benefits to creatine supplementation are either small or preliminary but there are enough clues to suggest that something positive could be going on here. For example, one oft-cited clinical trial from 2012 found creatine supplementation can effectively augment anti-depressant treatment. The trial was small (just 52 subjects, all women) but after eight weeks it found those subjects taking creatine supplements with their SSRI antidepressant were twice as likely to achieve remission from depression symptoms compared to those just taking antidepressants.

A recent article reviewing the research on creatine supplementation and depression pointed to several physiological mechanisms that could plausibly explain how this compound could improve mental health. Alongside citing several small trials that found positive results from creatine supplementation, the article concludes by stating: “Creatine is a naturally occurring organic acid that serves as an energy buffer and energy shuttle in tissues, such as brain and skeletal muscle, that exhibit dynamic energy requirements. Evidence, deriving from a variety of scientific domains, that brain bioenergetics are altered in depression and related disorders is growing. Clinical studies in neurological conditions such as PD [Parkinson’s Disease] have indicated that creatine might have an antidepressant effect, and early clinical studies in depressive disorders – especially MDD [Major Depressive Disorder] – indicate that creatine may have an important antidepressant effect.”

Source: New Atlas

Image: HowToGym

The idea that this might in any way appeal to 'newcomers' is bananas to me

Screenshots of OpenVibe

It’s hard not to agree with John Gruber’s analysis of Openvibe, an app that allows you to mash together all of the different decentralised social networks (Mastodon, Bluesky, Threads, etc.) into one timeline. He doesn’t like it, and I have never liked the idea.

That’s partly because it’s confusing, but even if you managed to provide a compelling UX, the rhetorics of interactive communication are completely different on each social network. People on one social network interact using different norms and approaches than on others. That means different literacies are involved. I’d argue that mashing it all together really only serves people who wish to ‘broadcast’ messages to multiple places at the same time.

I really don’t see the point of mashing the tweets from two (or more!) different social networks into one unified timeline. To me it’s just confusing. I don’t love the current situation where three entirely separate, thriving social networks are worth some portion of my attention (not to mention that a fourth, X, still kinda does too). But when I use each of these platforms, I want to use a client that is dedicated to each platform. These platforms all have different features, to varying degrees, and they definitely have different vibes and cultural norms. Pretending that they all form one big (lowercase-m) meta platform doesn’t make that pretense cohesive. Mashing them all together in one timeline isn’t simpler. It sounds simpler but in practice it’s more cacophonous.

The idea that this might in any way appeal to “newcomers” is bananas to me. The concept of streaming multiple accounts from multiple networks into one timeline is by definition a bit advanced. In my experience, for very obvious reasons, casual social network users only use the first-party client. They’re confused even by the idea of using, say, an app named Ivory to access a social network called Mastodon. The idea of explaining to them why they might want to use an app named Openvibe to access Mastodon, Bluesky, and Threads (and the weirdo blockchain network Nostr) is like trying to explain to your dog why they should stay out of the trash. There’s a market for third-party clients (or at least I hope there is), but that market is not made up of “newcomers”.

Source: Daring Fireball

The inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through

A photo of a diagram in a book showing an algorithm

I’ve been following the development of Are.na since my early days leading the MoodleNet project. It’s a great example of a platform that serves a particular niche of users (“connected knowledge collectors”) really well.

In this Are.na editorial, Elan Ullendorff — a designer, writer, and educator — talks about the course he teaches. In it, he helps students research and map algorithms, before writing their own, and releasing them to the world.

I write a newsletter, teach a course, and run workshops all called “escape the algorithm.” The implicit joke of the name’s particularity (not “escape algorithms” but “escape the algorithm”) is that living outside of algorithms isn’t actually possible. An algorithm is simply a set of instructions that determines a specific result. The recommendation engine that causes Spotify to encourage you to listen to certain music is a cultural sieve, but so were, in a way, the Billboard charts and radio gatekeepers that preceded it. There have always been centers of power, always been forces that exert gravitational pulls on our behavior.

The anxiety isn’t determined by the presence or absence of code. It comes from a lack of transparency and control. You are susceptible whether or not TikTok exists, whether or not you delete it. Logging off is one tool, but it will not alone cure you.

Instead of withdrawing, I encourage my students to dive deeper, engaging with platforms as if they were close reading a work of literature. In doing so, I believe that we can not only better understand a platform’s ideological premises, but also the inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through. And in so doing, we might move beyond the flight response towards a fight response. Or if it is a flight response, let it be a flight not just away from something, but towards something.

[…]

Resisting the paths most traveled invites us to look at the platforms we use with a critical eye, leading us to new forms of critique, making visible parts of the world and culture that are out of our view, and inspiring entirely new ways of navigating the web.

Take Andrew Norman Wilson’s ScanOps, a collection of Google Books screenshots that include the hands of low-paid Google data entry workers, or Chia Amisola’s The Sound of Love, which curates evocative comments on Youtube songs. Then there’s Riley Walz’s Bop Spotter (a commentary on ShotSpotter, gunshot detection microphones often licensed by city governments), a constantly Shazam-ing Android phone hidden on a pole in the Mission district.

Source: Are.na

Image: Андрей Сизов

⭐ Become a Thought Shrapnel supporter!

Hi everyone, Doug here. Just to let you know that it’s now possible to support Thought Shrapnel on a monthly basis!

👀 Find out more.

Don’t worry, nothing’s changing other than your ability to ensure the sustainability of this publication, receive a holographic sticker to go on your water bottle (or whatever), and have your name listed as a supporter.

We used to have around 60 supporters of Thought Shrapnel back in the day, so I hope you’ll consider becoming one of the first to get this new (rare!) holographic sticker. This is part of a little February experimentation.

Description of Things and Atmosphere

Black and silver pocket knife

My daughter was complaining that, now she’s in high school, her English teacher demands more of her writing. I happened to have just read a post at Futility Closet about the notebooks of F. Scott Fitzgerald which gives examples of him coming up with vividly atmospheric descriptions of scenes. I shared it with her, so hopefully she’ll use it as inspiration.

While I’m not a fan of overly-long descriptions just for the sake of it, this writing is sublime. It makes me want to re-read The Great Gatsby.

In the light of four strong pocket flash lights, borne by four sailors in spotless white, a gentleman was shaving himself, standing clad only in athletic underwear upon the sand. Before his eyes an irreproachable valet held a silver mirror which gave back the soapy reflection of his face. To right and left stood two additional menservants, one with a dinner coat and trousers hanging from his arm and the other bearing a white stiff shirt whose studs glistened in the glow of the electric lamps. There was not a sound except the dull scrape of the razor along its wielder’s face and the intermittent groaning sound that blew in out of the sea.

Source: The Notebooks of F. Scott Fitzgerald

Image: Illia Plakhuta

Cozy comfort for gamers

A screenshot of the start of the 'Cozy Comfort' article/game

More articles about games should be games themselves, in my opinion! I loved this, and there’s a write-up of how and why it was created here.

I spend enough time on screens, so I haven’t really got into the ‘cozy’ genre, but I know that it’s a huge thing. Games that you can play on your own terms provide a bit of escapism and are (as the article describes) said to be as good as meditation and other forms of deep relaxation.

The gaming industry is larger than the film and music industries combined globally. A growing sector is the subgenre dubbed “cozy games.” They are marked by their relaxing nature, meant to help players unwind with challenges that are typically more constructive than destructive. Recent research explores whether this style of game, along with video games more generally, can improve mental health and quality of life.

These play-at-your-own-pace games attract both longtime gamers and newcomers. […]

There’s no hard definition for a “cozy game.” If the game gives the player a cozy, warm feeling then it fits.

[…]

These games can provide a space for people to connect in ways they may not in the real world. Suzanne Roman, who describes herself as an autistic advocate, said gaming communities can be lifelines for neurodivergent people, including her own autistic daughter who celebrated her 18th birthday in lockdown. “I think it’s just made them more confident people, who feel like they fit in socially. There’s even been relationships, of course, that have formed in the real world out of this.”

Source: Reuters

A large public domain image-text dataset to train frontier LLMs

PD12M

Yesterday, after a conversation on the #ai channel in WAO’s Slack, I published Ways of categorising ethical concerns relating to generative AI. There was some pushback on Mastodon.

Alan Levine asked if I knew of any LLMs which say they’re trained on “open data” and where you can actually see the sources. It’s a good point, and I do know of one, which is Public Domain 12M (or PD12M for short). LLMs are a class of technologies, so (as I was trying to get at in my original post) we should be clear and specific in our objections to them.

Although I don’t share the concern, I understand the position which could be broadly stated as: “I have a problem with LLM datasets being scraped from the open web without the explicit consent of copyright holders.” But that’s not a position against LLMs per se. It’s an objection based on the copyright status of the ingested data.

At 12.4 million image-caption pairs, PD12M is the largest public domain image-text dataset to date, with sufficient size to train foundation models while minimizing copyright concerns. Through the Source.Plus platform, we also introduce novel, community-driven dataset governance mechanisms that reduce harm and support reproducibility over time.
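If you want to poke at the dataset yourself, it can be streamed via the Hugging Face datasets library. A minimal sketch, where the identifier “Spawning/PD12M” and the field layout are my assumptions (check Source.Plus for the canonical pointer):

```python
# A minimal sketch of streaming a few records from PD12M. The dataset
# identifier and field names are assumptions; verify via Source.Plus.
from datasets import load_dataset

ds = load_dataset("Spawning/PD12M", split="train", streaming=True)

for record in ds.take(3):
    print(record)  # expect something like an image URL plus its caption
```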

Source: Source.Plus

Strava for Stoics?

Apple Watch showing Strava app

Matt Webb is, like me, over 40 years of age. Although some would argue differently, it’s a time when you realise that your fastest days are behind you. So apps like Strava reminding you that you’re not quite as fast as you were a few years ago isn’t… particularly helpful.

There’s definitely a gap in the market for fitness apps for people who are no longer spring chickens and, although they like to challenge themselves occasionally, aren’t trying to smash it every time they go out for a run, a cycle, or a session at the gym. I also appreciate Webb’s related point that there comes a time when reminders about things in life are just a bit painful. The opportunity not to be reminded would be nice.

Part of getting older is finding that my PBs each time I train up - personal bests - are not as quick as they were before. […]

I’m currently trying to increase my baseline endurance and went out for a 17 mi run a few days ago. Paused to take photos of the Thames Barrier and a rainbow, no stress. Beautiful. Felt ok when I finished – hey I made it back! Wild!

Then Strava showed me the almost identical run from exactly 5 years ago, I’d forgotten: a super steady pace, a whole minute per mile faster than this week’s tough muddle through. […]

Our quantified self apps (Strava is one) are built by people with time on their side and capacity to improve. A past achievement will almost certainly remind you of a happy day that can be visited again or, in the rear view mirror, has been joyfully surpassed. But for older people… And I’m not even that old, just past my physical peak… […]

I’m not asking Strava to hide these reminders. I’ve found peace (not as completely as I thought it turns out). But I don’t want to avoid those memories. Reminded, I do remember that time from 2020! It is a happy memory! I like to remember what I could do, even if it is slightly bittersweet! […]

And I’ll bet we’re all having that feeling a little bit, deep down, unnamed and unlocated and unconscious quite often, amplified by using these apps. So many of us are using apps that draw from the quantified self movement (Wikipedia) of the 2010s, in one way or another, and that movement was by young people. Perhaps there were considerations unaccounted for – getting older for one. There will be consequences for that, however subtle.

(Another blindspot of the under 40s: it is the most heartbreaking thing to see birthday notifications for people who are no longer with us. Please, please, surely there is a more humane way to handle this?)

So I can’t help viewing some of the present day’s fierce obsession with personal health and longevity or even brain uploads not as a healthy desire for progress but, amplified by poor design work, as an attempt to outrun death.

Source: Interconnected

Image: Tim Foster

How to Raise Your Artificial Intelligence

Three images of a street. Overlying the image are different shapes which are arranged to look like QR code symbols. These are in white/blue colours and intersect one another. The first image is clear, but the second is slightly more pixelated, and the final image is very pixelated.

This is an absolutely incredible interview with Alison Gopnik (AG) and Melanie Mitchell (MM). Gopnik is a professor of psychology and philosophy and studies children’s learning and development, while Mitchell is a professor of computer science and complexity focusing on conceptual abstraction and analogy-making in AI systems.

There’s so much insight in here, so you’ll have to forgive me quoting it at length. I urge you to go and read the whole thing. The thing that really stood out for me was Gopnik’s philosophical insights based on her experience around child development. Fascinating.

AG: There is an implicit intuitive model that everyday people (including very smart people in the tech world) have about how intelligence works: there’s this mysterious substance called intelligence, and as you have more of it, you gain power and authority. But that’s just not the picture coming out of cognitive science. Rather, there’s this very wide array of different kinds of cognitive capacities, many of which trade off against each other. So being really good at one thing actually makes you worse at something else. To echo Melanie, one of the really interesting things we’re learning about LLMs is that things like grammar, which we might have thought required an independent-model-building kind of intelligence, you can get from extracting statistical patterns in data. LLMs provide a test case for asking, What can you learn just from transmission, just from extracting information from the people around you? And what requires independent exploration and being in the world?

[…]

MM: I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language. But a big part of what makes us intelligent is our ability to reflect on our own state. We have a sense for how confident we are about our own knowledge. This has been a big problem for LLMs. They have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language. Without some extra ability to ground what they’re saying in the world, they can’t really know if something they’re saying is true or false.

[…]

AG: Some things that seem very intuitive and emotional, like love or caring for children, are really important parts of our intelligence. Take the famous alignment problem in computer science: How do you make sure that AI has the same goals we do? Humans have had that problem since we evolved, right? We need to get a new generation of humans to have the right kinds of goals. And we know that other humans are going to be in different environments. The niche in which we evolved was a niche where everything was changing. What do you do when you know that the environment is going to change but you want to have other members of your species that are reasonably well aligned? Caregiving is one of the things that we do to make that happen. Every time we raise a new generation of children, we’re faced with this difficulty of here are these intelligences, they’re new, they’re different, they’re in a different environment, what can we do to make sure that they have the right kinds of goals? Caregiving might actually be a really powerful metaphor for thinking about our relationship with AIs as they develop. […]

Now, it’s not like we’re in the ballpark of raising AIs as if they were humans. But thinking about that possibility gives us a way of understanding what our relationship to artificial systems might be. Often the picture is that they’re either going to be our slaves or our masters, but that doesn’t seem like the right way of thinking about it. We often ask, Are they intelligent in the way we are? There’s this kind of competition between us and the AIs. But a more sensible way of thinking about AIs is as a technological complement. It’s funny because no one is perturbed by the fact that we all have little pocket calculators that can solve problems instantly. We don’t feel threatened by that. What we typically think is, With my calculator, I’m just better at math. […]

But we still have to put a lot of work into developing norms and regulations to deal with AI systems. An example I like to give is, imagine that it was 1880 and someone said, all right, we have this thing, electricity, that we know burns things down, and I think what we should do is put it in everybody’s houses. That would have seemed like a terribly dangerous idea. And it’s true—it is a really dangerous thing. And it only works because we have a very elaborate system of regulation. There’s no question that we’ve had to do that with cultural technologies as well. When print first appeared, it was open season. There was tons of misinformation and libel and problematic things that were printed. We gradually developed ideas like newspapers and editors. I think the same thing is going to be true with AI. At the moment, AI is just generating lots of text and pictures in a pretty random way. And if we’re going to be able to use it effectively, we’re going to have to develop the kinds of norms and regulations that we developed for other technologies. But saying that it’s not the robot that’s going to come and supplant us is not to say we don’t have anything to worry about. […]

Often the metaphor for an intelligent system is one that is trying to get the most power and the most resources. So if we had an intelligent AI, that’s what it would do. But from an evolutionary point of view, that’s not what happens at all. What you see among the more intelligent systems is that they’re more cooperative, they have more social bonds. That’s what comes with having a large brain: they have a longer period of childhood and more people taking care of children. Very often, a better way of thinking about what an intelligent system does is that it tries to maintain homeostasis. It tries to keep things in a stable place where it can survive, rather than trying to get as many resources as it possibly can. Even the little brine shrimp is trying to get enough food to live and avoid predators. It’s not thinking, Can I get all of the krill in the entire ocean? That model of an intelligent system doesn’t fit with what we know about how intelligent systems work.

Source: LA Review of Books

Image: Elise Racine & The Bigger Picture

Building a quantum computer that can run reliable calculations is extremely difficult

Red and purple light digital wallpaper

Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science at Anglia Ruskin University, explains the difference between classical computing and quantum computing. The latter uses ‘qubits’ instead of bits: rather than being limited to the binary state of 0 or 1, they can be in either or both simultaneously.

Vicinanza gives the example of optimising flight paths for the 45,000+ flights, organised by 500+ airlines, using 4,000+ airports. With classical computing, this optimisation would be attempted sequentially, using algorithms; it would take too long. In quantum computing, every permutation can be tried at the same time.
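To get a feel for why the sequential approach takes too long, it helps to see how fast the search space grows. A toy sketch with illustrative numbers only, nothing to do with real flight data:

```python
# Route-ordering possibilities grow factorially, which is why brute-force
# sequential search becomes hopeless long before real-world scales.
from itertools import permutations
from math import factorial

for n in (5, 10, 15, 20):
    print(f"{n} stops -> {factorial(n):,} possible orderings")

# Exhaustive enumeration is only feasible at toy sizes:
stops = ["LHR", "JFK", "DXB", "HND", "SYD"]
print(sum(1 for _ in permutations(stops)), "routes for just", len(stops), "airports")
```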

Quantum computing deals in probabilities rather than certainties, so classical computing isn’t going away anytime soon. In fact, reading this article reminded me of using LLMs. They’re very useful, but you have to know how to use them — and you can’t necessarily take a single response at face value.

Quantum computers are incredibly powerful for solving specific problems – such as simulating the interactions between different molecules, finding the best solution from many options or dealing with encryption and decryption. However, they are not suited to every type of task.

Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.

There are at least two reasons for that. The first one is practical. Building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.

The second reason lies in the inherent uncertainty in dealing with qubits. Because qubits are in superposition (are neither a 0 nor a 1) they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer might return a different solution each time.

To address this uncertainty, quantum algorithms are typically run multiple times. The results are then analysed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from the inherently probabilistic quantum computations.
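That last point, running the same algorithm many times and then reading off the statistics, is easy to mimic classically. A minimal sketch of repeatedly ‘measuring’ a qubit in equal superposition, using plain NumPy rather than any real quantum runtime:

```python
# Simulating repeated measurement of a qubit in equal superposition.
# The Born rule says each outcome occurs with probability |amplitude|^2,
# so a single run tells you almost nothing; the distribution only
# emerges over many shots.
import numpy as np
from collections import Counter

amplitudes = np.array([1, 1]) / np.sqrt(2)  # equal superposition of |0> and |1>
probs = np.abs(amplitudes) ** 2             # -> [0.5, 0.5]

rng = np.random.default_rng(seed=42)
shots = rng.choice([0, 1], size=1000, p=probs)  # 1000 independent "runs"

print(Counter(shots.tolist()))  # roughly {0: ~500, 1: ~500}
```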

Source: The Conversation

Image: Sigmund

Playing stenographer in your little folding chair

Black folding chairs in rows

It’s hard to avoid the drama unfolding at the start of the second Trump presidential term. I don’t even know, really, what’s going on — other than a lot of confusion and emotional violence. I guess the cruelty is the point.

Anyway, Ryan Broderick, of much-quoted Garbage Day fame, has some advice for journalists which is advice for us all, really: the world has changed, so it’s time to adapt. That doesn’t mean “sell out” or “abandon your ethics.” Quite the opposite.

Welcome to 2025. No one reads your website or watches your TV show. Subscription revenue will never truly replace ad revenue and ad revenue is never coming back. All of your influence is now determined by algorithms owned by tech oligarchs that stole your ad revenue and they not only hate you, personally, but have aligned themselves with a president that also hates you, personally. The information vacuum you created by selling yourself out for likes and shares and Facebook-funded pivot-to-video initiatives in the 2010s has been filled in by random “news influencers,” some of which are literally using ChatGPT to write their posts. While many others are just making shit up to go viral. And the people taking over the country currently have spent the last decade, in public, I might add, crafting a playbook — one you dismissed — that, if successful, means the end of everything that resembles America. And that includes our free and open and lazy mainstream media. And they’re pretty confident it’ll succeed because, unlike you, they know how broken the internet is now and are happy to take advantage of it. While I’m sure it feels very professional to continue playing stenographer in your little folding chair at the White House, they’re literally replacing you with podcasters as we speak. So this is it. Adapt or die. Or at the very least, die with some dignity.

Source: Garbage Day

Image: wu yi

3-column blog themes

Screenshot of garry.net

This is more of a bookmark than a post, but I’ve only just discovered the blog of Garry Newman (who some might know from Garry’s Mod). Looking at the HTML, I don’t think he’s using a particular generator (e.g. WordPress) or a theme, so he must have custom-built it.

What I love about it is the logical unfolding from the meta-level navigation on the left, through the column of densely-listed posts in the middle, to the display of the individual post on the right.

If you’re reading this and know of a similar blog theme, on any platform, could you let me know?

Source: garry.net

Those who find the texture of your mind boring or offensive can close the tab

Laptop on the bed with WordPress add new post page.

In his most recent Monday Memo, Dave Gray explained how he channeled Brad Diderickson by composing his newsletter verbally while walking. That conversational style and approach is interesting and engaging, and a good way to not sound ‘stilted’ when writing. Another way is to teach yourself to touch-type, so that the words that are coming out of your brain appear on the computer screen quickly.

I’m thinking about this due to a post that I saw via Hacker News where Henrik Karlsson gives some advice for a friend who wants to start a blog. There are 19 pieces of advice, and #4 is:

People tend to sound more like themselves in chat messages than in blog posts. So perhaps write in the chat, rapidly, to a friend.

And #6:

One reason chat messages are unusually lively is that the format encourages you to write from emotion. You are talking to someone you like and you want to resonate with them, you want to make them laugh. This creates a surge in the writing. It is lovely. When you write from your head, your style sinks back under the waves.

But the best bit of advice in the list is, I think, #18:

In real life, you can’t go on and on about your obsessions; you have to tame yourself to not ruin the day for others. This is a good thing. Otherwise, we’d be ripping each other’s arms off like chimpanzees. But a blog is a tiny internet house where you decide the norms. And since there are already countless places where you can’t be yourself, there is no need to build another one of those. The law of the land is that everything you think is funny is funny. Those who find the texture of your mind boring or offensive can close the tab—no need to worry about them. It is good for the soul to have a place where being just the way you are is normal. And it is a service to others, too. You’ll be surprised how many people are laughably similar to you and who wish there was a place where they felt normal. You can build that.

If you’re reading this and don’t put your words out on the internet on a regular basis, why not change that?

Source: Escaping Flatland

Image: Justin Morgan

Prices and wages are a political matter, not an 'economic' one

Four paper card tags

Cory Doctorow is such an amazing writer and speaker. He explains reasonably complex things so concisely and straightforwardly. This is one such explainer, where he discusses how issues which are usually described as being to do with the ‘economy’ are actually to do with power.

This kind of reframing is really useful, especially for people who, like the proverbial fish swimming in water, haven’t really thought about what it means to live within capitalism.

The cost and price of a good or service is the tangible expression of power. It is a matter of politics, not economics. If consumer protection agencies demand that companies provide safe, well-manufactured goods, if there are prohibitions on price-fixing and profiteering, then value shifts from the corporation to its customers.

But if labor and consumer groups act in solidarity, then they can operate as a bloc and bosses and investors have to eat shit. Back in 2017, the pilots' union for American Airlines forced their bosses into a raise. Wall Street freaked out and tanked AA’s stock. Analysts for big banks were outraged. Citi’s Kevin Crissey summed up the situation perfectly, in a fuming memo: “This is frustrating. Labor is being paid first again. Shareholders get leftovers.”

Limiting the wealth of the investor class also limits their power, because money translates pretty directly into political power. This sets up a virtuous cycle: the less money the investor class has to spend on political projects, the more space there is for consumer- and labor-protection laws to be enacted and enforced. As labor and consumer law gets more stringent, the share of the national income going to people who make things, and people who use the things they make, goes up – and the share going to people who own things goes down.

Seen this way, it’s obvious that prices and wages are a political matter, not an “economic” one. Orthodox economists maintain the pretense that they practice a kind of physics of money, discovering the “natural,” “empirical” way that prices and wages move. They dress this up with mumbo-jumbo like the “efficient market hypothesis,” “price discovery,” “public choice,” and that old favorite, “trickle-down theory.” Strip away the doublespeak and it boils down to this: “Actually, your boss is right. He does deserve more of the value than you do.”

Even if you’ve been suckered by the lie that bosses have a legal “fiduciary duty” to maximize shareholder returns (this is a myth, by the way – no such law exists), it doesn’t follow that customers or workers share that fiduciary duty. As a customer, you are not legally obliged to arrange your affairs to maximize the dividends paid to investors in your corporate landlord or in the merchants you patronize. As a worker, you are under no legal obligation to consider shareholders' interests when you bargain for wages, benefits and working conditions.

The “fiduciary duty” lie is another instance of politics masquerading as economics: even if bosses bargain for as big a slice of the pie as they can get, the size of that slice is determined by the relative power of bosses, customers and workers.

Source: Pluralistic

Image: Angèle Kamp

It seems there have been better times to be alive

Police van on fire during the 2024 Southport Riots

Marina Hyde reflects, in her inimitable way, on the children’s commissioner’s report into why children became involved in last summer’s riots. Apparently, at least 147 were arrested and 84 charged, and almost all were boys. As she points out, it’s not exactly a great time to be a kid in the UK, is it?

Having been a teacher, and as a parent of two teenagers who survived the pandemic lockdowns, I can attest that the world looks pretty grim when they raise their heads from their phones and games controllers. Why would they bother? And what are we, as adults, doing about it?

Children might sometimes do very bad and stupid things, but they are not so stupid that they can’t see they live in a country where the gulf in opportunities is quite staggering. It’s droll to think that two months after the riots, we’d be listening to Keir Starmer’s blithe defence of his decision to take up the freebie loan of an £18m penthouse so his son could study for his GCSEs in peace and quiet. “Any parent would have made the same decision,” explained the prime minister. Any parent, if you please. I do wonder what on earth the parents of the rioting youngsters were doing making the choices they did. I would simply have let my teens spend the afternoon in an £18m penthouse instead. Anyway, speaking of guillotine-beckoning comments, perhaps it isn’t the most enormous surprise that the Channel 4 study found 47% of gen Z agreeing that “the entire way our society is organised must be radically changed through revolution”.

Again, it’s easy to dismiss, but if they believe these things, surely it’s on those of our generations who failed to make the status quo seem remotely appealing? Many of the behaviours of today’s teens and young adults are not simply thick / snowflakey / lazy, but rational responses to a world created by their elders, if not always betters. The childhood experience has deteriorated completely in the past 15 years or so. We have addicted children to – and depressed them with – smartphones, and done next to nothing about this no matter how much evidence of the most toxic harms mounts up. Children in the US are expected to tidy their rooms by generations who also expect them to rehearse active-shooter drills. We require young people to show gratitude for living in an iteration of capitalism in which they have not only no stake, but no obvious hope of getting a stake. It seems to them that there have been better times to be alive.

Source: The Guardian

Image: Wikimedia Commons

This is not the dystopia we were promised

A bunch of architectural boxes stacked on top of each other

Discovered via John Naughton’s Memex 1.1, this article by Henry Farrell explains how we’re not living in an Orwellian or Huxleyan dystopia, but one that resembles the writing of Philip K. Dick.

The reason for this analysis? Dick “was interested in seeing how people react when their reality starts to break down” and, if we’re honest, we can see this happening everywhere. It’s easy to point to Trump and other political examples, but it goes far beyond politics. People exist in their own little bubbles, interacting with other people who may or may not be real, via algorithms they do not control.

This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.

Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants leads to further complexity. It is becoming ever harder for companies to distinguish the behavior which they want to analyze from their own and others’ manipulations.

This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.

Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).

In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s.

[…]

In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore. Rumors that are spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and whether this or that trend is being driven by malign algorithms rather than real human beings.
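As an aside on those warring Amazon bots quoted above: two blind multiplicative repricing rules compound exponentially, which is how a biology textbook ends up at eight figures. A toy simulation, where the multipliers are the ones commonly reported for that incident and the starting price is a guess:

```python
# Two repricing bots locked in a loop: seller A slightly undercuts seller B,
# while seller B marks up relative to seller A. The combined factor per
# round is ~1.2684, so the price grows exponentially. Multipliers are the
# commonly reported ones; the starting price is an assumption.
price_a = price_b = 35.54

rounds = 0
while price_b < 23_698_655.93:
    price_a = 0.9983 * price_b      # bot A: undercut B by 0.17%
    price_b = 1.270589 * price_a    # bot B: price at ~1.27x of A
    rounds += 1

print(f"after {rounds} rounds the book costs ${price_b:,.2f} (+ $3.99 shipping)")
```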

Source: Programmable Mutter

Image: Denys Nevozhai

The struggle for attention as the prime moral challenge of our time

Rusty, weathered metal ‘caution’ sticker

I’m posting this from Andrew Curry mainly so I don’t forget the books referenced (already added to my Literal.club queue) and so that I can remember to check back on the Strother School of Radical Attention that he mentions.

Sadly, their awesome-looking courses are either in-person (US) or online at times that, here in the UK, fall in the early hours of the morning.

There’s something of a history now of writing about the commodification of attention as a feature of late stage capitalism. [Rhoda] Feng [writing in The American Prospect] spotlights a few. Tim Wu’s book The Attention Merchants traces this back to its origins in the late 19th century. James Williams’ Stand Out of Our Light: Freedom and Resistance in the Attention Economy positions the struggle for attention as the prime moral challenge of our time. Williams had previously worked for Google. The more recent collection Scenes of Attention: Essays on Mind, Time, and the Senses “took a multidisciplinary approach”, exploring the interplay between attention and practices like pedagogy, Buddhist meditation, and therapy.

[…]

In her review, Feng critiques the framing here of attention as scarcity—essentially an economics version that goes back to Herbert Simon in the 1970s—he said, presciently, “that a wealth of information creates a poverty of attention.”

Frankly, it is more serious than that: as James Williams says in Stand Out of Our Light,

“the main risk information abundance poses is not that one’s attention will be occupied or used up by information … but rather that one will lose control over one’s attentional processes.” The issue, one might say, is less about scarcity than sovereignty.

There are pockets of resistance which are trying to reclaim our sovereignty. Feng points to Friends of Attention, new to me, whose Manifesto calls for the emancipation of attention.

Source: Just Two Things

Image: Markus Spiske