AI cannot hold copyright (yet)
I think common sense would suggest that copyright should only apply to human-created works. But the line between what human brains and artificial ones do when working together is a thin one, so I don’t think this ruling is the last word.
A Recent Entrance to Paradise is part of a series Creativity Machine produced on the subject of a near-death experience. Thaler said the work “was autonomously created by a computer algorithm running on a machine,” according to court documents.

Source: U.S. Copyright Office Rules That AI Cannot Hold Copyright | ARTnews.com

The U.S. Copyright review board said that this goes against the basic tenets of copyright law, which suggest that the work must be the product of a human mind. “Thaler must either provide evidence that the Work is the product of human authorship or convince the Office to depart from a century of copyright jurisprudence. He has done neither,” wrote the review board in its decision.
Technology and productivity
Julian Stodd’s personal realisation that what the people who make ‘productivity tools’ want and what he wants might be two different things.
See also: Four Thousand Weeks: Time Management for Mortals by Oliver Burkeman
I fear that the suites of tools and features that allow me to work from anywhere do, in fact, distract me everywhere.

Source: The Delusion of Productivity | Julian Stodd’s Learning Blog

I feel that at times I have lost the art of long form and collapsed into the conversational and reactive.
[…]
Does technology always make us more productive – or can technology hold us apart? Do we need to be together to forge culture, and to find meaning, or can being together make us more busy than wise?
I suspect my personal (and perhaps our organisational) challenge is one of separation: to separate out my segregated spaces – to separate my thinking and doing, my learning and acting, my reflection and practice.
Hacking the application process
It’s perhaps a massive over-simplification, but my understanding of the so-called ‘skills gap’ is that two things are happening.
The first is a long-term trend of employers expecting not to have to spend any money on training the people they hire.
The second is the use of algorithmic CV-scanning software to reject the majority of applicants. Not surprisingly, although it might make recruiters' jobs a bit more manageable, it’s not great for diversity or for finding people who haven’t done that exact job before.
Software can also disadvantage certain candidates, says Joseph Fuller, a management professor at Harvard Business School. Last fall, the US Equal Employment Opportunity Commission launched an initiative to examine the role of artificial intelligence in hiring, citing concerns that new technologies presented “a high-tech pathway to discrimination.” Around the same time, Fuller published a report suggesting that applicant tracking systems routinely exclude candidates with irregularities on their résumés: a gap in employment, for example, or relevant skills that didn’t quite match the recruiter’s keywords. “When companies are focused on making their process hyperefficient, they can over-dignify the technology,” he says.

Source: How Job Applicants Try to Hack Résumé-Reading Software | WIRED
You cannot 'solve' online misinformation
Matt Baer, who founded the excellent platform write.as, weighs in on misinformation and disinformation.
This is something I’m interested in anyway given my background in digital literacies, but especially at the moment because of the user research I’m doing around the Zappa project.
Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It's a fact of life, and one you can never totally design or regulate out of existence.

Source: “Solving” Misinformation | Matt

I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.
[…]
As long as human interactions are mediated by a screen (or goggles in the coming “metaverse”), there will be a certain loss of truth, social clues, and context in our interactions — clues that otherwise help us determine “truthiness” of information and trustworthiness of actors. There will also be a constant chance for middlemen to meddle in the medium, for better or worse, especially as we get farther from controlling the infrastructure ourselves.
The life run by spreadsheet is not worth living
When work is the most significant thing in your life, you optimise for it. When relationships are the most significant things in your life, you optimise for those.
I find this post by ‘crypto engineer’ Nat Eliason a bit tragic, to be honest. He says he’s almost always working, there’s zero mention of family, and he says that all of his friends are people who are hustling too.
As Socrates didn’t say, “the life run by spreadsheet is not worth living”.
Here’s the biggest thing to keep in mind when you’re reading about my process: I’m almost always working.

Source: How to Be Really, Really, Ridiculously Productive | Nat Eliason
This is not some Tim Ferrissian “here’s how to work 2 hours a day and make lots of money” post. I tried that. It sucks. You’ll get depressed in about two days if you have an ounce of ambition in you. If you’re trying to optimize around working less, find better work.
It doesn’t mean, though, that I’m always doing things that feel like work. It means I enjoy the work that I do, and I’ve found ways to make my hobbies productive.
The benefits of taking Wednesdays off
Today is a Wednesday, and I’m taking half-days off today and tomorrow as it’s half-term for the kids. Pre-pandemic, though, I used to take Wednesday off in its entirety, which was absolutely amazing; I’m not sure why I don’t still do it.
There’s a real movement growing at the moment for a four-day week, which I think is a really positive thing for humanity. Let’s just hope it’s not only white-collar workers who can afford to reap the benefits.
One-offs, like a deadline for a big project, may temporarily restructure our lives, but cyclical pacers, like a two-day weekend followed by a five-day work week, have outsized psychological influence, partially because of repetition, and partially because they mimic the cyclical nature of our most fundamental pacer—day and night.

Source: For Maximum Recharge, Take a Wednesday Off | Quartz

[…]
A Wednesday holiday interrupts the externally imposed pacer of work, and gives you a chance to rediscover your internal rhythms for a day. While a long weekend gives you a little more time on your own schedule, it doesn’t actually disrupt the week’s pacing power. A free Wednesday builds space on either side, and shifts the balance between your pace and work’s—in your favor.
Dark patterns and gambling
Given that most gambling these days happens via smartphone apps, and that the psychological tricks used by gambling firms are also used by, for example, for-profit centralised social media sites, I found this fascinating (and worrying!).
Kim Lund, founder of poker game firm Aftermath Interactive, has made a career out of game design and has seen at first-hand how cold, hard probability defeats the illogical human mind every time – and allows the gambling companies to cash in. “All gambling games are based on psychological triggers that mean they work,” he tells me. “The human brain is incapable of dealing with randomness. We’re obsessed with finding patterns in things because that prevents us from going insane. We want to make sense of things.”

Source: What gambling firms don’t want you to know – and how they keep you hooked | The Guardian

[…]
In her 1975 paper The Illusion of Control, Ellen J Langer conducted a series of experiments that showed that our expectations of success in a game of chance vary, depending on factors that do not actually affect the outcome. One of the variables that makes a big difference to how gamblers behave is the introduction of an element of choice. In one of Langer’s experiments, subjects were given lottery tickets with an American football player on them. Some subjects got to choose which player they wanted, others were allocated a ticket at random. On the morning of the draw, everyone was asked how much they would be prepared to sell their ticket for. Those who had chosen their ticket demanded an average of $8.67, while those who had been allocated one at random were prepared to give it up for $1.96.
Speeding up a Chromebook by allocating zram

Oddly enough, in the few days since I bookmarked this URL, it's disappeared. Thank goodness for the Internet Archive!
I'll post the main details below, which are instructions for making Chromebooks run faster by allocating compressed cache. Note that on my Google Pixelbook (2017) I used '4000' instead of the recommended '2000', and it's really made a difference.
Also see: Cog - System Info Viewer
You use zram (otherwise known as compressed cache - compcache). With a single command you can create enough zram to compensate for your device's lack of physical RAM. You can create as much compcache as you need; but remember, most Chromebooks contain smaller internal drives, so create a swap space that doesn't gobble up too much of your physical drive (as swap is created using your Chromebook internal, physical drive).
To create compcache, you must work within Crosh (Chromebook shell), aka the command line. Believe it or not, the command used for this is incredibly simple; but the results are significant (especially in cases where you're frequently running out of memory).
[...]
The first thing you must do is open a Crosh tab. This is simple and doesn't require anything more than hitting the key combination [Ctrl]+[Alt]+[t]. When you find yourself at crosh> you know you're ready to go.
The command to create swap space is very simple and comes in the form of:
swap enable SIZE
Where SIZE is the size of the swap space you wish to create. The ChromeOS developers suggest adding a swap of 2GB, which means the command would be:
swap enable 2000
Once you've run the command, you must then reboot your Chromebook for the effect to take place. The swap will remain persistent until you run the disable command (again, from Crosh), like so:
swap disable
No matter how many times you reboot, the swap will remain until you issue the disable command.
How to prevent a Chromebook from running out of memory | TechRepublic (archive.org link)
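Putting the archived steps together, this is the sequence I used on my Pixelbook (the '4000' is my own tweak, as noted above; the crosh commands themselves are exactly as given in the article):

Press [Ctrl]+[Alt]+[t] to open a Crosh tab, then run:

swap enable 4000

(The article's recommended value is 2000, i.e. roughly 2GB.) Reboot for the change to take effect. To revert to the default later, open Crosh again and run:

swap disable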
Stone Age culture in the Orkney islands
When I was eight years old, we took a family trip to the Orkney islands off the north coast of Scotland. I don’t know why we went there particularly, but it was amazing. I almost don’t want to go back because it might break the spell the place has cast over my life.
While we were there, with no kind of tourist fanfare, I was allowed to handle skulls that were thousands of years old, crawl into tombs, and generally really experience history. I doubt they have such a cavalier approach to artefacts these days…
If you happen to imagine that there’s not much left to discover of Britain’s stone age, or that its relics consist of hard-to-love postholes and scraps of bones, then you need to find your way to Orkney, that scatter of islands off Scotland’s north-east coast. On the archipelago’s Mainland, out towards the windswept west coast with its wave-battered cliffs, you will come to the Ness of Brodgar, an isthmus separating a pair of sparkling lochs, one of saltwater and one of freshwater. Just before the way narrows you’ll see the Stones of Stenness rising up before you. This ancient stone circle’s monoliths were once more numerous, but they remain elegant and imposing. Like a gateway into a liminal world of theatricality and magic, they lead the eye to another, even larger neolithic monument beyond the isthmus, elevated in the landscape as if on a stage. This is the Ring of Brodgar, its sharply individuated stones like giant dancers arrested mid-step – as local legend, indeed, has it.

Source: ‘Every year it astounds us’: the Orkney dig uncovering Britain’s stone age culture | The Guardian
Upgrading an iPod Video for use in 2022
I’m an OG when it comes to MP3 players, having owned an Archos MP3 Jukebox while I was at uni in about 2001. It was ridiculously expensive for me as a student, but I was working at HMV at the time, and I was (and still am!) really into music.
In the end, I ‘upgraded’ the battery in it and managed to melt the entire thing, then switched to Spotify for all of my music in 2009. But there’s definitely part of me that wants to get back to what I would consider ‘real’ music listening.
While I do have plenty of MP3s and FLAC files on my smartphone, there’s just something about having a separate device for music. And you don’t get more iconic than an iPod. So this project is super-cool and once again has me thinking…
See also: How To Enjoy Your Own Digital Music
See also: ListenBrainz
I realised something not so long ago - I was being very lazy. I'd often just play my weekly/daily mix, or some playlist I made up a long time ago. I'd never really think about what music I liked + what music I wanted to listen to. I think this is in part due to the fact that almost any music was available - which made choosing even more difficult.

Source: Building an iPod for 2022 | Ellie.wtf

Anyway. Over the weekend I took apart a 5.5th gen iPod Classic (or iPod Video) and made it suit 2022 a little better :D
Digital to analogue and back again
It’s good to have Warren Ellis back. I have no opinion on this other than we should believe women when they accuse men of abuse.
His reflections on going analogue in 2021 and then coming back to digital workflows are interesting.
Someone sent me this article the other day, and here’s the quote we both independently flagged from it:

“But just because something makes waves on Twitter doesn’t mean it actually matters to most people. According to the Pew Research Center, only 23 percent of U.S. adults use Twitter, and of those users, “the most active 25% … produced 97% of all tweets.” In other words, nearly all tweets come from less than 6 percent of American adults. This is not a remotely good representation of public opinion, let alone newsworthiness, and treating it as such will inevitably result in wrong conclusions.”

Source: Going Analogue, Returning To Digital – WARREN ELLIS LTD
I’m not as up to date on some things as I used to be, but, framing it like that — what am I really missing? Value is not necessarily intrinsic to a digital service (or most other things). We choose to invest these things with value. And sometimes we’re too caught up in the stream to reframe these things and do a proper test on them. It doesn’t feel right to celebrate snapping out of long-term behavioral loops that one allowed to form in the first damn place. One just gets it done and then keeps getting it done until it’s better, I think.
There’s a tech industry term: dogfooding. It means using your own product or service. The inventor of Twitter fucks off to silent tech-free meditation retreats for weeks at a time. How was that not a red flag?
Chrome OS Flex
About 18 months ago, Google acquired Neverware, a company who took the open source version of Chrome OS and customised it for the schools market.
The new version of Chrome OS, called ‘Flex’, can be installed on pretty much any device and also includes Linux containers. Interesting!
Google is positioning Chrome OS Flex as an answer to old Mac and Windows PCs that might not be able to handle the latest version of their native OS and/or that might not be owned by folks with budgets to replace the devices. Rather than buying new hardware, consumers or IT departments could install the latest version of Chrome OS Flex.

Source: Google turns old Macs, PCs into Chromebooks with Chrome OS Flex | Ars Technica
OKRs as institutional memory
Rick Klau, formerly of Google Ventures, is a big fan of OKRs (or ‘Objectives and Key Results’). They’re different from KPIs (or ‘Key Performance Indicators’) for various reasons, including the fact that they’re transparent to everyone in the organisation, and build on one another towards organisational goals.
In this post, Klau talks about OKRs as a form of organisational memory, which is why he’s not fond of changing them half-way through a cycle just because there’s new information available.
Let’s not distract ourselves just because someone had a good idea on a Tuesday standup meeting; let’s finish the stuff we said we were going to do. We might not succeed at all of it. In fact, we probably won’t, but we’ll have learned more and more. You can encode that. That becomes part of the institutional memory at the organization. (link and emphasis mine)

Source: OKRs as institutional memory | tins ::: Rick Klau's weblog
Nesta's predictions for 2022
Nesta shares its ‘Future Signals’ for 2022, some predictions about how things might shake out this year. I’d draw your attention in particular to climate inactivism coupled with quantifying carbon, as well as health inequalities around the quality of sleep.
Under the microscope this year we look at topics that range from sleep as a new dimension of health inequality to where our food will be grown in future. We ask complicated questions too. Is carbon counting really a tool for behaviour change? How will Covid-related service closures impact families? Our Nesta authors don’t offer up easy answers, but this collection should help you to distinguish the signal from the noise in 2022 and beyond.

Source: Future Signals – what we're watching for in 2022 | Nesta
Medieval Fantasy City Generator
The history geek in me loves this so much. And the educator interested in digital literacies loves the fact that you have to manipulate the URL to generate different types of village / town / city!
Source: Medieval Fantasy City Generator
Blockchain and trusted third parties
As Cory Doctorow points out, merely putting something on a blockchain doesn’t make the data itself ‘trusted’ (or useful!).
In passing, it’s interesting that he cites Vinay Gupta in the piece, as Vinay is someone I’ve historically had a lot of time for. However, Mattereum (NFTs for physical assets) just… seems like a distraction from more important work he’s previously done?
In other words:

if: problem + blockchain = problem - blockchain
then: blockchain = 0

Source: The Inevitability of Trusted Third Parties | Cory Doctorow
The blockchain hasn’t added anything to the situation, except considerable cost (which could just as easily be spent on direct transfers to poor farmers, assuming you could find someone you trust to hand out the money) and complexity (which creates lots of opportunities for cheating).
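To spell out the algebra behind the joke (my gloss, not Doctorow's), treat the two words as variables $p$ and $b$:

\[
p + b = p - b \implies 2b = 0 \implies b = 0
\]

If adding the blockchain to a problem is indistinguishable from removing it, its net contribution is zero.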
On hobbies
This was linked to in the latest issue of Dense Discovery, which asked who amongst the readership has taken up a hobby recently.
As Anne Helen Petersen points out, it’s really hard to start a new hobby as an adult, not only for logistical reasons but because of the self-narrative that goes with it.
For me, the gulf between how good I am at something when starting it and how good I want to be is often just too off-putting…
To me, that’s what I think a real hobby feels like. Not something you feel like you’re choosing, or scheduling — not a hassle, or something you resent or feel bad about when you don’t do it. Earlier this week, Katie Heaney wrote a piece in The Cut that speaks to what I think a lot of people feel when they think about their hobbies: she keeps trying to start one, but can’t make it stick. The truth is, it’s really really hard to start a hobby as an adult — it feels unnatural, or forced, or performative. We try to force ourselves into hobbies by buying things (see: Amanda Mull’s piece on the “trophies” of the new domesticity) but a Kitchen-Aid will not make you like cooking.

Source: What a Hobby Feels Like | Anne Helen Petersen

It’s also hard when the messages about what you should be doing with your leisure time are so incredibly contradictory: you should devote yourself to self-care, but also spend more time on your children and partner; you should liberate yourself from the need to monetize your hobby but also have enough money to do the hobby in the first place. This “Smarter Living” piece in the NYT on what to do with a day off is emblematic of just how fucked up our leisure messaging has become: you should “embrace laziness,” “evaluate your career,” “have a family meal,” “fix your finances,” “do that one thing you’ve been putting off,” AND/OR “do nothing,” AND THEN tweet the author about what you did over the weekend!
Reducing offensive social media messages by intervening during content-creation
Six per cent isn’t a lot, but perhaps a number of approaches working together can help with this?
The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is via the identification and removal after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene prior to offensive content being posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content can be attributed not just to the deletion and revision of prompted Tweets -- we also observed a decrease in both the number of offensive Tweets that prompted users create in the future and the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.

Source: Reconsidering Tweets: Intervening During Tweet Creation Decreases Offensive Content | arXiv.org
The burnout epidemic
I work an average of about 25 hours per week and I’m tired at the end of it. I can’t even imagine how I coped in my twenties while teaching.
Textile mill workers in Manchester, England, or Lowell, Massachusetts, two centuries ago worked for longer hours than the typical British or American worker today, and they did so in dangerous conditions. They were exhausted, but they did not have the 21st-century psychological condition we call burnout, because they did not believe their work was the path to self-actualization. The ideal that motivates us to work to the point of burnout is the promise that if you work hard, you will live a good life: not just a life of material comfort, but a life of social dignity, moral character and spiritual purpose.

Source: Your work is not your god: welcome to the age of the burnout epidemic | The Guardian

[…]
This promise, however, is mostly false. It’s what the philosopher Plato called a “noble lie”, a myth that justifies the fundamental arrangement of society. Plato taught that if people didn’t believe the lie, then society would fall into chaos. And one particular noble lie gets us to believe in the value of hard work. We labor for our bosses’ profit, but convince ourselves we’re attaining the highest good. We hope the job will deliver on its promise, and hope gets us to put in the extra hours, take on the extra project and live with the lack of a raise or the recognition we need.
Check your perspective
A useful and illustrative story from Sheila Heen, author of Difficult Conversations: How To Discuss What Matters Most, about why it’s useful to understand other people’s context.
It reminds me, I sometimes tell this story about my eldest son. His name is Ben. He’s 22 now, but when he was about three, we were driving down the street. We stopped at a traffic light, and we were working on both colors and also traffic rules, because at the time we lived on kind of a busy street in Cambridge. So we’re stopped at the light. And I say, “Hey, Ben. What color is the light?” And he says, “It’s green.” I said, “Ben, we’re stopped at the light. What color is the light? Take a good look.” And he goes, “It’s green.” And when it turns, he says, “It’s red. Let’s go.”

Source: Red Light Green Light | James Sevedge

Now, the kid seemed bright in most other ways. So I just thought like, what is going on with him? My first hypothesis is maybe he’s color blind, which then that would be my husband’s fault. At least I thought at the time, it’s my husband’s fault. I’ve since been informed it would have been my fault.
So I started collecting data. I’m running a little scientific experiment of my own. So I start asking him to identify red and green in other contexts, and he gets it right every time. And yet every time we come to a traffic light, he’s still giving me opposite answers, because I get a little obsessed with this.
My second hypothesis, by the way, is that he is screwing with me, which I certainly had some data to support. This went on for about three weeks. It wasn’t until maybe three weeks later, and I think my mother-in-law was in town. So I was in the back seat sitting next to Ben, and we stopped at a traffic light. And I suddenly realized that from where he sits in his car seat, he usually can’t see the light in front of us, because the headrest is in the way or it’s above the level of the windshield, windscreen as they say in Europe. So he’s looking out the side window at the cross traffic light.
Now just think about the conversation from his point of view. He’s looking at the light, it’s green; I’m insisting that it’s red, and he’s like, you know, my mother seems right in most other ways, but she’s just wrong about this. The reason that that experience has stuck with me all these years is that it’s such a great illustration of the fact that where you sit determines what you see.