Hello hello. I hope you're well 🙂
According to my stats, the following posts, all published in the last 12 months, were the most accessed on Thought Shrapnel.
What were your favourites? Were any of them on this list? The archives can be found here.
1. The burnout curve
Published: 11th September
I stumbled across this on LinkedIn. There doesn’t seem to be an authoritative source yet other than the author’s (Nick Petrie) social media posts, which is a shame. So I’m quoting most of it here so I can find and refer to it in future.
2. AI writing detectors don't work
Published: 9th September
This article discusses OpenAI’s recent admission that AI writing detectors are ineffective, often yielding false positives and failing to reliably distinguish between human and AI-generated content. OpenAI now advises against the use of automated AI detection tools, something that educational institutions will inevitably ignore.
3. Oh great, another skills passport
Published: 25th September
Not only is this the wrong metaphor, but it also diverts money and attention from fixing some of the real issues in the system.
4. Good news on Covid treatments
Published: 16th September
Well, this is promising. Researchers have identified a critical weakness in COVID-19: its reliance on specific human proteins for replication. The virus’s “N protein” needs human cells to properly package its genome and propagate. Apparently, blocking this interaction could prevent the virus from infecting human cells.
5. The punishment for being authentic is becoming someone else's content
Published: 9th September
What I think is interesting is how online and offline used to be seen as completely separate. Then we realised the impact that offline life had on online life, and now we’re seeing the reverse: Instagram, TikTok, etc. having a huge impact on the spaces in which we exist offline.
6. Using AI to aid with banning books is another level of dystopia
Published: 17th August
However, what I’m concerned about is AI decision-making. In this case, a crazy law is being implemented by people who haven’t read the books in question, and who outsource the decision to a language model that doesn’t really understand what’s being asked of it.
7. A philosophy of travel
Published: 30th August
This article critically examines the concept of travel, questioning its oft-claimed benefits of ‘enlightenment’ and ‘personal growth’. It cites various thinkers who have critiqued travel (including one of my favourites, Fernando Pessoa) suggesting that it can actually distance us from genuine human connection and meaningful experiences.
8. We need to talk about AI porn
Published: 25th August
As this article details, a lot of porn has already been generated. Again, prudishness aside relating to people’s kinks, there are all kinds of philosophical, political, and legal issues at play here. Child pornography is abhorrent; how is our legal system going to deal with AI-generated versions? What about the inevitable ‘shaming’ of people via AI-generated sex acts?
9. Update your profile photo at least every three years
Published: 11th January
I think this is good advice. I try to update mine regularly, although I did realise that last year I chose a photo that was five years old! I prefer ‘natural’ photos that are taken in family situations which I then edit, rather than headshots these days.
10. Britain is screwed
Published: 8th February
I followed a link from this article to some OECD data which, as shown in the chart below, reveals that the UK has even lower welfare payments than the US. The economy of our country is absolutely broken, mainly due to Brexit, but also due to the chasm between everyday people and the elites.
Have a happy new year when it arrives!
PS I've given up on Substack and, because I'm tired of moving platforms, I think I'll just send out emails via this site for now. More news on that soon.
Benedict Evans, whose post about leaving Twitter I featured last week, has written about AI tools such as ChatGPT from a product point of view.
He makes quite a few good points, not least that if you need ‘cheat sheets’ and guides on how to prompt LLMs effectively, then they’re not “natural language”.
Alexa and its imitators mostly failed to become much more than voice-activated speakers, clocks and light-switches, and the obvious reason they failed was that they only had half of the problem. The new machine learning meant that speech recognition and natural language processing were good enough to build a completely generalised and open input, but though you could ask anything, they could only actually answer 10 or 20 or 50 things, and each of those had to be built one by one, by hand, by someone at Amazon, Apple or Google. Alexa could only do cricket scores because someone at Amazon built a cricket scores module. Those answers were turned back into speech by machine learning, but the answers themselves had to be created by hand. Machine learning could do the input, but not the output.

Source: Unbundling AI | Benedict Evans
LLMs solve this, theoretically, because, theoretically, you can now not just ask anything but get an answer to anything.
This is understandably intoxicating, but I think it brings us to two new problems - a science problem and a product problem. You can ask anything and the system will try to answer, but it might be wrong; and, even if it answers correctly, an answer might not be the right way to achieve your aim. That might be the bigger problem.
Right now, ChatGPT is very useful for writing code, brainstorming marketing ideas, producing rough drafts of text, and a few other things, but for a lot of other people it looks a bit like those PC ads of the late 1970s that promised you could use it to organise recipes or balance your cheque book - it can do anything, but what?