Back next year!

Sign saying 'See you later'

That’s it for Thought Shrapnel for 2023. Make sure you’re subscribed for when we’re back next year! (RSS / newsletter)

Image: Unsplash

If you need a cheat sheet, it’s not ‘natural language’

Benedict Evans, whose post about leaving Twitter I featured last week, has written about AI tools such as ChatGPT from a product point of view.

He makes quite a few good points, not least that if you need ‘cheat sheets’ and guides on how to prompt LLMs effectively, then they’re not “natural language”.

DALL-E 3 image created with prompt: "This image will juxtapose two scenarios: one where a user is frustrated with a voice assistant's limited capabilities (like Alexa performing basic tasks), and another where a user is amazed by the vast potential of an LLM like ChatGPT. The metaphor here is the contrast between limited and limitless potential. The image will feature a split scene: on one side, a user looks disappointedly at a simple smart speaker, and on the other side, the same user is interacting with a dynamic, holographic AI, showcasing the broad capabilities of LLMs."

Alexa and its imitators mostly failed to become much more than voice-activated speakers, clocks and light-switches, and the obvious reason they failed was that they only had half of the problem. The new machine learning meant that speech recognition and natural language processing were good enough to build a completely generalised and open input, but though you could ask anything, they could only actually answer 10 or 20 or 50 things, and each of those had to be built one by one, by hand, by someone at Amazon, Apple or Google. Alexa could only do cricket scores because someone at Amazon built a cricket scores module. Those answers were turned back into speech by machine learning, but the answers themselves had to be created by hand. Machine learning could do the input, but not the output.

LLMs solve this, theoretically, because, theoretically, you can now not just ask anything but get an answer to anything.


This is understandably intoxicating, but I think it brings us to two new problems – a science problem and a product problem. You can ask anything and the system will try to answer, but it might be wrong; and, even if it answers correctly, an answer might not be the right way to achieve your aim. That might be the bigger problem.


Right now, ChatGPT is very useful for writing code, brainstorming marketing ideas, producing rough drafts of text, and a few other things, but for a lot of other people it looks a bit like those PC ads of the late 1970s that promised you could use it to organise recipes or balance your cheque book – it can do anything, but what?

Source: Unbundling AI | Benedict Evans

Systems and interconnected disaster risks

When you see that humans have exceeded six of the nine planetary boundaries which keep Earth habitable, it’s more than a bit worrying. Follow that up with this United Nations report, and it makes you want to do something about it.

I guess this is one of the reasons that I’m interested in Systems Thinking as an approach to helping us get out of this mess. I can imagine pivoting to work on this kind of thing, because (as far as I can see) everyone seems to think it’s someone else’s problem to solve.

DALL-E 3 generated illustration showing a metaphorical depiction of climate tipping points. The scene includes a series of large dominoes in a fragile natural environment.

Systems are all around us and closely connected to us. Water systems, food systems, transport systems, information systems, ecosystems and others: our world is made up of systems where the individual parts interact with one another. Over time, human activities have made these systems increasingly complex, be it through global supply chains, communication networks, international trade and more. As these interconnections get stronger, they offer opportunities for global cooperation and support, but also expose us to greater risks and unpleasant surprises, particularly when our own actions threaten to damage a system.


The six risk tipping points analysed in this report offer some key examples of the numerous risk tipping points we are approaching. If we look at the world as a whole, there are many more systems at risk that require our attention. Each system acts as a string in a safety net, keeping us from harm and supporting our societies. As the next system tips, another string is cut, increasing the overall pressure on the remaining systems to hold us up. Therefore, any attempt to reduce risk in these systems needs to acknowledge and understand these underlying interconnectivities. Actions that affect one system will likely have consequences on another, so we must avoid working in silos and instead look at the world as one connected system.

Luckily, we have a unique advantage of being able to see the danger ahead of us by recognizing the risk tipping points we are approaching. This provides us with the opportunity to make informed decisions and take decisive actions to avert the worst of these impacts, and perhaps even forge a new path towards a bright, sustainable and equitable future. By anticipating risk tipping points where the system will cease to function as expected, we can adjust the way the system functions accordingly or modify our expectations of what the system can deliver. In each case, however, avoiding the risk tipping point will require more than a single solution. We will need to integrate actions across sectors in unprecedented ways in order to address the complex set of root causes and drivers of risk and promote changes in established mindsets.

Source: 2023 Executive Summary – Interconnected Disaster Risks | United Nations University – Institute for Environment and Human Security (UNU-EHS)

Image: DALL-E 3