Systems and interconnected disaster risks

    When you see that humans have exceeded six of the nine boundaries that keep Earth habitable, it’s more than a bit worrying. But when you follow it up with this United Nations report, it makes you want to do something about it.

    I guess this is one of the reasons that I’m interested in Systems Thinking as an approach to helping us get out of this mess. I can imagine pivoting to work on this kind of thing, because (as far as I can see) everyone seems to think it’s someone else’s problem to solve.

    [Image: DALL-E 3 generated illustration showing a metaphorical depiction of climate tipping points, with a series of large dominoes in a fragile natural environment.]
    Systems are all around us and closely connected to us. Water systems, food systems, transport systems, information systems, ecosystems and others: our world is made up of systems where the individual parts interact with one another. Over time, human activities have made these systems increasingly complex, be it through global supply chains, communication networks, international trade and more. As these interconnections get stronger, they offer opportunities for global cooperation and support, but also expose us to greater risks and unpleasant surprises, particularly when our own actions threaten to damage a system.

    […]

    The six risk tipping points analysed in this report offer some key examples of the numerous risk tipping points we are approaching. If we look at the world as a whole, there are many more systems at risk that require our attention. Each system acts as a string in a safety net, keeping us from harm and supporting our societies. As the next system tips, another string is cut, increasing the overall pressure on the remaining systems to hold us up. Therefore, any attempt to reduce risk in these systems needs to acknowledge and understand these underlying interconnectivities. Actions that affect one system will likely have consequences on another, so we must avoid working in silos and instead look at the world as one connected system.

    Luckily, we have a unique advantage of being able to see the danger ahead of us by recognizing the risk tipping points we are approaching. This provides us with the opportunity to make informed decisions and take decisive actions to avert the worst of these impacts, and perhaps even forge a new path towards a bright, sustainable and equitable future. By anticipating risk tipping points where the system will cease to function as expected, we can adjust the way the system functions accordingly or modify our expectations of what the system can deliver. In each case, however, avoiding the risk tipping point will require more than a single solution. We will need to integrate actions across sectors in unprecedented ways in order to address the complex set of root causes and drivers of risk and promote changes in established mindsets.

    Source: 2023 Executive Summary - Interconnected Disaster Risks | United Nations University - Institute for Environment and Human Security (UNU-EHS)

    Image: DALL-E 3

    System innovation is driven by reshaping relationships within the system

    As I may have mentioned a little too often recently, I’m about to start an MSc in Systems Thinking. So I’m always on the lookout for useful resources relating to the topic.

    I came across this one by Jennie Winhall and Charles Leadbeater from last year, which discusses how system innovation is driven by reshaping relationships within the system. It identifies four keys to system innovation: purpose, power, relationships, and resource flows. The focus is on relationships, which are the patterns of interactions between parts of a system. Transforming a system requires altering these relationships, which in turn unlocks other keys like purpose and power.

    (Over a decade ago, I lined up to talk with Leadbeater after his talk at Online Educa Berlin. I was going to ask him something specific about his most recent book, but as everyone before me gushed over it, I think I just mumbled something about not liking it and then sloped off. Not my finest moment. Apologies, Charles, if for some reason you’re reading this!)

    Systems are defined by the patterns of interactions between their parts: their relationships. Those interactions generate the outcomes of the system as a whole. Transforming the outcomes of a system requires remaking its relationships and then unlocking the other keys to system innovation: purpose, power and resources. This shift in relationships allows all those in the system to learn faster, to be more creative. System innovators redesign the relationships in the system to allow dramatically enhanced learning across the system, and thereby generate far better outcomes.

    Source: The Patterns of Possibility | The System Innovation Initiative

    Systems thinking and AI

    Edge is an interesting website. Its aim is:

    To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

    One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She's got some interesting insights into systems thinking and artificial intelligence.

    We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

    That's such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that aren't nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don't work particularly well. This is obviously problematic when you want an AI system to do some work for you, hence the bias (racism, sexism) that has plagued the field.

    One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it's more multi-dimensional—are going to be lacking.

    Something I always say is that technology is not neutral, and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or those people, have hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.

    You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.

    I like the way that Bateson talks about the difference between computer science and systems theory. It's a bit like the argument I made back in 2013 about why kids need to learn to code: it's more about algorithmic thinking than it is about syntax.

    The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

    The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

    Source: Edge