
All killer, no filler

This short post cites a talk entitled 10 Timeframes, given by Paul Ford back in 2012:

Ford asks a deceivingly simple question: when you spend a portion of your life (that is, your time) working on a project, do you take into account how your work will consume, spend, or use portions of other lives? How does the ‘thing’ you are working on right now play out in the future when there are “People using your systems, playing with your toys, [and] fiddling with your abstractions”?

In the talk, Ford mentions that in a 200-seat auditorium, his speaking for an extra minute wastes over three hours of human time, all told: 200 people times one extra minute is 200 person-minutes, or a little over three hours. And that’s not to mention those who watch the recording, of course.

When we’re designing things for other people, or indeed working with our colleagues, we need to think not only about our own productivity but how that will impact others. I find it sad when people don’t do the extra work to make it easier for the person they have the power to impact. That could be as simple as sending an email that, you know, includes the link to the thing being referenced. Or it could be an entire operating system, a building, or a new project management procedure.

I often think about this when editing video: does this one-minute section respect the time of future viewers? A minute multiplied by the number of times a video might be viewed suddenly represents a sizeable chunk of collective human resources. In this respect, ‘filler’ is irresponsible: if you know something is not adding value or meaning to future ‘consumers,’ then you are, in a sense, robbing life from them. It seems extreme to say that, yes, but hopefully contemplating the proposition has not wasted your time.

My son’s at an age where he’s started to watch a lot of YouTube videos. Due to the financial incentives of advertising, YouTubers fill the first minute (at least) with telling you what you’re going to find out, or with meaningless drivel. Unfortunately, my son’s too young to have worked that out for himself yet. And at eleven years old, you can’t just be told.

In my own life and practice, I go out of my way to make life easier for other people. Ultimately, of course, it makes life easier for me. By modelling behaviours that other people can copy, you’re more likely to be the recipient of time-saving practices and courteous behaviour. I’ve still a lot to learn, but it’s nice to be nice.

Source: James Shelley (via Adam Procter)

“Do what you can, with what you have, where you are.”

(Theodore Roosevelt)

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She’s got some interesting insights into systems thinking and artificial intelligence.

We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

That’s such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not really nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don’t really work that well. This is obviously problematic when you want an AI system to do some work for you, hence the bias (racism, sexism) that has plagued the field.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.

Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.

You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.

I like the way that Bateson talks about the difference between computer science and systems theory. It’s a bit like the argument I gave about why kids need to learn to code back in 2013: it’s more about algorithmic thinking than it is about syntax.

The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Issue #300: Tricentennial

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The four things you need to become an intellectual

I came across this, I think, via one of the aggregation sites I skim. It’s a letter in the form of an article by Paul J. Griffiths, who is a Professor of Catholic Theology at Duke Divinity School. In it, he replies to a student who has asked how to become an intellectual.

Griffiths breaks it down into four requirements, and then at the end gives a warning.

The first requirement is that you find something to think about. This may be easy to arrive at, or almost impossibly difficult. It’s something like falling in love. There’s an infinite number of topics you might think about, just as there’s an almost infinite number of people you might fall in love with. But in neither case is the choice made by consulting all possibilities and choosing among them. You can only love what you see, and what you see is given, in large part, by location and chance.

There’s a tension here, isn’t there? Given the almost infinite multiplicity of things it’s possible to spend life thinking about and concentrating upon, how does one choose between them? Griffiths mentions the role of location and chance, but I’d also throw in tendencies. If you notice yourself drawn to a particular style of art, captivated by a certain style of writing, or enthralled by a way of approaching the world, this may be a clue that you should investigate it further.

The second requirement is time: You need a life in which you can spend a minimum of three uninterrupted hours every day, excepting sabbaths and occasional vacations, on your intellectual work. Those hours need to be free from distractions: no telephone calls, no email, no texts, no visits. Just you. Just thinking and whatever serves as a direct aid to and support of thinking (reading, writing, experiment, etc.). Nothing else. You need this because intellectual work is, typically, cumulative and has momentum. It doesn’t leap from one eureka moment to the next, even though there may be such moments in your life if you’re fortunate. No, it builds slowly from one day to the next, one month to the next. Whatever it is you’re thinking about will demand of you that you think about it a lot and for a long time, and you won’t be able to do that if you’re distracted from moment to moment, or if you allow long gaps between one session of work and the next. Undistracted time is the space in which intellectual work is done: It’s the space for that work in the same way that the factory floor is the space for the assembly line.

This chimes with a quotation from Mark Manson I referenced yesterday, in which he talks about the joy you feel and meaning you experience when you’ve spent decades dedicated to one thing in particular. You have to carve out time for that, whether through your occupation, or through putting aside leisure time to pursue it.

The third requirement is training. Once you know what you want to think about, you need to learn whatever skills are necessary for good thinking about it, and whatever body of knowledge is requisite for such thinking. These days we tend to think of this as requiring university studies.

[…]

The most essential skill is surprisingly hard to come by. That skill is attention. Intellectuals always think about something, and that means they need to know how to attend to what they’re thinking about. Attention can be thought of as a long, slow, surprised gaze at whatever it is.

[…]

The long, slow, surprised gaze requires cultivation. We’re quickly and easily habituated, with the result that once we’ve seen something a few times it comes to seem unsurprising, and if it’s neither threatening nor useful it rapidly becomes invisible. There are many reasons for this (the necessities of survival; the fact of the Fall), but whatever a full account of those might be (“full account” being itself a matter for thinking about), their result is that we can’t easily attend.

This section was difficult to quote as it weaves in specific details from the original student’s letter, but the gist is that people assume that universities are good places for intellectual pursuits. Griffiths responds that this may or may not be the case, and, in fact, is less likely to be true as the 21st century progresses.

Instead, we need to cultivate attention, which he describes as being almost like a muscle. Griffiths suggests “intentionally engaging in repetitive activity” such as “practicing a musical instrument, attending Mass daily, meditating on the rhythms of your breath, taking the same walk every day (Kant in Königsberg)” to “foster attentiveness”.

[The] fourth requirement is interlocutors. You can’t develop the needed skills or appropriate the needed body of knowledge without them. You can’t do it by yourself. Solitude and loneliness, yes, very well; but that solitude must grow out of and continually be nourished by conversation with others who’ve thought and are thinking about what you’re thinking about. Those are your interlocutors. They may be dead, in which case they’ll be available to you in their postmortem traces: written texts, recordings, reports by others, and so on. Or they may be living, in which case you may benefit from face-to-face interactions, whether public or private. But in either case, you need them. You can neither decide what to think about nor learn to think about it well without getting the right training, and the best training is to be had by apprenticeship: Observe the work—or the traces of the work—of those who’ve done what you’d like to do; try to discriminate good instances of such work from less good; and then be formed by imitation.

I talked in my thesis about the impossibility of being ‘literate’ unless you’ve got a community in which to engage in literate practices. The same is true of intellectual activity: you can’t be an intellectual in a vacuum.

As a society, we worship at the altar of the lone genius but, in fact, that idea is fundamentally flawed. Progress and breakthroughs come through discussion and collaboration, not sitting in a darkened room by yourself with a wet tea-towel over your head, thinking very hard.

Interestingly, and importantly, Griffiths points out to the student to whom he’s replying that the life of an intellectual might seem attractive, but that it’s a long, hard road.

And lastly: Don’t do any of the things I’ve recommended unless it seems to you that you must. The world doesn’t need many intellectuals. Most people have neither the talent nor the taste for intellectual work, and most that is admirable and good about human life (love, self-sacrifice, justice, passion, martyrdom, hope) has little or nothing to do with what intellectuals do. Intellectual skill, and even intellectual greatness, is as likely to be accompanied by moral vice as moral virtue. And the world—certainly the American world—has little interest in and few rewards for intellectuals. The life of an intellectual is lonely, hard, and usually penurious; don’t undertake it if you hope for better than that. Don’t undertake it if you think the intellectual vocation the most important there is: It isn’t. Don’t undertake it if you have the least tincture in you of contempt or pity for those without intellectual talents: You shouldn’t. Don’t undertake it if you think it will make you a better person: It won’t. Undertake it if, and only if, nothing else seems possible.

A long read, but a rewarding one.

Source: First Things

Craig Mod’s subtle redesign of the hardware Kindle

I like Craig Mod’s writing. He’s the guy who’s written about his need to walk, drawing his own calendar, and getting his attention back.

This article is about hardware Kindle devices — the distinction being important given that you can read your books via the Kindle Cloud Reader or, indeed, via an app on pretty much any platform.

As he points out, the user interface remains sub-optimal:

Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.

The problem is that the text is also an interface element. But it’s a lower-level element. Activated through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.

And so this model has never felt right.

He suggests an alternative to this which involves physical buttons on the device itself:

Hardware buttons:

  • Page forward
  • Page back
  • Menu
  • (Power/Sleep)

What does this get us?

It means we can now assume that — when inside of a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turn and menu invocations take precedence.

I can see why he proposes this, but I’m not so sure about the physical buttons for page turns. The reason I’d say that is that, although I now use a Linux-based bq Cervantes e-reader, before 2015 I had almost every iteration of the hardware Kindle. There’s a reason Amazon removed hardware buttons for page turns.

I read in lots of places, but I read in bed with my wife every day, and if there was one thing she couldn’t stand, it was the clicking noise of me turning the page on my Kindle. Even if I tried to press the button quietly, it annoyed her. Touchscreen page turns are much better.

The e-reader I use has a similar touch interaction to the Kindle, so I see where Craig Mod is coming from when he says:

When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.
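
To make the contrast concrete, here is a minimal sketch of the two dispatch models in TypeScript. It is purely illustrative: the zone boundaries, long-press threshold, and all the names below are my own assumptions, not anything taken from actual Kindle software.

type Tap = { x: number; y: number; durationMs: number };

// Current model: the screen is carved into zones, so where you tap decides
// what happens; the text itself is only reachable via a long press.
function handleTapZoneModel(tap: Tap, width: number, height: number): string {
  if (tap.durationMs > 500) return "select text under finger"; // content is second-class
  if (tap.y < height * 0.1) return "open menu";
  if (tap.x < width * 0.2) return "page back";
  return "page forward"; // most of the screen
}

// Proposed model: navigation moves to hardware buttons, so any tap on the
// screen unambiguously targets the content.
type Button = "pageForward" | "pageBack" | "menu" | "power";

function handleButton(button: Button): string {
  switch (button) {
    case "pageForward": return "page forward";
    case "pageBack": return "page back";
    case "menu": return "open menu";
    case "power": return "sleep";
  }
}

function handleTapContentModel(tap: Tap): string {
  return "interact with text or image under finger"; // content is first-class
}

In the second model every screen touch has exactly one meaning, which is what Mod is getting at when he says the content becomes a first-class object.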

This, if you haven’t come across it before, is user interface design, or UI design for short. It’s important stuff, for as Steve Jobs famously said: “Everything in this world… was created by people no smarter than you” — and that’s particularly true in tech.

Source: Craig Mod

Profiting from your enemies

While I don’t feel like I’ve got any enemies, I’m sure there’s plenty of people who don’t like me, for whatever reason. I’ve never thought about framing it this way, though:

In Plutarch’s “How to Profit by One’s Enemies,” he advises that rather than lashing out at your enemies or completely ignoring them, you should study them and see if they can be useful to you in some way. He writes that because our friends are not always frank and forthcoming with us about our shortcomings, “we have to depend on our enemies to hear the truth.” Your enemy will point out your weak spots for you, and even if he says something untrue, you can then analyze what made him say it.

People close to us don’t want to offend or upset us, so they don’t point out areas where we could improve. So we should take negative comments and, rather than ‘feed the trolls’, use them as a way to get better (without ever even referencing the ‘enemy’).

Source: Austin Kleon

"Without acknowledging the ever-present gaze of death, the superficial will appear important, and the important will appear superficial. Death is the only thing we can know with any certainty. And as such, it must be the compass by which we orient all our other values and decisions. It is the correct answer to all of the questions we should ask but never do. The only way to be comfortable with death is to understand and see yourself as something bigger than yourself; to choose values that stretch beyond serving yourself, that are simple and immediate and controllable and tolerant of the chaotic world around you. This is the basic root of all happiness." (Mark Manson)

Random Street View does exactly what you think it does

Today’s a non-work day for me but, after reviewing resource-centric social media sites as part of my Moodle work yesterday, I rediscovered the joy of StumbleUpon.

That took me to lots of interesting sites. If you haven’t used the service before, it becomes more relevant to your tastes as time goes on, provided you use the thumbs up / thumbs down tool.

I came across this Random Street View site which I’ve a sneaking suspicion I’ve seen before. Not only is it a fascinating way to ‘visit’ lesser-known parts of the world, it also shows the scale of Google’s Street View programme.

The teacher in me imagines using this as the starting point for some kind of project. It could be a writing prompt, a way to find a random place to research, or even the basis of an art project.

Great stuff.

Source: Random Street View

"To truly appreciate something, you must confine yourself to it. There's a certain level of joy and meaning that you reach in life only when you've spent decades investing in a single relationship, a single craft, a single career. And you cannot achieve those decades of investment without rejecting the alternatives." (Mark Manson)