Everything is potentially a meme

    Despite — or perhaps because of — my feelings towards the British monarchy, this absolutely made my day:

    [Images: town crier meme (library); town crier meme (Virgin Media)]

    Isn’t the internet great?

    Source: Haha

    How to be super-productive

    Not a huge sample size, but this article reports a study of what makes ‘super-productive’ people tick:

    We collected data on over 7,000 people who were rated by their manager on their level of productivity and 48 specific behaviors. Each person was also rated by an average of 11 other people, including peers, subordinates, and others. We identified the specific behaviors that were correlated with high levels of productivity — the top 10% in our sample — and then performed a factor analysis.
    Here's the list of seven things that came out of the study:
    1. Set stretch goals
    2. Show consistency
    3. Have knowledge and technical expertise
    4. Drive for results
    5. Anticipate and solve problems
    6. Take initiative
    7. Be collaborative
    In my experience, you could actually just focus on helping people with three things:
    • Show up
    • Be proactive
    • Collaborate
    That's certainly been my experience of high-performers over my career so far!

    Source: Harvard Business Review (via Ian O’Byrne)

    Thinking outdoors

    “We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful.” (Friedrich Nietzsche)

    Issue #301: Endless horse

    The latest issue of the newsletter hit inboxes earlier today!

    💥 Read

    🔗 Subscribe

    Clickbait and switch?

    Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.

    With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

    However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.

    Audiences mature over time and become wary of particular approaches. Remember “…and you’ll not believe what came next” approaches?

    As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking — something at which the web, thanks to advertising, has become spectacularly good.

    For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.
    The problem is that designing for addiction isn't a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren't exactly crushing it with their share price either.
    Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?
    The 'regulatory tide' referenced here is exemplified by GDPR, which is already causing a sea change in attitudes towards user data. Compliance with teeth, it seems, gets results.

    Designing for sustainability isn’t just good from a regulatory point of view, it’s good for long-term business, argues Manafy:

    Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.
    Instead of thinking of your audience as 'users' waiting for their next hit, she suggests, think of them as your audience. That's a much better approach and will help you make much better design decisions.

    Source: Nieman Lab

    Read for freedom

    "Once you learn to read, you will be forever free."


    (Frederick Douglass)

    Soviet-era industrial design

    While the prospects of me learning the Russian language anytime soon are effectively zero, I do have a soft spot for the country. My favourite novels are 19th century Russian fiction, the historical time period I’m most fond of is the Russian revolutions of 1917*, and I really like some of the designs that came out of Bolshevik and Stalinist Russia. (That doesn’t mean I condone the atrocities, of course.)

    The Soviet era, from 1950 onwards, isn’t really a time period I’ve studied in much depth. I taught it as a History teacher as part of a module on the Cold War, but that was very much focused on the American and British side of things. So I’ve missed out on some of the wonderful design that came out of that time period. Here’s a couple of my favourites featured in this article. I may have to buy the book it mentions!

    [Images: Soviet radio; Soviet textiles]

    Source: Atlas Obscura

    Conversational implicature

    In references for jobs, former employers are required to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:

    When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).
    From my point of view, this is similar to the difference between productive and unproductive ambiguity.
    The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?
    For example, and as the article notes, "shall we go upstairs?" can mean a sexual invitation, which may or may not later imply consent. It's a tricky area.

    I’ve noted that the more technically-minded a person, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.

     To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?
    To use conversational implicature is human. It can be annoying. It can turn political. But it's an extremely useful tool, and certainly lubricates us all rubbing along together.

    Source: Aeon

    Ryan Holiday's 13 daily life-changing habits

    Articles like this are usually clickbait with two or three useful bits of advice that you’ve already read elsewhere, coupled with some other random things to pad it out. That’s not the case with Ryan Holiday’s post, which lists:

    1. Prepare for the hours ahead
    2. Go for a walk
    3. Do the deep work
    4. Do a kindness
    5. Read. Read. Read.
    6. Find true quiet
    7. Make time for strenuous exercise
    8. Think about death
    9. Seize the alive time
    10. Say thanks — to the good and bad
    11. Put the day up for review
    12. Find a way to connect to something big
    13. Get eight hours of sleep
    I'm doing pretty well on all of these at the moment, except perhaps number eleven. I used to 'call myself into the office' each month. Perhaps I should start doing that again?


    Source: Thought Catalog

    Valuing and signalling your skills

    When I rocked up to the MoodleMoot in Miami back in November last year, I ran a workshop that involved human spectrograms, post-it notes, and participatory activities. Although I work in tech and my current role is effectively a product manager for Moodle, I still see myself primarily as an educator.

    This, however, was a surprise for some people who didn’t know me very well before I joined Moodle. As one person put it, “I didn’t know you had that in your toolbox”. The same was true at Mozilla; some people there just saw me as a quasi-academic working on web literacy stuff.

    Given this, I was particularly interested in a post from Steve Blank which outlined why he enjoys working with startup-like organisations rather than large, established companies:

    It never crossed my mind that I gravitated to startups because I thought more of my abilities than the value a large company would put on them. At least not consciously. But that’s the conclusion of a provocative research paper, Asymmetric Information and Entrepreneurship, that explains a new theory of why some people choose to be entrepreneurs. The authors’ conclusion — Entrepreneurs think they are better than their resumes show and realize they can make more money by going it alone. And in most cases, they are right.
    If you stop and think for a moment, it's entirely obvious that you know your skills, interests, and knowledge better than anyone who hires you for a specific role. Ordinarily, they're interested in the version of you that fits the job description, rather than you as a holistic human being.

    The paper that Blank cites covers research which followed 12,686 people over 30+ years. It comes up with seven main findings, but the most interesting thing for me (given my work on badges) is the following:

    If the authors are right, the way we signal ability (resumes listing education and work history) is not only a poor predictor of success, but has implications for existing companies, startups, education, and public policy that require further thought and research.
    It's perhaps a little simplistic as a binary, but Blank cites a 1970s paper that uses 'lemons' and 'cherries' as metaphors to compare workers:
    Lemons Versus Cherries. The most provocative conclusion in the paper is that asymmetric information about ability leads existing companies to employ only “lemons,” relatively unproductive workers. The talented and more productive choose entrepreneurship. (Asymmetric Information is when one party has more or better information than the other.) In this case the entrepreneurs know something potential employers don’t – that nowhere on their resume does it show resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products.

    This implication, that entrepreneurs are, in fact, “cherries” contrasts with a large body of literature in social science, which says that the entrepreneurs are the “lemons”— those who cannot find, cannot hold, or cannot stand “real jobs.”

    My main takeaway from this isn’t necessarily that entrepreneurship is always the best option, but that we’re really bad at signalling abilities and finding the right people to work with. I’m convinced that using digital credentials can improve that, but only if we use them in transformational ways, rather than replicate the status quo.

    Source: Steve Blank

    Intimate data analytics in education

    The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

    In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

    […]

    A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

    As Williamson points out, the collection of ‘intimate data’ is particularly concerning, particularly in the wake of the Cambridge Analytica revelations.

    Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.
    There's a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.
    The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.
    Thankfully, Williamson cites the work of academics who are proposing a different way forward. Something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.
    One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell  argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.
    The trouble, of course, is that it's not enough for academics to write papers about things. Or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of 'precision education' have their way, I wonder who will actually create something meaningful that opposes their technocratic worldview?

    Source: Code Acts in Education

    All killer, no filler

    This short post cites a talk entitled 10 Timeframes given by Paul Ford back in 2012:

    Ford asks a deceivingly simple question: when you spend a portion of your life (that is, your time) working on a project, do you take into account how your work will consume, spend, or use portions of other lives? How does the ‘thing’ you are working on right now play out in the future when there are “People using your systems, playing with your toys, [and] fiddling with your abstractions”?
    In the talk, Ford mentions that in a 200-seat auditorium, his speaking for an extra minute wastes over three hours of human time, all told. Not to mention those who watch the recording, of course.

    When we’re designing things for other people, or indeed working with our colleagues, we need to think not only about our own productivity but how that will impact others. I find it sad when people don’t do the extra work to make it easier for the person they have the power to impact. That could be as simple as sending an email that, you know, includes the link to the thing being referenced. Or it could be an entire operating system, a building, or a new project management procedure.

    I often think about this when editing video: does this one-minute section respect the time of future viewers? A minute multiplied by the number of times a video might be viewed suddenly represents a sizeable chunk of collective human resources. In this respect, ‘filler’ is irresponsible: if you know something is not adding value or meaning to future ‘consumers,’ then you are, in a sense, robbing life from them. It seems extreme to say that, yes, but hopefully contemplating the proposition has not wasted your time.
    My son's at an age where he's started to watch a lot of YouTube videos. Due to the financial incentives of advertising, YouTubers fill the first minute (at least) with telling you what you're going to find out, or with meaningless drivel. Unfortunately, my son's too young to have worked that out for himself yet. And at eleven years old, you can't just be told.

    In my own life and practice, I go out of my way to make life easier for other people. Ultimately, of course, it makes life easier for me. By modelling behaviours that other people can copy, you’re more likely to be the recipient of time-saving practices and courteous behaviour. I’ve still a lot to learn, but it’s nice to be nice.

    Source: James Shelley (via Adam Procter)

    Do what you can

    “Do what you can, with what you have, where you are.”

    (Theodore Roosevelt)

    Systems thinking and AI

    Edge is an interesting website. Its aim is:

    To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
    One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She's got some interesting insights into systems thinking and artificial intelligence.
    We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.
    That's such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not really nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don't really work that well. This is obviously problematic when dealing with AI that you want to do some work for you, hence the bias (racism, sexism) which has plagued the field.
    One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it's more multi-dimensional—are going to be lacking.
    Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.
    You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.
    I like the way that Bateson talks about the difference between computer science and systems theory. It's a bit like the argument I gave about why kids need to learn to code back in 2013: it's more about algorithmic thinking than it is about syntax.
    The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.
    The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

    Source: Edge

    Issue #300: Tricentennial

    The latest issue of the newsletter hit inboxes earlier today!

    💥 Read

    🔗 Subscribe

    The four things you need to become an intellectual

    I came across this, I think, via one of the aggregation sites I skim. It’s a letter in the form of an article by Paul J. Griffiths, who is a Professor of Catholic Theology at Duke Divinity School. In it, he replies to a student who has asked how to become an intellectual.

    Griffiths breaks it down into four requirements, and then at the end gives a warning.

    The first requirement is that you find something to think about. This may be easy to arrive at, or almost impossibly difficult. It’s something like falling in love. There’s an infinite number of topics you might think about, just as there’s an almost infinite number of people you might fall in love with. But in neither case is the choice made by consulting all possibilities and choosing among them. You can only love what you see, and what you see is given, in large part, by location and chance.
    There's a tension here, isn't there? Given the almost infinite multiplicity of things it's possible to spend life thinking about and concentrating upon, how does one choose between them? Griffiths mentions the role of location and chance, but I'd also throw in tendencies. If you notice yourself liking a particular style of art, captivated by a certain style of writing, or enthralled by a way of approaching the world, this may be a clue that you should investigate it further.
    The second requirement is time: You need a life in which you can spend a minimum of three uninterrupted hours every day, excepting sabbaths and occasional vacations, on your intellectual work. Those hours need to be free from distractions: no telephone calls, no email, no texts, no visits. Just you. Just thinking and whatever serves as a direct aid to and support of thinking (reading, writing, experiment, etc.). Nothing else. You need this because intellectual work is, typically, cumulative and has momentum. It doesn’t leap from one eureka moment to the next, even though there may be such moments in your life if you’re fortunate. No, it builds slowly from one day to the next, one month to the next. Whatever it is you’re thinking about will demand of you that you think about it a lot and for a long time, and you won’t be able to do that if you’re distracted from moment to moment, or if you allow long gaps between one session of work and the next. Undistracted time is the space in which intellectual work is done: It’s the space for that work in the same way that the factory floor is the space for the assembly line.
    This chimes with a quotation from Mark Manson I referenced yesterday, in which he talks about the joy you feel and meaning you experience when you've spent decades dedicated to one thing in particular. You have to carve out time for that, whether through your occupation, or through putting aside leisure time to pursue it.
    The third requirement is training. Once you know what you want to think about, you need to learn whatever skills are necessary for good thinking about it, and whatever body of knowledge is requisite for such thinking. These days we tend to think of this as requiring university studies.

    […]

    The most essential skill is surprisingly hard to come by. That skill is attention. Intellectuals always think about something, and that means they need to know how to attend to what they’re thinking about. Attention can be thought of as a long, slow, surprised gaze at whatever it is.

    […]

    The long, slow, surprised gaze requires cultivation. We’re quickly and easily habituated, with the result that once we’ve seen something a few times it comes to seem unsurprising, and if it’s neither threatening nor useful it rapidly becomes invisible. There are many reasons for this (the necessities of survival; the fact of the Fall), but whatever a full account of those might be (“full account” being itself a matter for thinking about), their result is that we can’t easily attend.

    This section was difficult to quote as it weaves in specific details from the original student’s letter, but the gist is that people assume that universities are good places for intellectual pursuits. Griffiths responds that this may or may not be the case, and, in fact, is less likely to be true as the 21st century progresses.

    Instead, we need to cultivate attention, which he describes as being almost like a muscle. Griffiths suggests “intentionally engaging in repetitive activity” such as “practicing a musical instrument, attending Mass daily, meditating on the rhythms of your breath, taking the same walk every day (Kant in Königsberg)” to “foster attentiveness”.

    [The] fourth requirement is interlocutors. You can’t develop the needed skills or appropriate the needed body of knowledge without them. You can’t do it by yourself. Solitude and loneliness, yes, very well; but that solitude must grow out of and continually be nourished by conversation with others who’ve thought and are thinking about what you’re thinking about. Those are your interlocutors. They may be dead, in which case they’ll be available to you in their postmortem traces: written texts, recordings, reports by others, and so on. Or they may be living, in which case you may benefit from face-to-face interactions, whether public or private. But in either case, you need them. You can neither decide what to think about nor learn to think about it well without getting the right training, and the best training is to be had by apprenticeship: Observe the work—or the traces of the work—of those who’ve done what you’d like to do; try to discriminate good instances of such work from less good; and then be formed by imitation.
    I talked in my thesis about the impossibility of being 'literate' unless you've got a community in which to engage in literate practices. The same is true of intellectual activity: you can't be an intellectual in a vacuum.

    As a society, we worship at the altar of the lone genius but, in fact, that idea is fundamentally flawed. Progress and breakthroughs come through discussion and collaboration, not sitting in a darkened room by yourself with a wet tea-towel over your head, thinking very hard.

    Interestingly, and importantly, Griffiths points out to the student to whom he’s replying that the life of an intellectual might seem attractive, but that it’s a long, hard road.

    And lastly: Don’t do any of the things I’ve recommended unless it seems to you that you must. The world doesn’t need many intellectuals. Most people have neither the talent nor the taste for intellectual work, and most that is admirable and good about human life (love, self-sacrifice, justice, passion, martyrdom, hope) has little or nothing to do with what intellectuals do. Intellectual skill, and even intellectual greatness, is as likely to be accompanied by moral vice as moral virtue. And the world—certainly the American world—has little interest in and few rewards for intellectuals. The life of an intellectual is lonely, hard, and usually penurious; don’t undertake it if you hope for better than that. Don’t undertake it if you think the intellectual vocation the most important there is: It isn’t. Don’t undertake it if you have the least tincture in you of contempt or pity for those without intellectual talents: You shouldn’t. Don’t undertake it if you think it will make you a better person: It won’t. Undertake it if, and only if, nothing else seems possible.
    A long read, but a rewarding one.

    Source: First Things

    Craig Mod's subtle redesign of the hardware Kindle

    I like Craig Mod’s writing. He’s the guy who’s written about his need to walk, drawing his own calendar, and getting his attention back.

    This article is about hardware Kindle devices — the distinction being important given that you can read your books via the Kindle Cloud Reader or, indeed, via an app on pretty much any platform.

    As he points out, the user interface remains sub-optimal:

    Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.

    The problem is that the text is also an interface element. But it’s a lower-level element. Activated through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.

    And so this model has never felt right.

    He suggests an alternative to this which involves physical buttons on the device itself:

    Hardware buttons:

    • Page forward
    • Page back
    • Menu
    • (Power/Sleep)

    What does this get us?

    It means we can now assume that — when inside of a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turn and menu invocations take precedence.

    I can see why he proposes this, but I'm not so sure about the physical buttons for page turns. The reason I say that is that, although I now use a Linux-based bq Cervantes e-reader, before 2015 I had almost every iteration of the hardware Kindle. There's a reason Amazon removed hardware buttons for page turns.

    I read in lots of places, but I read in bed with my wife every day and if there’s one thing she couldn’t stand, it was the clicking noise of me turning the page on my Kindle. Even if I tried to press it quietly, it annoyed her. Touchscreen page turns are much better.

    The e-reader I use has a similar touch interaction to the Kindle, so I see where Craig Mod is coming from when he says:

    When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.

    This, if you haven't come across it before, is user interface design, or UI design for short. It's important stuff, for as Steve Jobs famously said: "Everything in this world... was created by people no smarter than you" — and that's particularly true in tech.
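    To make the contrast concrete, here's a minimal sketch in TypeScript of the two interaction models. The names (TapEvent, ReaderAction, HardwareButton) and the tap-zone thresholds are invented for illustration; this isn't Amazon's actual code, just one way of expressing the 'function stacking' Mod describes versus his button-based alternative.

```typescript
// Two sketches of a Kindle-style interaction model (illustrative only).

type ReaderAction = "pageForward" | "pageBack" | "openMenu" | "bookmark" | "selectContent";

interface TapEvent {
  x: number;         // 0..1, proportion of screen width from the left
  y: number;         // 0..1, proportion of screen height from the top
  longPress: boolean;
}

// Current model: the page surface is "function stacked". One plane carries
// page turns, the menu, bookmarking, and (via long press) the text itself.
function currentModel(tap: TapEvent): ReaderAction {
  if (tap.longPress) return "selectContent";           // text is a hidden, second-class layer
  if (tap.y < 0.10 && tap.x > 0.80) return "bookmark"; // the "secret" top-right area
  if (tap.y < 0.15) return "openMenu";                 // the top-ish area
  if (tap.x < 0.15) return "pageBack";                 // the left edge
  return "pageForward";                                // most of the screen turns the page
}

// Proposed model: navigation moves to hardware buttons, so any tap on the
// screen unambiguously targets the content itself.
type HardwareButton = "pageForward" | "pageBack" | "menu";

function proposedModel(input: { button?: HardwareButton; tap?: TapEvent }): ReaderAction {
  if (input.button) {
    return input.button === "menu" ? "openMenu" : input.button;
  }
  return "selectContent"; // any screen tap means: interact with the text or image under it
}
```

    In the second sketch the ambiguity disappears: buttons navigate, taps select, and nothing is hidden behind a long press, which is exactly the 'content as first-class object' point Mod is making.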

    Source: Craig Mod

    Profiting from your enemies

    While I don’t feel like I’ve got any enemies, I’m sure there’s plenty of people who don’t like me, for whatever reason. I’ve never thought about framing it this way, though:

    In Plutarch’s “How to Profit by One’s Enemies,” he advises that rather than lashing out at your enemies or completely ignoring them, you should study them and see if they can be useful to you in some way. He writes that because our friends are not always frank and forthcoming with us about our shortcomings, “we have to depend on our enemies to hear the truth.” Your enemy will point out your weak spots for you, and even if he says something untrue, you can then analyze what made him say it.

    People close to us don't want to offend or upset us, so they don't point out areas where we could improve. So we should take negative comments and, rather than 'feed the trolls', use them as a way to get better (without ever referencing the 'enemy').

    Source: Austin Kleon

    The root of all happiness

    “Without acknowledging the ever-present gaze of death, the superficial will appear important, and the important will appear superficial. Death is the only thing we can know with any certainty. And as such, it must be the compass by which we orient all our other values and decisions. It is the correct answer to all of the questions we should ask but never do. The only way to be comfortable with death is to understand and see yourself as something bigger than yourself; to choose values that stretch beyond serving yourself, that are simple and immediate and controllable and tolerant of the chaotic world around you. This is the basic root of all happiness.”

    (Mark Manson)
