Valuing and signalling your skills

When I rocked up to the MoodleMoot in Miami back in November last year, I ran a workshop that involved human spectrograms, post-it notes, and participatory activities. Although I work in tech and my current role is effectively a product manager for Moodle, I still see myself primarily as an educator.

This, however, was a surprise for some people who didn’t know me very well before I joined Moodle. As one person put it, “I didn’t know you had that in your toolbox”. The same was true at Mozilla; some people there just saw me as a quasi-academic working on web literacy stuff.

Given this, I was particularly interested in a post from Steve Blank which outlined why he enjoys working with startup-like organisations rather than large, established companies:

It never crossed my mind that I gravitated to startups because I thought more of my abilities than the value a large company would put on them. At least not consciously. But that’s the conclusion of a provocative research paper, Asymmetric Information and Entrepreneurship, that explains a new theory of why some people choose to be entrepreneurs. The authors’ conclusion — Entrepreneurs think they are better than their resumes show and realize they can make more money by going it alone. And in most cases, they are right.
If you stop and think for a moment, it's entirely obvious that you know your skills, interests, and knowledge better than anyone who hires you for a specific role. Ordinarily, they're interested in the version of you that fits the job description, rather than you as a holistic human being.

The paper that Blank cites covers research which followed 12,686 people over 30+ years. It comes up with seven main findings, but the most interesting thing for me (given my work on badges) is the following:

If the authors are right, the way we signal ability (resumes listing education and work history) is not only a poor predictor of success, but has implications for existing companies, startups, education, and public policy that require further thought and research.
It's perhaps a little simplistic as a binary, but Blank cites a 1970s paper that uses 'lemons' and 'cherries' as metaphors to compare workers:
Lemons Versus Cherries. The most provocative conclusion in the paper is that asymmetric information about ability leads existing companies to employ only “lemons,” relatively unproductive workers. The talented and more productive choose entrepreneurship. (Asymmetric Information is when one party has more or better information than the other.) In this case the entrepreneurs know something potential employers don’t – that nowhere on their resume does it show resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products.

This implication, that entrepreneurs are, in fact, “cherries” contrasts with a large body of literature in social science, which says that the entrepreneurs are the “lemons”— those who cannot find, cannot hold, or cannot stand “real jobs.”

My main takeaway from this isn’t necessarily that entrepreneurship is always the best option, but that we’re really bad at signalling abilities and finding the right people to work with. I’m convinced that using digital credentials can improve that, but only if we use them in transformational ways, rather than replicate the status quo.

Source: Steve Blank

Intimate data analytics in education

The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

[…]

A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

As Williamson points out, the collection of ‘intimate data’ is especially concerning in the wake of the Cambridge Analytica revelations.

Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.
There's a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.
The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.
Thankfully, Williamson cites the work of academics who are proposing a different way forward. Something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.
One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.
The trouble, of course, is that it's not enough for academics to write papers about things. Or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of 'precision education' have their way, I wonder who will actually create something meaningful to oppose their technocratic worldview.

Source: Code Acts in Education

All killer, no filler

This short post cites a talk entitled 10 Timeframes given by Paul Ford back in 2012:

Ford asks a deceivingly simple question: when you spend a portion of your life (that is, your time) working on a project, do you take into account how your work will consume, spend, or use portions of other lives? How does the ‘thing’ you are working on right now play out in the future when there are “People using your systems, playing with your toys, [and] fiddling with your abstractions”?
In the talk, Ford mentions that in a 200-seat auditorium, his speaking for an extra minute wastes over three hours of human time, all told. Not to mention those who watch the recording, of course.

When we’re designing things for other people, or indeed working with our colleagues, we need to think not only about our own productivity but how that will impact others. I find it sad when people don’t do the extra work to make it easier for the person they have the power to impact. That could be as simple as sending an email that, you know, includes the link to the thing being referenced. Or it could be an entire operating system, a building, or a new project management procedure.

I often think about this when editing video: does this one-minute section respect the time of future viewers? A minute multiplied by the number of times a video might be viewed suddenly represents a sizeable chunk of collective human resources. In this respect, ‘filler’ is irresponsible: if you know something is not adding value or meaning to future ‘consumers,’ then you are, in a sense, robbing life from them. It seems extreme to say that, yes, but hopefully contemplating the proposition has not wasted your time.
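Ford's arithmetic is easy to sanity-check. Here's a back-of-the-envelope sketch of how filler scales with audience size; the 50,000-view figure is purely an illustrative assumption of mine, not anything from the talk or from real analytics:

```python
# Back-of-the-envelope: how much collective time does one minute of filler cost?
# The view count below is made up for illustration.

def collective_hours(filler_minutes: float, audience: int) -> float:
    """Total human time consumed, in hours."""
    return filler_minutes * audience / 60

# Ford's auditorium example: one extra minute in front of 200 people
print(collective_hours(1, 200))      # ~3.3 hours

# A hypothetical video with a minute of filler, watched 50,000 times
print(collective_hours(1, 50_000))   # ~833 hours, roughly 35 days of human attention
```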
My son's at an age where he's started to watch a lot of YouTube videos. Due to the financial incentives of advertising, YouTubers fill the first minute (at least) by telling you what you're going to find out, or with meaningless drivel. Unfortunately, my son's too young to have worked that out for himself yet. And at eleven years old, you can't just be told.

In my own life and practice, I go out of my way to make life easier for other people. Ultimately, of course, it makes life easier for me. By modelling behaviours that other people can copy, you’re more likely to be the recipient of time-saving practices and courteous behaviour. I’ve still a lot to learn, but it’s nice to be nice.

Source: James Shelley (via Adam Procter)

Do what you can

“Do what you can, with what you have, where you are.”

(Theodore Roosevelt)

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She's got some interesting insights into systems thinking and artificial intelligence.
We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.
That's such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that aren't nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don't really work that well. This is obviously problematic when you want AI to do some work for you, hence the biases (racism, sexism) that have plagued the field.
One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it's more multi-dimensional—are going to be lacking.
Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.
You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.
I like the way that Bateson talks about the difference between computer science and systems theory. It's a bit like the argument I gave about why kids need to learn to code back in 2013: it's more about algorithmic thinking than it is about syntax.
The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.
The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Issue #300: Tricentennial

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The four things you need to become an intellectual

I came across this, I think, via one of the aggregation sites I skim. It’s a letter in the form of an article by Paul J. Griffiths, who is a Professor of Catholic Theology at Duke Divinity School. In it, he replies to a student who has asked how to become an intellectual.

Griffiths breaks it down into four requirements, and then at the end gives a warning.

The first requirement is that you find something to think about. This may be easy to arrive at, or almost impossibly difficult. It’s something like falling in love. There’s an infinite number of topics you might think about, just as there’s an almost infinite number of people you might fall in love with. But in neither case is the choice made by consulting all possibilities and choosing among them. You can only love what you see, and what you see is given, in large part, by location and chance.
There's a tension here, isn't there? Given the almost infinite multiplicity of things it's possible to spend life thinking about and concentrating upon, how does one choose between them? Griffiths mentions the role of location and chance, but I'd also throw in tendencies. If you notice yourself liking a particular style of art, captivated by a certain style of writing, or enthralled by a way of approaching the world, this may be a clue that you should investigate it further.
The second requirement is time: You need a life in which you can spend a minimum of three uninterrupted hours every day, excepting sabbaths and occasional vacations, on your intellectual work. Those hours need to be free from distractions: no telephone calls, no email, no texts, no visits. Just you. Just thinking and whatever serves as a direct aid to and support of thinking (reading, writing, experiment, etc.). Nothing else. You need this because intellectual work is, typically, cumulative and has momentum. It doesn’t leap from one eureka moment to the next, even though there may be such moments in your life if you’re fortunate. No, it builds slowly from one day to the next, one month to the next. Whatever it is you’re thinking about will demand of you that you think about it a lot and for a long time, and you won’t be able to do that if you’re distracted from moment to moment, or if you allow long gaps between one session of work and the next. Undistracted time is the space in which intellectual work is done: It’s the space for that work in the same way that the factory floor is the space for the assembly line.
This chimes with a quotation from Mark Manson I referenced yesterday, in which he talks about the joy you feel and meaning you experience when you've spent decades dedicated to one thing in particular. You have to carve out time for that, whether through your occupation, or through putting aside leisure time to pursue it.
The third requirement is training. Once you know what you want to think about, you need to learn whatever skills are necessary for good thinking about it, and whatever body of knowledge is requisite for such thinking. These days we tend to think of this as requiring university studies.

[…]

The most essential skill is surprisingly hard to come by. That skill is attention. Intellectuals always think about something, and that means they need to know how to attend to what they’re thinking about. Attention can be thought of as a long, slow, surprised gaze at whatever it is.

[…]

The long, slow, surprised gaze requires cultivation. We’re quickly and easily habituated, with the result that once we’ve seen something a few times it comes to seem unsurprising, and if it’s neither threatening nor useful it rapidly becomes invisible. There are many reasons for this (the necessities of survival; the fact of the Fall), but whatever a full account of those might be (“full account” being itself a matter for thinking about), their result is that we can’t easily attend.

This section was difficult to quote as it weaves in specific details from the original student’s letter, but the gist is that people assume that universities are good places for intellectual pursuits. Griffiths responds that this may or may not be the case, and, in fact, is less likely to be true as the 21st century progresses.

Instead, we need to cultivate attention, which he describes as being almost like a muscle. Griffiths suggests “intentionally engaging in repetitive activity” such as “practicing a musical instrument, attending Mass daily, meditating on the rhythms of your breath, taking the same walk every day (Kant in Königsberg)” to “foster attentiveness”.

[The] fourth requirement is interlocutors. You can’t develop the needed skills or appropriate the needed body of knowledge without them. You can’t do it by yourself. Solitude and loneliness, yes, very well; but that solitude must grow out of and continually be nourished by conversation with others who’ve thought and are thinking about what you’re thinking about. Those are your interlocutors. They may be dead, in which case they’ll be available to you in their postmortem traces: written texts, recordings, reports by others, and so on. Or they may be living, in which case you may benefit from face-to-face interactions, whether public or private. But in either case, you need them. You can neither decide what to think about nor learn to think about it well without getting the right training, and the best training is to be had by apprenticeship: Observe the work—or the traces of the work—of those who’ve done what you’d like to do; try to discriminate good instances of such work from less good; and then be formed by imitation.
I talked in my thesis about the impossibility of being 'literate' unless you've got a community in which to engage in literate practices. The same is true of intellectual activity: you can't be an intellectual in a vacuum.

As a society, we worship at the altar of the lone genius but, in fact, that idea is fundamentally flawed. Progress and breakthroughs come through discussion and collaboration, not sitting in a darkened room by yourself with a wet tea-towel over your head, thinking very hard.

Interestingly, and importantly, Griffiths points out to the student to whom he’s replying that the life of an intellectual might seem attractive, but that it’s a long, hard road.

And lastly: Don’t do any of the things I’ve recommended unless it seems to you that you must. The world doesn’t need many intellectuals. Most people have neither the talent nor the taste for intellectual work, and most that is admirable and good about human life (love, self-sacrifice, justice, passion, martyrdom, hope) has little or nothing to do with what intellectuals do. Intellectual skill, and even intellectual greatness, is as likely to be accompanied by moral vice as moral virtue. And the world—certainly the American world—has little interest in and few rewards for intellectuals. The life of an intellectual is lonely, hard, and usually penurious; don’t undertake it if you hope for better than that. Don’t undertake it if you think the intellectual vocation the most important there is: It isn’t. Don’t undertake it if you have the least tincture in you of contempt or pity for those without intellectual talents: You shouldn’t. Don’t undertake it if you think it will make you a better person: It won’t. Undertake it if, and only if, nothing else seems possible.
A long read, but a rewarding one.

Source: First Things

Craig Mod's subtle redesign of the hardware Kindle

I like Craig Mod’s writing. He’s the guy who’s written about his need to walk, drawing his own calendar, and getting his attention back.

This article is about hardware Kindle devices — the distinction being important given that you can read your books via the Kindle Cloud Reader or, indeed, via an app on pretty much any platform.

As he points out, the user interface remains sub-optimal:

Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.

The problem is that the text is also an interface element. But it’s a lower-level element. Activated through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.

And so this model has never felt right.

He suggests an alternative to this which involves physical buttons on the device itself:

Hardware buttons:

  • Page forward
  • Page back
  • Menu
  • (Power/Sleep)

What does this get us?

It means we can now assume that — when inside of a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turn and menu invocations take precedence.

I can see why he proposes this, but I'm not so sure about the physical buttons for page turns. The reason I say that is that, although I now use a Linux-based bq Cervantes e-reader, before 2015 I had almost every iteration of the hardware Kindle. There's a reason Amazon removed hardware buttons for page turns.

I read in lots of places, but I read in bed with my wife every day and if there’s one thing she couldn’t stand, it was the clicking noise of me turning the page on my Kindle. Even if I tried to press it quietly, it annoyed her. Touchscreen page turns are much better.

The e-reader I use has a similar touch interaction to the Kindle, so I see where Craig Mod is coming from when he says:

When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.
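To make the contrast concrete, here's a rough sketch of the two interaction models as simple dispatch functions. The zone names and handler labels are mine, for illustration only; they don't correspond to anything in the actual Kindle software:

```python
# Illustrative contrast between the current Kindle model (tap zones doing double
# duty) and Mod's proposal (navigation on hardware buttons, touch reserved for
# content). Nothing here reflects real Kindle internals.

def current_model(tap_zone: str, long_press: bool) -> str:
    """Where you tap, and for how long, determines what happens."""
    if long_press:
        return "interact with content"   # selection, dictionary look-up, etc.
    if tap_zone == "left edge":
        return "page back"
    if tap_zone == "top":
        return "open menu"
    if tap_zone == "top-right":
        return "bookmark"
    return "page forward"                # most of the screen

def proposed_model(event: str) -> str:
    """Navigation moves to physical buttons, so any screen touch means 'content'."""
    buttons = {
        "button:forward": "page forward",
        "button:back": "page back",
        "button:menu": "open menu",
        "button:power": "sleep/wake",
    }
    return buttons.get(event, "interact with content")
```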

This, if you haven't come across it before, is user interface design, or UI design for short. It's important stuff, for as Steve Jobs famously said: "Everything in this world... was created by people no smarter than you" — and that's particularly true in tech.

Source: Craig Mod

Profiting from your enemies

While I don’t feel like I’ve got any enemies, I’m sure there’s plenty of people who don’t like me, for whatever reason. I’ve never thought about framing it this way, though:

In Plutarch’s “How to Profit by One’s Enemies,” he advises that rather than lashing out at your enemies or completely ignoring them, you should study them and see if they can be useful to you in some way. He writes that because our friends are not always frank and forthcoming with us about our shortcomings, “we have to depend on our enemies to hear the truth.” Your enemy will point out your weak spots for you, and even if he says something untrue, you can then analyze what made him say it.

People close to us don't want to offend or upset us, so they don't point out areas where we could improve. So we should take negative comments and, rather than 'feed the trolls', use them as a way to get better (without ever referencing the 'enemy').

Source: Austin Kleon

The root of all happiness

“Without acknowledging the ever-present gaze of death, the superficial will appear important, and the important will appear superficial. Death is the only thing we can know with any certainty. And as such, it must be the compass by which we orient all our other values and decisions. It is the correct answer to all of the questions we should ask but never do. The only way to be comfortable with death is to understand and see yourself as something bigger than yourself; to choose values that stretch beyond serving yourself, that are simple and immediate and controllable and tolerant of the chaotic world around you. This is the basic root of all happiness.”

(Mark Manson)

Random Street View does exactly what you think it does

Today’s a non-work day for me but, after reviewing resource-centric social media sites as part of my Moodle work yesterday, I rediscovered the joy of StumbleUpon.

That took me to lots of interesting sites. If you haven’t used the service before, the sites it suggests become more relevant to your tastes over time as you use the thumbs up / thumbs down tool.

I came across this Random Street View site which I’ve a sneaking suspicion I’ve seen before. Not only is it a fascinating way to ‘visit’ lesser-known parts of the world, it also shows the scale of Google’s Street View programme.

The teacher in me imagines using this as the starting point for some kind of project. It could be a writing prompt, you could use it to pick a random place to research, or it could even be an art project.

Great stuff.

Source: Random Street View

Long-term investments

“To truly appreciate something, you must confine yourself to it. There’s a certain level of joy and meaning that you reach in life only when you’ve spent decades investing in a single relationship, a single craft, a single career. And you cannot achieve those decades of investment without rejecting the alternatives.”

(Mark Manson)

Deciding what to do next

This post by Daniel Gross, partner in a well-known startup accelerator, is written for an audience of people in tech looking to build their next company. However, I think there are more widely applicable takeaways from it.

Gross mentions the following:

  1. If you want to make something grand, don’t start with grand ambitions
  2. Focus on the repeat offenders
  3. Tell your friends what you’re doing
  4. Make sure you enjoy thinking about it
  5. Get in the habit of simplifying
  6. Validate your market
  7. Launch uncomfortably quickly
To explain and unpack, point two is getting at those things that you think about every so often, those things you're curious about. Points six and seven are, of course, focused on putting products in a marketplace, but I think there's a way to think about this from a different perspective.

Take someone who’s looking for the next thing to do. Perhaps they’re dissatisfied with their current line of work, and so want to pursue opportunities in a different sector. It’s useful for them to look at what’s ‘normal’ (for example, teachers and lawyers work long hours). Once you’ve done your due diligence, it’s worth just getting started. Go and do something to set yourself on the road.

If there’s anything you remember from the post, let it be these two words: perpetual motion. Just Do It. Make little steps every day. One day that’ll add up to the next Google, Apple or Facebook.
...or, indeed, a role that you much prefer to the one you're performing now!

Source: Daniel Gross

Designing for privacy

Someone described the act of watching Mark Zuckerberg, CEO of Facebook, testifying before Congress as “low level self-harm”. In this post, Joe Edelman explains why:

Zuckerberg and the politicians—they imagine privacy as if it were a software feature. They imagine a system has “good privacy” if it’s consensual and configurable; that is, if people explicitly agree to something, and understand what they agree to, that’s somehow “good for privacy”. Even usually-sophisticated-analysts like Zeynep Tufekci are missing all the nuance here.

Giving the example of a cocktail party where you're talking to a friend about something confidential and someone else you don't know comes along, Edelman introduces this definition of privacy:
Privacy, n. Maintaining a sense of what to show in each environment; Locating social spaces for aspects of yourself which aren’t ready for public display, where you can grow those parts of yourself until they can be more public.
I really like this definition, especially the part around "locating social spaces for aspects of yourself which aren't ready for public display". I think educators in particular should note this.

Referencing his HSC1 Curriculum, which is the basis for workshops he runs for staff from major tech companies, Edelman includes a graphic on the structural features of privacy. I’ll type this out here for the sake of legibility, with a rough sketch of one way to organise it after the list:

  • Relational depth (close friends / acquaintances / strangers / anonymous / mixed)
  • Presentation (crafted / basic / disheveled)
  • Connectivity (transient / pairwise / whole-group)
  • Stakes (high / low)
  • Status levels (celebrities / rank / flat)
  • Reliance (interdependent / independent)
  • Time together (none / brief / slow)
  • Audience size (big / small / unclear)
  • Audience loyalty (loyal / transient / unclear)
  • Participation (invited / uninvited)
  • Pretext (shared goal / shared values / shared topic / many goals (exchange) / emergent)
  • Social Gestures (like / friend / follow / thank / review / comment / join / commit / request / buy)
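As a way of organising Edelman's dimensions, here's a minimal sketch that treats each feature as a field you could score a given social space against. The dataclass, the subset of fields, and the example values are my own shorthand, not anything from Edelman's curriculum:

```python
# A rough way to describe a social space (a forum, a group chat, a public
# timeline) along some of Edelman's structural features of privacy.
# The class and the example are illustrative only.

from dataclasses import dataclass

@dataclass
class SocialSpace:
    relational_depth: str   # close friends / acquaintances / strangers / anonymous / mixed
    presentation: str       # crafted / basic / disheveled
    connectivity: str       # transient / pairwise / whole-group
    stakes: str             # high / low
    audience_size: str      # big / small / unclear
    participation: str      # invited / uninvited

# e.g. a small, invite-only community of practice
example_space = SocialSpace(
    relational_depth="acquaintances",
    presentation="basic",
    connectivity="whole-group",
    stakes="low",
    audience_size="small",
    participation="invited",
)
```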
The post is, of course, both an expert response to the zeitgeist, and a not-too-subtle hint that people should take his course. I'm sure Edelman goes into more depth about each of these structural features in his workshops.

Nevertheless, and even without attending his sessions (which I’m sure are great) there’s value in thinking through each of these elements for the work I’m doing around the MoodleNet project. I’ve probably done some thinking around 70% of these, but it’s great to have a list that helps me organise my thinking a little more.

Source: Joe Edelman

Multiple income streams

Right now, I’m splitting my time between being employed (four days per week with Moodle), my consultancy and the co-op which I co-founded (one day per week combined). In other words, I have more than one income stream, as this article suggests:

Having multiple income streams can come in handy if one income stream dries up. After two years in business, I've learned that you'll always have peaks and valleys. Sometimes everyone is paying you, and sometimes your lead pipeline can look barren. I started a marketing and PR agency and spent that first year working my startup muscles: planning, strategizing, defining markets. If I hit a slow month, I kept working those same exercises. While it helped grow my business, I sometimes needed an intellectual rest day.

People who have only ever been employed (which was me until three years ago!) wonder about the insecurity of consulting. But the truth is that every occupation these days is precarious — it's just hidden if you're employed.

This is a short article, but it's useful as both a call-to-action and to reinforce existing practices:

Developing a secondary income stream is easier than you may think. Think about how you like to spend your off hours and research potential markets. Maybe you're really good at explaining something that is a difficult concept for other people--create a course on an on-demand training site like Udemy or Skillshare.

In general, we think more people are paying attention to us than they actually are. Your first endeavour doesn't have to set the world on fire, be a smash hit, or a bestseller. The important thing is to get out there and provide something that people want.

Through volunteering, putting myself out there, and developing my network, I haven't had to apply for a job since 2010. Also, with my consultancy, it's all inbound stuff. Some call it luck but, as Thomas Edison is quoted as saying:

Opportunity is missed by most people because it is dressed in overalls and looks like work.
I'd add that knowledge work doesn't look like work. But that's a whole other post.

Source: Inc.

In praise of ordinary lives

This richly-illustrated post uses as a touchstone the revolution in art that took place in the 17th century with Johannes Vermeer’s The Little Street. The painting (which can be seen above) moves away from epic and religious symbolism, and towards the everyday.

Unfortunately, and particularly with celebrity lifestyles on display everywhere, we seem to be moving back to pre-17th century approaches:

Today – in modern versions of epic, aristocratic, or divine art – adverts and movies continually explain to us the appeal of things like sports cars, tropical island holidays, fame, first-class air travel and expansive limestone kitchens. The attractions are often perfectly real. But the cumulative effect is to instill in us the idea that a good life is built around elements that almost no one can afford. The conclusion we too easily draw is that our lives are close to worthless.
A good life isn't one where you get everything you want; that would, in fact, be a form of torture. Just ask King Midas. Instead, it's made up of lots of little things, as this post outlines:
There is immense skill and true nobility involved in bringing up a child to be reasonably independent and balanced; maintaining a good-enough relationship with a partner over many years despite areas of extreme difficulty; keeping a home in reasonable order; getting an early night; doing a not very exciting or well-paid job responsibly and cheerfully; listening properly to another person and, in general, not succumbing to madness or rage at the paradox and compromises involved in being alive.
As ever, a treasure trove of wisdom and I encourage you to explore further the work of the School of Life.

Source: The Book of Life

Issue #299: Jersey shore

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe