Space as a service
This isn’t the best-written post I’ve read this year, but it does point to a shift that I’ve noticed — perhaps because I work remotely.
Increasingly we are moving to an almost post-consumer world where we are less bothered about accumulating more stuff and much more interested in being provided with services, experiences and ephemeral pleasures.

Some might think that these are things ‘Millennials’ do, but if that generation is defined as those born from 1980 onwards then some of them are almost 40 years old. It’s not a trend that’s going away.

So Uber instead of cars, Spotify instead of CDs, Netflix instead of DVDs: on-demand this, on-demand that. Why bother to own something you seldom use, that becomes out of date rapidly, or that you really cannot afford? Rent it when you need it.
When you’re used to paying monthly for software, streaming music and films instead of buying them, and renting accommodation (because you’re priced out of the housing market), then you start thinking differently about the world.
Just as it is now easy to buy almost any Software as a Service, so it will become with real estate. Space as a Service is the future of real estate: on demand, where you buy exactly the features and services you need, whenever and wherever you are.

So for businesses who employ people who can do most of what they do from anywhere, the problem becomes co-ordination rather than office space. Former Mozilla colleague John O’Duinn makes this point in his upcoming book.

The key point is that this extends beyond spaces rented on demand: regardless of tenure, it will become important to be able to rent or purchase, on demand, all the services needed to make the most of a space, and to enable its most productive use.
We really do not NEED offices anymore, we really do not NEED shops anymore. In fact we really do not NEED an awful lot of real estate. That is not to say we don’t WANT these spaces, but what we do in them will change.

So companies like WeWork are already huge, and continue to grow rapidly.
So how will all this change supply? Well, you have people who:

- Prefer services over products
- Don't need to go to an office to work
- Are used to on-demand
- And are uber-connected, with vast computing power in their pocket.

The answer, to me, has to be #SpaceAsAService: space that takes account of these four trends. Space that is specifically designed to allow humans to do what they are good at.

I think this is a hugely exciting time. I'm just hoping that we see a similar revolution around equity, both in terms of diversity within organisations and shared ownership of them.
Source: Antony Slumbers
Blockchain as a 'futuristic integrity wand'
I’ve no doubt that blockchain technology is useful for super-boring scenarios and underpinning get-rich-quick schemes, but it has very little value to the scenarios in which I work. I’m trying to build trust, not work in an environment where technology serves as a workaround.
This post by Kai Stinchcombe about the blockchain bubble is a fantastic read. The author’s summary?
Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

Fair enough, let's dig in...
It's funny seeing people who have close to zero understanding of how blockchain works explain how it's going to 'revolutionise' X, Y, or Z. Again, it's got exciting applicability... for very boring stuff.

People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.
This is the bit that really grabbed me about the post, the blockchain-as-metaphor section. People are sold on stories, not on technologies. Which is why some people are telling stories that involve magicking away all of their fears and problems with a magic blockchain wand.

[H]ere’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”
Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?”
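To ground the technology half of that contrast, here's a minimal sketch (my own illustration, not code from Stinchcombe's article) of the data structure he describes: a chain of records, each containing a hash of the previous record, some data, and the answer to a difficult maths problem (a proof-of-work nonce):

```python
import hashlib
import json

def hash_block(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, data, difficulty=4):
    """Brute-force a nonce (the 'answer to a difficult math problem')
    until the block's hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "data": data, "nonce": nonce}
        if hash_block(block).startswith("0" * difficulty):
            return block
        nonce += 1

# Each block commits to the previous one via its hash, so tampering with
# an earlier block invalidates everything after it. Note, though, that
# nothing here checks whether the *data* is true: "mangoes: organic" is
# stored with perfect integrity even if it's a lie.
chain = [mine_block("0" * 64, "genesis")]
chain.append(mine_block(hash_block(chain[-1]), "mangoes: organic"))

for block in chain:
    print(hash_block(block)[:16], block["data"])
```

As the comments note, the structure guarantees that tampering is detectable, not that what was entered was ever accurate; that distinction is exactly where the metaphor oversells the technology.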
When, like me, you think that humanity moves forward at the speed of trust and collaboration, blockchain seems like the antithesis of all that.

People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution.
It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity.
[...] Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds.
Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole.

Source: Kai Stinchcombe
Profit vs benefit
“The difference between profit and benefit is that operations producing profit can be carried out by another in my place: he would make the profit, unless he was acting on my behalf. But the fact remains that profitable activity can always be carried out by someone else. Hence the principle of competition. On the other hand, what is beneficial to me depends on gestures, acts, living moments which it would be impossible for me to delegate.”
(Frédéric Gros, A Philosophy of Walking)
What can dreams of a communist robot utopia teach us about human nature?
This article in Aeon by Victor Petrov posits that, in the post-industrial age, we no longer see human beings as primarily manual workers, but as thinkers using digital screens to get stuff done. What does that do to our self-image?
The communist parties of eastern Europe grappled with this new question, too. The utopian social order they were promising from Berlin to Vladivostok rested on the claim that proletarian societies would use technology to its full potential, in the service of all working people. Bourgeois information society would alienate workers even more from their own labour, turning them into playthings of the ruling classes; but a socialist information society would free Man from drudgery, unleash his creative powers, and enable him to ‘hunt in the morning … and criticise after dinner’, as Karl Marx put it in 1845. However, socialist society and its intellectuals foresaw many of the anxieties that are still with us today. What would a man do in a world of no labour, and where thinking was done by machines?

Bulgaria was a communist country that, after the Second World War, went from producing cigarettes to being one of the world's largest producers of computers. This had a knock-on effect on what people wrote about in the country.
The Bulgarian reader was increasingly treated to debates about what humanity would be in this new age. Some, such as the philosopher Mityu Yankov, argued that what set Man apart from the animals was his ability to change and shape nature. For thousands of years, he had done this through physical means and his own brawn. But the Industrial Revolution had started a change of Man’s own nature, which was culminating with the Information Revolution – humanity now was becoming not a worker but a ‘governor’, a master of nature, and the means of production were not machines or muscles, but the human brain.

Lyuben Dilov, a popular sci-fi author, focused on "the boundaries between man and machine, brain and computer". His books were full of societies obsessed with technology.
Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms. Zenon muses on human interactions with robots that start from a young age, giving the child power over the machine from the outset. This undermines our trust in the very machines on which we depend. Humans need a distinction from the robots, they need to know that they are always in power and couldn’t be lied to. For Dilov, the anxiety was about the limits of humanity, at least in its current stage – fearful, humans could not yet treat anything else, including their machines, as equals.

This all seems very pertinent at a time when deepfakes make us question what is real online. We're perhaps less worried about a Blade Runner-style dystopia and more concerned about digital 'reality' but, nevertheless, questions about what it means to be human persist.
Bulgarian robots were both to be feared and to be embraced as the future. Socialism promised to end meaningless labour but reproduced many of the anxieties that are still with us today in our ever-automating world. What Man can do that a machine cannot is a question we still haven't answered. But, like Kesarovski, perhaps we need not fear this new world so much, nor give up our reservations for the promise of a better, easier world.

Source: Aeon
Escaping from the crush of circumstances
“Today I escaped from the crush of circumstances, or better put, I threw them out, for the crush wasn’t from outside me but in my own assumptions.”
(Marcus Aurelius)
The benefits of reading aloud to children
This article in the New York Times by Perri Klass, M.D. focuses on studies that show a link between parents reading to their children and a reduction in problematic behaviour.
I really like the way that they focus on the positives and point out how much the child loves the interaction with their parent through the text.

This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.
Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.
I don't know enough about the causes of ADHD to be able to comment, but as a teacher and parent, I do know there's a link between the attention you give and the attention you receive.

The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.
It is a bit sad when we have to encourage parents to play with their children between birth and the age of three but, in the age of smartphone addiction, I guess we kind of have to.

“The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities — as is improving children’s language skills, another source of school problems for poor children.
Source: The New York Times
Image CC BY Jason Lander
You need more daylight to sleep better
As an historian, I’ve often been fascinated by what life must have been like before the dawn of electricity. I have a love-hate relationship with artificial light. On the one hand, I use a lightbox to stave off Seasonal Affective Disorder. On the other hand, I’ve got (my optician tells me) not only pale blue irises but very thin corneas. That makes me photophobic, subject on a regular basis to the kind of glare I can only imagine ‘normal’ people experience after staring at a lightbulb for a while.
In this article, Linda Geddes describes an experiment in which she decided to forgo artificial light for a number of weeks to see what effect it had on her health and, most importantly, her sleep.
Working with sleep researchers Derk-Jan Dijk and Nayantara Santhi at the University of Surrey, I designed a programme to go cold-turkey on artificial light after dark, and to try to maximise exposure to natural light during the day – all while juggling an office job and busy family life in urban Bristol.

By the end of 2017, my devices all had something like f.lux built in, instead of my having to install it manually. There's a general realisation that blue light before bedtime is a bad idea. What this article points out, however, is another factor: how bright the light is that you're subjected to during the day.
Light enables us to see, but it affects many other body systems as well. Light in the morning advances our internal clock, making us more lark-like, while light at night delays the clock, making us more owlish. Light also suppresses a hormone called melatonin, which signals to the rest of the body that it’s night-time – including the parts that regulate sleep. “Apart from vision, light has a powerful non-visual effect on our body and mind, something to remember when we stay indoors all day and have lights on late into the night,” says Santhi, who previously demonstrated that the evening light in our homes suppresses melatonin and delays the timing of our sleep.

The important correlation here is between the strength of light Geddes experienced during her waking hours, and the quality of her sleep.
But when I correlated my sleep with the amount of light I was exposed to during the daytime, an interesting pattern emerged. On the brightest days, I went to bed earlier. And for every 100 lux increase in my average daylight exposure, I experienced an increase in sleep efficiency of almost 1% and got an extra 10 minutes of sleep.

This isn't just something that Geddes has experienced; studies have also found this kind of correlation.
In March 2007, Dijk and his colleagues replaced the light bulbs on two floors of an office block in northern England, housing an electronic parts distribution company. Workers on one floor of the building were exposed to blue-enriched lighting for four weeks; those on the other floor were exposed to white light. Then the bulbs were switched, meaning both groups were ultimately exposed to both types of light. They found that exposure to the blue-enriched white light during daytime hours improved the workers’ subjective alertness, performance, and evening fatigue. They also reported better quality and longer sleep.

So the key takeaway message?
It’s ridiculously simple. But spending more time outdoors during the daytime and dimming the lights in the evening really could be a recipe for better sleep and health. For millennia, humans have lived in synchrony with the Sun. Perhaps it's time we got reacquainted.

Source: BBC Future
On the cultural value of memes
I’ve always been a big fan of memes. In fact, I discuss them in my thesis, ebook, and TEDx talk. This long-ish article from Jay Owens digs into their relationship with fake news and what he calls ‘post-authenticity’. What I’m really interested in, though, comes towards the end. He gets into the power of memes and why they’re the perfect form of online cultural expression.
So through humour, exaggeration, and irony — a truth emerges about how people are actually feeling. A truth that they may not have felt able to express straightforwardly. And there’s just as much, and potentially more, community present in these groups as in many of the more traditional civic-oriented groups Zuckerberg’s strategy may have had in mind.

The thing that can be missing from text-based interactions is empathy. The right kind of meme, however, speaks using images and words, but also to something else that a group has in common.
Meme formats — from this week’s American Chopper dialectic model to now classics like the “Exploding Brain,” “Distracted Boyfriend,” and “Tag Yourself” templates — are by their very nature iterative and quotable. That is how the meme functions, through reference to the original context and memes that have come before, coupled with creative remixing to speak to a particular audience, topic, or moment. Each new instance of a meme is thereby automatically familiar and recognisable. The format carries a meta-message to the audience: “This is familiar, not weird.” And the audience is prepared to know how to react: you like, you respond with laughter-referencing emoji, you tag your friends in the comments.

Let's take this example, which Owens cites in the article. I sent it to my wife via Telegram, an instant messaging app that we use as a permanent backchannel.

Her response, inevitably, was: 😂
It’s funny because it’s true. But it also quickly communicates solidarity and empathy.
The format acts as a kind of Trojan horse, then, for sharing difficult feelings — because the format primes the audience to respond hospitably. There isn’t that moment of feeling stuck over how to respond to a friend’s emotional disclosure, because she hasn’t made the big statement directly, but instead through irony and cultural quotation — distancing herself from the topic through memes, typically by using stock photography (as Leigh Alexander notes) rather than anything as gauche as a picture of oneself. This enables you, the viewer, to sidestep the full intensity of it in your response, should you choose, but still, crucially, to respond. And also to DM your friend and ask, “Hey, are you alright?” and cut to the realtalk should you so choose.

So, effectively, you can be communicating different things to different people. If, instead of sending the 90s kids image above directly to my wife via Telegram, I'd shared it with my Twitter followers, it might have elicited a different response. Some people would have liked and retweeted it, for sure, but someone who knows me well might ask if I'm OK. After all, there's a subtext in there of feeling like you're "stuck".
Owens goes on to talk about how that memetic culture means that we’re living in a ‘post authentic’ world. But did such authenticity ever really exist?
So perhaps to say that this post-authentic moment is one of evolving, increasingly nuanced collective communication norms, able to operate with multi-layered recursive meanings and ironies in disposable pop culture content… is kind of cold comfort. Nonetheless, author Robin Sloan described the genius of the “American Chopper” meme as being that “THIS IS THE ONLY MEME FORMAT THAT ACKNOWLEDGES THE EXISTENCE OF COMPETING INFORMATION, AND AS SUCH IT IS THE ONLY FORMAT SUITED TO THE COMPLEXITY OF OUR WORLD!”

Amen to that.
Source: Jay Owens
The résumé is a poor proxy for a human being
I’ve never been a fan of the résumé, or ‘Curriculum Vitae’ (CV) as we tend to call them in the UK. How on earth can a couple of sheets of paper ever hope to sum up an individual in all of their complexity? It inevitably leads to the kind of things that end up on LinkedIn profiles: your academic qualifications, job history, and a list of hobbies that don’t make you sound like a loser.
In this (long-ish) article for Quartz, Oliver Staley looks at what Laszlo Bock is up to with his new startup, with a detour through the history of the résumé.
“Resumes are terrible,” says Laszlo Bock, the former head of human resources at Google, where his team received 50,000 resumes a week. “It doesn’t capture the whole person. At best, they tell you what someone has done in the past and not what they’re capable of doing in the future.”
I really dislike résumés, and I’m delighted that I’ve managed to get my last couple of jobs without having to rely on them. I guess that’s a huge benefit of working openly; the web is your résumé.
Resumes force job seekers to contort their work and life history into corporately acceptable versions of their actual selves, to better conform to the employer’s expectation of the ideal candidate. Unusual or idiosyncratic careers complicate resumes. Gaps between jobs need to be accounted for. Skills and abilities learned outside of formal work or education aren’t easily explained. Employers may say they’re looking for job seekers to distinguish themselves, but the resume requires them to shed their distinguishing characteristics.
Unfortunately, Henry Ford’s ‘faster horses’ rule also applies to résumés. And (cue eye roll) people need to find a way to work in buzzwords like ‘blockchain’.
The resume of the near future will be a document with far more information—and information that is far more useful—than the ones we use now. Farther out, it may not be a resume at all, but rather a digital dossier, perhaps secured on the blockchain (paywall), and uploaded to a global job-pairing engine that is sorting you, and billions of other job seekers, against millions of openings to find the perfect match.
I’m more interested in different approaches, rather than doubling-down on the existing approach, so it’s good to see large multinational companies like Unilever doing away with résumés. They prefer game-like assessments.
Two years ago, the North American division of Unilever—the consumer products giant—stopped asking for resumes for the approximately 150-200 positions it fills from college campuses annually. Instead, it’s relying on a mix of game-like assessments, automated video interviews, and in-person problem solving exercises to winnow down the field of 30,000 applicants.
It all sounds great but, at the end of the day, it's extra unpaid work and more jumping through hoops.
The games are designed so there are no wrong answers — a weakness in one characteristic, like impulsivity, can reveal strength in another, like efficiency — and pymetrics gives candidates who don’t meet the standards for one position the option to apply for others at the company, or even at other companies. The algorithm matches candidates to the opportunities where they’re most likely to succeed. The goal, Polli says, is to eliminate the “rinse and repeat” process of submitting near identical applications for dozens of jobs, and instead use data science to target the best match of job and employee.
Back to Laszlo Bock, who claims that we should have an algorithmic system that matches people to available positions. I’m guessing he hasn’t read Brave New World.
For the system to work, it would need an understanding of a company’s corporate culture, and how people actually function within its walls—not just what the company says about its culture. And employees and applicants would need to be comfortable handing over their personal data.

For-profit entities wouldn’t be trusted as stewards of such sensitive information. Nor would governments, Bock says, noting that in communist Romania, where he was born, “the government literally had dossiers on every single citizen.”
Ultimately, Bock says, the system should be maintained by a not-for-profit, non-governmental organization. “What I’m imagining, no human being should ever look inside this thing. You shouldn’t need to,” he says.
Source: Quartz at Work
OEP (Open Educational Pragmatism?)
This is an interesting post to read, not least because I sat next to the author last week at the conference he describes, and we had a discussion about related issues. Michael Shaw, who's a great guy and whom I've known for a few years, is in charge of Tes Resources.
Shaw notes he was wary about attending the conference, not only because it's a fairly tight-knit community:

I wondered if I would feel like an interloper at the first conference I’ve ever attended on Open Educational Resources (OERs).
It wasn’t a dress code issue (though in hindsight I should have worn trainers) but that most of the attendees at #OER18 were from universities, while only a few of us there worked for education businesses.
I work for a commercial company, one that makes money from advertising and recruitment services, plus — even more controversially in this context — by letting teachers sell resources to each other, and taking a percentage on transactions.

However, he found the hosts and participants "incredibly welcoming" and the debates "more open than [he'd] expected on how commercial organisations could play a part" in the ecosystem.
Shaw is keen to point out that the Tes Resources site that he manages is “a potential space for OER-sharing”. He goes on to talk about how he’s an ‘OER pragmatist’ rather than an ‘OER purist’. As a former journalist, Shaw is a great writer. However, I want to tease apart some things I think he conflates.
In his March 2018 post announcing the next phase of development for Tes Resources, Shaw wrote that the goal was to create “a community of authors providing high-quality resources for educators”. He conflates that in this post with educators sharing Open Educational Resources. I don’t think the two things are the same, and that’s not because I’m an ‘OER purist’.
The concern that I, and others in the Open Education community, have around commercial players in the ecosystem is the tendency to embrace, extend, and extinguish:

- Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
- Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the 'simple' standard.
- Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.

So, think of Twitter before they closed their API: a thousand Twitter clients bloomed, and innovations such as pull-to-refresh were invented. Then Twitter decided to 'own the experience' of users and changed their API so that those third-party clients withered.
Tes Resources, Shaw admitted to me, doesn’t even have an API. It’s a bit like Medium, the place he chose to publish this post. If he’d written the post on something like WordPress, he’d be notified of my reply via web standard technologies such as Webmention. Medium doesn’t adhere to those standards. Nor does Tes Resources. It’s a walled garden.
My call, then, would be for Tes Resources to develop an API so that services, such as the MoodleNet project I’m leading, can query and access it. Until then, it’s not a repository. It’s just another silo.
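To show how lightweight these web standards are, here's a minimal sketch of sending a Webmention (the W3C recommendation for cross-site replies) in Python. It's purely illustrative: the URLs are placeholders, and it assumes the third-party requests and beautifulsoup4 libraries are installed.

```python
from typing import Optional

import requests
from bs4 import BeautifulSoup

def discover_webmention_endpoint(target: str) -> Optional[str]:
    """Find the target page's Webmention endpoint, first via the HTTP
    Link header, then via a <link>/<a> element with rel="webmention"."""
    resp = requests.get(target, timeout=10)
    for link in requests.utils.parse_header_links(resp.headers.get("Link", "")):
        if "webmention" in link.get("rel", "").split():
            return requests.compat.urljoin(target, link["url"])
    soup = BeautifulSoup(resp.text, "html.parser")
    el = soup.find(["link", "a"], rel="webmention")
    if el and el.has_attr("href"):
        return requests.compat.urljoin(target, el["href"])
    return None

def send_webmention(source: str, target: str) -> int:
    """Notify `target` that `source` links to it (form-encoded POST,
    as the Webmention spec requires)."""
    endpoint = discover_webmention_endpoint(target)
    if endpoint is None:
        raise ValueError("No Webmention endpoint advertised by target")
    resp = requests.post(endpoint, data={"source": source, "target": target})
    return resp.status_code  # 201 or 202 on success

# Placeholder URLs: my reply post notifying the original article
send_webmention(
    source="https://example.com/my-reply",
    target="https://example.org/original-post",
)
```

Any service that advertises an endpoint like this can be notified of replies from anywhere on the web; that, to my mind, is the difference between a repository and a silo.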
Source: Michael Shaw
Image: CC BY Jess
Everything is potentially a meme
Despite — or perhaps because of — my feelings towards the British monarchy, this absolutely made my day:


Isn’t the internet great?
Source: Haha
How to be super-productive
Not a huge sample size, but this article has studied what makes ‘super-productive’ people tick:
We collected data on over 7,000 people who were rated by their manager on their level of productivity and 48 specific behaviors. Each person was also rated by an average of 11 other people, including peers, subordinates, and others. We identified the specific behaviors that were correlated with high levels of productivity — the top 10% in our sample — and then performed a factor analysis.

Here's the list of seven things that came out of the study:
- Set stretch goals
- Show consistency
- Have knowledge and technical expertise
- Drive for results
- Anticipate and solve problems
- Take initiative
- Be collaborative
Source: Harvard Business Review (via Ian O’Byrne)
Thinking outdoors
“We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful.” (Friedrich Nietzsche)
Clickbait and switch?
Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.
With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

Audiences mature over time and become wary of particular approaches. Remember “…and you’ll not believe what came next” approaches?

However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.
As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking — something the web has become spectacularly good at, due to advertising.
For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.

The problem is that designing for addiction isn't a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren't exactly crushing it with their share price either.
Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?

The 'regulatory tide' referenced here is exemplified by GDPR, which is already causing a sea change in attitudes towards user data. Compliance with teeth, it seems, gets results.
Designing for sustainability isn’t just good from a regulatory point of view; it’s good for long-term business, argues Manafy:
Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.

Instead of thinking of your audience as 'users' waiting for their next hit, she suggests, think of them as your audience. That's a better approach, and it will help you make much better design decisions.
Source: Nieman Lab
Soviet-era industrial design
While the prospects of me learning the Russian language anytime soon are effectively zero, I do have a soft spot for the country. My favourite novels are 19th-century Russian fiction, the historical period I’m most fond of is that of the Russian revolutions of 1917*, and I really like some of the designs that came out of Bolshevik and Stalinist Russia. (That doesn’t mean I condone the atrocities, of course.)
The Soviet era, from 1950 onwards, isn’t really a time period I’ve studied in much depth. I taught it as a History teacher as part of a module on the Cold War, but that was very much focused on the American and British side of things. So I’ve missed out on some of the wonderful design that came out of that period. Here are a couple of my favourites featured in this article. I may have to buy the book it mentions!


Source: Atlas Obscura
* I’m currently reading October: The Story of the Russian Revolution by China Miéville, which I’d recommend.
Conversational implicature
In references for jobs, former employers are required to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:
When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).

From my point of view, this is similar to the difference between productive and unproductive ambiguity.
The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?

For example, and as the article notes, "shall we go upstairs?" can mean a sexual invitation, which may or may not later imply consent. It's a tricky area.
I’ve noted that the more technically-minded a person is, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.
To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?

To use conversational implicature is human. It can be annoying. It can turn political. But it's an extremely useful tool, and it certainly helps us all rub along together.
Source: Aeon
Ryan Holiday's 13 daily life-changing habits
Articles like this are usually clickbait with two or three useful bits of advice that you’ve already read elsewhere, coupled with some other random things to pad it out. That’s not the case with Ryan Holiday’s post, which lists:
- Prepare for the hours ahead
- Go for a walk
- Do the deep work
- Do a kindness
- Read. Read. Read.
- Find true quiet
- Make time for strenuous exercise
- Think about death
- Seize the alive time
- Say thanks — to the good and bad
- Put the day up for review
- Find a way to connect to something big
- Get eight hours of sleep
Source: Thought Catalog