
Friday fluidity

I wasn’t sure whether to share links about the Coronavirus this week, but obviously, like everyone else, I’ve been reading about it.

Next week, my wife and I are heading to Belgium as I’m speaking at an event, and then we’re spending the weekend in Bruges. I think we’ll be OK. But even if we do contract the virus, the chances of us dying, or even being seriously ill, are vanishingly small. It’s all very well being pragmatic, but you can’t live your life in fear.

Anyway, if you’ve heard enough about potential global pandemics, feel free to skip straight to the second and third sections, where I share some really interesting links about organisations, productivity, security, and more!


How I track the coronavirus

I’ve been tracking it carefully for weeks, and have built up an online search strategy. I’d like to share a description of it here, partly in case it’s useful for readers, and also to request additions in case it’s missing anything.

Bryan Alexander

What I like about this post by Bryan is that he’s sharing both his methods and go-to resources, without simultaneously sharing his conclusions. That’s the mark of an open mind, and that’s why I support him on Patreon.


Coronavirus and World After Capital

The danger we are now finding ourselves in can be directly traced to our reliance on the market mechanism for allocating attention. A global pandemic is an example of the kind of tail risk for which prices cannot exist. This is a key theme of my book World After Capital and I have been using pandemics as an alternative example to the climate crisis (another, while we are at it, are asteroid strikes).

Albert Wenger (Continuations)

I really must sit down and read World After Capital. In this short post, the author (a venture capitalist) explains why we need to allocate attention to what he calls ‘tail risks’.


You’re Likely to Get the Coronavirus

Many countries have responded with containment attempts, despite the dubious efficacy and inherent harms of China’s historically unprecedented crackdown. Certain containment measures will be appropriate, but widely banning travel, closing down cities, and hoarding resources are not realistic solutions for an outbreak that lasts years. All of these measures come with risks of their own. Ultimately some pandemic responses will require opening borders, not closing them. At some point the expectation that any area will escape effects of COVID-19 must be abandoned: The disease must be seen as everyone’s problem.

James Hamblin (The Atlantic)

Will you get a cold at some point in your life? Yes, probably most winters in some form. Will you catch ‘flu at some point in your life? Yes, probably, at some point. Will you get the Coronavirus? Almost certainly, but it’s not going to kill you unless you’re very young, very old, or very weak.


Photo by Ivan Bandura

Work Operating Systems? No, We Need Work Ecosystems.

The principal limitation of the work OS concept is that companies do not operate independently: they are increasingly connected to other organizations. The model of work OS is too inwardly focused, when the real leverage may come from the interactions across company boundaries, or by lessening the barriers to cross-company cooperation. (In a sense, this is just the fullest expression of the ideal of cross-team and cross-department cooperation: if it’s good at the smallest scale, it is great at the largest scale.)

Stowe Boyd (GigaOM)

This post is interesting for a couple of reasons. First, I absolutely agree with the end game that Boyd describes here. Second, our co-op has just started using Monday.com and has found it… fine; it does what we need, but I can’t wait for some organisation to go beyond the ‘work OS’.


Career Moats 101

A career moat is an individual’s ability to maintain competitive advantages over your competition (say, in the job market) in order to protect your long term prospects, your employability, and your ability to generate sufficient financial returns to support the life you want to live. Just like a medieval castle, the moat serves to protect those inside the fortress and their riches from outsiders.

Cedric Chin (Commonplace)

I came across links to two different posts on the same blog this week, which made me investigate it further. The central thesis of the blog is that we should aim to build ‘career moats’, which is certainly an interesting way of thinking about things, and this link has some practical advice.


Daily life with the offline laptop

Having access to the Internet is a gift, I can access anything or anyone. But this comes with a few drawbacks. I can waste my time on anything, which is not particularly helpful. There are so many content that I only scratch things, knowing it will still be there when I need it, and jump to something else. The amount of data is impressive, one human can’t absorb that much, we have to deal with it.

Solène Rapenne

I love this idea of having a machine that remains offline and which you use for music and writing. Especially the writing. In fact, I was talking to someone earlier this week about using my old 1080p monitor in portrait mode with a Raspberry Pi to create a ‘writing machine’. I might just do it…


Photo by Lauren McConachie

Spilling over: How working openly with anxiety affects my team

At a fundamental level, I believe work is never done, that there is always another challenge to explore, other ways to have a larger impact. Leaders need to inspire and motivate us to embrace that reality as an exciting opportunity rather than an endless drudge or a source of continual worry.

Sam Knuth (Opensource.com)

This is a great article. As a leader, and as someone who has only recently admitted to myself that I am, indeed, an ‘anxious person’, I see similarities with my experiences here.


5 tricks to make the internet less distracting, so you can get stuff done

Maybe you want to be more productive at work. Maybe you want to spend more time being creative or learning new skills. Or maybe you just wish you spent more time communicating with the people you love and less time scrolling through websites that bring you brief moments of joy just frequently enough that you’re willing to tolerate the broader feeling of anxiety/jealousy/outrage.

The internet can be an amazing tool for pursuing these goals, but it’s not necessarily designed to push you toward it. You’ve got to work to create the environment for yourself. Here are some ways you can do just that.

Justin Pot (Fast Company)

It’s now over five years since I wrote Curate or Be Curated. The article, and the warning it contains, stands the test of time, I think. The ‘tricks’ in this Fast Company article, shared by Ian O’Byrne, are a helpful place to start.


How to Dox Yourself on the Internet

To help our Times colleagues think like doxxers, we developed a formal program that consists of a series of repeatable steps that can be taken to clean up an online footprint. Our goal with this program is to empower people to control the information they share, and to provide them with tools and resources to have a better awareness around the information they intentionally and unintentionally share online.
We are now publicly releasing the content of this program for anyone to access. We think it is important for freelancers, activists, other newsrooms or people who want to take control of their own security online.

The NYT Open Team

This is a great idea. ‘Doxxing’ is the digging-up and sharing of personal information (e.g. home addresses) for the purposes of harassment. Trying to ‘dox’ yourself, so that you can take protective steps first, is a smart approach.


Header image by Adli Wahid, who says: “Rest in Peace posters of Dr Li Wenliang, who warned authorities about the coronavirus outbreak, seen at Hosier Lane in Melbourne, Australia. Hosier Lane is known for its street art.”

Friday featherings

Behold! The usual link round-up of interesting things I’ve read in the last week.

Feel free to let me know if anything particularly resonated with you via the comments section below…


Part I – What is a Weird Internet Career?

Weird Internet Careers are the kinds of jobs that are impossible to explain to your parents, people who somehow make a living from the internet, generally involving a changing mix of revenue streams. Weird Internet Career is a term I made up (it had no google results in quotes before I started using it), but once you start noticing them, you’ll see them everywhere. 

Gretchen McCulloch (All Things Linguistic)

I love this phrase, which I came across via Dan Hon’s newsletter. This is the first in a whole series of posts, which I am yet to explore in its entirety. My aim in life is now to make my career progressively more (internet) weird.


Nearly half of Americans didn’t go outside to recreate in 2018. That has the outdoor industry worried.

While the Outdoor Foundation’s 2019 Outdoor Participation Report showed that while a bit more than half of Americans went outside to play at least once in 2018, nearly half did not go outside for recreation at all. Americans went on 1 billion fewer outdoor outings in 2018 than they did in 2008. The number of adolescents ages 6 to 12 who recreate outdoors has fallen four years in a row, dropping more than 3% since 2007 

The number of outings for kids has fallen 15% since 2012. The number of moderate outdoor recreation participants declined, and only 18% of Americans played outside at least once a week. 

Jason Blevins (The Colorado Sun)

One of Bruce Willis’ lesser-known films is Surrogates (2009). It’s a short, pretty average film with a really interesting central premise: most people stay at home and send their surrogates out into the world. Over a decade after the film was released, a combination of things (including virulent viruses, screen-focused leisure time, and safety fears) seem to suggest it might be a predictor of our medium-term future.


I’ll Never Go Back to Life Before GDPR

It’s also telling when you think about what lengths companies have had to go through to make the EU versions of their sites different. Complying with GDPR has not been cheap. Any online business could choose to follow GDPR by default across all regions and for all visitors. It would certainly simplify things. They don’t, though. The amount of money in data collection is too big.

Jill Duffy (OneZero)

This is a strangely-titled article, but a decent explainer on what the web looks and feels like to those outside the EU. The author is spot-on when she talks about how GDPR and the recent California Privacy Law could be applied everywhere, but they’re not. Because surveillance capitalism.


You Are Now Remotely Controlled

The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.

Shoshana Zuboff (The New York Times)

I fear that the length of Zuboff’s (excellent) book on surveillance capitalism, her use in this article of terms such as ‘epistemic inequality’, and the subtlety of her arguments may mean that she’s preaching to the choir here.


How to Raise Media-Savvy Kids in the Digital Age

The next time you snap a photo together at the park or a restaurant, try asking your child if it’s all right that you post it to social media. Use the opportunity to talk about who can see that photo and show them your privacy settings. Or if a news story about the algorithms on YouTube comes on television, ask them if they’ve ever been directed to a video they didn’t want to see.

Meghan Herbst (WIRED)

There’s some useful advice in this WIRED article, especially that given by my friend Ian O’Byrne. The difficulty I’ve found is when one of your kids becomes a teenager and companies like Google contact them directly telling them they can have full control of their accounts, should they wish…


Control-F and Building Resilient Information Networks

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment)

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f.

Roll the animated GIF!

Mike Caulfield (Hapgood)

I find it incredible, but absolutely believable, that only around 10% of internet users know how to use Ctrl-F to find something within a web page. On mobile, it’s just as easy, as there’s an option within most (all?) browsers to ‘search within page’. I like Mike’s work, as not only is it academic, it’s incredibly practical.


EdX launches for-credit credentials that stack into bachelor’s degrees

The MicroBachelors also mark a continued shift for EdX, which made its name as one of the first MOOC providers, to a wider variety of educational offerings 

In 2018, EdX announced several online master’s degrees with selective universities, including the Georgia Institute of Technology and the University of Texas at Austin.

Two years prior, it rolled out MicroMasters programs. Students can complete the series of graduate-level courses as a standalone credential or roll them into one of EdX’s master’s degrees.

That stackability was something EdX wanted to carry over into the MicroBachelors programs, Agarwal said. One key difference, however, is that the undergraduate programs will have an advising component, which the master’s programs do not. 

Natalie Schwartz (Education Dive)

This is largely a rewritten press release with a few extra links, but I found it interesting as it’s a concrete example of a couple of things. First, the ongoing shift in Higher Education towards students-as-customers. Second, the viability of microcredentials as a ‘stackable’ way to build a portfolio of skills.

Note that, as a graduate of degrees in the Humanities, I’m not saying this approach can be used for everything, but for those using Higher Education as a means to an end, this is exactly what’s required.


How much longer will we trust Google’s search results?

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

Dieter Bohn (The Verge)

I’ve used DuckDuckGo as my go-to search engine for years now. It used to be that I’d have to switch to Google for around 10% of my searches. That’s now down to zero.


Coaching – Ethics

One of the toughest situations for a product manager is when they spot a brewing ethical issue, but they’re not sure how they should handle the situation.  Clearly this is going to be sensitive, and potentially emotional. Our best answer is to discover a solution that does not have these ethical concerns, but in some cases you won’t be able to, or may not have the time.

[…]

I rarely encourage people to leave their company, however, when it comes to those companies that are clearly ignoring the ethical implications of their work, I have and will continue to encourage people to leave.

Marty Cagan (SVPG)

As someone with a sensitive radar for these things, I’ve chosen to work with ethical people and for ethical organisations. As Cagan says in this post, if you’re working for a company that ignores the ethical implications of their work, then you should leave. End of story.


Image via webcomic.name

Friday festoonings

Check out these things I read and found interesting this week. Thanks to some positive feedback, I’ve carved out time for some commentary, and changed the way this link roundup is set out.

Let me know what you think! What did you find most interesting?


Maps Are Biased Against Animals

Critics may say that it is unreasonable to expect maps to reflect the communities or achievements of nonhumans. Maps are made by humans, for humans. When beavers start Googling directions to a neighbor’s dam, then their homes can be represented! For humans who use maps solely to navigate—something that nonhumans do without maps—man-made roads are indeed the only features that are relevant. Following a map that includes other information may inadvertently lead a human onto a trail made by and for deer.

But maps are not just tools to get from points A to B. They also relay new and learned information, document evolutionary changes, and inspire intrepid exploration. We operate on the assumption that our maps accurately reflect what a visitor would find if they traveled to a particular area. Maps have immense potential to illustrate the world around us, identifying all the important features of a given region. By that definition, the current maps that most humans use fall well short of being complete. Our definition of what is “important” is incredibly narrow.

Ryan Huling (WIRED)

Cartography is an incredibly powerful tool. We’ve known for a long time that “the map is not the territory” but perhaps this is another weapon in the fight against climate change and the decline in diversity of species?


Why Actually Principled People Are Difficult (Glenn Greenwald Edition)

Then you get people like Greenwald, Assange, Manning and Snowden. They are polarizing figures. They are loved or hated. They piss people off.

They piss people off precisely because they have principles they consider non-negotiable. They will not do the easy thing when it matters. They will not compromise on anything that really matters.

That’s breaking the actual social contract of “go along to get along”, “obey authority” and “don’t make people uncomfortable.” I recently talked to a senior activist who was uncomfortable even with the idea of yelling at powerful politicians. It struck them as close to violence.

So here’s the thing, people want men and women of principle to be like ordinary people.

They aren’t. They can’t be. If they were, they wouldn’t do what they do. Much of what you may not like about a Greenwald or Assange or Manning or Snowden is why they are what they are. Not just the principle, but the bravery verging on recklessness. The willingness to say exactly what they think, and do exactly what they believe is right even if others don’t.

Ian Welsh

Activists like Greta Thunberg and Edward Snowden are the closest we get to superheroes, to people who stand for the purest possible version of an idea. This is why we need them — and why we’re so disappointed when they turn out to be human after all.


Explicit education

Students’ not comprehending the value of engaging in certain ways is more likely to be a failure in our teaching than their willingness to learn (especially if we create a culture in which success becomes exclusively about marks and credentialization). The question we have to ask is if what we provide as ‘university’ goes beyond the value of what our students can engage with outside of our formal offer. 

Dave White

This is a great post by Dave, who I had the pleasure of collaborating with briefly during my stint at Jisc. I definitely agree that any organisation walks a dangerous path when it becomes overly-fixated on the ‘how’ instead of the ‘what’ and the ‘why’.


What Are Your Rules for Life? These 11 Expressions (from Ancient History) Might Help

The power of an epigram or one of these expressions is that they say a lot with a little. They help guide us through the complexity of life with their unswerving directness. Each person must, as the retired USMC general and former Secretary of Defense Jim Mattis, has said, “Know what you will stand for and, more important, what you won’t stand for.” “State your flat-ass rules and stick to them. They shouldn’t come as a surprise to anyone.”

Ryan Holiday

Of the 11 expressions here, I have to say that other than memento mori (“remember you will die”) I particularly like semper anticus (“always forward”) which I’m going to print out in a fancy font and stick on the wall of my home office.


Dark Horse Discord

In a hypothetical world, you could get a Discord (or whatever is next) link for your new job tomorrow – you read some wiki and meta info, sort yourself into your role you’d, and then are grouped with the people who you need to collaborate with on a need be basis. All wrapped in one platform. Maybe you have an HR complaint – drop it in #HR where you can’t read the messages but they can, so it’s a blind 1 way conversation. Maybe there is a #help channel, where you ping you write your problems and the bot pings people who have expertise based on keywords. There’s a lot of things you can do with this basic design.

Mule’s Musings

What is described in this post is a bit of a stretch, but I can see it: a world where work is organised a bit like how gamers organise themselves in chat channels. Something to keep an eye on, as the interplay between what’s ‘normal’ and what’s possible with communications technology changes and evolves.
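The keyword-routing idea in the quoted post is easy to sketch. Here’s a minimal, hypothetical example in plain Python (no real Discord API; the expertise map, names, and simple substring matching are all my assumptions, not anything from the post) of how a #help bot might decide whom to ping:

```python
# Hypothetical keyword-based routing for a #help channel bot.
# Maps topic keywords to the people who have volunteered expertise in them.
EXPERTISE = {
    "payroll": {"@hr_sam"},
    "vpn": {"@it_priya"},
    "expenses": {"@finance_lee", "@hr_sam"},
}

def experts_to_ping(message: str) -> set[str]:
    """Return the set of experts whose keywords appear in the message."""
    text = message.lower()
    pinged = set()
    for keyword, people in EXPERTISE.items():
        if keyword in text:
            pinged |= people  # union: one message can ping several experts
    return pinged

print(experts_to_ping("My VPN drops when I submit expenses"))
```

A real system would need smarter matching than substrings (stemming, synonyms, perhaps embeddings), but the basic design, a declarative map from topics to people plus a matcher, is all the quoted #help channel really requires.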


The Edu-Decade That Was: Unfounded Optimism?

What made the last decade so difficult is how education institutions let corporations control the definitions so that a lot of “study and ethical practice” gets left out of the work. With the promise of ease of use, low-cost, increased student retention (or insert unreasonable-metric-claim here), etc. institutions are willing to buy into technology without regard to accessibility, scalability, equity and inclusion, data privacy or student safety, in hope of solving problem X that will then get to be checked off of an accreditation list. Or worse, with the hope of not having to invest in actual people and local infrastructure.

Geoff Cain (Brainstorm in progress)

It’s nice to see a list of some positives that came out of the last decade, and for microcredentials and badging to be on that list.


When Is a Bird a ‘Birb’? An Extremely Important Guide

First, let’s consider the canonized usages. The subreddit r/birbs defines a birb as any bird that’s “being funny, cute, or silly in some way.” Urban Dictionary has a more varied set of definitions, many of which allude to a generalized smallness. A video on the youtube channel Lucidchart offers its own expansive suggestions: All birds are birbs, a chunky bird is a borb, and a fluffed-up bird is a floof. Yet some tension remains: How can all birds be birbs if smallness or cuteness are in the equation? Clearly some birds get more recognition for an innate birbness.

Asher Elbein (Audubon magazine)

A fun article, but also an interesting one when it comes to ambiguity, affinity groups, and internet culture.


Why So Many Things Cost Exactly Zero

“Now, why would Gmail or Facebook pay us? Because what we’re giving them in return is not money but data. We’re giving them lots of data about where we go, what we eat, what we buy. We let them read the contents of our email and determine that we’re about to go on vacation or we’ve just had a baby or we’re upset with our friend or it’s a difficult time at work. All of these things are in our email that can be read by the platform, and then the platform’s going to use that to sell us stuff.”

Fiona Scott Morton (Yale School of Management), quoted by Peter Coy (Bloomberg Businessweek)

Regular readers of Thought Shrapnel know all about surveillance capitalism, but it’s good to see these explainers making their way to the more mainstream business press.


Your online activity is now effectively a social ‘credit score’

The most famous social credit system in operation is that used by China’s government. It “monitors millions of individuals’ behavior (including social media and online shopping), determines how moral or immoral it is, and raises or lowers their “citizen score” accordingly,” reported Atlantic in 2018.

“Those with a high score are rewarded, while those with a low score are punished.” Now we know the same AI systems are used for predictive policing to round up Muslim Uighurs and other minorities into concentration camps under the guise of preventing extremism.

Violet Blue (Engadget)

Some (more prudish) people will write this article off because it discusses sex workers, porn, and gay rights. But the truth is that all kinds of censorship start with marginalised groups. To my mind, we’re already on a trajectory away from Silicon Valley and towards Chinese technology. Will we be able to separate the tech from the morality?


Panicking About Your Kids’ Phones? New Research Says Don’t

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

Nathaniel Popper (The New York Times)

Kids and screen time is just the latest (extended) moral panic. Overuse of anything, smartphones, games consoles, and TV included, causes problems. What we need to do is help our children find balance, which can be difficult for the first generation of parents navigating all of this on the frontline.


Gorgeous header art via the latest Facebook alternative, planetary.social

Friday foggings

I’ve been travelling this week, so I’ve had plenty of time to read and digest a whole range of articles. In fact, because of the luxury of that extra time, I decided to write some comments about each link, as well as the usual quotation.

Let me know what you think about this approach. I may not have the bandwidth to do it every week, but if it’s useful, I’ll try and prioritise it. As ever, particularly interested in hearing from supporters!


Education and Men without Work (National Affairs) — “Unlike the Great Depression, however, today’s work crisis is not an unemployment crisis. Only a tiny fraction of workless American men nowadays are actually looking for employment. Instead we have witnessed a mass exodus of men from the workforce altogether. At this writing, nearly 7 million civilian non-institutionalized men between the ages of 25 and 54 are neither working nor looking for work — over four times as many as are formally unemployed.”

This article argues that the conventional wisdom, that men are out of work because of a lack of education, may be based on false assumptions. In fact, a major driver seems to be the number of men (more than 50% of working-age men, apparently) who live in child-free homes. What do these men end up doing with their time? Many of them are self-medicating with drugs and screens.


Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’ (The Guardian) — “More than 100,000 documents relating to work in 68 countries that will lay bare the global infrastructure of an operation used to manipulate voters on “an industrial scale” are set to be released over the next months.”

Sadly, I think the response to these documents will be one of apathy. Due to the 24-hour news cycle and the stream of ‘news’ on social networks, the voting public grow tired of scandals and news stories that last for months and years.


Funding (Sussex Royals) — “The Sovereign Grant is the annual funding mechanism of the monarchy that covers the work of the Royal Family in support of HM The Queen including expenses to maintain official residences and workspaces. In this exchange, The Queen surrenders the revenue of the Crown Estate and in return, a portion of these public funds are granted to The Sovereign/The Queen for official expenditure.”

I don’t think I need to restate my opinions on the Royal Family, privilege, and hierarchies / coercive power relationships of all shapes and sizes. However, as someone pointed out on Mastodon, this page by ‘Harry and Meghan’ is quietly subversive.


How to sell good ideas (New Statesman) — “It is true that [Malcolm] Gladwell sometimes presses his stories too militantly into the service of an overarching idea, and, at least in his books, can jam together materials too disparate to cohere (Poole referred to his “relentless montage”). The New Yorker essay, which constrains his itinerant curiosity, is where he does his finest work (the best of these are collected in 2009’s What The Dog Saw). For the most part, the work of his many imitators attests to how hard it is to do what he does. You have to be able to write lucid, propulsive prose capable of introducing complex ideas within a magnetic field of narrative. You have to leave your desk and talk to people (he never stopped being a reporter). Above all, you need to acquire an extraordinary eye for the overlooked story, the deceptively trivial incident, the minor genius. Gladwell shares the late Jonathan Miller’s belief that “it is in the negligible that the considerable is to be found”.”

A friend took me to see Gladwell when he was in Newcastle-upon-Tyne touring with ‘What The Dog Saw’. Like the author of this article, I soon realised that Gladwell is selling something quite different to ‘science’ or ‘facts’. And so long as you’re OK with that, you can enjoy (as I do) his podcasts and books.


Just enough Internet: Why public service Internet should be a model of restraint (doteveryone) — “We have not yet done a good job of defining what good digital public service really looks like, of creating digital charters that match up to those of our great institutions, and it is these statements of values and ways of working – rather than any amount of shiny new technology – that will create essential building blocks for the public services of the future.”

While I attended the main MozFest weekend event, I missed the presentation and other events that happened earlier in the week. I definitely agree with the sentiment behind the transcript of this talk by Rachel Coldicutt. I’m just not sure it’s specific enough to be useful in practice.


Places to go in 2020 (Marginal Revolution) — “Here is the mostly dull NYT list. Here is my personal list of recommendations for you, noting I have not been to all of the below, but I am in contact with many travelers and paw through a good deal of information.”

This list by Tyler Cowen is really interesting. I haven’t been to any of the places on this list, but I now really want to visit Eastern Bali and Baku in Azerbaijan.


Reasons not to scoff at ghosts, visions and near-death experiences (Aeon) — “Sure, the dangers of gullibility are evident enough in the tragedies caused by religious fanatics, medical quacks and ruthless politicians. And, granted, spiritual worldviews are not good for everybody. Faith in the ultimate benevolence of the cosmos will strike many as hopelessly irrational. Yet, a century on from James’s pragmatic philosophy and psychology of transformative experiences, it might be time to restore a balanced perspective, to acknowledge the damage that has been caused by stigma, misdiagnoses and mis- or overmedication of individuals reporting ‘weird’ experiences. One can be personally skeptical of the ultimate validity of mystical beliefs and leave properly theological questions strictly aside, yet still investigate the salutary and prophylactic potential of these phenomena.”

I’d happily read a full-length book on this subject, as it’s a fascinating area. There’s a tension here: knowing that much, or all, of these phenomena are reducible to materiality and mechanics may explain what’s going on, but it doesn’t explain it away…


Surveillance Tech Is an Open Secret at CES 2020 (OneZero) — “Lowe offered one explanation for why these companies feel so comfortable marketing surveillance tech: He says that the genie can’t be put back in the bottle, so barring federal regulation that bans certain implementations, it’s increasingly likely that some company will fill the surveillance market. In other words, if Google isn’t going to work with the cops, Amazon will. And even if Amazon decides not to, smaller companies out of the spotlight still will.”

I suppose it should come as no surprise that, in this day and age, companies like Cyberlink, previously known for their PowerDVD software, have moved into the very profitable world of surveillance capitalism. What’s going to stop its inexorable rise? I can only think of government regulation (with teeth).


‘Techlash’ Hits College Campuses (New York Times) — “Some recent graduates are taking their technical skills to smaller social impact groups instead of the biggest firms. Ms. Dogru said that some of her peers are pursuing jobs at start-ups focused on health, education and privacy. Ms. Harbour said Berkeley offers a networking event called Tech for Good, where alumni from purpose-driven groups like Code for America and Khan Academy share career opportunities.”

I’m not sure this is currently as big a ‘movement’ as suggested in the article, but I’m glad the wind is blowing in this direction. As with other ethically-dubious industries, companies involved in surveillance capitalism will have to pay people extraordinarily well to put aside their moral scruples.


Tradition is Smarter Than You Are (The Scholar’s Stage) — “To extract resources from a population the state must be able to understand that population. The state needs to make the people and things it rules legible to agents of the government. Legibility means uniformity. States dream up uniform weights and measures, impress national languages and ID numbers on their people, and divvy the country up into land plots and administrative districts, all to make the realm legible to the powers that be. The problem is that not all important things can be made legible. Much of what makes a society successful is knowledge of the tacit sort: rarely articulated, messy, and from the outside looking in, purposeless. These are the first things lost in the quest for legibility. Traditions, small cultural differences, odd and distinctive lifeways… are all swept aside by a rationalizing state that preserves (or in many cases, imposes) only what it can be understood and manipulated from the 2,000 foot view. The result… are many of the greatest catastrophes of human history.”

One of the books that’s been on my ‘to-read’ list for a while is ‘Seeing Like a State’, written by James C. Scott and referenced in this article. I’m no believer in tradition for the sake of it, but I have to say that a lot of my maternal grandmother’s superstitions, and a lot of the rituals that come with religion, are very practical in nature.


Image by Michael Schlegel (via kottke.org)

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while there was someone ‘friendly’ in the White House, but wondering what would come next.

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit means that there’s no attention or cash left.

If we stop and think about things for a second, we probably wouldn’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subjected to impersonal technology for vital and life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that this merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin
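The deep-learning approach Devlin describes typically maps each face photo to a numeric ‘embedding’ vector, and then treats two photos as the same person when their vectors are sufficiently close. Here’s a toy sketch of that comparison step (the vectors, dimensions, and threshold below are invented for illustration — real systems use embeddings with hundreds of dimensions, produced by a deep network trained on millions of images):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a match if the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy 4-dimensional "embeddings" — purely illustrative numbers.
alice_photo_1 = np.array([0.9, 0.1, 0.3, 0.2])
alice_photo_2 = np.array([0.85, 0.15, 0.35, 0.1])
bob_photo = np.array([0.1, 0.9, 0.2, 0.7])

print(same_person(alice_photo_1, alice_photo_2))  # similar vectors: match
print(same_person(alice_photo_1, bob_photo))      # dissimilar vectors: no match
```

The NIST failure-rate improvement quoted above is, in effect, these embeddings getting much better at clustering photos of the same person together — which is why more training data (scraped faces) and faster hardware (GPUs) translated so directly into more capable surveillance.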

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us have, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces that we have to stop this are collective action and governmental action. My concern is that we lack the digital savvy for the former, and there’s definitely a lack of will in respect of the latter. Troubling times.

Saturday strikings

This week’s roundup is going out a day later than usual, as yesterday was the Global Climate Strike and Thought Shrapnel was striking too!

Here’s what I’ve been paying attention to this week:

  • How does a computer ‘see’ gender? (Pew Research Center) — “Machine learning tools can bring substantial efficiency gains to analyzing large quantities of data, which is why we used this type of system to examine thousands of image search results in our own studies. But unlike traditional computer programs – which follow a highly prescribed set of steps to reach their conclusions – these systems make their decisions in ways that are largely hidden from public view, and highly dependent on the data used to train them. As such, they can be prone to systematic biases and can fail in ways that are difficult to understand and hard to predict in advance.”
  • The Communication We Share with Apes (Nautilus) — “Many primate species use gestures to communicate with others in their groups. Wild chimpanzees have been seen to use at least 66 different hand signals and movements to communicate with each other. Lifting a foot toward another chimp means “climb on me,” while stroking their mouth can mean “give me the object.” In the past, researchers have also successfully taught apes more than 100 words in sign language.”
  • Why degrowth is the only responsible way forward (openDemocracy) — “If we free our imagination from the liberal idea that well-being is best measured by the amount of stuff that we consume, we may discover that a good life could also be materially light. This is the idea of voluntary sufficiency. If we manage to decide collectively and democratically what is necessary and enough for a good life, then we could have plenty.”
  • 3 times when procrastination can be a good thing (Fast Company) — “It took Leonardo da Vinci years to finish painting the Mona Lisa. You could say the masterpiece was created by a master procrastinator. Sure, da Vinci wasn’t under a tight deadline, but his lengthy process demonstrates the idea that we need to work through a lot of bad ideas before we get down to the good ones.”
  • Why can’t we agree on what’s true any more? (The Guardian) — “What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other.
  • A good teacher voice strikes fear into grown men (TES) — “A good teacher voice can cut glass if used with care. It can silence a class of children; it can strike fear into the hearts of grown men. A quiet, carefully placed “Excuse me”, with just the slightest emphasis on the “-se”, is more effective at stopping an argument between adults or children than any amount of reason.”
  • Freeing software (John Ohno) — “The only way to set software free is to unshackle it from the needs of capital. And, capital has become so dependent upon software that an independent ecosystem of anti-capitalist software, sufficiently popular, can starve it of access to the speed and violence it needs to consume ever-doubling quantities of to survive.”
  • Young People Are Going to Save Us All From Office Life (The New York Times) — “Today’s young workers have been called lazy and entitled. Could they, instead, be among the first to understand the proper role of work in life — and end up remaking work for everyone else?”
  • Global climate strikes: Don’t say you’re sorry. We need people who can take action to TAKE ACTUAL ACTION (The Guardian) — “Brenda the civil disobedience penguin gives some handy dos and don’ts for your civil disobedience”

Friday floutings

Did you see these things this week? I did, and thought they were aces.

  1. Do you live in a ‘soft city’? Here’s why you probably want to (Fast Company) — “The benefits of taking a layered approach to building design—and urban planning overall—is that it also cuts down on the amount of travel by car that people need to do. If resources are assembled in a way that a person leaving their home can access everything they need by walking, biking, or taking transit, it frees up space for streets to also be layered to support these different modes.”
  2. YouTube should stop recommending garbage videos to users (Ars Technica) — “When a video finishes playing, YouTube should show the next video in the same channel. Or maybe it could show users a video selected from a list of high-quality videos curated by human YouTube employees. But the current approach—in which an algorithm tries to recommend the most engaging videos without worrying about whether they’re any good—has got to go.”
  3. Fairphone 3 is the ‘ethical’ smartphone you might actually buy (Engadget) — “Doing the right thing is often framed as giving up something. You’re not enjoying a vegetarian burger, you’re being denied the delights of red meat. But what if the ethical, moral, right choice was also the tastiest one? What if the smartphone made by the yurt-dwelling moralists was also good-looking, inexpensive and useful? That’s the question the Fairphone 3 poses.”
  4. Uh-oh: Silicon Valley is building a Chinese-style social credit system (Fast Company) — “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.”
  5. The Adults In The Room (Deadspin) — “The tragedy of digital media isn’t that it’s run by ruthless, profiteering guys in ill-fitting suits; it’s that the people posing as the experts know less about how to make money than their employees, to whom they won’t listen.”
  6. A brief introduction to learning agility (Opensource.com) — “One crucial element of adaptability is learning agility. It is the capacity for adapting to situations and applying knowledge from prior experience—even when you don’t know what to do. In short, it’s a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations.”
  7. Telegram Pushes Ahead With Plans for ‘Gram’ Cryptocurrency (The New York Times) — “In its sales pitch for the Gram, which was viewed by The New York Times, Telegram has said the new digital money will operate with a decentralized structure similar to Bitcoin, which could make it easier to skirt government regulations.”
  8. Don’t Teach Tools (Assorted Stuff) — “As Culatta notes, concentrating on specific products also locks teachers (and, by extension, their students) into a particular brand, to the advantage of the company, rather than helping them understand the broader concepts of using computing devices as learning and creative tools.”
  9. Stoic Reflections From The Gym (part 2) by Greg Sadler (Modern Stoicism) — “From a Stoic perspective, what we do or don’t make time for, particularly in relation to other things, reflects what Epictetus would call the price we actually place upon those things, on what we take to be goods or values, evils or disvalues, and the relative rankings of those in relation to each other.”

Calvin & Hobbes cartoon found via a recent post on tenpencemore

The best way out is always through

So said Robert Frost, but I want to begin with the ending of a magnificent post from Kate Bowles. She expresses clearly how I feel sometimes when I sit down to write something for Thought Shrapnel:

[T]his morning I blocked out time, cleared space, and sat down to write — and nothing happened. Nothing. Not a word, not even a wisp of an idea. After enough time staring at the blankness of the screen I couldn’t clearly remember having had an idea, ever.

Along the way I looked at the sky, I ate a mandarin and then a second mandarin, I made a cup of tea, I watched a family of wrens outside my window, I panicked. I let email divert me, and then remembered that was the opposite of the plan. I stayed off Twitter. Panic increased.

Then I did the one thing that absolutely makes a difference to me. I asked for help. I said “I write so many stupid words in my bullshit writing job that I can no longer write and that is the end of that.” And the person I reached out to said very calmly “Why not write about the thing you’re thinking about?”

Sometimes what you have to do as a writer is sit in place long enough, and sometimes you have to ask for help. Whatever works for you, is what works.

Kate Bowles

There are so many things wrong with the world right now, that sometimes I feel like I could stop working on all of the things I’m working on and spend time just pointing them out to people.

But to what end? You don’t change the world by just making people aware of things, not usually. For example, as tragic as the sentence ‘the Amazon is on fire’ is, it isn’t in and of itself a call-to-action. These days, people argue about the facts themselves as well as the appropriate response.

The world is an inordinately complicated place that we seek to make sense of by not thinking as much as humanly possible. To aid and abet us in this task, we divide ourselves, either consciously or unconsciously, into groups who apply similar heuristics. The new (information) is then assimilated into the old (worldview).

I have no privileged position, no objective viewpoint from which to observe and judge the world’s actions. None of us do. I’m as complicit in joining and forming in-groups and out-groups as the next person. I decide I’m going to delete my Twitter account and then end up rage-tweeting All The Things.

Thankfully, there are smart people, and not only academics, thinking about all this to figure out what we can and should do. Tim Urban, from the phenomenally successful Wait But Why, for example, has spent the last three years working on “a new language we can use to think and talk about our societies and the people inside of them”. In the first chapter in a new series, he writes about the ongoing struggle between (what he calls) the ‘Primitive Minds’ and ‘Higher Minds’ of humans:

The never-ending struggle between these two minds is the human condition. It’s the backdrop of everything that has ever happened in the human world, and everything that happens today. It’s the story of our times because it’s the story of all human times.

Tim Urban

I think this is worth remembering when we spend time on social networks. And especially when we spend so much time that it becomes our default delivery method for the news of the day. Our Primitive Minds respond strongly to stimuli around fear and fornication.

When we reflect on our social media usage and the changing information landscape, the temptation is either to cut down, or to try a different information diet. Some people become the equivalent of Information Vegans, attempting to source the ‘cleanest’ morsels of information from the most wholesome, trusted, and traceable of places.

But where are those ‘trusted places’ these days? Are we as happy with the previously gold-standard news outlets such as the BBC and The New York Times as we once were? And if not, what’s changed?

The difference, I think, is the way we’ve decided to allow money to flow through our digital lives. Commercial news outlets, including those with which the BBC competes, are funded by advertising. Those adverts we see in digital spaces aren’t just showing things that we might happen to be interested in. They’ll keep on showing you that pair of shoes you almost bought last week in every space that is funded by advertising. Which is basically everywhere.

I feel like I’m saying obvious things here that everyone knows, but perhaps it bears repeating. If everyone is consuming news via social networks, and those news stories are funded by advertising, then the nature of what counts as ‘news’ starts to evolve. What gets the most engagement? How are headlines formed now, compared with a decade ago?

It’s as if something hot-wires our brain when something non-threatening and potentially interesting is made available to us ‘for free’. We never get to the stuff that we’d like to think defines us, because we get caught in never-ending cycles of titillation. We pay with our attention, that scarce and valuable resource.

Our attention, and more specifically how we react to our social media feeds when we’re ‘engaged’, is valuable because it can be packaged up and sold to advertisers. But it’s also sold to governments. Twitter just had to update its terms and conditions specifically because of the outcry over the Chinese government’s propaganda around the Hong Kong protests.

Protesters part of the ‘umbrella revolution’ in Hong Kong have recently been focusing on cutting down what we used to call CCTV cameras, but which are much more accurately described as ‘facial recognition masts’:

We are living in a world where the answer to everything seems to be ‘increased surveillance’. Kids not learning fast enough in school? Track them more. Scared of terrorism? Add more surveillance into the lives of everyday citizens. And on and on.

In an essay earlier this year, Maciej Cegłowski riffed on all of this, reflecting on what he calls ‘ambient privacy’:

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

Maciej Cegłowski

One of the difficulties in resisting the ‘Silicon Valley narrative’ and Big Tech’s complicity with governments is the danger of coming across as a neo-luddite. Without looking very closely to understand what’s going on (and having some time to reflect) it can all look like the inevitable march of progress.

So, without necessarily an answer to all this, I guess the best thing is, like Kate, to ask for help. What can we do here? What practical steps can we take? Comments are open.

Friday flinchings

Here’s a distillation of the best of what I’ve been reading over the last three weeks:

  • The new left economics: how a network of thinkers is transforming capitalism (The Guardian) — “The new leftwing economics wants to see the redistribution of economic power, so that it is held by everyone – just as political power is held by everyone in a healthy democracy. This redistribution of power could involve employees taking ownership of part of every company; or local politicians reshaping their city’s economy to favour local, ethical businesses over large corporations; or national politicians making co-operatives a capitalist norm.”
  • Dark web detectives and cannabis sommeliers: Here are some jobs that could exist in the future (CBC) — “In a report called Signs of the Times: Expert insights about employment in 2030, the Brookfield Institute for Innovation + Entrepreneurship — a policy institute set up to help Canadians navigate the innovation economy — brings together insights into the future of work gleaned from workshops held across the country.”
  • Art Spiegelman: golden age superheroes were shaped by the rise of fascism (The Guardian) — “The young Jewish creators of the first superheroes conjured up mythic – almost god-like – secular saviours to deal with the threatening economic dislocations that surrounded them in the great depression and gave shape to their premonitions of impending global war. Comics allowed readers to escape into fantasy by projecting themselves on to invulnerable heroes.”
  • We Have Ruined Childhood (The New York Times) — “I’ve come to believe that the problems with children’s mental and emotional health are caused not by any single change in kids’ environment but by a fundamental shift in the way we view children and child-rearing, and the way this shift has transformed our schools, our neighborhoods and our relationships to one another and our communities.”
  • Turning the Nintendo Switch into Android’s best gaming hardware (Ars Technica) — “The Nintendo Switch is, basically, a game console made out of smartphone parts…. Really, the only things that make the Switch a game console are the sweet slide-on controllers and the fact that it is blessed by Nintendo, with actually good AAA games, ecosystem support, and developer outreach.
  • Actually, Gender-Neutral Pronouns Can Change a Culture (WIRED) — “Would native-speaker Swedes, seven years after getting a new pronoun plugged into their language, be more likely to assume this androgynous cartoon was a man? A woman? Either, or neither? Now that they had a word for it, a nonbinary option, would they think to use it?”
  • Don’t Blink! The Hazards of Confidence (The New York Times Magazine) — “Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.”
  • Why These Social Networks Failed So Badly (Gizmodo) — “It’s not to say that without Facebook, the whole internet would be more like a local farmer’s market or a punk venue or an art gallery or comedy club or a Narnia fanfic club, just that those places are harder to find these days.”
  • Every productivity thought I’ve ever had, as concisely as possible (Alexey Guzey) — “I combed through several years of my private notes and through everything I published on productivity before and tried to summarize all of it in this post.”

Header image via Jessica Hagy at Indexed

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings to make them better.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

Discussing ‘fake news’ and our information literacy problem in Forbes, you can almost feel Kalev Leetaru’s eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard, writing about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) to share recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) is an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.

For years, techies have talked about ‘personal data vaults’ that would let you share information in a granular way without being tracked. The BBC is currently trialling BBC Box, which could help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC

It’s an interesting concept and, if they can get the user experience right, a potentially groundbreaking one. Eventually, of course, it will live in your smartphone, which means that device really will be a ‘digital self’.
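The access model the BBC describes — data gathered and processed locally, with nothing visible to any third party (not even the BBC) unless the person explicitly authorises it — can be sketched in a few lines. This is a minimal illustration of the idea, not the actual Databox API; all class and method names here are hypothetical:

```python
# Sketch of a permission-gated personal data vault (hypothetical names).

class DataVault:
    def __init__(self):
        self._store = {}      # data stays on the local device
        self._grants = set()  # (requester, key) pairs the owner has approved

    def put(self, key, value):
        """Apps running on the box add data locally."""
        self._store[key] = value

    def grant(self, requester, key):
        """The owner explicitly authorises a third party to read one item."""
        self._grants.add((requester, key))

    def read(self, requester, key):
        """Third parties see nothing unless a matching grant exists."""
        if (requester, key) not in self._grants:
            raise PermissionError(f"{requester} has no grant for {key!r}")
        return self._store[key]


vault = DataVault()
vault.put("tv_profile", ["documentaries", "drama"])
vault.grant("bbc", "tv_profile")
profile = vault.read("bbc", "tv_profile")   # authorised: returns the profile
```

The inversion is the whole point: instead of shipping raw data to a central service and hoping its access controls hold, the default here is that nothing leaves, and each disclosure is a deliberate, per-item decision by the owner.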

You can absolutely opt out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can explain your reasons to others, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or to live their lives by the standards of those who act as though they are one.