
Friday facilitations

This week, I present:

  1. We Have No Reason to Believe 5G Is Safe (Scientific American) — “The latest cellular technology, 5G, will employ millimeter waves for the first time in addition to microwaves that have been in use for older cellular technologies, 2G through 4G. Given limited reach, 5G will require cell antennas every 100 to 200 meters, exposing many people to millimeter wave radiation… [which are] absorbed within a few millimeters of human skin and in the surface layers of the cornea. Short-term exposure can have adverse physiological effects in the peripheral nervous system, the immune system and the cardiovascular system.”
  2. Situated degree pathways (The Ed Techie) — “[T]he Trukese navigator “begins with an objective rather than a plan. He sets off toward the objective and responds to conditions as they arise in an ad hoc fashion. He utilizes information provided by the wind, the waves, the tide and current, the fauna, the stars, the clouds, the sound of the water on the side of the boat, and he steers accordingly.” This is in contrast to the European navigator who plots a course “and he carries out his voyage by relating his every move to that plan. His effort throughout his voyage is directed to remaining ‘on course’.”
  3. on rms / necessary but not sufficient (p1k3) — “To the extent that free software was about wanting the freedom to hack and freely exchange the fruits of your hacking, this hasn’t gone so badly. It could be better, but I remember the 1990s pretty well and I can tell you that much of the stuff trivially at my disposal now would have blown my tiny mind back then. Sometimes I kind of snap to awareness in the middle of installing some package or including some library in a software project and this rush of gratitude comes over me.”
  4. Screen time is good for you—maybe (MIT Technology Review) — “Przybylski admitted there are some drawbacks to his team’s study: demographic effects, like socioeconomics, are tied to psychological well-being, and he said his team is working to differentiate those effects—along with the self-selection bias introduced when kids and their caregivers report their own screen use. He also said he was working to figure out whether a certain type of screen use was more beneficial than others.”
  5. This Map Lets You Plug in Your Address to See How It’s Changed Over the Past 750 Million Years (Smithsonian Magazine) — “Users can input a specific address or more generalized region, such as a state or country, and then choose a date ranging from zero to 750 million years ago. Currently, the map offers 26 timeline options, traveling back from the present to the Cryogenian Period at intervals of 15 to 150 million years.”
  6. Understanding extinction — humanity has destroyed half the life on Earth (CBC) — “One of the most significant ways we’ve reduced the biomass on the planet is by altering the kind of life our planet supports. One huge decrease and shift was due to the deforestation that’s occurred with our increasing reliance on agriculture. Forests represent more living material than fields of wheat or soybeans.”
  7. Honks vs. Quacks: A Long Chat With the Developers of ‘Untitled Goose Game’ (Vice) — “[L]ike all creative work, this game was made through a series of political decisions. Even if this doesn’t explicitly manifest in the text of the game, there are a bunch of ambient traces of our politics evident throughout it: this is why there are no cops in the game, and why there’s no crown on the postbox.”
  8. What is the Zeroth World, and how can we use it? (Bryan Alexander) — “[T]he idea of a zeroth world is also a critique. The first world idea is inherently self-congratulatory. In response, zeroth sets the first in some shade, causing us to see its flaws and limitations. Like postmodern to modern, or Internet2 to the rest of the internet, it’s a way of helping us move past the status quo.”
  9. It’s not the claim, it’s the frame (Hapgood) — “[A] news-reading strategy where one has to check every fact of a source because the source itself cannot be trusted is neither efficient nor effective. Disinformation is not usually distributed as an entire page of lies…. Even where people fabricate issues, they usually place the lies in a bed of truth.”

Image of hugelkultur bed via Sid

We don’t receive wisdom; we must discover it for ourselves after a journey that no one can take us on or spare us

So said Marcel Proust, that famous connoisseur of les petites madeleines. While I don’t share his effete view of the world, I do like French cakes and definitely agree with his sentiments on wisdom.

Earlier this week, Eylan Ezekiel shared this Nesta Landscape of innovation approaches with our Slack channel. It’s what I would call ‘slidebait’ — carefully crafted to fit onto slide decks in keynotes around the world. It’s a smart move because it gets people talking about your organisation.

Nesta’s Landscape of innovation approaches

In my opinion, how these things are made is more interesting than the end result. There are inevitably value judgements when creating anything like this, and, because Nesta have set it out as overlapping ‘spaces’, the most obvious takeaway from the above diagram is that those innovation approaches sitting within three overlapping spaces are the ‘most valuable’ or ‘most impactful’. Is that true?

A previous post on this topic from the Nesta blog explains:

Although this map is neither exhaustive nor definitive – and at some points it may seem perhaps a little arbitrary, personal choice and preference – we have tried to provide an overview of both commonly used and emerging innovation approaches.

Bas Leurs (formerly of Nesta)

When you’re working for a well-respected organisation, you have to be really careful, because people can take what you produce as some sort of Gospel Truth. No matter how many caveats you add, people confuse the map with the territory.

I have some experience with creating a ‘map’ for a given area, as I was Mozilla’s Web Literacy Lead from 2013 to 2015. During that time, I worked with the community to take the Web Literacy Standard Map from v0.1 to v1.5.

Digital literacies of various types are something I’ve been paying attention to for around 15 years now. And, let me tell you, I’ve seen some pretty bad ‘maps’ and ‘frameworks’.

For example, here’s a slide deck for a presentation I did for a European Commission Summer School last year, in which I attempted to take the audience on a journey to decide whether a particular example I showed them was any good:

If you have a look at Slide 14 onwards, you’ll see that the point I was trying to make is that you have no way of knowing whether a shiny, good-looking map is any good. The organisation that produced it didn’t ‘show their work’, so you have zero insight into how it was created and the decisions taken along the way. Did their intern knock it up on a short deadline? We’ll never know.

The problem with many think tanks and ‘innovation’ organisations is that they move on too quickly to the next thing. Instead of sitting with something and letting it mature and flourish, as soon as the next bit of funding comes in, they’re off like a dog chasing a shiny car. I’m not sure that’s how innovation works.

Before Mozilla, I worked at Jisc, which at the time funded innovation programmes on behalf of the UK government and disseminated the outcomes. I remember a very simple overview from Jisc’s Sustaining and Embedding Innovations project that focused on three stages of innovation:

Invention
This is about the generation of new ideas e.g. new ways of teaching and learning or new ICT solutions.

Early Innovation
This is all about the early practical application of new inventions, often focused in specific areas e.g. a subject discipline or speciality such as distance learning or work-based learning.

Systemic Innovation
This is where an institution, for example, will aim to embed an innovation institutionally. 

Jisc

The problem with many maps and frameworks, especially around digital skills and innovation, is that they remove any room for ambiguity. So, in an attempt not to come across as vague, they instead become ‘dead metaphors’.

Continuum of Ambiguity

I don’t think I’ve ever seen an example where, without any contextualisation, an individual or organisation has taken something ‘off the shelf’ and applied it to achieve uniformly fantastic results. That’s not how these things work.

Humans are complex organisms; we’re not machines. For a given input you can’t expect the same output. We’re not lossless replicators.

So although it takes time, effort, and resources, you’ve got to put in the hard yards to see an innovation through all three of those stages outlined by Jisc. Although the temptation is to nail things down initially, the opposite is actually the best way forward. Take people on a journey and get them to invest in what’s at stake. Embrace the ambiguity.

I’ve written more about this in a post I wrote about a 5-step process for creating a sustainable digital literacies curriculum. It’s something I’ll be thinking about more as I reboot my consultancy work (through our co-op) for 2020!

For now, though, remember this wonderful African proverb:

"If you want to go fast, go alone. If you want to go far, go together." (African proverb)
CC BY-ND Bryan Mathers

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings in order to improve them.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

Discussing ‘fake news’ and our information literacy problem in Forbes, you can almost feel Kalev Leetaru’s eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.
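For what it’s worth, the way wake-word detection is generally described as working can be boiled down to a toy sketch. This is illustrative only (it is not any vendor’s actual code, and the frame and wake-word names are made up): a small model runs on the device itself, audio sits in a tiny local buffer that is continually overwritten, and nothing is transmitted until the wake word is detected locally:

```python
# Illustrative sketch only -- not any vendor's actual implementation.
# The point: audio sits in a tiny on-device ring buffer that is
# continually overwritten, and nothing leaves the device until the
# wake word is detected locally.

from collections import deque

WAKE_WORD = "hey_assistant"   # hypothetical wake phrase
BUFFER_FRAMES = 3             # a few seconds of audio at most


def frames_sent_to_server(frames):
    """Return only the audio frames that would ever be transmitted."""
    ring = deque(maxlen=BUFFER_FRAMES)  # older frames silently fall off
    sent = []
    listening = False
    for frame in frames:
        if listening:
            sent.append(frame)          # the query itself is uploaded
            if frame == "end_of_query":
                listening = False       # back to local-only buffering
        else:
            ring.append(frame)          # overwritten locally, never sent
            if frame == WAKE_WORD:      # detection happens on-device
                listening = True
    return sent


# Hours of background chat are never transmitted; only the query is:
stream = ["chat"] * 1000 + ["hey_assistant", "weather today?", "end_of_query"]
print(frames_sent_to_server(stream))  # → ['weather today?', 'end_of_query']
```

Streaming everything continuously would be ruinous for bandwidth and battery alone, quite apart from the legal exposure; the short query clip is the only thing with any value to the provider.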

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate the GDPR) in order to share recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits for certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) is an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.

For years, techies have talked about ‘personal data vaults’ that would let you share information in a granular way without being tracked. The BBC is currently trialling ‘BBC Box’, which could help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC

It’s an interesting concept and, if they can get the user experience right, potentially a groundbreaking one. Eventually, of course, it will live in your smartphone, which means that device really will be a ‘digital self’.
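The model the BBC describes reduces to a simple rule: data and locally derived profiles stay on the box, and nothing is released without an explicit, per-recipient grant from the owner. Here is a minimal sketch of that idea; it assumes nothing about the real Databox/BBC Box API, and all the names are my own:

```python
# Minimal sketch of the personal-data-vault idea described above.
# Assumes nothing about the real Databox/BBC Box API; all names
# here are hypothetical.

class PersonalDataVault:
    def __init__(self):
        self._data = {}        # raw personal data, stored locally
        self._grants = set()   # (recipient, key) pairs the owner approved

    def store(self, key, value):
        self._data[key] = value

    def process_locally(self, source_key, profile_key, fn):
        # Local apps can derive profiles (e.g. TV tastes); the
        # derived result also stays on the box.
        self._data[profile_key] = fn(self._data[source_key])

    def grant(self, recipient, key):
        self._grants.add((recipient, key))

    def share(self, recipient, key):
        # No third party, not even the provider, can read anything
        # without an explicit grant from the owner.
        if (recipient, key) not in self._grants:
            raise PermissionError(f"{recipient} has no grant for {key!r}")
        return self._data[key]


vault = PersonalDataVault()
vault.store("viewing_history", ["drama", "drama", "news"])
vault.process_locally("viewing_history", "tv_profile",
                      lambda hist: max(set(hist), key=hist.count))

vault.grant("bbc", "tv_profile")         # share the derived profile only
print(vault.share("bbc", "tv_profile"))  # → drama
# vault.share("bbc", "viewing_history") would raise PermissionError
```

Note the inversion of the usual arrangement: the recommender sees the derived profile, never the raw behavioural data it was computed from.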

You can absolutely opt-out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.

Life doesn’t depend on any one opinion, any one custom, or any one century

Baltasar Gracián was a 17th-century Spanish Jesuit who put together a book of aphorisms usually translated The Pocket Oracle and Art of Prudence or simply The Art of Worldly Wisdom. It’s one of a few books that have had a very large effect on my life. Today’s quotation-as-title comes from him.

The historian in me wonders why we seem to live in such crazy times. My simple answer is ‘the internet’, but I want to dig into it a bit using an essay from Scott Alexander:

[T]oday we have an almost unprecedented situation.

We have a lot of people… boasting of being able to tolerate everyone from every outgroup they can imagine, loving the outgroup, writing long paeans to how great the outgroup is, staying up at night fretting that somebody else might not like the outgroup enough.

This is really surprising. It’s a total reversal of everything we know about human psychology up to this point. No one did any genetic engineering. No one passed out weird glowing pills in the public schools. And yet suddenly we get an entire group of people who conspicuously promote and defend their outgroups, the outer the better.

What is going on here?

Scott Alexander

It’s long, and towards the end, Alexander realises that he’s perhaps guilty of the very thing he’s pointing out. Nevertheless, his definition of an ‘outgroup’ is useful:

So what makes an outgroup? Proximity plus small differences. If you want to know who someone in former Yugoslavia hates, don’t look at the Indonesians or the Zulus or the Tibetans or anyone else distant and exotic. Find the Yugoslavian ethnicity that lives closely intermingled with them and is most conspicuously similar to them, and chances are you’ll find the one who they have eight hundred years of seething hatred toward.

Scott Alexander

Over the last three years in the UK, we’ve done a spectacular job of adding a hatred of the opposing side in the Brexit debate to our underlying national sense of xenophobia. What’s necessary next is to bring everyone together and, whether we end up leaving the EU or not, forge a new narrative.

As Bryan Caplan points out, such efforts at cohesion need to be approached obliquely. He uses the example of American politics, but it applies equally elsewhere, including the UK:

Suppose you live in a deeply divided society: 60% of people strongly identify with Group A, and the other 40% strongly identify with Group B. While you plainly belong to Group A, you’re convinced this division is bad: It would be much better if everyone felt like they belonged to Group AB. You seek a cohesive society, where everyone feels like they’re on the same team.

What’s the best way to bring this cohesion about? Your all-too-human impulse is to loudly preach the value of cohesion. But on reflection, this is probably counter-productive. When members of Group B hear you, they’re going to take “cohesion” as a euphemism for “abandon your identity, and submit to the dominance of Group A.” None too enticing. And when members of Group A notice Group B’s recalcitrance, they’re probably going to think, “We offer Group B the olive branch of cohesion, and they spit in our faces. Typical.” Instead of forging As and Bs into one people, preaching cohesion tears them further apart.

Bryan Caplan

So, what can we do? Caplan suggests that members of one side should go out of their way to be overwhelmingly positive and friendly to the other side:

The first rule of promoting cohesion is: Don’t talk about cohesion. The second rule of promoting cohesion is: Don’t talk about cohesion. If you really want to build a harmonious, unified society, take one for the team. Discard your anger, swallow your pride, and show out-groups unilateral respect and friendship. End of story.

Bryan Caplan

It reminds me of the Christian advice to “turn the other cheek” which must have melted the brains of those listening to Jesus who were used to the Old Testament approach:

“You have heard that it was said, ‘An eye for an eye and a tooth for a tooth.’ But I say to you, Do not resist the one who is evil. But if anyone slaps you on the right cheek, turn to him the other also. And if anyone would sue you and take your tunic, let him have your cloak as well.

Matthew 5:38-40 (ESV)

Over the last 20 years, as the internet has played an ever-increasing role in our daily lives, we’ve seen a real ramping-up of the feminist movement, gay marriage becoming the norm in civilised western democracies, and movements like #BlackLivesMatter reminding us of just how racist our societies are.

In addition, despite the term being coined as long ago as 1989, we’ve seen a rise in awareness around intersectionality. It’s not exactly a radical notion to say that us being more connected leads to more awareness of ‘outgroups’. What is interesting is the way that we choose to deal with that.

Let’s have a quick look at the demographics from the Brexit vote three years ago:

Brexit demographics from The Guardian

Remain voters were, on the whole, younger, better educated, and more well-off than Leave voters. They were also slightly more likely to be born outside the UK. I haven’t done the research, but I just have a feeling that the generational differences here are to do with relative exposure to outgroups.

What’s more interesting than the result of the referendum itself, of course, is the reaction since then, with both ‘Leavers’ and ‘Remainers’ digging into their entrenched positions. Now we’ve created new outgroups, we can join together in welcoming in the old outgroups. Hence LGBT+ pride rainbows in shops and everywhere else.

As I explained five years ago, one of the problems is that we’re not collectively aware enough of the role money plays in our democratic processes and information landscapes:

The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content.

Doug Belshaw

Sadly, in the west we invested in Computing to the detriment of critical digital literacies at exactly the wrong moment. That investment should have come on top of a real push to help everyone in society realise the importance of questioning and reflecting on their information environment.

Much as some people might like to, we can’t put the internet back in a box. It’s connected us all, for better and for worse, in ways that only a few would have foreseen. It’s changing the way we interact with one another, the way we buy things, and the way we think about education, work, and human flourishing.

All these connections might mean that the style of representative democracy we’re currently used to needs tweaking. As Jamie Bartlett points out in The People vs Tech, “these are spiritual as well as technical questions”.


Also check out:

  • There is nothing more depressing than “positive news” (The Outline) — “The world is often a bummer, but a whole ecosystem of podcasts and Facebook pages have sprung up to assure you that things are actually great.”
  • Space for More Spaces (CogDogBlog) — “I still hold on to the idea that those old archaic, pre-social media constructs, a personal blog, is the main place, the home, to operate from.”
  • Clay Shirky on Mega-Universities and Scale (Phil on EdTech) — “What the mega-university story gets right is that online education is transforming higher education. What it gets wrong is the belief that transformation must end with consolidation around a few large-scale institutions”

Opting in and out of algorithms

It’s now over seven years since I submitted my doctoral thesis on digital literacies. Since then, almost the entire time my daughter has been alive, the world has changed a lot.

Writing in The Conversation, Anjana Susarla explains her view that digital literacy goes well beyond functional skills:

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle. This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.

Digital literacies are plural, context-dependent and always evolving. Right now, I think Susarla is absolutely correct to be focusing on algorithms and the way they interact with society. Ben Williamson is definitely someone to follow and read up on in that regard.

Over the past few years I’ve been trying (both directly and indirectly) to educate people about the impact of algorithms on everything from fake news to privacy. It’s one of the reasons I don’t use Facebook, for example, and go out of my way to explain to others why they shouldn’t either:

A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

[…]

However, a vast majority of platforms do not provide either such flexibility to their end users or the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.

Although I’m still not going to join Facebook, one reason I’m a little more chilled out about algorithms and privacy these days is because of the GDPR. If it’s regulated effectively (as I think it will be) then it should really keep Big Tech in check:

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

[…]

But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

I agree that it’s not enough just to tell people that they’re being tracked without them being able to do something about it; that leads to technological defeatism. We need simple, easy-to-use tools that enable user privacy and security. These aren’t going to come through tech industry self-regulation, but through regulatory frameworks like GDPR.

Source: The Conversation



Myths about children and digital technologies

Prof. Sonia Livingstone has written a link-filled post relating to a panel she’s on at the Digital Families 2018 conference. In it, she talks about six myths around children in the digital age:

  1. Children are ‘digital natives’ and know it all.
  2. Parents are ‘digital immigrants’ and don’t know anything.
  3. Time with media is time wasted compared with ‘real’ conversation or playing outside.
  4. Parents’ role is to monitor, restrict and ban because digital risks greatly outweigh digital opportunities.
  5. Children don’t care about their privacy online.
  6. Media literacy is THE answer to the problems of the digital age.

Good stuff, and the post and associated links are well worth checking out.

Source: Parenting for a Digital Future

To lose old styles of reading is to lose a part of ourselves

Sometimes I think we’re living in the end times:

Out for dinner with another writer, I said, “I think I’ve forgotten how to read.”

“Yes!” he replied, pointing his knife. “Everybody has.”

“No, really,” I said. “I mean I actually can’t do it any more.”

He nodded: “Nobody can read like they used to. But nobody wants to talk about it.”

I wrote my doctoral thesis on digital literacies. There was a real sense in the 1990s that reading on screen was very different to reading on paper. We’ve kind of lost that sense of difference, and I think perhaps we need to regain it:

For most of modern life, printed matter was, as the media critic Neil Postman put it, “the model, the metaphor, and the measure of all discourse.” The resonance of printed books – their lineal structure, the demands they make on our attention – touches every corner of the world we’ve inherited. But online life makes me into a different kind of reader – a cynical one. I scrounge, now, for the useful fact; I zero in on the shareable link. My attention – and thus my experience – fractures. Online reading is about clicks, and comments, and points. When I take that mindset and try to apply it to a beaten-up paperback, my mind bucks.

We don’t really talk about ‘hypertext’ any more, as it’s almost the default type of text that we read. As such, reading on paper doesn’t really prepare us for it:

For a long time, I convinced myself that a childhood spent immersed in old-fashioned books would insulate me somehow from our new media climate – that I could keep on reading and writing in the old way because my mind was formed in pre-internet days. But the mind is plastic – and I have changed. I’m not the reader I was.

Me too. I train myself to read longer articles through mechanisms such as writing Thought Shrapnel posts and newsletters each week. But I don’t read like I used to; I read for utility rather than for pleasure or just for the sake of it.

The suggestion that, in a few generations, our experience of media will be reinvented shouldn’t surprise us. We should, instead, marvel at the fact we ever read books at all. Great researchers such as Maryanne Wolf and Alison Gopnik remind us that the human brain was never designed to read. Rather, elements of the visual cortex – which evolved for other purposes – were hijacked in order to pull off the trick. The deep reading that a novel demands doesn’t come easy and it was never “natural.” Our default state is, if anything, one of distractedness. The gaze shifts, the attention flits; we scour the environment for clues. (Otherwise, that predator in the shadows might eat us.) How primed are we for distraction? One famous study found humans would rather give themselves electric shocks than sit alone with their thoughts for 10 minutes. We disobey those instincts every time we get lost in a book.

It’s funny. We’ve such a connection with books, but for most of human history we’ve done without them:

Literacy has only been common (outside the elite) since the 19th century. And it’s hardly been crystallized since then. Our habits of reading could easily become antiquated. The writer Clay Shirky even suggests that we’ve lately been “emptily praising” Tolstoy and Proust. Those old, solitary experiences with literature were “just a side-effect of living in an environment of impoverished access.” In our online world, we can move on. And our brains – only temporarily hijacked by books – will now be hijacked by whatever comes next.

There are several theses in all of this around fake news, the role of reading in a democracy, and how information spreads. For now, I continue to be amazed at the effect of the web on the fabric of societies.

Source: The Globe and Mail

Commit to improving your security in 2018

We don’t live in a cosy world where everyone hugs fluffy bunnies who shoot rainbows out of their eyes. Hacks and data breaches affect everyone:

If you aren’t famous enough to be a target, you may still be a victim of a mass data breach. Whereas passwords are usually stored in hashed or encrypted form, answers to security questions are often stored — and therefore stolen — in plain text, as users entered them. This was the case in the 2015 breach of the extramarital encounters site Ashley Madison, which affected 32 million users, and in some of the Yahoo breaches, disclosed over the past year and a half, which affected all of its three billion accounts.
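The distinction the quote draws is worth making concrete. A minimal sketch (not any particular site’s implementation) of why a hashed password survives a database breach while a plain-text security answer does not:

```python
import hashlib
import hmac
import os

def hash_secret(secret: str) -> tuple[bytes, bytes]:
    """Store a random salt plus a salted PBKDF2 digest, never the secret itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify(secret: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Passwords are typically stored this way: the site can check a login
# attempt, but a stolen database yields only salts and digests.
salt, stored = hash_secret("correct horse battery staple")

# A security answer stored as entered — e.g. a mother's maiden name in a
# plain-text column — leaks directly and verbatim in the same breach.
security_answer = "Smith"
```

The asymmetry is the point: hashing is cheap to apply at signup, which makes the plain-text storage of security answers a choice, not a technical necessity.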

Some of it isn’t our fault, however. For example, you can bypass PayPal’s two-factor authentication by opting to answer questions about your place of birth and mother’s maiden name. This is not difficult information for hackers to obtain:

According to Troy Hunt, a cybersecurity expert, organizations continue to use security questions because they are easy to set up technically, and easy for users. “If you ask someone their favorite color, that’s not a drama,” Mr. Hunt said. “They’ll be able to give you a straight answer. If you say, ‘Hey, please download this authenticator app and point the camera at a QR code on the screen,’ you’re starting to lose people.” Some organizations have made a risk-based decision to retain this relatively weak security measure, often letting users opt for it over two-factor authentication, in the interest of getting people signed up.

Remaining secure online is a constantly moving target, and one that we would all do well to spend a bit more time thinking about. These principles by the EFF are a good starting point for conversations we should be having this year.

Source: The New York Times

The zone of proximal depravity

Martin Weller on how algorithms feeding on engagement draw us towards ever more radical stuff online:

There are implications for this. For the individual I worry about our collective mental health, to be angry, to be made to engage with this stuff, to be scared and to feel that it is more prevalent than maybe it really is. For society it normalises these views, desensitises us to them and also raises the emotional temperature of any discussion. One way of viewing digital literacy is reestablishing the protective layer, learning the signals and techniques that we have in the analogue world for the digital one. And perhaps the first step in that is in recognising how that layer has been diminished by algorithms.

Source: The Ed Techie