Category: Life online

Why we forget most of what we read

I read a lot of stuff, and I remember random bits of it. I used to be reasonably disciplined about bookmarking stuff, but then realised I hardly ever went back through my bookmarks. So, instead, I try to use what I read, which is kind of the reason for Thought Shrapnel…

Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain. It might leave a film in the tub, but the rest is gone.

Well, indeed. Nice metaphor.

In the internet age, recall memory—the ability to spontaneously call information up in your mind—has become less necessary. It’s still good for bar trivia, or remembering your to-do list, but largely, [Jared Horvath, a research fellow at the University of Melbourne] says, what’s called recognition memory is more important. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” he says.

Exactly. You need to know how to find that article you read that backs up the argument you’re making. You don’t need to remember all of the details. Search skills are really important.

One study showed that people who binged a Netflix series recalled far fewer details about its episodes than those who spaced them out. I guess that’s unsurprising.

People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since. In “Binge-Reading Disorder,” an article for The Morning News, Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”

For anyone who knows about spaced learning, the conclusions are pretty obvious:

The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. I used to get irritated in school when an English-class syllabus would have us read only three chapters a week, but there was a good reason for that. Memories get reinforced the more you recall them, Horvath says. If you read a book all in one stretch—on an airplane, say—you’re just holding the story in your working memory that whole time. “You’re never actually reaccessing it,” he says.

So if you apply what you learn, you’re re-accessing and reinforcing it. Hence this post!

Source: The Atlantic (via e180)

Why do some things go viral?

I love internet memes and included a few in my TEDx talk a few years ago. The term ‘meme’ comes from Richard Dawkins, who coined it in the 1970s:

But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?

Memes aren’t necessarily things you choose to find engaging or persuasive. They’re kind of parasitic on the human mind:

Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.

Dawkins doesn’t like the use of the word ‘meme’ to refer to what we see on the internet:

According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as any unexpected mutation.

So… why should we care?

While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”

It is indeed a humbling thing, but one that the study of philosophy prepares you for, particularly Stoicism. Your mind is the one thing you can control, so be careful out there on the internet, reader.

Source: Nautilus

Decentralisation is the only way to wean people off capitalist social media

Everyone wants ‘decentralisation’ these days, whether it’s the way we make payments, or… well, pretty much anything that can be put on a blockchain.

But what does that actually mean in practice? What, as William James would say, is the ‘cash value’ of decentralisation? This article explores some of that:

Decentralization is a pretty vague buzzword. Vitalik considered its meaning a year ago. In my estimation, it can mean a couple of things:

  1. Abstract principle when analyzing general power structures of any kind: “Political decentralization” means spreading political power among differing entities. “Market decentralization” refers to outcomes being produced without being coordinated by a central authority. It’s a philosophical idea that can be interpreted broadly in a lot of different contexts.
  2. Bitcoin, mostly. Lots of credit for the buzzword’s current popularity traces back to cryptocurrencies and blockchains, and I think the term “decentralization” without context is rightfully claimed by the yescoiners and defer to Vitalik’s interpretation for its meaning. I call this “financial decentralization” in contexts where my definition is dominant.
  3. A second, specific implementation of (1) that I want to talk about.

The author goes on to discuss a specific problem around social networking that decentralisation can solve:

Fundamentally, the problem with the web ecosystem is that consumer choice is limited. Facebook, Twitter, Google, and other tech giants “own” a large part of the social graph that both powers the core digital connection goodness and sustains the momentum that they will keep owning it, due to something called Metcalfe’s law. If you want to connect to people on the internet, you have to play by their rules.
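For what it’s worth (my gloss, not the article’s): Metcalfe’s law is the rule of thumb that a network’s value grows roughly with the square of its connected users, because that’s how the number of possible pairwise connections scales:

    V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}

which is exactly why whoever already owns the biggest social graph tends to keep owning it.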

So what can we do?

A “web decentralized” system looks like thus. You start with bare-bones replicas of social networking, publishing, microblogging, and chatting. You build a small social graph of your friends. This time, the data structures powering these applications live on your computer and are in a format you can easily grok and extend (Sorry, normies, it will be engineers-only for the next year or two).

[…]

The solution is technological standardization. Individuals, mostly engineers, need to expend a lot more effort contributing to the protocols and processes that drive inter-application communication. Your core Facebook identity — your username, your connections, your chat history — should be a universally standardized protocol with a Democracy-scale process for updating and extending it. Crucially, that process needs to be directed outside the direct control of tech companies, who are capitalistically bound to monopolize and direct control back to their domains.
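To make that a little more concrete, here’s a minimal sketch of the kind of locally-stored, easy-to-grok social graph data the author describes. It’s purely illustrative: the file name, fields, and functions are my own hypothetical choices, not anything specified in the article.

    # A minimal sketch of a social graph that lives on your own computer:
    # just a JSON file you can read in a text editor and extend with new fields.
    # The file name and fields here are hypothetical.
    import json
    from pathlib import Path

    GRAPH_FILE = Path("my_social_graph.json")  # stored locally, not on a company's servers

    def load_graph():
        # Return the existing graph, or start a fresh one with a bare identity.
        if GRAPH_FILE.exists():
            return json.loads(GRAPH_FILE.read_text())
        return {"identity": {"username": "me@example.org"}, "connections": []}

    def add_connection(graph, username, profile_url):
        # Append a connection and write the whole graph back to disk.
        graph["connections"].append({"username": username, "profile": profile_url})
        GRAPH_FILE.write_text(json.dumps(graph, indent=2))

    graph = load_graph()
    add_connection(graph, "friend@example.net", "https://example.net/friend")

The point isn’t this particular format; it’s that the data is yours, in the open, and any application speaking the same standardised protocol could read it.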

It’s worth quoting the last paragraph:

Ultimately, decentralization is about shaping the balance of power in digital domains. I for one would not like to wait around while the Tech overlords and Crusty regulators decide what happens with our digital lives. There’s no reason for us to keep listening to either of them. A handful of dedicated engineers, designers, and organizers could implement the alternative today. And that’s what web decentralization is all about.

Source: Clutch of the Dead Hand

Web Trends Map 2018 (or ‘why we can’t have nice things’)

My son, who’s now 11 years old, used to have iA’s Web Trends Map v4 on his wall. It was produced in 2009, when he was two:

iA Web Trends Map 4 (2009)

I used it to explain the web to him, as the subway map was a metaphor he could grasp. I’d wondered why iA hadn’t produced more in subsequent years.

Well, the answer is clear in a recent post:

Don’t get too excited. We don’t have it. We tried. We really tried. Many times. The most important ingredient for a Web Trend Map is missing: The Web. Time to bring some of it back.

Basically, the web has been taken over by capitalist interests:

The Web has lost its spirit. The Web is no longer a distributed Web. It is, ironically, a couple of big tubes that belong to a handful of companies. Mainly Google (search), Facebook (social) and Amazon (e-commerce). There is an impressive Chinese line and there are some local players in Russia, Japan, here and there. Overall it has become monotonous and dull. What can we do?

It’s difficult. Although I support the aims, objectives, and ideals of the IndieWeb, I can’t help but think it’s looking backwards instead of forwards. I’m hoping that newer approaches such as federated social networks, distributed ledgers and databases, and regulation such as GDPR have some impact.

Source: iA

Audrey Watters on technology addiction

Audrey Watters answers the question whether we’re ‘addicted’ to technology:

I am hesitant to make any clinical diagnosis about technology and addiction – I’m not a medical professional. But I’ll readily make some cultural observations, first and foremost, about how our notions of “addiction” have changed over time. “Addiction” is a medical concept but it’s also a cultural one, and it’s long been one tied up in condemning addicts for some sort of moral failure. That is to say, we have labeled certain behaviors as “addictive” when they’ve involved things society doesn’t condone. Watching TV. Using opium. Reading novels. And I think some of what we hear in discussions today about technology usage – particularly about usage among children and teens – is that we don’t like how people act with their phones. They’re on them all the time. They don’t make eye contact. They don’t talk at the dinner table. They eat while staring at their phones. They sleep with their phones. They’re constantly checking them.

The problem is that our devices are designed to be addictive, much like casinos. The apps on our phones are designed to increase certain metrics:

I think we’re starting to realize – or I hope we’re starting to realize – that those metrics might conflict with other values. Privacy, sure. But also etiquette. Autonomy. Personal agency. Free will.

Ultimately, she thinks, this isn’t a question of addiction. It’s much wider than that:

How are our minds – our sense of well-being, our knowledge of the world – being shaped and mis-shaped by technology? Is “addiction” really the right framework for this discussion? What steps are we going to take to resist the nudges of the tech industry – individually and socially and yes maybe even politically?

Good stuff.

Source: Audrey Watters

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

Reading the web on your own terms

Although the demise of the wonderful, simple, much-loved Google Reader was less than a decade ago, it seems like a different age entirely.

Subscribing to news feeds and blogs via RSS wasn’t as widely used as it could/should have been, but there was something magical about that period of time.

In this article, the author reflects on that era and suggests that we might want to give it another try:

Well, I believe that RSS was much more than just a fad. It made blogging possible for the first time because you could follow dozens of writers at the same time and attract a considerably large audience if you were the writer. There were no ads (except for the high-quality Daring Fireball kind), no one could slow down your feed with third party scripts, it had a good baseline of typographic standards and, most of all, it was quiet. There were no comments, no likes or retweets. Just the writer’s thoughts and you.

I was a happy user of Google Reader until they pulled the plug. It was a bit more interactive than other feed readers, somehow, in a way I can’t quite recall. Everyone used it until they didn’t.

The unhealthy bond between RSS and Google Reader is proof of how fragile the web truly is, and it reveals that those communities can disappear just as quickly as they bloom.

Since that time I’ve been an intermittent user of Feedly. Everyone else, it seems, succumbed to the algorithmic news feeds provided by Facebook, Twitter, and the like.

A friend of mine the other day said that “maybe Medium only exists because Google Reader died — Reader left a vacuum, and the social network filled it.” I’m not entirely sure I agree with that, but it sure seems likely. And if that’s the case then the death of Google Reader probably led to the emergence of email newsletters, too.

[…]

On a similar note, many believe that blogging is making a return. Folks now seem to recognize the value of having your own little plot of land on the web and, although it’s still pretty complex to make your own website and control all that content, it’s worth it in the long run. No one can run ads against your thing. No one can mess with the styles. No one can censor or sunset your writing.

Not only that but when you finish making your website you will have gained superpowers: you now have an independent voice, a URL, and a home on the open web.

I don’t think we can turn the clock back, but it does feel like there might be positive, future-focused ways of improving things through, for example, decentralisation.

Source: Robin Rendle

More on Facebook’s ‘trusted news’ system

Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook’s algorithms are opaque by design, whereas, Caulfield argues, Google’s approach is documented:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.

This is one of the major issues of our time, particularly now that people have access to the kind of CGI previously only available to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood

Anxiety is the price of convenience

Remote working, which I’ve done for over five years now, sounds awesome, doesn’t it? Open your laptop while still in bed, raid the biscuit barrel at every opportunity, spend more time with your family…

Don’t get me wrong, it is great and I don’t think I could ever go back to working full-time in an office. That being said, there’s a hidden side to remote working which no-one ever tells you about: anxiety.

Every interaction when you’re working remotely is an intentional act. You either have to schedule a meeting with someone, or ‘ping’ them to see if they’re available. You can’t see that they’re free, wander over to talk to them, or bump into them in the corridor, as you could if you were physically co-located.

When people don’t respond in a timely fashion, or within the time frame you were expecting, it’s unclear why that happened. This article picks up on that:

In recent decades, written communication has caught up—or at least come as close as it’s likely to get to mimicking the speed of regular conversation (until they implant thought-to-text microchips in our brains). It takes more than 200 milliseconds to compose a text, but it’s not called “instant” messaging for nothing: There is an understanding that any message you send can be replied to more or less immediately.

But there is also an understanding that you don’t have to reply to any message you receive immediately. As much as these communication tools are designed to be instant, they are also easily ignored. And ignore them we do. Texts go unanswered for hours or days, emails sit in inboxes for so long that “Sorry for the delayed response” has gone from earnest apology to punchline.

It’s not just work, either. Because we carry our smartphones with us everywhere, my wife expects an almost instantaneous response on even the most trivial matters. I’ve come back to my phone with a stream of ‘oi’ messages before…

It’s anxiety-inducing because written communication is now designed to mimic conversation—but only when it comes to timing. It allows for a fast back-and-forth dialogue, but without any of the additional context of body language, facial expression, and intonation. It’s harder, for example, to tell that someone found your word choice off-putting, and thus to correct it in real-time, or try to explain yourself better. When someone’s in front of you, “you do get to see the shadow of your words across someone else’s face,” [Sherry] Turkle says.

Lots to ponder here. A lot of it has to do with the culture of your organisation / family, at the end of the day.

Source: The Atlantic (via Hurry Slowly)

Some podcast recommendations

Despite no longer having a commute, I still find time to listen to podcasts. They’re useful for a variety of reasons: I can be doing something else while listening to them such as walking, going to the gym, or boring admin, and they don’t require me to look at a screen (which I do most of the day).

So it’s very useful that Bryan Alexander has shared the podcasts he’s listening to at present. Here are a couple that were new to me:

Beyond the Book – a look into the book publishing industry. It’s clearly biased in favor of strong copyright policies and practices, a bias I don’t share, but the program is also very informative.

Very Bad Wizards – two thinkers and, sometimes, a guest brood about deep questions concerning human psychology, philosophy, and ethics. It’s not my usual fare, so I enjoy learning.

Podcasts are basically RSS feeds with audio enclosures; as such, your subscriptions can be exported as an OPML file. Most podcast clients, including AntennaPod (which I use), allow you to do this.
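If you’re curious what that looks like under the hood: OPML is just XML in which each feed is an <outline> element carrying an xmlUrl attribute. Here’s a minimal Python sketch (the file name is only an example) that lists the feed URLs from an exported subscriptions file using the standard library:

    # Minimal sketch: print the podcast feed URLs from an exported OPML file.
    # "subscriptions.opml" is just an example file name.
    import xml.etree.ElementTree as ET

    def feed_urls(opml_path):
        tree = ET.parse(opml_path)
        # Feeds are <outline> elements with an xmlUrl attribute; folders have none.
        return [node.get("xmlUrl") for node in tree.iter("outline") if node.get("xmlUrl")]

    for url in feed_urls("subscriptions.opml"):
        print(url)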

Here’s my OPML file, as of today. I don’t listen to all of these podcasts regularly, just dipping in and out of them. My top five favourites are:

There’s also, obviously, Today In Digital Education (TIDE), which I record with Dai Barnes. We’ll be releasing our first episode of 2018 later this week!

Source: Bryan Alexander