Category: Life online

On blogging

Jim Groom nails it on blogging:

[M]ost folks treat their blog as if it were some kind of glossy headshot of their thinking, whereas the beauty and freedom of blogging was that it was by design a networked tool. Blogging provides a space to develop an online voice, connect with a particular network, and build a sense of identity online in conjunction with others working through a similar process. Scale in many ways became a distraction, one which was magnified to such a degree by the hype around MOOCs in edtech that anything less than 10s of thousands of “users,” “learners,” “participants,” “followers,” etc. was tacitly considered somehow less than optimal for effective online learning. It was, and remains, a symptom of the capital-driven ethos of Silicon Valley that places all value on scale and numbers which is rooted in monetization—a reality that has infected edtech and helped to undermine the value and importance of forging an independent voice and intimate connections through what should be an independent media of expression. When scale is the endgame the whole process becomes bogged down in page views, followers, and likes rather than the freedom to explore and experiment with your ideas online. It’s a uniquely web-based version of Hell where the dominant form of communication online is a Medium think piece written by your friendly neighborhood thought leader.

You could accuse Thought Shrapnel of being glossy, but it’s just a shiny version of what’s in my head.

Source: bavatuesdays

On the cultural value of memes

I’ve always been a big fan of memes. In fact, I discuss them in my thesis, ebook, and TEDx talk. This long-ish article from Jay Owens digs into their relationship with fake news and what she calls ‘post-authenticity’. What I’m really interested in, though, comes towards the end. She gets into the power of memes and why they’re the perfect form of online cultural expression.

So through humour, exaggeration, and irony — a truth emerges about how people are actually feeling. A truth that they may not have felt able to express straightforwardly. And there’s just as much, and potentially more, community present in these groups as in many of the more traditional civic-oriented groups Zuckerberg’s strategy may have had in mind.

The thing that can be missing from text-based interactions is empathy. The right kind of meme, however, speaks through images and words, but also to something else that a group has in common.

Meme formats — from this week’s American Chopper dialectic model to now classics like the “Exploding Brain,” “Distracted Boyfriend,” and “Tag Yourself” templates — are by their very nature iterative and quotable. That is how the meme functions, through reference to the original context and memes that have come before, coupled with creative remixing to speak to a particular audience, topic, or moment. Each new instance of a meme is thereby automatically familiar and recognisable. The format carries a meta-message to the audience: “This is familiar, not weird.” And the audience is prepared to know how to react: you like, you respond with laughter-referencing emoji, you tag your friends in the comments.

Let’s take this example, which Owens cites in the article. I sent it to my wife via Telegram, an instant messaging app that we use as a permanent backchannel.

90s kids

Her response, inevitably, was: 😂

It’s funny because it’s true. But it also quickly communicates solidarity and empathy.

The format acts as a kind of Trojan horse, then, for sharing difficult feelings — because the format primes the audience to respond hospitably. There isn’t that moment of feeling stuck over how to respond to a friend’s emotional disclosure, because she hasn’t made the big statement directly, but instead through irony and cultural quotation — distancing herself from the topic through memes, typically by using stock photography (as Leigh Alexander notes) rather than anything as gauche as a picture of oneself. This enables you the viewer to sidestep the full intensity of it in your response, should you choose, but still, crucially, to respond. And also to DM your friend and ask, “Hey, are you alright?” and cut to the realtalk should you so choose to.

So, effectively, you can be communicating different things to different people. If, instead of sending the 90s kids image above directly to my wife via Telegram, I’d shared it with my Twitter followers, it might have elicited a different response. Some people would have liked and retweeted it, for sure, but someone who knows me well might ask if I’m OK. After all, there’s a subtext in there of feeling like you’re “stuck”.

Owens goes on to talk about how memetic culture means that we’re living in a ‘post-authentic’ world. But did such authenticity ever really exist?

So perhaps to say that this post-authentic moment is one of evolving, increasingly nuanced collective communication norms, able to operate with multi-layered recursive meanings and ironies in disposable pop culture content… is kind of cold comfort.

Nonetheless, author Robin Sloan described the genius of the “American Chopper” meme as being that “THIS IS THE ONLY MEME FORMAT THAT ACKNOWLEDGES THE EXISTENCE OF COMPETING INFORMATION, AND AS SUCH IT IS THE ONLY FORMAT SUITED TO THE COMPLEXITY OF OUR WORLD!”

Amen to that.

Source: Jay Owens

Clickbait and switch?

Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.

With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.

Audiences mature over time and become wary of particular techniques. Remember headlines that ended in “…and you’ll not believe what came next”?

As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking — something at which the web, thanks to advertising, has become spectacularly good.

For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.
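To make that mechanism concrete, here’s a minimal, hypothetical sketch of a variable-ratio reward schedule in Python. None of the names or numbers come from Manafy’s article; it simply illustrates why an unpredictable payout keeps people pulling the lever (or pulling to refresh).

```python
import random

def check_feed(n_checks: int, reward_probability: float = 0.3, seed: int = 42) -> list[bool]:
    """Simulate checking a feed n_checks times.

    Each check pays out unpredictably (a like, a mention, a new follower),
    mimicking the variable-ratio schedule used by slot machines.
    """
    rng = random.Random(seed)
    return [rng.random() < reward_probability for _ in range(n_checks)]

if __name__ == "__main__":
    for i, rewarded in enumerate(check_feed(20), start=1):
        print(f"check {i:2d}: {'new likes!' if rewarded else 'nothing new'}")
```

Because the payout is random rather than on a fixed schedule, no single empty check tells you to stop — which is exactly the property the quote above is pointing at.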

The problem is that designing for addiction isn’t a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren’t exactly crushing it with their share price either.

Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?

The ‘regulatory tide’ referenced here is exemplified by GDPR, which is already causing a sea change in attitudes towards user data. Compliance with teeth, it seems, gets results.

Designing for sustainability isn’t just good from a regulatory point of view, it’s good for long-term business, argues Manafy:

Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.

Instead of thinking of the people you serve as ‘users’ waiting for their next hit, she suggests, think of them as an audience. That’s a healthier framing, and it will help you make much better design decisions.

Source: Nieman Lab

Social internet vs social media

It’s good to see Cal Newport, whose book Deep Work I found unexpectedly great last year, add a bit more nuance to his position on social media:

The young progressives grew up in a time when platform monopolies like Facebook were so dominant that they seemed inextricably intertwined into the fabric of the internet. To criticize social media, therefore, was to criticize the internet’s general ability to do useful things like connect people, spread information, and support activism and expression.

The older progressives, however, remember the internet before the platform monopolies. They were concerned to observe a small number of companies attempt to consolidate much of the internet into their for-profit, walled gardens.

To them, social media is not the internet. It was instead a force that was co-opting the internet — including the powerful capabilities listed above — in ways that would almost certainly lead to trouble.

Newport has started talking about the difference between ‘social media’ and the ‘social internet’:

The social internet describes the general ways in which the global communication network and open protocols known as “the internet” enable good things like connecting people, spreading information, and supporting expression and activism.

Social media, by contrast, describes the attempt to privatize these capabilities by large companies within the newly emerged algorithmic attention economy, a particularly virulent strain of the attention sector that leverages personal data and sophisticated algorithms to ruthlessly siphon users’ cognitive capital.

If you’d asked people in 2005, they would have said that there was no way that people would leave MySpace in favour of a different platform.

People like Facebook. But if you could offer them a similar alternative that stripped away the most unsavory elements of Zuckerberg’s empire (perhaps funded by a Wikipedia-style nonprofit collective, or a modest subscription fee), many would happily jump ship.

Indeed.

Following up with another post this week, Newport writes:

My argument is that you can embrace the social internet without having to become a “gadget” inside the algorithmic attention economy machinations of the social media conglomerates. As noted previously, I think this is the right answer for those who are fed up with the dehumanizing aspects of social media, but are reluctant to give up altogether on the potential of the internet to bring people together.

He suggests several ways for this to happen:

  • Approach #1: The Slow Social Media Philosophy
  • Approach #2: Own Your Own Domain

This is, in effect, the IndieWeb approach. However, I still think that Newport and others who work in universities may be a special case. As Austin Kleon notes, there are already built-in ways for your career to advance in academia. Others have to show their work…

What I don’t see being discussed is that, as we collectively mature in our use of social media, we’re likely to use different networks for different purposes. Facebook, LinkedIn, and the like try to force us into a single online identity. It’s OK to look and act differently when you’re around different people in different environments.

Source: Cal Newport (On Social Media and Its Discontents / Beyond #DeleteFacebook: More Thoughts on Embracing the Social Internet Over Social Media)

Mozilla’s Web Literacy Curriculum

I’m not sure what to say about this announcement from Mozilla about their ‘new’ Web Literacy Curriculum. I led this work from 2012 to 2015 at the Mozilla Foundation, but it doesn’t seem to be any further forward now than when I left.

In fact, it seems to have just been re-focused for the libraries sector:

With support from Institute of Museum and Library Services, and a host of collaborators including key public library leaders from around the country, this open-source, participatory, and hands-on curriculum was designed to help the everyday person in a library setting, formal and informal education settings, community center, or at your kitchen table.

The site for the Web Literacy Curriculum features resources that will already be familiar to those who follow Mozilla’s work.

Four years ago, I wrote a post on the Mozilla Learning blog about Atul Varma’s WebLitMapper and Laura Hilliger’s Web Literacy Learning Pathways, as well as the draft alignment guidelines I’d drawn up. Where has the innovation gone since then?

It’s sad to see such a small, undeveloped resource from an organisation that once showed such potential in teaching the world the Web.

Source: Read, Write, Participate

No-one wants a single identity, online or offline

It makes sense for companies reliant on advertising to not only get as much data as they can about you, but to make sure that you have a single identity on their platform to which to associate it.

This article by Cory Doctorow in BoingBoing reports on some research around young people and social media. As Doctorow states:

Social media has always had a real-names problem. Social media companies want their users to use their real names because it makes it easier to advertise to them. Users want to be able to show different facets of their identities to different people, because only a sociopath interacts with their boss, their kids, and their spouse in the same way.

I was talking to one of my Moodle colleagues about how, in our mid-thirties, we’re a ‘bridging’ generation between those who only went online in adulthood, and those who have only ever known a world with the internet. I got online for the first time when I was about fourteen or fifteen.

Those younger than me are well aware of the perils and pitfalls of a single online identity:

Amy Lancaster from the Journalism and Digital Communications school at the University of Central Lancashire studies the way that young people resent “the way Facebook ties them into a fixed self…[linking] different areas of a person’s life, carrying over from school to university to work.”

I think Doctorow has made an error with her surname: both the journal article and the original post give it as ‘Binns’, not ‘Lancaster’.

Binns writes:

Young people know their future employers, parents and grandparents are present online, and so they behave accordingly. And it’s not only older people that affect behaviour.

My research shows young people dislike the way Facebook ties them into a fixed self. Facebook insists on real names and links different areas of a person’s life, carrying over from school to university to work. This arguably restricts the freedom to explore new identities – one of the key benefits of the web.

The desire for escapable transience over damning permanence has driven Snapchat’s success, precisely because it’s a messaging app that allows users to capture videos and pictures that are quickly removed from the service.

This is important for the work I’m leading around Project MoodleNet. It’s not just teenagers who want “escapable transience over damning permanence”.

Source: BoingBoing

Small ‘b’ blogging

I’ve been a blogger for around 13 years now. What the author of this post says about its value really resonates with me:

Small b blogging is learning to write and think with the network. Small b blogging is writing content designed for small deliberate audiences and showing it to them. Small b blogging is deliberately chasing interesting ideas over pageviews and scale. An attempt at genuine connection vs the gloss and polish and mass market of most “content marketing”.

He talks about the ‘topology’ of blogging changing over the years:

Crucially, these entry points to the network were very big and very accessible. What do I mean by that? Well – in those early days they were very big in the sense that if you got your content on the Digg homepage a lot of people would see it (relative to the total size of the network at the time). And they were very accessible in the sense that it wasn’t that hard to get your content there! I recall having a bunch of Digg homepage hits and Hacker News homepage hits.

I once had 15,000 people read a post of mine within a 24-hour period via a link from Hacker News. Yet the number of people who did something measurable (got in touch, subscribed to my newsletter, etc.) was effectively zero.

Every community now has a fragmented number of communities, homepages, entry points, tinyletters, influencers and networks. They overlap in weird and wonderful ways – and it means that it’s harder than ever to feel like you got a “homepage” success on these networks. To create a moment that has the whole audience looking at the same thing at the same time.

We shouldn’t write for page views and fame, but instead to create value. Just this week I’ve had people cite back to me posts I wrote years ago. It’s a great thing.

So I challenge you to think clearly about the many disparate networks you’re part of and think about the ideas you might want to offer those networks that you don’t want to get lost in the feed. Ideas you might want to return to. Think about how writing with and for the network might enable you to start blogging. Forget the big B blogging model. Forget Medium’s promise of page views and claps. Forget the guest post on Inc, Forbes and Entrepreneur. Forget Fast Company. Forget fast content.

Source: Tom Critchlow

On your deathbed, you’re not going to wish that you’d spent more time on Facebook

As many readers of my work will know, I don’t have a Facebook account. This article uses Facebook as a proxy for something that, whether you’ve got an account on the world’s largest social network or not, will be familiar:

An increasing number of us are coming to realize that our relationships with our phones are not exactly what a couples therapist would describe as “healthy.” According to data from Moment, a time-tracking app with nearly five million users, the average person spends four hours a day interacting with his or her phone.

The trick, as with anything to which you’re psychologically addicted, is to reframe what you’re doing:

Many people equate spending less time on their phones with denying themselves pleasure — and who likes to do that? Instead, think of it this way: The time you spend on your phone is time you’re not spending doing other pleasurable things, like hanging out with a friend or pursuing a hobby. Instead of thinking of it as “spending less time on your phone,” think of it as “spending more time on your life.”

The thing I find hardest is leaving my phone in a different room, or not taking it with me when I go out. There’s always a reason for this (usually ‘being contactable’), but not having it constantly alongside you is probably a good idea:

Leave your phone at home while you go for a walk. Stare out of a window during your commute instead of checking your email. At first, you may be surprised by how powerfully you crave your phone. Pay attention to your craving. What does it feel like in your body? What’s happening in your mind? Keep observing it, and eventually, you may find that it fades away on its own.

There’s a great readjustment happening in our attitudes towards devices and the services we use on them. In a separate BBC News article, Amol Rajan outlines some reasons why Facebook usage may have actually peaked:

  1. A drop in users
  2. A drop in engagement
  3. Advertiser enmity
  4. Disinformation and fake news
  5. Former executives speak out
  6. Regulatory mood is hardening
  7. GDPR
  8. Antagonism with the news industry

Interesting times.

Source: The New York Times / BBC News

Why we forget most of what we read

I read a lot, and I remember random bits of it. I used to be reasonably disciplined about bookmarking what I’d read, but then realised I hardly ever went back through my bookmarks. So, instead, I try to use what I read, which is kind of the reason for Thought Shrapnel…

Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain. It might leave a film in the tub, but the rest is gone.

Well, indeed. Nice metaphor.

In the internet age, recall memory—the ability to spontaneously call information up in your mind—has become less necessary. It’s still good for bar trivia, or remembering your to-do list, but largely, [Jared Horvath, a research fellow at the University of Melbourne] says, what’s called recognition memory is more important. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” he says.

Exactly. You need to know how to find that article you read that backs up the argument you’re making. You don’t need to remember all of the details. Search skills are really important.

One study showed that recall of episode details was much lower for people who binged a Netflix series than for those who spaced the episodes out. I guess that’s unsurprising.

People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since. In “Binge-Reading Disorder,” an article for The Morning News, Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”

For anyone who knows about spaced learning, the conclusions are pretty obvious:

The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. I used to get irritated in school when an English-class syllabus would have us read only three chapters a week, but there was a good reason for that. Memories get reinforced the more you recall them, Horvath says. If you read a book all in one stretch—on an airplane, say—you’re just holding the story in your working memory that whole time. “You’re never actually reaccessing it,” he says.

So, applying what you learn is a way of reaccessing it, and reaccessing it is what makes it stick. Hence this post!

Source: The Atlantic (via e180)

Why do some things go viral?

I love internet memes and included a few in my TEDx talk a few years ago. The term ‘meme’ itself was coined by Richard Dawkins in the 1970s:

But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?

Memes aren’t necessarily things you choose to find engaging or persuasive. They’re kind of parasitic on the human mind:

Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.

Dawkins doesn’t like the use of the word ‘meme’ to refer to what we see on the internet:

According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as any unexpected mutation.

So… why should we care?

While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”

It is indeed a humbling thing, but one that the study of Philosophy, particularly Stoicism, prepares you for. Your mind is the one thing you can control, so be careful out there on the internet, reader.

Source: Nautilus