Category: 21st Century Society

Remembering the past through photos

A few weeks ago, I bought a Google Assistant-powered smart display and put it in our kitchen in place of the DAB radio. It has the added bonus of cycling through all of my Google Photos, which stretch back as far as when my wife and I were married, 15 years ago.

This part of its functionality makes it, of course, just a cloud-powered digital photo frame. But I think it’s possible to underestimate the power that these things have. About an hour before composing this post, for example, my wife took a photo of a photo(!) that appeared on the display showing me on the beach with our two children when they were very small.

An article by Giuliana Mazzoni in The Conversation points out that our ability to whip out a smartphone at any given moment and take a photo changes our relationship to the past:

We use smart phones and new technologies as memory repositories. This is nothing new – humans have always used external devices as an aid when acquiring knowledge and remembering.

[…]

Nowadays we tend to commit very little to memory – we entrust a huge amount to the cloud. Not only is it almost unheard of to recite poems, even the most personal events are generally recorded on our cellphones. Rather than remembering what we ate at someone’s wedding, we scroll back to look at all the images we took of the food.

Mazzoni points out that this can be problematic, as memory is important for learning. However, there may be a “silver lining”:

Even if some studies claim that all this makes us more stupid, what happens is actually shifting skills from purely being able to remember to being able to manage the way we remember more efficiently. This is called metacognition, and it is an overarching skill that is also essential for students – for example when planning what and how to study. There is also substantial and reliable evidence that external memories, selfies included, can help individuals with memory impairments.

But while photos can in some instances help people to remember, the quality of the memories may be limited. We may remember what something looked like more clearly, but this could be at the expense of other types of information. One study showed that while photos could help people remember what they saw during some event, they reduced their memory of what was said.

She goes on to discuss the impact that viewing many photos from your past has on a malleable sense of self:

Research shows that we often create false memories about the past. We do this in order to maintain the identity that we want to have over time – and avoid conflicting narratives about who we are. So if you have always been rather soft and kind – but through some significant life experience decide you are tough – you may dig up memories of being aggressive in the past or even completely make them up.

I’m not so sure that it’s a good thing to tell yourself the wrong story about who you are. For example, although I grew up in, and identified with, a macho ex-mining town environment, I’ve become happier by realising that my identity is separate to that.

I suppose it’s a bit different for me, as most of the photos I’m looking at are of me with my children and/or my wife. However, I still have to tell myself a story of who I am as a husband and a father, so in many ways it’s the same.

All in all, I love the fact that we can take photos anywhere and at any time. We may need to evolve social norms around the most appropriate ways of capturing images in crowded situations, but that’s separate to the very great benefit which I believe they bring us.

Source: The Conversation

The endless Black Friday of the soul

This article by Ruth Whippman appears in the New York Times, so focuses on the US, but the main thrust is applicable on a global scale:

When we think “gig economy,” we tend to picture an Uber driver or a TaskRabbit tasker rather than a lawyer or a doctor, but in reality, this scrappy economic model — grubbing around for work, all big dreams and bad health insurance — will soon catch up with the bulk of America’s middle class.

Apparently, 94% of the jobs created in the last decade are freelance or contract positions. That’s the trajectory we’re on.

Almost everyone I know now has some kind of hustle, whether job, hobby, or side or vanity project. Share my blog post, buy my book, click on my link, follow me on Instagram, visit my Etsy shop, donate to my Kickstarter, crowdfund my heart surgery. It’s as though we are all working in Walmart on an endless Black Friday of the soul.

[…]

Kudos to whichever neoliberal masterminds came up with this system. They sell this infinitely seductive torture to us as “flexible working” or “being the C.E.O. of You!” and we jump at it, salivating, because on its best days, the freelance life really can be all of that.

I don’t think this is a neoliberal conspiracy; it’s just the logic of capitalism seeping into every area of society. As we all jockey for position in the new-ish landscape of social media, everything becomes mediated by the market.

What I think’s missing from this piece, though, is a longer-term trend towards working less. We seem to be endlessly concerned about how the nature of work is changing rather than the huge opportunities for us to do more than waste away in bullshit jobs.

I’ve been advising anyone who’ll listen over the last few years that reducing the number of days you work has a greater impact on your happiness than earning more money. Once you reach a reasonable salary, there are diminishing returns in any case.

Source: The New York Times (via Dense Discovery)

Looking back and forward in tech

Looking back at 2018, Amber Thomas commented that, for her, a few technologies became normalised over the course of the year:

  1. Phone payments
  2. Voice-controlled assistants
  3. Drones
  4. Facial recognition
  5. Fingerprints

Apart from drones, I’ve spent the last few years actively avoiding the above. In fact, I spent most of 2018 thinking about decentralised technology, privacy, and radical politics.

However, December is always an important month for me. I come off social media, stop blogging, and turn another year older just before Christmas. It’s a good time to reflect and think about what’s gone before, and what comes next.

Sometimes, it’s possible to identify a particular stimulus for a change in thinking. For me, it was while I was watching Have I Got News For You and the panellists were shown a photo of a fashion designer who put a shoe in front of his face to avoid being recognisable. Paul Merton asked, “Doesn’t he have a passport?”

Obvious, of course, but I’d recently been travelling and using the biometric features of my passport. I’ve also relented this year and started using the fingerprint scanner to unlock my phone. I realised that the genie isn’t going back in the bottle here, and that everyone else was using my data — biometric or otherwise — so I might as well benefit, too.

Long story short, I’ve bought a Google Pixelbook and a Lenovo Smart Display over the Christmas period, which I’ll be using in 2019 to make my life easier. I’m absolutely trading privacy for convenience, but it’s been a somewhat frustrating couple of years trying to use nothing but Open Source tools.

I’ll have more to say about all of this in due course, but it’s worth saying that I’m still committed to living and working openly. And, of course, I’m looking forward to continuing to work on MoodleNet.

Source: Fragments of Amber

Co-operation and anti-social punishment in different societies

I find this absolutely fascinating. It turns out that some societies actively ‘punish’ those who engage in collaborative and co-operative ventures:

Chart: social contributions over time (with punishment)

The tragedy of the commons is already well-documented: commonly-owned resources end up suffering if people can free-ride without consequences. The chart above, however, shows that in some cultures introducing a consequence for free-riding drives contributions up (e.g. Boston, Copenhagen), while in others it makes no difference (e.g. Riyadh, Athens).

Herrmann, Thöni and Gächter speculate that the anti-social punishment may be a form of revenge. You’ve punished me for free-riding, so now I’ll punish you just so that you know how it feels! And given that I don’t know who the punisher was, I’ll punish all the co-operators who were likely to have administered the original punishment in the first place.
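Before getting to the part that interests me more, here’s a minimal toy sketch of a public goods game with a punishment stage, just to make the mechanics behind that chart concrete. The parameters and the simple adaptation rule are illustrative assumptions of mine, not the design or data from Herrmann, Thöni and Gächter’s experiments; the point is only to show how pro-social punishment can push contributions up, while anti-social punishment stops them climbing.

```python
import random

# Toy public goods game with a punishment stage. The parameters and the
# adaptation rule below are illustrative assumptions, not the design or
# data from the Herrmann, Thöni and Gächter experiments.
ENDOWMENT, ROUNDS, GROUP = 20, 10, 4

def simulate(anti_social_punishment: bool) -> list[float]:
    """Return the average contribution per round for one simulated group."""
    contributions = [random.randint(5, 15) for _ in range(GROUP)]
    averages = []
    for _ in range(ROUNDS):
        avg = sum(contributions) / GROUP
        averages.append(round(avg, 1))
        updated = []
        for c in contributions:
            if c < avg:
                # Free-riders expect to be punished by co-operators,
                # so they raise their contribution towards the average...
                c = min(ENDOWMENT, c + 3)
            elif anti_social_punishment and c > avg:
                # ...but where punished free-riders retaliate, contributing
                # generously is costly too, so high contributors scale back.
                c = max(0, c - 3)
            updated.append(c)
        contributions = updated
    return averages

if __name__ == "__main__":
    random.seed(1)
    print("pro-social punishment only: ", simulate(False))  # contributions climb
    print("plus anti-social punishment:", simulate(True))   # contributions stall
```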

I’m less interested in the graphs and the ‘hard’ science than the anecdotal aspects of this post. The author is from Slovakia, and comments:

To get back to Eastern Europe, we’ve used to live under communist regime where all the common causes were appropriated by the state. Any gains from a contribution to a common cause would silently disappear somewhere in the dark corners of the bureaucracy.

Quite the opposite: People felt justified to take stuff from the commons. We even had a saying: “If you don’t steal [from the common property] you are stealing from your family.”

At the same time, stealing from the state was, legally, a crime apart and it was ranked in severity somewhere in the vicinity of murder. You could get ten years in jail if they’ve caught you.

Unsurprisingly, in such an environment, reporting to authorities (i.e. “pro-social punishment”) was regarded as highly unjust — remember the coffee cup example! — and anti-social and there was a strict taboo against it. Ratting often resulted in social ostracism (i.e. “anti-social punishment”). We can still witness that state of affairs in the highly offensive words used to refer to the informers: “udavač”, “donášač”, “práskač”, “špicel”, “fízel” (roughly: “nark”, “rat”, “snoop”, “stool pigeon”).

A perfect example of how the state can cause co-operation to thrive or dwindle based on governmental policy.

Source: LessWrong

Reappropriating the artifacts of late-stage capitalism

During our inter-railing adventure this summer, we visited Zurich in Switzerland. In one of the parks there, we came across a dockless scooter, which we promptly unlocked and had a great time zooming around.

As you’d expect, the greatest density of dockless bikes and scooters — devices that don’t have to be picked up or returned in any specific place — is in San Francisco. It seems that, in their attempts to flood the city and gain some kind of competitive advantage, VC-backed dockless bike and scooter startups are having an unintended effect. They’re helping homeless people move around the city more easily:

Hoarding and vandalism aren’t the only problems for electric scooter companies. There’s also theft. While the vehicles have GPS tracking, once the battery fully dies they go off the app’s map.

“Every homeless person has like three scooters now,” [Michael Ghadieh, who owns electric bicycle shop, SF Wheels] said. “They take the brains out, the logos off and they literally hotwire it.”

I’ve seen scooters stashed at tent cities around San Francisco. Photos of people extracting the batteries have been posted on Twitter and Reddit. Rumor has it the batteries have a resale price of about $50 on the street, but there doesn’t appear to be a huge market for them on eBay or Craigslist, according to my quick survey.

Source: CNET (via BoingBoing)

Insidious Instagram influencers?

There seems to be a lot of pushback at the moment against the kind of lifestyle that’s a direct result of the Silicon Valley mindset. People are rejecting everything from the Instagram ‘influencer’ approach to life to the ‘techbro’-style crazy working hours.

This week saw Basecamp, a company that prides itself on the work/life balance of its employees and on rejecting venture capital, publish another book. You can guess what it focuses on from its title, It Doesn’t Have to Be Crazy at Work. I’ve enjoyed and recommended their previous books (published as 37signals), and am looking forward to reading this latest one.

Alongside that book, I’ve seen three articles that, to me at least, are all related to the same underlying issues. The first comes from Simone Stolzoff who writes in Quartz at Work that we’re no longer quite sure what we’re working for:

Before I became a journalist, I worked in an office with hot breakfast in the mornings and yoga in the evenings. I was #blessed. But I would reflect on certain weeks—after a string of days where I was lured in before 8am and stayed until well after sunset—like a driver on the highway who can’t remember the last five miles of road. My life had become my work. And my work had become a series of rinse-and-repeat days that started to feel indistinguishable from one another.

Part of this lack of work/life balance comes from our inability these days to simply have hobbies, or interests, or do anything just for the sake of it. As Tim Wu points out in The New York Times, it’s all linked to some kind of existential issue around identity:

If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following. When your identity is linked to your hobby — you’re a yogi, a surfer, a rock climber — you’d better be good at it, or else who are you?

To me, this is inextricably linked to George Monbiot’s recent piece in The Guardian about the problem of actors being interviewed about the world’s issues disproportionately more often than anybody else. As a result, we’re rewarding with our collective attention those people who look like they know what they’re talking about, rather than those who actually do. Monbiot concludes:

The task of all citizens is to understand what we are seeing. The world as portrayed is not the world as it is. The personification of complex issues confuses and misdirects us, ensuring that we struggle to comprehend and respond to our predicaments. This, it seems, is often the point.

There’s always been a difference between appearance and reality in public life. Previously, though, they at least seemed to be two faces of the same coin. These days, our working lives as well as our public lives seem to be all about appearance.

Sources: Basecamp / Quartz at Work / The New York Times / The Guardian

The rise and rise of e-sports

I wouldn’t even have bothered clicking on this article if it weren’t for one simple fact: my son can’t get enough of this guy’s YouTube channel.

If you haven’t heard of Ninja, ask the nearest 12-year-old. He shot to fame in March after he and Drake played Fortnite, the video game phenomenon in which 100 players are dropped onto an island and battle to be the last one standing while building forts that are used to both attack and hide from opponents. At its peak, Ninja and Drake’s game, which also featured rapper Travis Scott and Pittsburgh Steelers receiver JuJu Smith-Schuster, pulled in 630,000 concurrent viewers on Twitch, Amazon’s livestreaming platform, shattering the previous record of 388,000. Since then, Ninja has achieved what no other gamer has before: mainstream fame. With 11 million Twitch followers and climbing, he commands an audience few can dream of. In April, he logged the most social media interactions in the entire sports world, beating out the likes of Cristiano Ronaldo, Shaquille O’Neal and Neymar.

This article in ESPN is testament to the work that Ninja (a.k.a. Tyler Blevins) has done in crafting a brand and putting in the hours for over a decade. It sounds gruelling:

Tyler can’t join us until he wraps up his six-hour stream. In the basement, past a well-stocked bar, a pool table and a dartboard, next to a foosball table, he sits on this sunny August day in a T-shirt and plaid pajama pants at the most famous space in their house, his gaming setup. It doesn’t look like much — a couple of screens, a fridge full of Red Bull, a mess of wires — but from this modest corner he makes millions by captivating millions.

[…]

In college, Jess [his wife] started streaming to better understand why Tyler would go hours without replying to her texts. A day in, she realized how consuming it was. “It’s physically exhausting but also mentally because you’re sitting there constantly interacting,” Tyler says. “I’m engaging a lot more senses than if I were just gaming by myself. We’re not sitting there doing nothing. I don’t think anyone gets that.”

The reason I’m sharing this here is that I’m going to use it as an example of deliberate practice.

How does he stay so good? Pro tip: Don’t just play, practice. Ninja competes in about 50 games a day, and he analyzes each and every one. He never gets tired of it, and every loss hits him hard. Hypercompetitive, he makes sure he walks away with at least one win each day. (He averages about 15 and once got 29 in a single day.)

“When I die, I get so upset,” he says. “You can play every single day, you’re not practicing. You die, and oh well, you go onto the next game. When you’re practicing, you’re taking every single match seriously, so you don’t have an excuse when you die. You’re like, ‘I should have rotated here, I should have pushed there, I should have backed off.’ A lot of people don’t do that.”

The article is worth a read, for several reasons. It shows why e-sports are going to be even bigger than regular sports for my children’s generation. It demonstrates that to get to the top in anything, you have to put in the time and effort. And, perhaps above all, it shows that, just as I’ve found, growing up spending time in front of screens can be pretty lucrative.

Source: ESPN

Audiobooks vs reading

Although I listen to a lot of podcasts (here’s my OPML file) I don’t listen to many audiobooks. That’s partly because I never feel up-to-date with my podcast listening, but also because I often read before going to sleep. It’s much more difficult to find your place again if you drift off while listening than while reading!

This article in TIME magazine (is it still a ‘magazine’?) looks at the research into whether listening to an audiobook is like reading using your eyes. Well, first off, it would seem that there’s no difference in recall of facts given a non-fiction text:

For a 2016 study, Rogowsky put her assumptions to the test. One group in her study listened to sections of Unbroken, a nonfiction book about World War II by Laura Hillenbrand, while a second group read the same parts on an e-reader. She included a third group that both read and listened at the same time. Afterward, everyone took a quiz designed to measure how well they had absorbed the material. “We found no significant differences in comprehension between reading, listening, or reading and listening simultaneously,” Rogowsky says.

However, the difficulty here is that there’s already an observed discrepancy in recall between dead-tree books and e-books. So perhaps audiobooks are as good as e-books, but neither is as good as printed matter?

There’s a really interesting point made in the article about how dead-tree books allow for a slight ‘rest’ while you’re reading:

If you’re reading, it’s pretty easy to go back and find the point at which you zoned out. It’s not so easy if you’re listening to a recording, Daniel says. Especially if you’re grappling with a complicated text, the ability to quickly backtrack and re-examine the material may aid learning, and this is likely easier to do while reading than while listening. “Turning the page of a book also gives you a slight break,” he says. This brief pause may create space for your brain to store or savor the information you’re absorbing.

This reminds me of an article on Lifehacker a few years ago that quoted a YouTuber who swears by reading a book while also listening to it:

First of all, it combines two senses…so you end up with really good comprehension while being really efficient at the same time. …Another possibly even more important benefit is…it keeps you going. So you’re not going back and rereading things, you’re not taking all kinds of unnecessary breaks and pauses, your eyes aren’t running around all the time, and you’re not getting distracted every two minutes.

Since switching to an open source e-reader, I’m no longer using the Amazon Kindle ecosystem so much these days. If I were, I’d be experimenting with their Whispersync technology, which allows you to pick up in one medium where you left off in the other — or, indeed, use both at the same time.

Source: TIME / Lifehacker

The Amazon Echo as an anatomical map of human labor, data and planetary resources

This map of what happens when you interact with a digital assistant such as the Amazon Echo is incredible. The image is taken from a lengthy piece of work which tries to draw attention to the hidden costs of using such devices.

With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user’s commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.

It’s a tour de force. Here’s another extract:

When a human engages with an Echo, or another voice-enabled AI device, they are acting as much more than just an end-product consumer. It is difficult to place the human user of an AI system into a single category: rather, they deserve to be considered as a hybrid case. Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product. This multiple identity recurs for human users in many technological systems. In the specific case of the Amazon Echo, the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.

Well worth a read, especially alongside another article in Bloomberg about what they call ‘oral literacy’ but which I referred to in my thesis as ‘oracy’:

Should the connection between the spoken word and literacy really be so alien to us? After all, starting in the 1950s, basic literacy training in elementary schools in the United States has involved ‘phonics.’ And what is phonics but a way of attaching written words to the sounds they had been or could become? The theory grew out of the belief that all those lines of text on the pages of schoolbooks had become too divorced from their sounds; phonics was intended to give new readers a chance to recognize written language as part of the world of language they already knew.

The technological landscape is reforming what it means to be literate in the 21st century. Interestingly, some of that is a kind of return to previous forms of human interaction that we used to value a lot more.

Sources: Anatomy of AI and Bloomberg

Tracking vs advertising

We tend to use words to denote something right up to the time that term becomes untenable. Someone has to invent a better one. Take mobile phones, for example. They’re literally named after the least-used app on there, so we’re crying out for a different way to refer to them. Perhaps a better name would be ‘trackers’.

These days, most people use mobile devices for social networking. Those networks are available free at the point of access, funded by what we’re currently calling ‘advertising’. However, as this author notes, it’s nothing of the sort:

What we have today is not advertising. The amount of personally identifiable information companies have about their customers is absolutely perverse. Some of the world’s largest companies are in the business of selling your personal information for use in advertising. This might sound innocuous but the tracking efforts of these companies are so accurate that many people believe that Facebook listens to their conversations to serve them relevant ads. Even if it’s true that the microphone is not used, the sum of all other data collected is still enough to show creepily relevant advertising.

Unfortunately, the author doesn’t seem to have come to the conclusion yet that it’s the logic of capitalism that got us here. Instead, he just points out that people’s privacy is being abused.

[P]eople now get most of their information from social networks yet these networks dictate the order in which content is served to the user. Google makes the worlds most popular mobile operating system and it’s purpose is drive the company’s bottom line (ad blocking is forbidden). “Smart” devices are everywhere and companies are jumping over each other to put more shit in your house so they can record your movements and sell the information to advertisers. This is all a blatant abuse of privacy that is completely toxic to society.

Agreed, and it’s easy to feel a little helpless against this onslaught. While it’s great to have a list of things that users can do, if those things are difficult to implement and/or hard to understand, then it’s an uphill battle.

That being said, the three suggestions he makes are useful:

To combat this trend, I have taken the following steps and I think others should join the movement:

  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
  • Not every device needs to be “smart”

I feel I’m already way ahead of the author in this regard:

  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
    • I quit Facebook years ago, haven’t got an Instagram account, and pretty much only post links to my own spaces on Twitter and LinkedIn.
  • Not every device needs to be “smart”
    • I don’t really use my Philips Hue lights, and don’t have an Amazon Alexa — or even the Google Assistant on my phone.

It’s not easy to stand up to Big Tech. The amount of money they pour into things makes their ‘innovations’ seem inevitable. They can afford to make things cheap and frictionless so you get hooked.

As an aside, it’s interesting to note that those who previously defended Apple as somehow ‘different’ on privacy, despite it being the world’s most profitable company, are starting to backtrack.

Source: Nicholas Rempel