✨ The World’s Oldest Story? Astronomers Say Global Myths About ‘Seven Sisters’ Stars May Reach Back 100,000 Years — “Why are the Australian Aboriginal stories so similar to the Greek ones? Anthropologists used to think Europeans might have brought the Greek story to Australia, where it was adapted by Aboriginal people for their own purposes. But the Aboriginal stories seem to be much, much older than European contact. And there was little contact between most Australian Aboriginal cultures and the rest of the world for at least 50,000 years. So why do they share the same stories?”
🚶♂️ The joy of steps: 20 ways to give purpose to your daily walk — “We need to gallivant around outside in daylight so that our circadian rhythms can regulate sleep and alertness. (Yes, even when the sky is resolutely leaden, it is still technically daylight.) Walking warms you up, too; when you get back indoors, it will feel positively tropical.”
🔐 How Law Enforcement Gets Around Your Smartphone’s Encryption — “Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools.”
😲 The Ethics of Emotion in AI Systems (Research Summary) — “There will always be a gap between the emotions modelled and the experience of EAI systems. Addressing this gap also implies recognizing the implicit norms and values integrated into these systems in ways that cannot always be foreseen by the original designers. With EAI, it is not just a matter of deciding between the right emotional models and proxy variables, but what the responses collected signify in terms of human beings’ inner feelings, judgments, and future actions.”
Quotation-as-title by Baltasar Gracián. Image from top-linked post.
I get the feeling that, between film and TV shows on Netflix, Amazon deliveries, and social interaction on Twitter and Mastodon, beyond close friends and family, no-one would even realise if I’d been quarantined.
Writing in The Atlantic, Ian Bogost points out that Every Place Is the Same Now, because you go to every place with your personal screen, a digital portal to the wider world.
Anywhere has become as good as anywhere else. The office is a suitable place for tapping out emails, but so is the bed, or the toilet. You can watch television in the den—but also in the car, or at the coffee shop, turning those spaces into impromptu theaters. Grocery shopping can be done via an app while waiting for the kids’ recital to start. Habits like these compress time, but they also transform space. Nowhere feels especially remarkable, and every place adopts the pleasures and burdens of every other. It’s possible to do so much from home, so why leave at all?
Ian Bogost (The Atlantic)
If you’re a knowledge worker, someone who deals with ideas and virtual objects rather than things in ‘meatspace’, then there is nothing tying you to a particular geographical place. This may be liberating, but it’s also quite… weird.
It’s easy but disorienting, and it makes the home into a very strange space. Until the 20th century, one had to leave the house for almost anything: to work, to eat or shop, to entertain yourself, to see other people. For decades, a family might have a single radio, then a few radios and a single television set. The possibilities available outside the home were far greater than those within its walls. But now, it’s not merely possible to do almost anything from home—it’s also the easiest option. Our forebears’ problem has been inverted: Now home is a prison of convenience that we need special help to escape.
Ian Bogost (The Atlantic)
I’ve worked from home for the last eight years, and now can’t imagine going back to working any other way. Granted, I get to travel pretty much every month, but that 95% being-at-home statistic still includes my multi-day international trips.
I haven’t watched it recently, but in 2009 a film called Surrogates starring Bruce Willis foreshadowed the kind of world we’re creating. Here’s the synopsis via IMDb:
People are living their lives remotely from the safety of their own homes via robotic surrogates — sexy, physically perfect mechanical representations of themselves. It’s an ideal world where crime, pain, fear and consequences don’t exist. When the first murder in years jolts this utopia, FBI agent Greer discovers a vast conspiracy behind the surrogate phenomenon and must abandon his own surrogate, risking his life to unravel the mystery.
If we replace the word ‘robotic’ with ‘virtual’ in this plot summary, then it’s a close approximation to the world in which some of us now live. Facetuned Instagram selfies project a perfect life. We construct our own narratives and then believe the story we have concocted. Everything is amazing but no-one’s happy.
Even Zoom, the videoconferencing software I use most days for work, has an option to smooth out wrinkles, change your background, and make everything look a bit more sparkly. Our offline lives can be gloriously mundane, but online, thanks to various digital effects, we can make them look glorious. And why wouldn’t we?
I think we’ll see people and businesses optimising for how they look and sound online, including recruitment. The ability to communicate effectively at a distance with people who you may never meet in person is a skill that’s going to be in high demand, if it isn’t already.
Remote working may be a trend, but one which is stubbornly resisted by some bosses who are convinced they have to keep a close eye on employees to get any work out of them.
However, when those bosses are forced to implement remote working policies to keep their businesses afloat, and nothing bad happens as a result, this attitude can, and probably will, change. Remote working, when done properly, is not only more cost-effective for businesses, but often leads to higher productivity and self-reported worker happiness.
Being ‘good in the room’ is fine, and I’m sure it will always be highly prized, but I also see confident, open working practices as something that’s rising in perceived value. Chairing successful online meetings is at least as important as chairing ones offline, for example. We need to think of ways to recognise these remote working skills, as they’re not something for which you can receive a diploma.
For workers, of course, there are so many benefits of working from home that I’m not even sure where to start. Your health, relationships, and happiness are just three things that are likely to dramatically improve when you start working remotely.
For example, let’s just take the commute. This dominates the lives of non-remote workers, usually taking an hour or more out of their day, every day. Commuting is tiring and inconvenient, but people are currently willing to put up with long commutes to afford a decently sized house, or to live in a nicer area.
So, let’s imagine that because of the current pandemic (which some are calling the world’s biggest remote-working experiment) businesses decide that having their workers being based from home has multi-faceted benefits. What happens next?
Well, if a large percentage (say we got up to ~50%) of the working population started working remotely over the next few months and years, this would have a knock-on effect. We’d see changes in:
…to name but a few. I think it would be a huge net benefit for society, and hopefully allow for much greater civic engagement and democratic participation.
I’ll conclude with a quotation from Nafeez Ahmed’s excellent (long!) post on what he’s calling a global phase shift. Medium says it’s a 30-minute read, but I reckon it’s about half that.
Ahmed points out in stark detail the crisis, potential future scenarios, and the opportunity we’ve got. I particularly appreciate his focus on the complete futility of what he calls “a raw, ‘fend for yourself’ approach”. We must work together to solve the world’s problems.
The coronavirus outbreak is, ultimately, a lesson in not just the inherent systemic fragilities in industrial civilization, but also the limits of its underlying paradigm. This is a paradigm premised on a specific theory of human nature, the neoclassical view of Homo-Economicus, human beings as dislocated units which compete with each other to maximise their material self-gratification through endless consumption and production. That paradigm and its values have brought us so far in our journey as a species, but they have long outlasted their usefulness and now threaten to undermine our societies, and even our survival as a species.
Getting through coronavirus will be an exercise not just in building societal resilience, but relearning the values of cooperation, compassion, generosity and kindness, and building systems which institutionalize these values. It is high time to recognize that such ethical values are not simply human constructs, products of socialization. They are cognitive categories which reflect patterns of behaviour in individuals and organizations that have an evolutionary, adaptive function. In the global phase shift, systems which fail to incorporate these values into their structures will eventually die.
Just as crises can be manufactured by totalitarian regimes to seize power and control populations, perhaps natural crises can be used to make us collectively realise we need to pull together?
Thank you to Seneca for the quotation for today’s title, which sprang to mind after reading Rosie Spinks’ claim in Quartz that we’ve reached ‘peak influencer’.
Where once the social network was basically lunch and sunsets, it’s now a parade of strategically-crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.
Thank goodness for that. The selfie-obsessed influencer brigade is an insidious effect of the neoliberalism that permeates western culture:
For the internet influencer, everything from their morning sun salutation to their coffee enema (really) is a potential money-making opportunity. Forget paying your dues, or working your way up—in fact, forget jobs. Work is life, and getting paid to live your best life is the ultimate aspiration.
“Selling out” is not just perfectly OK in the influencer economy—it’s the raison d’etre. Influencers generally do not have a craft or discipline to stay loyal to in the first place, and by definition their income comes from selling a version of themselves.
As Yascha Mounk, writing in The Atlantic, explains, the problem isn’t necessarily with social networks. It’s that you care about them. Social networks flatten everything into a never-ending stream. That stream makes it very difficult to differentiate between gossip and (for example) extremely important things that are an existential threat to democratic institutions:
“When you’re on Twitter, every controversy feels like it’s at the same level of importance,” one influential Democratic strategist told me. Over time, he found it more and more difficult to tune Twitter out: “People whose perception of reality is shaped by Twitter live in a different world and a different country than those off Twitter.”
It’s easier for me to say these days that our obsession with Twitter and Instagram is unhealthy. While I’ve never used Instagram (because it’s owned by Facebook), a decade ago I was spending hours each week on Twitter. My relationship with the service has changed as I’ve grown up and as the platform itself has changed, especially after it became a publicly-traded company in 2013.
Twitter, in particular, now feels like a neverending soap opera similar to EastEnders. There’s always some outrage or drama running. Perhaps it’s better, as Catherine Price suggests in The New York Times, just to put down our smartphones?
Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.
This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.
Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.
Depending on how we use them, social networks can stoke the worst feelings in us: emotions such as jealousy, anger, and worry. This is not conducive to healthy outcomes, especially for children, for whom stress correlates directly with the take-up of addictive substances, and with heart disease in later life.
I wonder how future generations will look back at this time period?
Emotional Intelligence: The Social Skills You Weren’t Taught in School (Lifehacker) — “People who exhibit emotional intelligence have the less obvious skills necessary to get ahead in life, such as managing conflict resolution, reading and responding to the needs of others, and keeping their own emotions from overflowing and disrupting their lives.”
I usually limit myself to three quotations in posts I write here. I’m going to break that self-imposed rule for this article by Erika Christakis in The Atlantic on parents’ screentime.
Christakis points out the good and the bad news:
Yes, parents now have more face time with their children than did almost any parents in history. Despite a dramatic increase in the percentage of women in the workforce, mothers today astoundingly spend more time caring for their children than mothers did in the 1960s. But the engagement between parent and child is increasingly low-quality, even ersatz. Parents are constantly present in their children’s lives physically, but they are less emotionally attuned.
As parents, and in society in general, we’re super-hot on limiting kids’ screentime, but we don’t necessarily apply that to ourselves:
[S]urprisingly little attention is paid to screen use by parents… who now suffer from what the technology expert Linda Stone more than 20 years ago called “continuous partial attention.” This condition is harming not just us, as Stone has argued; it is harming our children. The new parental-interaction style can interrupt an ancient emotional cueing system, whose hallmark is responsive communication, the basis of most human learning. We’re in uncharted territory.
‘Continuous partial attention’ is the term people tend to use these days instead of ‘multitasking’. To my mind it’s a better term, as it captures the fact that you’re not just trying to do different things simultaneously; you’re trying to pay attention to them all at once.
I’ve given the example before of my father sitting down to read the newspaper on a Sunday. Is there really much difference to the child, I’ve wondered, between his being hidden behind a broadsheet for an hour, and his scrolling and clicking on a mobile device? In some ways yes, in some ways no.
It has never been easy to balance adults’ and children’s needs, much less their desires, and it’s naive to imagine that children could ever be the unwavering center of parental attention. Parents have always left kids to entertain themselves at times—“messing about in boats,” in a memorable phrase from The Wind in the Willows, or just lounging aimlessly in playpens. In some respects, 21st-century children’s screen time is not very different from the mother’s helpers every generation of adults has relied on to keep children occupied. When parents lack playpens, real or proverbial, mayhem is rarely far behind. Caroline Fraser’s recent biography of Laura Ingalls Wilder, the author of Little House on the Prairie, describes the exceptionally ad hoc parenting style of 19th-century frontier parents, who stashed babies on the open doors of ovens for warmth and otherwise left them vulnerable to “all manner of accidents as their mothers tried to cope with competing responsibilities.” Wilder herself recounted a variety of near-calamities with her young daughter, Rose; at one point she looked up from her chores to see a pair of riding ponies leaping over the toddler’s head.
To me, the difference can be summed up quite easily: our mobile devices are designed to be addictive and capture our full attention, in ways that analogue media and experiences aren’t.
Short, deliberate separations can of course be harmless, even healthy, for parent and child alike (especially as children get older and require more independence). But that sort of separation is different from the inattention that occurs when a parent is with a child but communicating through his or her nonengagement that the child is less valuable than an email. A mother telling kids to go out and play, a father saying he needs to concentrate on a chore for the next half hour—these are entirely reasonable responses to the competing demands of adult life. What’s going on today, however, is the rise of unpredictable care, governed by the beeps and enticements of smartphones. We seem to have stumbled into the worst model of parenting imaginable—always present physically, thereby blocking children’s autonomy, yet only fitfully present emotionally.
Physically present but emotionally unavailable. Yes, we need to do better.
Under the circumstances, it’s easier to focus our anxieties on our children’s screen time than to pack up our own devices. I understand this tendency all too well. In addition to my roles as a mother and a foster parent, I am the maternal guardian of a middle-aged, overweight dachshund. Being middle-aged and overweight myself, I’d much rather obsess over my dog’s caloric intake, restricting him to a grim diet of fibrous kibble, than address my own food regimen and relinquish (heaven forbid) my morning cinnamon bun. Psychologically speaking, this is a classic case of projection—the defensive displacement of one’s failings onto relatively blameless others. Where screen time is concerned, most of us need to do a lot less projecting.
To help you avoid finding out the horrifying truth when your phone goes clattering to the ground, we tested all of the major smartphones by dropping them over the course of four rounds from 4 feet and 6 feet onto wood and concrete — and even into a toilet — to see which handset is the toughest.
Apple’s least-expensive phone didn’t prove very tough at all. In fact, the $399 iPhone SE was rendered unusable before all of the others. However, this was not a big surprise, as the newer iPhone 8 and iPhone X are made with much stronger glass than the iPhone SE’s from 2016.
In all, the team identified 2,121 trackers — 233 of which were previously unknown to popular advertising and tracking blacklists. These trackers collected personal data like Android IDs, phone numbers, device fingerprints, and MAC addresses.
The full report is linked in the quotation above, but the high-level findings were:
» Most trackers are owned by just a few parent organizations. The authors report that sixteen of the 20 most pervasive trackers are owned by Alphabet. Other parent organizations include Facebook and Verizon. “There is a clear oligopoly happening in the ecosystem,” Nithyanand says.
» Mobile games and educational apps are the two categories with the highest number of trackers. Users of news and entertainment apps are also exposed to a wide range of trackers. In a separate paper co-authored by Vallina-Rodriguez, he explores the intersection of mobile tracking and apps for youngsters: “Is Our Children’s Apps Learning?”
» Cross-device tracking is widespread. The vast majority of mobile trackers are also active on the desktop web, allowing companies to link together personal data produced in both ecosystems. “Cross-platform tracking is already happening everywhere,” Nithyanand says. “Fifteen of the top 20 organizations active in the mobile advertising space also have a presence in the web advertising space.”
We’re finally getting to the stage where a large portion of the population can’t really ignore the fact that they’re using free services in return for pervasive and always-on surveillance.
Audrey Watters answers the question whether we’re ‘addicted’ to technology:
I am hesitant to make any clinical diagnosis about technology and addiction – I’m not a medical professional. But I’ll readily make some cultural observations, first and foremost, about how our notions of “addiction” have changed over time. “Addiction” is a medical concept but it’s also a cultural one, and it’s long been one tied up in condemning addicts for some sort of moral failure. That is to say, we have labeled certain behaviors as “addictive” when they involve things society doesn’t condone. Watching TV. Using opium. Reading novels. And I think some of what we hear in discussions today about technology usage – particularly about usage among children and teens – is that we don’t like how people act with their phones. They’re on them all the time. They don’t make eye contact. They don’t talk at the dinner table. They eat while staring at their phones. They sleep with their phones. They’re constantly checking them.
The problem is that our devices are designed to be addictive, much like casinos. The apps on our phones are designed to increase certain metrics:
I think we’re starting to realize – or I hope we’re starting to realize – that those metrics might conflict with other values. Privacy, sure. But also etiquette. Autonomy. Personal agency. Free will.
Ultimately, she thinks, this isn’t a question of addiction. It’s much wider than that:
How are our minds – our sense of well-being, our knowledge of the world – being shaped and mis-shaped by technology? Is “addiction” really the right framework for this discussion? What steps are we going to take to resist the nudges of the tech industry – individually and socially and yes maybe even politically?