Tag: New York Times

To be perfectly symmetrical is to be perfectly dead

So said Igor Stravinsky. I’m a little behind on my writing, and prioritised writing up my experiences in the Lake District over the past couple of days.

Today’s update is therefore a list post:

  • Degrowth: a Call for Radical Abundance (Jason Hickel) — “In other words, the birth of capitalism required the creation of scarcity. The constant creation of scarcity is the engine of the juggernaut.”
  • Hey, You Left Something Out (Cogito, Ergo Sumana) — “People who want to compliment work should probably learn to give compliments that sound encouraging.”
  • The Problem is Capitalism (George Monbiot) — “A system based on perpetual growth cannot function without peripheries and externalities. There must always be an extraction zone, from which materials are taken without full payment, and a disposal zone, where costs are dumped in the form of waste and pollution.”
  • In Stores, Secret Surveillance Tracks Your Every Move (The New York Times) — “For years, Apple and Google have allowed companies to bury surveillance features inside the apps offered in their app stores. And both companies conduct their own beacon surveillance through iOS and Android.”
  • The Inevitable Same-ification of the Internet (Matthew Ström) — “Convergence is not the sign of a broken system, or a symptom of a more insidious disease. It is an emergent phenomenon that arises from a few simple rules.”


Friday fumblings

These were the things I came across this week that made me smile:


Image via Why WhatsApp Will Never Be Secure (Pavel Durov)

Fascinating Friday Facts

Here are some links I thought I’d share which struck me as interesting:


Header image: Keep out! The 100m² countries – in pictures (The Guardian)

Sometimes even to live is an act of courage

Thank you to Seneca for the quotation for today’s title, which sprang to mind after reading Rosie Spinks’ claim in Quartz that we’ve reached ‘peak influencer’.

Where once the social network was basically lunch and sunsets, it’s now a parade of strategically-crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.

Thank goodness for that. The selfie-obsessed influencer brigade is an insidious effect of the neoliberalism that permeates western culture:

For the internet influencer, everything from their morning sun salutation to their coffee enema (really) is a potential money-making opportunity. Forget paying your dues, or working your way up—in fact, forget jobs. Work is life, and getting paid to live your best life is the ultimate aspiration.

[…]

“Selling out” is not just perfectly OK in the influencer economy—it’s the raison d’etre. Influencers generally do not have a craft or discipline to stay loyal to in the first place, and by definition their income comes from selling a version of themselves.

As Yascha Mounk, writing in The Atlantic, explains, the problem isn’t necessarily with social networks; it’s that you care about them. Social networks flatten everything into a never-ending stream. That stream makes it very difficult to differentiate between gossip and (for example) extremely important things that are an existential threat to democratic institutions:

“When you’re on Twitter, every controversy feels like it’s at the same level of importance,” one influential Democratic strategist told me. Over time, he found it more and more difficult to tune Twitter out: “People whose perception of reality is shaped by Twitter live in a different world and a different country than those off Twitter.”

It’s easier for me to say these days that our obsession with Twitter and Instagram is unhealthy. While I’ve never used Instagram (because it’s owned by Facebook), a decade ago I was spending hours each week on Twitter. My relationship with the service has changed as I’ve grown up and as the service itself has changed — especially after it became a publicly-traded company in 2013.

Twitter, in particular, now feels like a never-ending soap opera similar to EastEnders. There’s always some outrage or drama running. Perhaps it’s better, as Catherine Price suggests in The New York Times, just to put down our smartphones?

Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.

This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.

Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.

Depending on how we use them, social networks can stoke the worst feelings in us: emotions such as jealousy, anger, and worry. This is not conducive to healthy outcomes, especially for children, for whom stress correlates directly with the take-up of addictive substances, and with heart disease in later life.

I wonder how future generations will look back at this period.


Also check out:

Let’s (not) let children get bored again

Is boredom a good thing? Is there a direct link between having nothing to do and being creative? I’m not sure. Pamela Paul, writing in The New York Times, certainly thinks so:

[B]oredom is something to experience rather than hastily swipe away. And not as some kind of cruel Victorian conditioning, recommended because it’s awful and toughens you up. Despite the lesson most adults learned growing up — boredom is for boring people — boredom is useful. It’s good for you.

Paul doesn’t give any evidence beyond anecdote for boredom being ‘good for you’. She gives a post hoc argument stating that because someone’s creative life came after (what they remembered as) a childhood punctuated by boredom, the boredom must have caused the creativity.

I don’t think that’s true at all. You need space to be creative, but that space isn’t physical, it’s mental. You can carve it out in any situation, whether that’s while watching a TV programme or staring out of a window.

For me, the elephant in the room here is the art of parenting. Not a week goes by without the media beating up parents for not doing a good enough job. This is particularly true of the bizarre concept of ‘screentime’ (something that Ian O’Byrne and Kristen Turner are investigating as part of a new project).

In the article, Paul admits that previous generations ‘underparented’. However, she then creates a false dichotomy between that and ‘relentless’ modern helicopter parenting. Where’s the happy medium that most of us inhabit?

Only a few short decades ago, during the lost age of underparenting, grown-ups thought a certain amount of boredom was appropriate. And children came to appreciate their empty agendas. In an interview with GQ magazine, Lin-Manuel Miranda credited his unattended afternoons with fostering inspiration. “Because there is nothing better to spur creativity than a blank page or an empty bedroom,” he said.

Nowadays, subjecting a child to such inactivity is viewed as a dereliction of parental duty. In a much-read story in The Times, “The Relentlessness of Modern Parenting,” Claire Cain Miller cited a recent study that found that regardless of class, income or race, parents believed that “children who were bored after school should be enrolled in extracurricular activities, and that parents who were busy should stop their task and draw with their children if asked.”

So parents who provide for their children by enrolling them in classes and activities to explore and develop their talents are somehow doing them a disservice? I don’t get it. Fair enough if they’re forcing them into those activities, but I don’t know too many parents who are doing that.

Ultimately, Paul and I have very different expectations and experiences of adult life. I don’t expect to be bored whether at work or out of it. There’s so much to do in the world, online and offline, that I don’t particularly get the fetishisation of boredom. To me, as soon as someone uses the word ‘realistic’, they’ve lost the argument:

But surely teaching children to endure boredom rather than ratcheting up the entertainment will prepare them for a more realistic future, one that doesn’t raise false expectations of what work or life itself actually entails. One day, even in a job they otherwise love, our kids may have to spend an entire day answering Friday’s leftover email. They may have to check spreadsheets. Or assist robots at a vast internet-ready warehouse.

This sounds boring, you might conclude. It sounds like work, and it sounds like life. Perhaps we should get used to it again, and use it to our benefit. Perhaps in an incessant, up-the-ante world, we could do with a little less excitement.

No, perhaps we should make work more engaging, and provide more than bullshit jobs. Perhaps we should seek out interesting things ourselves, so that our children do likewise?

Source: The New York Times

Exit option democracy

This week saw the launch of a new book by Shoshana Zuboff entitled The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. It was featured in two of my favourite newspapers, The Observer and The New York Times, and is the kind of book I would have lapped up this time last year.

In 2019, though, I’m being a bit more pragmatic, taking heed of Stoic advice to focus on the things that you can change. Chiefly, that’s your own perceptions about the world. I can’t change the fact that, despite the Snowden revelations and everything that has come afterwards, most people don’t care one bit that they’re trading privacy for convenience.

That puts those who care about privacy in a bit of a predicament. You can use the most privacy-respecting email service in the world, but as soon as you communicate with someone using Gmail, then Google has got the entire conversation. Chances are, the organisation you work for has ‘gone Google’ too.

Then there are Facebook shadow profiles. You don’t even have to have an account on that platform for the company behind it to know all about you. The same goes for companies knowing who’s in your friendship group if your friends upload their contacts to WhatsApp. It makes no difference if you use ridiculous third-party gadgets or not.

In short, if you want to live in modern society, your privacy depends on your family and friends. Of course you have the option to choose not to participate in certain platforms (I don’t use Facebook products), but that comes at a significant cost. It’s the digital equivalent of Thoreau taking himself off to Walden Pond.

In a post from last month that I stumbled across this weekend, Nate Matias reflects on a talk he attended by Janet Vertesi at Princeton University’s Center for Information Technology Policy. Vertesi, says Matias, tried four different ways of opting out of technology companies gathering data on her:

  • Platform avoidance
  • Infrastructural avoidance
  • Hardware experiments
  • Digital homesteading

Interestingly, the starting point is Vertesi’s rejection of ‘exit option democracy’:

The basic assumption of markets is that people have choices. This idea that “you can just vote with your feet” is called an “exit option democracy” in organizational sociology (Weeks, 2004). Opt-out democracy is not really much of a democracy, says Janet. She should know–she’s been opting out of tech products for years.

The option Vertesi advocates for going Google-free is a pain in the backside. I know, because I’ve tried it:

To prevent Google from accessing her data, Janet practices “data balkanization,” spreading her traces across multiple systems. She’s used DuckDuckGo, sandstorm.io, ResilioSync, and youtube-dl to access key services. She’s used other services occasionally and non-exclusively, and varied it with open source alternatives like etherpad and open street map. It’s also important to pay attention to who is talking to whom and sharing data with whom. Data balkanization relies on knowing what companies hate each other and who’s about to get in bed with whom.

The time I’ve spent doing these things was time I was not being productive, nor was it time I was spending with my wife and kids. It’s easy to roll your eyes at people “trading privacy for convenience” but it all adds up.

Talking of family, straying too far from societal norms has, for better or worse, negative consequences. Just as Linux users were targeted for surveillance, so Vertesi and her husband were suspected of fraud for browsing the web using Tor and using cash for transactions:

Trying to de-link your identity from data storage has consequences. For example, when Janet and her husband tried to use cash for their purchases, they faced risks of being reported to the authorities for fraud, even though their actions were legal.

And then, of course, there are the tinfoil-hat options:

…Janet used parts from electronics kits to make her own 2g phone. After making the phone Janet quickly realized even a privacy-protecting phone can’t connect to the network without identifying the user to companies through the network itself.

I’m rolling my eyes at this point. The farthest I’ve gone down this route is using the now-defunct Firefox OS and LineageOS for microG. Although both had their upsides, they were too annoying to use for extended periods of time.

Finally, Vertesi goes down the route of trying to own all your own data. I’ll just point out that there’s a reason those of us who had huge CD and MP3 collections switched to Spotify. Looking after any collection takes time and effort. It’s also a lot more cost effective for someone like me to ‘rent’ my music instead of own it. The same goes for Netflix.

What I do accept, though, is that Vertesi’s findings show that the ‘exit option’ isn’t really available here, so the world of technology isn’t really democratic. My takeaway from all this, and the reason for my pragmatic approach this year, is that it’s up to governments to do something about all this.

Western society teaches us that empowered individuals can change the world. But if you take a closer look, whether it’s surveillance capitalism or climate change, it’s legislation that’s going to make the biggest difference here. Just look at the shift that took place because of GDPR.

So whether or not I read Zuboff’s new book, I’m going to continue my pragmatic approach this year. Meanwhile, I’ll continue to mute the microphone on the smart speakers in our house when they’re not being used, block trackers on my Android smartphone, and continue my monthly donations to the work of the Electronic Frontier Foundation and the Open Rights Group.

Source: J. Nathan Matias

The quixotic fools of imperialism

As an historian with an understanding of our country’s influence on the world over the last few hundred years, I look back at the British Empire with a sense of shame, not of pride.

But, even if you do flag-wave and talk about our nation’s glorious past, an article in yesterday’s New York Times shows how far we’ve fallen:

The Brexiteers, pursuing a fantasy of imperial-era strength and self-sufficiency, have repeatedly revealed their hubris, mulishness and ineptitude over the past two years. Though originally a “Remainer,” Prime Minister Theresa May has matched their arrogant obduracy, imposing a patently unworkable timetable of two years on Brexit and laying down red lines that undermined negotiations with Brussels and doomed her deal to resoundingly bipartisan rejection this week in Parliament.

I think I’d forgotten how useful the word mendacious is in this context (“lying, untruthful”):

From David Cameron, who recklessly gambled his country’s future on a referendum in order to isolate some whingers in his Conservative party, to the opportunistic Boris Johnson, who jumped on the Brexit bandwagon to secure the prime ministerial chair once warmed by his role model Winston Churchill, and the top-hatted, theatrically retro Jacob Rees-Mogg, whose fund management company has set up an office within the European Union even as he vehemently scorns it, the British political class has offered to the world an astounding spectacle of mendacious, intellectually limited hustlers.

When leaving countries after their imperialist adventures, members of the British ruling elite were fond of drawing arbitrary lines of partition. Cases in point: India, Ireland, the Middle East. That this doesn’t work is blatantly obvious, and it’s a lazy way to deal with complex issues.

It is a measure of English Brexiteers’ political acumen that they were initially oblivious to the volatile Irish question and contemptuous of the Scottish one. Ireland was cynically partitioned to ensure that Protestant settlers outnumber native Catholics in one part of the country. The division provoked decades of violence and consumed thousands of lives. It was partly healed in 1998, when a peace agreement removed the need for security checks along the British-imposed partition line.

I’d love to think that we’re nearing the end of what the Times calls ‘chumocracy’ and no longer have to suffer what Hannah Arendt called “the quixotic fools of imperialism”. We can but hope.

 

The endless Black Friday of the soul

This article by Ruth Whippman appears in The New York Times, so it focuses on the US, but the main thrust is applicable on a global scale:

When we think “gig economy,” we tend to picture an Uber driver or a TaskRabbit tasker rather than a lawyer or a doctor, but in reality, this scrappy economic model — grubbing around for work, all big dreams and bad health insurance — will soon catch up with the bulk of America’s middle class.

Apparently, 94% of the jobs created in the last decade are freelance or contract positions. That’s the trajectory we’re on.

Almost everyone I know now has some kind of hustle, whether job, hobby, or side or vanity project. Share my blog post, buy my book, click on my link, follow me on Instagram, visit my Etsy shop, donate to my Kickstarter, crowdfund my heart surgery. It’s as though we are all working in Walmart on an endless Black Friday of the soul.

[…]

Kudos to whichever neoliberal masterminds came up with this system. They sell this infinitely seductive torture to us as “flexible working” or “being the C.E.O. of You!” and we jump at it, salivating, because on its best days, the freelance life really can be all of that.

I don’t think this is a neoliberal conspiracy, it’s just the logic of capitalism seeping into every area of society. As we all jockey for position in the new-ish landscape of social media, everything becomes mediated by the market.

What I think’s missing from this piece, though, is a longer-term trend towards working less. We seem to be endlessly concerned about how the nature of work is changing rather than the huge opportunities for us to do more than waste away in bullshit jobs.

I’ve been advising anyone who’ll listen over the last few years that reducing the number of days you work has a greater impact on your happiness than earning more money. Once you reach a reasonable salary, there are diminishing returns in any case.

Source: The New York Times (via Dense Discovery)

Insidious Instagram influencers?

There seems to be a lot of pushback at the moment against the kind of lifestyle that’s a direct result of the Silicon Valley mindset. People are rejecting everything from the Instagram ‘influencer’ approach to life to the ‘techbro’-style crazy working hours.

This week saw Basecamp, a company that prides itself on the work/life balance of its employees and on rejecting venture capital, publish another book. You can guess at what it focuses on from its title, It Doesn’t Have to Be Crazy at Work. I’ve enjoyed and have recommended their previous books (written as ‘37signals’), and am looking forward to reading this latest one.

Alongside that book, I’ve seen three articles that, to me at least, are all related to the same underlying issues. The first comes from Simone Stolzoff who writes in Quartz at Work that we’re no longer quite sure what we’re working for:

Before I became a journalist, I worked in an office with hot breakfast in the mornings and yoga in the evenings. I was #blessed. But I would reflect on certain weeks—after a string of days where I was lured in before 8am and stayed until well after sunset—like a driver on the highway who can’t remember the last five miles of road. My life had become my work. And my work had become a series of rinse-and-repeat days that started to feel indistinguishable from one another.

Part of this lack of work/life balance comes from our inability these days to simply have hobbies, or interests, or do anything just for the sake of it. As Tim Wu points out in The New York Times, it’s all linked to some kind of existential issue around identity:

If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following. When your identity is linked to your hobby — you’re a yogi, a surfer, a rock climber — you’d better be good at it, or else who are you?

To me, this is inextricably linked to George Monbiot’s recent piece in The Guardian about the problem of actors being interviewed about the world’s issues disproportionately more often than anybody else. As a result, we’re rewarding those people who look like they know what they’re talking about with our collective attention, rather than those who actually do. Monbiot concludes:

The task of all citizens is to understand what we are seeing. The world as portrayed is not the world as it is. The personification of complex issues confuses and misdirects us, ensuring that we struggle to comprehend and respond to our predicaments. This, it seems, is often the point.

There’s always been a difference between appearance and reality in public life. However, previously, at least they seem to have been two faces of the same coin. These days, our working lives as well as our public lives seem to be all about appearances.

Sources: Basecamp / Quartz at Work / The New York Times / The Guardian

 

When we eat matters

As I get older, I’m more aware that some things I do are very affected by the world around me. For example, since finding out that the intensity of light you experience during the day is correlated with the amount of sleep you get, I don’t feel so bad about ‘sleeping in’ during the summer months.

So it shouldn’t be surprising that this article in The New York Times suggests that there’s a good and a bad time to eat:

A growing body of research suggests that our bodies function optimally when we align our eating patterns with our circadian rhythms, the innate 24-hour cycles that tell our bodies when to wake up, when to eat and when to fall asleep. Studies show that chronically disrupting this rhythm — by eating late meals or nibbling on midnight snacks, for example — could be a recipe for weight gain and metabolic trouble.

A more promising approach is what some call ‘intermittent fasting’, where you restrict your calorific intake to eight hours of the day and don’t consume anything other than water for the other 16 hours.

This approach, known as early time-restricted feeding, stems from the idea that human metabolism follows a daily rhythm, with our hormones, enzymes and digestive systems primed for food intake in the morning and afternoon. Many people, however, snack and graze from roughly the time they wake up until shortly before they go to bed. Dr. Panda has found in his research that the average person eats over a 15-hour or longer period each day, starting with something like milk and coffee shortly after rising and ending with a glass of wine, a late night meal or a handful of chips, nuts or some other snack shortly before bed.

That pattern of eating, he says, conflicts with our biological rhythms.

So when should we eat? As early as possible in the day, it would seem:

Most of the evidence in humans suggests that consuming the bulk of your food earlier in the day is better for your health, said Dr. Courtney Peterson, an assistant professor in the department of nutrition sciences at the University of Alabama at Birmingham. Dozens of studies demonstrate that blood sugar control is best in the morning and at its worst in the evening. We burn more calories and digest food more efficiently in the morning as well.

That’s not great news for me. After a protein smoothie in the morning and eggs for lunch, I end up eating most of my calories in the evening. I’m going to have to rethink my regime…

Source: The New York Times