Tag: New York Times

Let’s (not) let children get bored again

Is boredom a good thing? Is there a direct link between having nothing to do and being creative? I’m not sure. Pamela Paul, writing in The New York Times, certainly thinks so:

[B]oredom is something to experience rather than hastily swipe away. And not as some kind of cruel Victorian conditioning, recommended because it’s awful and toughens you up. Despite the lesson most adults learned growing up — boredom is for boring people — boredom is useful. It’s good for you.

Paul doesn’t give any evidence beyond anecdote for boredom being ‘good for you’. She gives a post hoc argument stating that because someone’s creative life came after (what they remembered as) a childhood punctuated by boredom, the boredom must have caused the creativity.

I don’t think that’s true at all. You need space to be creative, but that space isn’t physical, it’s mental. You can carve it out in any situation, whether that’s while watching a TV programme or staring out of a window.

For me, the elephant in the room here is the art of parenting. Not a week goes by without the media beating up parents for not doing a good enough job. This is particularly true of the bizarre concept of ‘screentime’ (something that Ian O’Byrne and Kristen Turner are investigating as part of a new project).

Paul admits that previous generations ‘underparented’. However, she then creates a false dichotomy between that and the ‘relentless’ helicopter parenting of today. Where’s the happy medium that most of us inhabit?

Only a few short decades ago, during the lost age of underparenting, grown-ups thought a certain amount of boredom was appropriate. And children came to appreciate their empty agendas. In an interview with GQ magazine, Lin-Manuel Miranda credited his unattended afternoons with fostering inspiration. “Because there is nothing better to spur creativity than a blank page or an empty bedroom,” he said.

Nowadays, subjecting a child to such inactivity is viewed as a dereliction of parental duty. In a much-read story in The Times, “The Relentlessness of Modern Parenting,” Claire Cain Miller cited a recent study that found that regardless of class, income or race, parents believed that “children who were bored after school should be enrolled in extracurricular activities, and that parents who were busy should stop their task and draw with their children if asked.”

So parents who provide for their children by enrolling them in classes and activities to explore and develop their talents are somehow doing them a disservice? I don’t get it. Fair enough if they’re forcing them into those activities, but I don’t know too many parents who are doing that.

Ultimately, Paul and I have very different expectations and experiences of adult life. I don’t expect to be bored whether at work or out of it. There’s so much to do in the world, online and offline, that I don’t particularly get the fetishisation of boredom. To me, as soon as someone uses the word ‘realistic’, they’ve lost the argument:

But surely teaching children to endure boredom rather than ratcheting up the entertainment will prepare them for a more realistic future, one that doesn’t raise false expectations of what work or life itself actually entails. One day, even in a job they otherwise love, our kids may have to spend an entire day answering Friday’s leftover email. They may have to check spreadsheets. Or assist robots at a vast internet-ready warehouse.

This sounds boring, you might conclude. It sounds like work, and it sounds like life. Perhaps we should get used to it again, and use it to our benefit. Perhaps in an incessant, up-the-ante world, we could do with a little less excitement.

No, perhaps we should make work more engaging, and provide more than bullshit jobs. Perhaps we should seek out interesting things ourselves, so that our children do likewise?

Source: The New York Times

Exit option democracy

This week saw the launch of a new book by Shoshana Zuboff entitled The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. It was featured in two of my favourite newspapers, The Observer and The New York Times, and is the kind of book I would have lapped up this time last year.

In 2019, though, I’m being a bit more pragmatic, taking heed of Stoic advice to focus on the things that you can change. Chiefly, that’s your own perceptions about the world. I can’t change the fact that, despite the Snowden revelations and everything that has come afterwards, most people don’t care one bit that they’re trading privacy for convenience.

That puts those who care about privacy in a bit of a predicament. You can use the most privacy-respecting email service in the world, but as soon as you communicate with someone using Gmail, then Google has got the entire conversation. Chances are, the organisation you work for has ‘gone Google’ too.

Then there are Facebook’s shadow profiles. You don’t even have to have an account on that platform for the company behind it to know all about you. The same goes for companies knowing who’s in your friendship group if your friends upload their contacts to WhatsApp. It makes no difference if you use ridiculous third-party gadgets or not.

In short, if you want to live in modern society, your privacy depends on your family and friends. Of course, you can choose not to participate in certain platforms (I don’t use Facebook products), but that comes at a significant cost. It’s the digital equivalent of Thoreau taking himself off to Walden Pond.

In a post from last month that I stumbled across this weekend, Nate Matias reflects on a talk by Janet Vertesi that he attended at Princeton University’s Center for Information Technology Policy. Vertesi, says Matias, has tried four different ways of opting out of technology companies gathering data on her:

  • Platform avoidance
  • Infrastructural avoidance
  • Hardware experiments
  • Digital homesteading

Interestingly, the starting point is Vertesi’s rejection of ‘exit option democracy’:

The basic assumption of markets is that people have choices. This idea that “you can just vote with your feet” is called an “exit option democracy” in organizational sociology (Weeks, 2004). Opt-out democracy is not really much of a democracy, says Janet. She should know–she’s been opting out of tech products for years.

The option Vertesi advocates for going Google-free is a pain in the backside. I know, because I’ve tried it:

To prevent Google from accessing her data, Janet practices “data balkanization,” spreading her traces across multiple systems. She’s used DuckDuckGo, sandstorm.io, ResilioSync, and youtube-dl to access key services. She’s used other services occasionally and non-exclusively, and varied it with open source alternatives like etherpad and open street map. It’s also important to pay attention to who is talking to whom and sharing data with whom. Data balkanization relies on knowing what companies hate each other and who’s about to get in bed with whom.

The time I’ve spent doing these things was time I was not being productive, nor was it time I was spending with my wife and kids. It’s easy to roll your eyes at people “trading privacy for convenience” but it all adds up.

Talking of family, straying too far from societal norms has, for better or worse, negative consequences. Just as Linux users were targeted for surveillance, so Vertesi and her husband were suspected of fraud for browsing the web using Tor and using cash for transactions:

Trying to de-link your identity from data storage has consequences. For example, when Janet and her husband tried to use cash for their purchases, they faced risks of being reported to the authorities for fraud, even though their actions were legal.

And then, of course, there are the tinfoil-hat options:

…Janet used parts from electronics kits to make her own 2g phone. After making the phone Janet quickly realized even a privacy-protecting phone can’t connect to the network without identifying the user to companies through the network itself.

I’m rolling my eyes at this point. The farthest I’ve gone down this route is using the now-defunct Firefox OS and LineageOS for microG. Although both had their upsides, they were too annoying to use for extended periods of time.

Finally, Vertesi goes down the route of trying to own all of your own data. I’ll just point out that there’s a reason those of us who had huge CD and MP3 collections switched to Spotify: looking after any collection takes time and effort. It’s also a lot more cost-effective for someone like me to ‘rent’ my music instead of owning it. The same goes for Netflix.

What I do accept, though, is that Vertesi’s findings show that the ‘exit option’ isn’t really available here, so the world of technology isn’t really democratic. My takeaway from all this, and the reason for my pragmatic approach this year, is that it’s up to governments to do something about it.

Western society teaches us that empowered individuals can change the world. But if you take a closer look, whether it’s surveillance capitalism or climate change, it’s legislation that’s going to make the biggest difference here. Just look at the shift that took place because of GDPR.

So whether or not I read Zuboff’s new book, I’m going to continue my pragmatic approach this year. Meanwhile, I’ll continue to mute the microphone on the smart speakers in our house when they’re not being used, block trackers on my Android smartphone, and make my monthly donations to the work of the Electronic Frontier Foundation and the Open Rights Group.

Source: J. Nathan Matias

The quixotic fools of imperialism

As an historian with an understanding of our country’s influence on the world over the last few hundred years, I look back at the British Empire with a sense of shame, not of pride.

But, even if you do flag-wave and talk about our nation’s glorious past, an article in yesterday’s New York Times shows how far we’ve fallen:

The Brexiteers, pursuing a fantasy of imperial-era strength and self-sufficiency, have repeatedly revealed their hubris, mulishness and ineptitude over the past two years. Though originally a “Remainer,” Prime Minister Theresa May has matched their arrogant obduracy, imposing a patently unworkable timetable of two years on Brexit and laying down red lines that undermined negotiations with Brussels and doomed her deal to resoundingly bipartisan rejection this week in Parliament.

I think I’d forgotten how useful the word mendacious is in this context (“lying, untruthful”):

From David Cameron, who recklessly gambled his country’s future on a referendum in order to isolate some whingers in his Conservative party, to the opportunistic Boris Johnson, who jumped on the Brexit bandwagon to secure the prime ministerial chair once warmed by his role model Winston Churchill, and the top-hatted, theatrically retro Jacob Rees-Mogg, whose fund management company has set up an office within the European Union even as he vehemently scorns it, the British political class has offered to the world an astounding spectacle of mendacious, intellectually limited hustlers.

When leaving countries at the end of their imperialist adventures, members of the British ruling elite were fond of dividing them with arbitrary lines. Cases in point: India, Ireland, the Middle East. That this doesn’t work is blatantly obvious, and it’s a lazy way to deal with complex issues.

It is a measure of English Brexiteers’ political acumen that they were initially oblivious to the volatile Irish question and contemptuous of the Scottish one. Ireland was cynically partitioned to ensure that Protestant settlers outnumber native Catholics in one part of the country. The division provoked decades of violence and consumed thousands of lives. It was partly healed in 1998, when a peace agreement removed the need for security checks along the British-imposed partition line.

I’d love to think that we’re nearing the end of what the Times calls ‘chumocracy’ and no longer have to suffer what Hannah Arendt called “the quixotic fools of imperialism”. We can but hope.

Source: The New York Times

The endless Black Friday of the soul

This article by Ruth Whippman appears in The New York Times, so it focuses on the US, but the main thrust is applicable on a global scale:

When we think “gig economy,” we tend to picture an Uber driver or a TaskRabbit tasker rather than a lawyer or a doctor, but in reality, this scrappy economic model — grubbing around for work, all big dreams and bad health insurance — will soon catch up with the bulk of America’s middle class.

Apparently, 94% of the jobs created in the last decade were freelance or contract positions. That’s the trajectory we’re on.

Almost everyone I know now has some kind of hustle, whether job, hobby, or side or vanity project. Share my blog post, buy my book, click on my link, follow me on Instagram, visit my Etsy shop, donate to my Kickstarter, crowdfund my heart surgery. It’s as though we are all working in Walmart on an endless Black Friday of the soul.

[…]

Kudos to whichever neoliberal masterminds came up with this system. They sell this infinitely seductive torture to us as “flexible working” or “being the C.E.O. of You!” and we jump at it, salivating, because on its best days, the freelance life really can be all of that.

I don’t think this is a neoliberal conspiracy; it’s just the logic of capitalism seeping into every area of society. As we all jockey for position in the new-ish landscape of social media, everything becomes mediated by the market.

What I think’s missing from this piece, though, is a longer-term trend towards working less. We seem to be endlessly concerned about how the nature of work is changing rather than the huge opportunities for us to do more than waste away in bullshit jobs.

I’ve been advising anyone who’ll listen over the last few years that reducing the number of days you work has a greater impact on your happiness than earning more money. Once you reach a reasonable salary, there are diminishing returns in any case.

Source: The New York Times (via Dense Discovery)

Insidious Instagram influencers?

There seems to be a lot of pushback at the moment against the kind of lifestyle that’s a direct result of the Silicon Valley mindset. People are rejecting everything from the Instagram ‘influencer’ approach to life to ‘techbro’-style crazy working hours.

This week saw Basecamp, a company that prides itself on the work/life balance of its employees and on rejecting venture capital, publish another book. You can guess what it focuses on from its title, It Doesn’t Have to Be Crazy at Work. I’ve enjoyed and have recommended their previous books (written as ‘37signals’), and am looking forward to reading this latest one.

Alongside that book, I’ve seen three articles that, to me at least, are all related to the same underlying issues. The first comes from Simone Stolzoff who writes in Quartz at Work that we’re no longer quite sure what we’re working for:

Before I became a journalist, I worked in an office with hot breakfast in the mornings and yoga in the evenings. I was #blessed. But I would reflect on certain weeks—after a string of days where I was lured in before 8am and stayed until well after sunset—like a driver on the highway who can’t remember the last five miles of road. My life had become my work. And my work had become a series of rinse-and-repeat days that started to feel indistinguishable from one another.

Part of this lack of work/life balance comes from our inability these days to simply have hobbies, or interests, or do anything just for the sake of it. As Tim Wu points out in The New York Times, it’s all linked to some kind of existential issue around identity:

If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following. When your identity is linked to your hobby — you’re a yogi, a surfer, a rock climber — you’d better be good at it, or else who are you?

To me, this is inextricably linked to George Monbiot’s recent piece in The Guardian about the problem of actors being interviewed about the world’s issues disproportionately more often than anybody else. As a result, we’re rewarding with our collective attention those people who look like they know what they’re talking about, rather than those who actually do. Monbiot concludes:

The task of all citizens is to understand what we are seeing. The world as portrayed is not the world as it is. The personification of complex issues confuses and misdirects us, ensuring that we struggle to comprehend and respond to our predicaments. This, it seems, is often the point.

There’s always been a difference between appearance and reality in public life. Previously, though, they at least seemed to be two faces of the same coin. These days, our working lives, as well as our public lives, seem to be all about appearances.

Sources: Basecamp / Quartz at Work / The New York Times / The Guardian

When we eat matters

As I get older, I’m more aware that some things I do are very affected by the world around me. For example, since finding out that the intensity of light you experience during the day is correlated with the amount of sleep you get, I don’t feel so bad about ‘sleeping in’ during the summer months.

So it shouldn’t be surprising that this article in The New York Times suggests that there’s a good and a bad time to eat:

A growing body of research suggests that our bodies function optimally when we align our eating patterns with our circadian rhythms, the innate 24-hour cycles that tell our bodies when to wake up, when to eat and when to fall asleep. Studies show that chronically disrupting this rhythm — by eating late meals or nibbling on midnight snacks, for example — could be a recipe for weight gain and metabolic trouble.

A more promising approach is what some call ‘intermittent fasting’, where you restrict your calorific intake to eight hours of the day and don’t consume anything other than water for the other 16 hours.

This approach, known as early time-restricted feeding, stems from the idea that human metabolism follows a daily rhythm, with our hormones, enzymes and digestive systems primed for food intake in the morning and afternoon. Many people, however, snack and graze from roughly the time they wake up until shortly before they go to bed. Dr. Panda has found in his research that the average person eats over a 15-hour or longer period each day, starting with something like milk and coffee shortly after rising and ending with a glass of wine, a late night meal or a handful of chips, nuts or some other snack shortly before bed.

That pattern of eating, he says, conflicts with our biological rhythms.

So when should we eat? As early as possible in the day, it would seem:

Most of the evidence in humans suggests that consuming the bulk of your food earlier in the day is better for your health, said Dr. Courtney Peterson, an assistant professor in the department of nutrition sciences at the University of Alabama at Birmingham. Dozens of studies demonstrate that blood sugar control is best in the morning and at its worst in the evening. We burn more calories and digest food more efficiently in the morning as well.

That’s not great news for me. After a protein smoothie in the morning and eggs for lunch, I end up eating most of my calories in the evening. I’m going to have to rethink my regime…

Source: The New York Times

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[…]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it’s going to take tightly-integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

The benefits of reading aloud to children

This article in The New York Times by Perri Klass, M.D., focuses on studies that show a link between parents reading to their children and a reduction in problematic behaviour.

This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.

Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.

I really like the way that they focus on the positives and point out how much the child loves the interaction with their parent through the text.

The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.

I don’t know enough about the causes of ADHD to be able to comment, but as a teacher and parent, I do know there’s a link between the attention you give and the attention you receive.

“The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities — as is improving children’s language skills, another source of school problems for poor children.

It is a bit sad that we have to encourage parents to play with their children between birth and the age of three, but I guess in the age of smartphone addiction, we kind of have to.

Source: The New York Times

Image CC BY Jason Lander

Browser extensions FTW

Last week, the New York Times issued a correction to an article written by Justin Bank about President Trump. This was no ordinary correction, however:

Because of an editing error involving a satirical text-swapping web browser extension, an earlier version of this article misquoted a passage from an article by the Times reporter Jim Tankersley. The sentence referred to America’s narrowing trade deficit during “the Great Recession,” not during “the Time of Shedding and Cold Rocks.” (Pro tip: Disable your “Millennials to Snake People” extension when copying and pasting.)

Social networks went crazy over it. 😂

The person responsible has written an excellent follow-up article about the joys of browser extensions:

Browser extensions, when used properly and sensibly, can make your internet experience more productive and efficient. They can make thesaurus recommendations more accessible, create to-do lists in new tabs, or change the color scheme of web pages to make them more readable.

The examples given by the author are all for the Chrome web browser, but all modern browsers have extensions:

Unfortunately — if somewhat comically — my use of that extension last week was far from joyful or efficient. But, despite my embarrassment to have distracted from the good work of my colleagues, I still passionately recommend the subversive, web-altering extensions you can find in a category the Chrome Web Store lists as “fun”.

Here are my three favourites of the ones he lists in the article (which, as ever, I suggest you check out in full):

I’m particularly pleased to have come across the Word Replacer (Chrome) extension, which effectively allows you to make your own text-swapping extension. But, as the author notes, be careful of the consequences when copy/pasting…
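For the curious, extensions like these are usually little more than a small ‘content script’ that runs on every page you visit and rewrites the text it finds there. Here’s a minimal, hypothetical sketch in TypeScript — the file name and replacement pairs are my own illustration, not the actual code of Word Replacer or ‘Millennials to Snake People’ — which a real extension would compile to JavaScript and register under content_scripts in its manifest.json:

    // content-script.ts — an illustrative text-swapping content script.
    // Not the code of any extension mentioned above; it just shows the general technique.

    // Hypothetical replacement pairs — edit to taste.
    const replacements: Record<string, string> = {
      "Millennials": "Snake People",
      "the Great Recession": "the Time of Shedding and Cold Rocks",
    };

    // Walk every text node on the page and apply each replacement.
    function swapText(root: Node): void {
      const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
      let node: Node | null;
      while ((node = walker.nextNode())) {
        let text = node.nodeValue ?? "";
        for (const [from, to] of Object.entries(replacements)) {
          text = text.split(from).join(to); // replace every occurrence of the phrase
        }
        node.nodeValue = text;
      }
    }

    swapText(document.body);

The catch, as the Times discovered, is that the swap happens in the rendered page itself, so anything you copy out of the browser carries the substituted text with it — hence the correction above.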

Source: The New York Times

The three things you need to make friends over the age of 30

This article from 2012 was referenced in something I was reading last week:

As external conditions change, it becomes tougher to meet the three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

I’ve never particularly had a wide group of friends, even as a child. Acquaintances, absolutely. I was on the football team and reasonably popular; it’s just that I can be what some people would term ’emotionally distant’.

But making friends in your thirties seems to be something that’s difficult for many people. Not that I’m overly concerned about it, to be honest. A good Stoic should be self-contained.

The article makes a good point about differences that don’t seem to matter when people are younger. For example, coming from a wealthy family (or having a job that pays well) somehow seems to play a bigger role as you get older.

And then…

Adding children to the mix muddles things further. Suddenly, you are surrounded by a new circle of parent friends — but the emotional ties can be tenuous at best, as the comedian Louis C. K. related in one stand-up routine: “I spend whole days with people, I’m like, I never would have hung out with you, I didn’t choose you. Our children chose each other. Based on no criteria, by the way. They’re the same size.”

Indeed, there are some really interesting people I’ve met through my children. I wouldn’t particularly call those people friends, though. Perhaps I set the bar too high?

Ultimately, though, there’s more at work here than just life changes happening to us.

External factors are not the only hurdle. After 30, people often experience internal shifts in how they approach friendship. Self-discovery gives way to self-knowledge, so you become pickier about whom you surround yourself with, said Marla Paul, the author of the 2004 book The Friendship Crisis: Finding, Making, and Keeping Friends When You’re Not a Kid Anymore. “The bar is higher than when we were younger and were willing to meet almost anyone for a margarita,” she said.

Manipulators, drama queens, egomaniacs: a lot of them just no longer make the cut.

Well, exactly. And I think things are different for men and women (as well as, I guess, those who don’t strongly identify as either).

Source: The New York Times