Tag: The Atlantic

Conspicuously sesquipedalian communication

Getting people to understand your ideas is a difficult thing. That’s why it’s been so gratifying to work at various times with Bryan Mathers over the last decade. We humans are much better at processing visual inputs than at deciphering text.

That being said, as Derek Thompson shows in this article, you have to begin with the realisation that simple is smart. It’s much easier to just write down what’s in your head than to do so in a way that’s easy for others to understand.

In some ways, this reminds me of my work on ambiguity, which was a side-product of the work I did on my doctoral thesis. It’s also a good reminder that one of the best uses that most people can make of AI tools such as ChatGPT is to simplify their work.

Shadow of person typing

High school taught me big words. College rewarded me for using big words. Then I graduated and realized that intelligent readers outside the classroom don’t want big words. They want complex ideas made simple. If you don’t believe it from a journalist, believe it from an academic: “When people feel insecure about their social standing in a group, they are more likely to use jargon in an attempt to be admired and respected,” the Columbia University psychologist Adam Galinsky told me. His study and other research found that when people use complicated language, they tend to come across as low-status or less intelligent. Why? It’s the complexity trap: Complicated language and jargon offer writers the illusion of sophistication, but jargon can send a signal to some readers that the writer is dense or overcompensating. Conspicuously sesquipedalian communication can signal compensatory behavior resulting from suboptimal perspective-taking strategies. What? Exactly; never write like that. Smart people respect simple language not because simple words are easy, but because expressing interesting ideas in small words takes a lot of work.

Source: Why Simple Is Smart | The Atlantic

The supermarket is a panopticon

My son’s now old enough to get ‘loyalty cards’ for supermarkets, coffee shops, and places to eat. He thinks this is great: free drinks! Money-off vouchers! What’s not to like? On a recent car journey, I explained why the only loyalty card I use is the one for the Co-op, and introduced him to the murky world of data brokers.

In this article, Ian Bogost writes in The Atlantic about the extensive data collection by retailers to personalise marketing. This not only predicts but also influences consumer behaviour, raising ethical concerns about the erosion of privacy and democratic ideals. Bogost argues that this data-driven approach shifts the power balance, allowing companies to manipulate consumer preferences.

In marketing, segmentation refers to the process of dividing customers into different groups, in order to make appeals to them based on shared characteristics. Though always somewhat artificial, segments used to correspond with real categories or identities—soccer moms, say, or gamers. Over decades, these segments have become ever smaller and more precise, and now retailers have enough data to create a segment just for you. And not even just for you, but for you right now: They customize marketing messages to unique individuals at distinct moments in time.

You might be thinking, Who cares? If stores can offer the best deals on the most relevant products to me, then let them do it. But you don’t even know which products are relevant anymore. Customizing offerings and prices to ever-smaller segments of customers works; it causes people to alter their shopping behavior to the benefit of the stores and their data-greedy machines. It gives retailers the ability, in other words, to use your private information to separate you from your money. The reason to worry about the erosion of retail privacy isn’t only because stores might discover or reveal your secrets based on the data they collect about you. It’s that they can use that data to influence purchasing so effectively that they’re rewiring your desires.

[…]

Ordinary people may not realize just how much offline information is collected and aggregated by the shopping industry rather than the tech industry. In fact, the two work together to erode our privacy effectively, discreetly, and thoroughly. Data gleaned from brick-and-mortar retailers get combined with data gleaned from online retailers to build ever-more detailed consumer profiles, with the intention of selling more things, online and in person—and to sell ads to sell those things, a process in which those data meet up with all the other information Big Tech companies such as Google and Facebook have on you. “Retailing,” Joe Turow told me, “is the place where a lot of tech gets used and monetized.” The tech industry is largely the ad-tech industry. That makes a lot of data retail data. “There are a lot of companies doing horrendous things with your data, and people use them all the time, because they’re not on the public radar.” The supermarket, in other words, is a panopticon just the same as the social network.

Source: You Should Worry About the Data Retailers Collect About You | The Atlantic

An end to rabbit hole radicalisation?

A new peer-reviewed study suggests that YouTube’s efforts to stop people being radicalised through its recommendation algorithm have been effective. The study monitored 1,181 people’s YouTube activity and found that only 6% watched extremist videos, with most of those people having deliberately subscribed to extremist channels.

Interestingly, though, the study cannot account for user behaviour prior to YouTube’s 2019 algorithm changes, so we can only speculate about how influential the platform was in radicalising people before then, a period that included some pretty significant elections.

Around the time of the 2016 election, YouTube became known as a home to the rising alt-right and to massively popular conspiracy theorists. The Google-owned site had more than 1 billion users and was playing host to charismatic personalities who had developed intimate relationships with their audiences, potentially making it a powerful vector for political influence. At the time, Alex Jones’s channel, Infowars, had more than 2 million subscribers. And YouTube’s recommendation algorithm, which accounted for the majority of what people watched on the platform, looked to be pulling people deeper and deeper into dangerous delusions.

The process of “falling down the rabbit hole” was memorably illustrated by personal accounts of people who had ended up on strange paths into the dark heart of the platform, where they were intrigued and then convinced by extremist rhetoric—an interest in critiques of feminism could lead to men’s rights and then white supremacy and then calls for violence. Most troubling is that a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.

[…]

The… research is… important, in part because it proposes a specific, technical definition of ‘rabbit hole’. The term has been used in different ways in common speech and even in academic research. Nyhan’s team defined a “rabbit hole event” as one in which a person follows a recommendation to get to a more extreme type of video than they were previously watching. They can’t have been subscribing to the channel they end up on, or to similarly extreme channels, before the recommendation pushed them. This mechanism wasn’t common in their findings at all. They saw it act on only 1 percent of participants, accounting for only 0.002 percent of all views of extremist-channel videos.

Nyhan was careful not to say that this paper represents a total exoneration of YouTube. The platform hasn’t stopped letting its subscription feature drive traffic to extremists. It also continues to allow users to publish extremist videos. And learning that only a tiny percentage of users stumble across extremist content isn’t the same as learning that no one does; a tiny percentage of a gargantuan user base still represents a large number of people.

Source: The World Will Never Know the Truth About YouTube’s Rabbit Holes | The Atlantic