
Algorithmic Anxiety

I listened to a great episode of CBC’s Spark podcast with the excellent Nora Young on what ownership will look like in 2050. One of the contributors talked about what it might look like to be “on the wrong side of the API”. In other words, to be the person responding to the request rather than the one making it.

We’re already heading towards a dystopia in which people’s behaviour is influenced by black-box algorithms they don’t understand. This article talks about shopping on Instagram and listing property on Airbnb, but the point (and the anxiety) is universal.

Only in the middle of the past decade, though, did recommender systems become a pervasive part of life online. Facebook, Twitter, and Instagram all shifted away from chronological feeds—showing messages in the order in which they were posted—toward more algorithmically sequenced ones, displaying what the platforms determined would be most engaging to the user. Spotify and Netflix introduced personalized interfaces that sought to cater to each user’s tastes. (Top Picks for Kyle!) Such changes made platforms feel less predictable and less transparent. What you saw was never quite the same as what anyone else was seeing. You couldn’t count on a feed to work the same way from one month to the next. Just last week, Facebook implemented a new default Home tab on its app that prioritizes recommended content in the vein of TikTok, its main competitor.

Almost every other major Internet platform makes use of some form of algorithmic recommendation. Google Maps calculates driving routes using unspecified variables, including predicted traffic patterns and fuel efficiency, rerouting us mid-journey in ways that may be more convenient or may lead us astray. The food-delivery app Seamless front-loads menu items that it predicts you might like based on your recent ordering habits, the time of day, and what is “popular near you.” E-mail and text-message systems supply predictions for what you’re about to type. (“Got it!”) It can feel as though every app is trying to guess what you want before your brain has time to come up with its own answer, like an obnoxious party guest who finishes your sentences as you speak them. We are constantly negotiating with the pesky figure of the algorithm, unsure how we would have behaved if we’d been left to our own devices. No wonder we are made anxious. In a recent essay for Pitchfork, Jeremy D. Larson described a nagging feeling that Spotify’s algorithmic recommendations and automated playlists were draining the joy from listening to music by short-circuiting the process of organic discovery: “Even though it has all the music I’ve ever wanted, none of it feels necessarily rewarding, emotional, or personal.”

[…]

“Algorithmic anxiety,” however, is the most apt phrase I’ve found for describing the unsettling experience of navigating today’s online platforms. Shagun Jhaver, a scholar of social computing, helped define the phrase while conducting research and interviews in collaboration with Airbnb in 2018. Of fifteen hosts he spoke to, most worried about where their listings were appearing in users’ search results. They felt “uncertainty about how Airbnb algorithms work and a perceived lack of control,” Jhaver reported in a paper co-written with two Airbnb employees. One host told Jhaver, “Lots of listings that are worse than mine are in higher positions.” On top of trying to boost their rankings by repainting walls, replacing furniture, or taking more flattering photos, the hosts also developed what Jhaver called “folk theories” about how the algorithm worked. They would log on to Airbnb repeatedly throughout the day or constantly update their unit’s availability, suspecting that doing so would help get them noticed by the algorithm. Some inaccurately marked their listings as “child safe,” in the belief that it would give them a bump. (According to Jhaver, Airbnb couldn’t confirm that it had any effect.) Jhaver came to see the Airbnb hosts as workers being overseen by a computer overlord instead of human managers. In order to make a living, they had to guess what their capricious boss wanted, and the anxious guesswork may have made the system less efficient over all.

Source: The Age of Algorithmic Anxiety | The New Yorker
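The shift the article describes, from chronological feeds to algorithmically sequenced ones, comes down to which key a platform sorts on. Here is a toy illustration (the field names and engagement scores are invented; no real platform’s ranking is anywhere near this simple):

```python
from datetime import datetime, timedelta

# Toy posts: each has a timestamp and a model's predicted-engagement score.
now = datetime(2022, 8, 1, 12, 0)
posts = [
    {"id": "a", "posted": now - timedelta(hours=1), "predicted_engagement": 0.2},
    {"id": "b", "posted": now - timedelta(hours=5), "predicted_engagement": 0.9},
    {"id": "c", "posted": now - timedelta(hours=3), "predicted_engagement": 0.5},
]

# Chronological feed: newest first, identical for every user.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Algorithmic feed: ordered by what the model predicts will engage *this* user,
# so no two users (and no two months) need look the same.
algorithmic = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])  # ['a', 'c', 'b']
print([p["id"] for p in algorithmic])    # ['b', 'c', 'a']
```

The unpredictability the article describes lives entirely in that second sort key: the score is opaque, personalised, and liable to change whenever the model does.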

Hacking the application process

It’s perhaps a massive over-simplification, but my understanding of the so-called ‘skills gap’ is that two things are happening.

The first is a long-term trend of employers expecting to spend zero dollars on training the people they hire.

The second is the use of algorithmic CV-scanning software to reject the majority of applicants. Although it might make recruiters’ jobs a bit more manageable, it’s unsurprisingly not great for diversity, or for finding people who haven’t done that exact job before.

Software can also disadvantage certain candidates, says Joseph Fuller, a management professor at Harvard Business School. Last fall, the US Equal Employment Opportunity Commission launched an initiative to examine the role of artificial intelligence in hiring, citing concerns that new technologies presented “a high-tech pathway to discrimination.” Around the same time, Fuller published a report suggesting that applicant tracking systems routinely exclude candidates with irregularities on their résumés: a gap in employment, for example, or relevant skills that didn’t quite match the recruiter’s keywords. “When companies are focused on making their process hyperefficient, they can over-dignify the technology,” he says.

Source: How Job Applicants Try to Hack Résumé-Reading Software | WIRED
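The failure mode Fuller describes can be made concrete with a crude sketch of a hard-filter screen. Everything here is invented (the keywords, the gap threshold, the field names); real applicant tracking systems are more elaborate, but the exclusion logic is the same in spirit:

```python
# Hypothetical hard-filter screen: reject on any employment gap over six
# months, or on missing exact keyword matches, regardless of actual ability.
REQUIRED_KEYWORDS = {"kubernetes", "terraform"}
MAX_GAP_MONTHS = 6

def screen(resume):
    if resume["longest_gap_months"] > MAX_GAP_MONTHS:
        return "rejected: employment gap"
    if not REQUIRED_KEYWORDS <= set(resume["keywords"]):
        return "rejected: missing keywords"
    return "passed to recruiter"

# A strong candidate who writes "container orchestration" instead of the
# literal keyword "kubernetes" never reaches a human reviewer.
candidate = {"longest_gap_months": 2,
             "keywords": ["container orchestration", "terraform"]}
print(screen(candidate))  # rejected: missing keywords
```

Exact-match rules like these are what make the process “hyperefficient” for recruiters while filtering out people who could do the job but haven’t held that precise title before.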

Twitter acknowledges right-wing bias in its algorithmic feed

I mentioned on Twitter last week that I keep being recommended stories about Nigel Farage and from right-wing outlets like The Telegraph.

Lo and behold, Twitter has published findings from its own investigation showing that its algorithms actively promote right-wing accounts and news sources. Now I hope it does something about it.


What did we find?

— Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline.

— Group effects did not translate to individual effects. In other words, since party affiliation or ideology is not a factor our systems consider when recommending content, two individuals in the same political party would not necessarily see the same amplification.

— In six out of seven countries — all but Germany — Tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group.

— Right-leaning news outlets, as defined by the independent organizations listed above, see greater algorithmic amplification on Twitter compared to left-leaning news outlets. However, as highlighted in the paper, these third-party ratings make their own, independent classifications and as such the results of analysis may vary depending on which source is used.

Source: Examining algorithmic amplification of political content on Twitter | Twitter blog
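The comparison the findings describe, ranked timeline versus reverse-chronological baseline, reduces to a simple ratio. The numbers below are made up purely for illustration (Twitter’s actual study used a held-out control group of users who had kept the chronological timeline):

```python
# Hypothetical impression counts for a group of tweets, measured in the
# algorithmically ranked timeline vs. the reverse-chronological control.
def amplification_ratio(ranked_impressions, chronological_impressions):
    """A ratio above 1.0 means the ranked timeline amplifies the group."""
    return sum(ranked_impressions) / sum(chronological_impressions)

right_leaning = amplification_ratio([1200, 900, 1500], [800, 700, 1000])
left_leaning = amplification_ratio([1000, 800, 1100], [800, 700, 1000])

print(f"right-leaning amplification: {right_leaning:.2f}")  # 1.44
print(f"left-leaning amplification:  {left_leaning:.2f}")   # 1.16
```

Note that this is a group-level measure, which is exactly why, as the findings say, group effects need not translate to individual effects: two accounts in the same group can sit on opposite sides of the average.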