Tag: fake news

Bullshit receptivity scale

I love academia. Researchers in psychology are apparently using ‘hyperactive agency detection’ and a ‘Bullshit Receptivity Scale’ in their work to describe traits found in human subjects. It’s particularly useful when researching people’s tendency to believe in conspiracy theories:

Participants’ receptivity to superficially profound statements was measured using the Bullshit Receptivity Scale (Pennycook et al., 2015). This measure consists of nine seemingly impressive statements that follow rules of syntax and contain fancy words, but do not have any intentional meaning (e.g., “Wholeness quiets infinite phenomena”; “Imagination is inside exponential space time events”). Participants rated each of the items’ profoundness on a scale from 1 (Not at all profound) to 5 (Very profound). They were given the following definition of profound for reference: “of deep meaning; of great and broadly inclusive significance.”

[…]

To measure participants’ tendency to attribute intent to events, we asked them to interpret the actions portrayed by animated shapes (Abell, Happé, & Frith, 2000), a series of videos lasting from thirty seconds to one minute depicting two triangles whose actions range from random (e.g., bumping around the screen following a geometric pattern) to resembling complex social interactions (e.g., one shape “bullying” the other). These animations were originally designed to detect deficits in the development of theory of mind.

I’ve no idea about the validity of the conclusions in this particular study (especially as it doesn’t seem to be peer-reviewed yet), but I always like discovering terms that provide a convenient shorthand.

For example, I can imagine exclaiming that someone is “off the Bullshit Receptivity Scale!” or has “hyperactive agency detection”. Nice.

Source: SSRN (via Pharyngula)

Legislating against manipulated ‘facts’ is a slippery slope

In this day and age it’s hard to know who to trust. I was raised to trust in authority, but was particularly struck, when doing a deep-dive into Vinay Gupta’s blog, by the idea that the state is special only because it holds a monopoly on (legal) violence.

As an historian, I’m all too aware of the many times the state (usually represented by a monarch) has repressed its citizens or subjects. At least then it could pretend it was protecting the majority of the people. As this article states:

Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.

So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other, both sides are interested in power rather than the people. Or in any notion of truth, it would seem:

This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?

We need to be vigilant. Those we trust today may not be trustworthy tomorrow.

Source: The Guardian

Audio Adversarial speech-to-text

I don’t usually go in for detailed technical papers on stuff that’s not directly relevant to what I’m working on, but I made an exception for this. Here’s the abstract:

We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (at a rate of up to 50 characters per second). We apply our white-box iterative optimization-based attack to Mozilla’s implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduce[s] a new domain to study adversarial examples.

In other words, the researchers managed to fool a neural network devoted to speech recognition into transcribing a phrase different from the one actually uttered.

So how does it work?

By starting with an arbitrary waveform instead of speech (such as music), we can embed speech into audio that should not be recognized as speech; and by choosing silence as the target, we can hide audio from a speech-to-text system

The authors note that merely perturbing audio so that something incorrect is transcribed is a standard adversarial attack. A targeted adversarial attack is different:

Not only are we able to construct adversarial examples converting a person saying one phrase to that of them saying a different phrase, we are also able to begin with [an] arbitrary non-speech audio sample and make that recognize[d] as any target phrase.
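
The core idea of a white-box iterative optimization attack — use the model’s own gradients to search for the smallest perturbation that forces a chosen output — can be sketched on a toy model. Everything below is illustrative: the linear “model” and the squared-error loss stand in for DeepSpeech and the CTC loss that the real attack differentiates through.

```python
import numpy as np

# Illustrative only: a white-box iterative optimization attack on a toy
# linear "model". The attacker knows the weights w, so it can compute
# exact gradients of the loss with respect to its perturbation.

rng = np.random.default_rng(0)
w = rng.normal(size=100)      # stand-in for the model's (known) weights
x = rng.normal(size=100)      # stand-in for the original audio waveform

def score(audio):
    """Stand-in for 'how strongly the model outputs the target phrase'."""
    return float(w @ audio)

target = score(x) + 1.0       # the output we want to force
delta = np.zeros_like(x)      # the adversarial perturbation
lr = 0.001

for _ in range(500):
    # Gradient of the loss (score - target)^2 with respect to delta.
    grad = 2.0 * (score(x + delta) - target) * w
    delta -= lr * grad
    # Keep every sample of the perturbation tiny, so x + delta stays
    # nearly identical to x (cf. the paper's "over 99.9% similar").
    delta = np.clip(delta, -0.05, 0.05)

adversarial = x + delta
```

The perturbed waveform scores as the attacker’s target while differing from the original by a fraction of a percent — the same trade-off the real attack makes, just with DeepSpeech’s transcription in place of a scalar score.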

This kind of stuff is possible due to open source projects, in particular Mozilla Common Voice. Great stuff.

Source: arXiv

Caulfield’s predictions for 2018

Some good stuff in Mike Caulfield’s “somewhat U.S.-centric predictions” for the coming year. In particular:

Creation of pro-government social media army focused domestically. My most out-there prediction. President Trump will announce the creation of a “Fake News Commission” to investigate both journalists and social media. One finding of the committee will be that the U.S. needs to emulate other countries and create an army of social media users to seek out anti-government information and “correct” it.

In other words, a 21st-century version of McCarthyism.

Source: Traces

Image: Washington Post, 1954 (via Spartacus Educational)

Your New Year’s resolution for 2018? Ditch Facebook.

If something’s been pre-filtered by Cory Doctorow and Jason Kottke then you know it’s going to be good. Sure enough, the open memo to all marginally-smart people/consumers of internet “content” by Foster Kamer is right on the money:

Literally, all you need to do: Type in web addresses. Use autofill! Or even: Google the website you want to go to, and go to it. Then bookmark it. Then go back every now and again.

Instead of reading stories that get to you because they’re popular, or just happen to be in your feed at that moment, you’ll read stories that get to you because you chose to go to them. Sounds simple, and insignificant, and almost too easy, right?

On our flight yesterday, my son asked how I was still reading articles on my phone, despite it being in aeroplane mode. I took the opportunity to explain to him how RSS powers feed readers (I use and pay for Feedly) as well as podcasts.

This stuff sounds obvious and easy when you’ve grown up with the open web. But given that the big five tech companies seem to be trying to progressively de-skill consumers, we shouldn’t be complacent.
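
As a sketch of the mechanism I described to my son: an RSS feed is just an XML document listing a site’s recent items, and a reader periodically fetches and parses it. The feed below is an invented inline example so the sketch runs offline; a real reader would fetch the XML over HTTP and cache it (which is why the articles were still there in aeroplane mode).

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed, inlined so this runs without a network.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Thought Shrapnel</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return the feed's title and a list of its items (title + link)."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [
        {"title": item.findtext("title"), "link": item.findtext("link")}
        for item in channel.findall("item")
    ]
    return channel.findtext("title"), items

feed_title, items = parse_feed(FEED)
```

That’s essentially all a feed reader does, repeated across every site you’ve subscribed to — no algorithm deciding what you see.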

By going to websites as a deliberate reader, you’re making a conscious choice about what you want a media outlet to be—as opposed to letting an algorithm choose the thing you’re most likely to click on. Or! As opposed to encouraging a world in which everyone is suckered into reading something with a headline optimized by a social media strategist armed with nothing more than “best practices” for conning you into a click.

Kamer blames Facebook, and given its impact on the news ecosystem, he’s correct in doing so:

Their goal, as a company, is to keep you on Facebook—and away from everything else—as long as they possibly can. They do that by making Facebook as addictive to you as possible. And they make it addictive by feeding you only the exact stripe of content you want to read, which they know to a precise, camel-eye-needle degree. It’s the kind of content, say, that you won’t just click on, but will “Like,” comment on, and share (not just for others to read, but so you can say something about yourself by sharing it, too). And that’s often before you’ve even read it!

It’s a great read. Why not start by adding Thought Shrapnel’s RSS feed to your shiny new feed reader? There’s plenty to choose from!

Source: Mashable

Purely technological answers to human problems don’t work

In a hugely surprising move, Facebook has found that marking an article as ‘disputed’ on a user’s news feed and putting a red flag next to it makes them want to click on it more. 🙄

The tech giant is dropping the flags in response to academic research it conducted showing that they don’t work, and often have the reverse effect of making people want to click even more. Instead, Facebook says, related articles give people more context about what’s fake or not.

The important thing is what comes next:

Facebook’s Sheryl Sandberg says Facebook is a technology company that doesn’t hire journalists. Without using editorial judgement to determine what’s real and what’s not, tackling fake news will forever be a technology experiment.

Until Facebook is forced to admit it’s a media company, and is regulated as such, we’ll continue to have these problems around technological solutionism.

Source: Axios