Mike Caulfield reflects on Facebook’s announcement that it will allow users to rate news sources for trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook’s algorithms are opaque by design; Google’s approach, Caulfield argues, is at least documented and therefore open to criticism:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.
Knowing what to trust online is one of the major issues of our time, particularly now that ordinary people have access to the kind of CGI previously available only to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood