Tag: misinformation (page 1 of 4)

Bad Bard

Google is obviously a little freaked out by tools such as ChatGPT and their potential to destroy large sections of its search business. However, it seems Google didn’t do even the most cursory checks of the promotional material it put out as part of the hurried launch of ‘Bard’.

This, of course, is our future: ‘truthy’ systems leading individuals, groups, and civilizations down the wrong path. I’m not optimistic about our future.

Google Bard screenshot

In the advertisement, Bard is given the prompt: “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?”

Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside our solar system, or an exoplanet. This is inaccurate.

Source: Google AI chatbot Bard offers inaccurate information in company ad | Reuters

Should we “resist trying to make things better” when it comes to online misinformation?

This is a provocative interview with Alex Stamos, “the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet”. His argument is that social media companies (like Twitter) sometimes try too hard to make the world better, which he thinks should be “resisted”.

I’m not sure what to make of this. On the one hand, I think we absolutely do need to be worried about misinformation. On the other, he does have a very good point about people being complicit in their own radicalisation. It’s complicated.

I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

[…]

The fundamental problem is that there’s a fundamental disagreement inside people’s heads — that people are inconsistent on what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, that the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

[…]

Any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into effect. If you are super into privacy, then you have to also recognize that when you provide people private communication, that some subset of people will use that in ways that you disagree with, in ways that are illegal, and sometimes in some cases that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

Source: Are we too worried about misinformation? | Vox

Every complex problem has a solution which is simple, direct, plausible — and wrong

This is a great article by Michał Woźniak (@rysiek) which cogently argues that the solution to misinformation and disinformation lies not in heavy-handed legislation, or even fact-checking, but rather in the decentralisation of funding, technology, and power.

I really should have spoken with him when I was working on the Bonfire Zappa report.

While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

This is especially valid within any domain that deals with complex knowledge that is highly nuanced, especially when stakes are high and emotions heat up. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

[…]

Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that there can at all exist any reliable, trustworthy information source. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

[…]

I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should instead be looking closely at how is it possible that it spreads so fast (and who benefits from this). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that would allow them to cut through biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

Source: Fighting Disinformation: We’re Solving The Wrong Problems / Tactical Media Room