
Should we “resist trying to make things better” when it comes to online misinformation?

This is a provocative interview with Alex Stamos, “the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet”. His argument is that social media companies (like Twitter) sometimes try too hard to make the world better, which he thinks should be “resisted”.

I’m not sure what to make of this. On the one hand, I think we absolutely do need to be worried about misinformation. On the other, he does have a very good point about people being complicit in their own radicalisation. It’s complicated.

I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

[…]

The fundamental problem is that there’s a fundamental disagreement inside people’s heads — that people are inconsistent on what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, that the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

[…]

Any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into effect. If you are super into privacy, then you have to also recognize that when you provide people private communication, that some subset of people will use that in ways that you disagree with, in ways that are illegal, and in some cases that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

Source: Are we too worried about misinformation? | Vox

Why large tree-planting initiatives often fail

‘Carbon offsetting’ is just a way for the western middle classes to assuage their climate guilt. We can do better by thinking holistically.

In one recent study in the journal Nature, for example, researchers examined long-term restoration efforts in northern India, a country that has invested huge amounts of money into planting over the last 50 years. The authors found “no evidence” that planting offered substantial climate benefits or supported the livelihoods of local communities.

The study is among the most comprehensive analyses of restoration projects to date, but it’s just one example in a litany of failed campaigns that call into question the value of big tree-planting initiatives. Often, the allure of bold targets obscures the challenges involved in seeing them through, and the underlying forces that destroy ecosystems in the first place.

Instead of focusing on planting huge numbers of trees, experts told Vox, we should focus on growing trees for the long haul, protecting and restoring ecosystems beyond just forests, and empowering the local communities that are best positioned to care for them.

Source: Climate change: How to plant trillions of trees without hurting people and the planet | Vox

Brand-safe influencers and the blurring of reality

Earlier this week, in a soon-to-be-released episode of the Tao of WAO podcast, we were talking about the benefits and pitfalls of NGOs like Greenpeace partnering with influencers. The upside? Engaging with communities that would otherwise be hard to reach. The downside? Influencers can be unpredictable.

It’s somewhat inevitable, therefore, that “brand-safe” fictional influencers would emerge. As detailed in this article, not only are teams of writers creating metaverses in which several characters exist, but they’re using machine learning to allow fans/followers to “interact”.

The boundary between the real and fictional is only going to get more blurred.

FourFront is part of a larger wave of tech startups devoted to, as aspiring Zuckerbergs like to say, building the metaverse, which can loosely be defined as “the internet” but is more specifically the interconnected, augmented reality virtual space that real people share. It’s an undoubtedly intriguing concept for people with a stake in the future of technology and entertainment, which is to say, the entirety of culture. It’s also a bit of an ethical minefield: Isn’t the internet already full of enough real-seeming content that is a) not real and b) ultimately an effort to make money? Are the characters exploiting the sympathies of well-meaning or media illiterate audiences? Maybe!

On the other hand, there’s something sort of darkly refreshing about an influencer “openly” being created by a room of professional writers whose job is to create the most likable and interesting social media users possible. Influencers already have to walk the delicate line between aspirational and inauthentic, to attract new followers without alienating existing fans, to use their voice for change while remaining “brand-safe.” The job has always been a performance; it’s just that now that performance can be convincingly replicated by a team of writers and a willing actor.

Source: What’s the deal with fictional influencers? | Vox