The world doesn’t particularly need my opinions on NFTs (‘non-fungible tokens’), as there are plenty of opinions to go round in other newsletters, podcasts, and blog posts.
After doing a bunch of reading, though, I think that the main use case for NFTs will be ticket sales. That is to say, they’re useful when there is a limited supply of something with intrinsic value, and both buyer and seller want to ensure authenticity.
The rest is speculation and gambling, as far as I’m concerned, with a side serving of ecological destruction. I’m also a bit concerned about the web-wide enforcement of copyright it might lead to…
NFTs, explained — “‘Non-fungible’ more or less means that it’s unique and can’t be replaced with something else. For example, a bitcoin is fungible — trade one for another bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card, however, is non-fungible. If you traded it for a different card, you’d have something completely different. You gave up a Squirtle, and got a 1909 T206 Honus Wagner, which StadiumTalk calls ‘the Mona Lisa of baseball cards.’ (I’ll take their word for it.)”
NFTs are a dangerous trap — “The more time and passion that creators devote to chasing the NFT, the more time they’ll spend trying to create the appearance of scarcity and hustling people to believe that the tokens will go up in value. They’ll become promoters of digital tokens more than they are creators. Because that’s the only reason that someone is likely to buy one–like a stock, they hope it will go up in value. Unlike some stocks, it doesn’t pay dividends or come with any other rights. And unlike actual works of art, NFTs aren’t usually aesthetically beautiful on their own, they simply represent something that is.”
Cryptodamages: Monetary value estimates of the air pollution and human health impacts of cryptocurrency mining — “Results indicate that in 2018, each $1 of Bitcoin value created was responsible for $0.49 in health and climate damages in the US and $0.37 in China. The similar value in China relative to the US occurs despite the extremely large disparity between the value of a statistical life estimate for the US relative to that of China. Further, with each cryptocurrency, the rising electricity requirements to produce a single coin can lead to an almost inevitable cliff of negative net social benefits, absent perpetual price increases.”
I’m having to write this ahead of time due to travel commitments. Still, there’s the usual mixed bag of content in here, everything from digital credentials through to survival, with a bit of panpsychism thrown in for good measure.
Recognition is from a certain point of view hyperlocal, and it is this hyperlocality that gives it its global value – not the other way around. The space of recognition is the community in which the competency is developed and activated. The recognition of a practitioner in a community is not reduced to those generally considered to belong to a “community of practice”, but to the intersection of multiple communities and practices, starting with the clients of these practices: the community of practice of chefs does not exist independently of the communities of their suppliers and clients. There is also a very strong link between individual recognition and that of the community to which the person is identified: shady notaries and politicians can bring discredit on an entire community.
As this roundup goes live I’ll be at Open Belgium, and I’m looking forward to catching up with Serge while I’m there! My take on the points that he’s making in this (long) post is actually what I’m talking about at the event: open initiatives need open organisations.
Mr Higgins, who was opening a celebration of Trinity College Dublin’s College Historical Debating Society, said “universities are not there merely to produce students who are useful”.
“They are there to produce citizens who are respectful of the rights of others to participate and also to be able to participate fully, drawing on a wide range of scholarship,” he said on Monday night.
The President said there is a growing cohort of people who are alienated and “who feel they have lost their attachment to society and decision making”.
Jack Horgan-Jones (The Irish Times)
As a Philosophy graduate, I wholeheartedly agree with this, and also with his assessment of how people are obsessed with ‘markets’.
Not everyone will accept this sort of inclusivism. Some will insist on a stark choice between Jesus or hell, the Quran or hell. In some ways, overcertain exclusivism is a much better marketing strategy than sympathetic inclusivism. But if just some of the world’s population opened their minds to the wisdom of other religions, without having to leave their own faith, the world would be a better, more peaceful place. Like Aldous Huxley, I still believe in the possibility of growing spiritual convergence between different religions and philosophies, even if right now the tide seems to be going the other way.
Jules Evans (Aeon)
This is an interesting article about the philosophy of Aldous Huxley, whose books have always fascinated me. For some reason, I hadn’t twigged that he was related to Thomas Henry Huxley (aka “Darwin’s bulldog”).
So what really failed, maybe, wasn’t iTunes at all—it was the implicit promise of Gmail-style computing. The explosion of cloud storage and the invention of smartphones both arrived at roughly the same time, and they both subverted the idea that we should organize our computer. What they offered in its place was a vision of ease and readiness. What the idealized iPhone user and the idealized Gmail user shared was a perfect executive-functioning system: Every time they picked up their phone or opened their web browser, they knew exactly what they wanted to do, got it done with a calm single-mindedness, and then closed their device. This dream illuminated Inbox Zero and Kinfolk and minimalist writing apps. It didn’t work. What we got instead was Inbox Infinity and the algorithmic timeline. Each of us became a wanderer in a sea of content. Each of us adopted the tacit—but still shameful—assumption that we are just treading water, that the clock is always running, and that the work will never end.
Robinson Meyer (The Atlantic)
This is a curiously written (and well-written) piece, in the form of an ordered list, that takes you through the changes since iTunes launched. It’s hard to disagree with the author’s arguments.
But what if YouTube had failed? Would we have missed out on decades of cultural phenomena and innovative ideas? Would we have avoided a wave of dystopian propaganda and misinformation? Or would the internet have simply spiraled into new — yet strangely familiar — shapes, with their own joys and disasters?
Adi Robertson (The Verge)
I love this approach of imagining how the world would have been different had YouTube not been the massive success it’s been over the last 15 years. Food for thought.
It’s tempting to look for laws of people the way we look for the laws of gravity. But science is hard, people are complex, and generalizing can be problematic. Although experiments might be the ultimate truthtellers, they can also lead us astray in surprising ways.
Hannah Fry (The New Yorker)
A balanced look at the way that companies, especially those we classify as ‘Big Tech’, tend to experiment for the purposes of engagement and, ultimately, profit. Definitely worth a read.
The trend to tap into is the changing nature of trust. One of the biggest social trends of our time is the loss of faith in institutions and previously trusted authorities. People no longer trust the Government to tell them the truth. Banks are less trusted than ever since the Financial Crisis. The mainstream media can no longer be trusted by many. Fake news. The anti-vac movement. At the same time, we have a generation of people who are looking to their peers for information.
Lawrence Lundy (Outlier Ventures)
This post is making the case for blockchain-based technologies. But the wider point is a better one, that we should trust people rather than companies.
Any sufficiently advanced technology is indistinguishable from nature. Agriculture de-wilded the meadows and the forests, so that even a seemingly pristine landscape can be a heavily processed environment. Manufactured products have become thoroughly mixed in with natural structures. Now, our machines are becoming so lifelike we can’t tell the difference. Each stage of technological development adds layers of abstraction between us and the physical world. Few people experience nature red in tooth and claw, or would want to. So, although the world of basic physics may always remain mindless, we do not live in that world. We live in the world of those abstractions.
George Musser (Nautilus)
This article, about artificial ‘panpsychism’, is really challenging to the reader’s initial assumptions (well, mine at least) and really makes you think.
It would appear that our brains are much better at coping in the cold than dealing with being too hot. This is because our bodies’ survival strategies centre around keeping our vital organs running at the expense of less essential body parts. The most essential of all, of course, is our brain. By the time that Shatayeva and her fellow climbers were experiencing cognitive issues, they were probably already experiencing other organ failures elsewhere in their bodies.
William Park (BBC Future)
Not just one story in this article, but several with fascinating links and information.
Benedict Evans recently posted his annual ‘macro trends’ slide deck. It’s incredibly insightful, and a work of (minimalist) art. This article’s title comes from his conclusion, and you can see below which of the 128 slides jumped out at me from the deck:
For me, what the deck as a whole does is place some of the issues I’ve been thinking about in a wider context.
My team is building a federated social network for educators, so I’m particularly tuned-in to conversations about the effect social media is having on society. A post by Harold Jarche where he writes about his experience of Twitter as a rage machine caught my attention, especially the part where he talks about how people are happy to comment based on the ‘preview’ presented to them in embedded tweets:
Research on the self-perception of knowledge shows how viewing previews without going to the original article gives an inflated sense of understanding on the subject, “audiences who only read article previews are overly confident in their knowledge, especially individuals who are motivated to experience strong emotions and, thus, tend to form strong opinions.” Social media have created a worldwide Dunning-Kruger effect. Our collective self-perception of knowledge acquired through social media is greater than it actually is.
I think our experiment with general-purpose social networks is slowly coming to an end, or at least will do over the next decade. What I mean is that, while we’ll still have places where you can broadcast anything to anyone, the digital environments where we’ll spend more time will be what Venkatesh Rao calls the ‘cozyweb’:
Unlike the main public internet, which runs on the (human) protocol of “users” clicking on links on public pages/apps maintained by “publishers”, the cozyweb works on the (human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams. Much of this content is poorly addressable, poorly searchable, and very vulnerable to bitrot. It lives in a high-gatekeeping slum-like space comprising slacks, messaging apps, private groups, storage services like dropbox, and of course, email.
That’s on a personal level. I should imagine organisational spaces will be a bit more organised. Back to Jarche:
We need safe communities to take time for reflection, consideration, and testing out ideas without getting harassed. Professional social networks and communities of practice help us make sense of the world outside the workplace. They also enable each of us to bring to bear much more knowledge and insight than we could do on our own.
…or to use Rao’s diagram which is so-awful-it’s-useful:
Of course, blockchain/crypto could come along and solve all of our problems. Except it won’t. Humans are humans (are humans).
Ever since Eli Pariser’s TED talk urging us to beware online “filter bubbles”, people have been wringing their hands about ensuring we have ‘balance’ in our networks.
Interestingly, some recent research by the Reuters Institute at Oxford University paints a slightly different picture. The researcher, Dr Richard Fletcher, begins by investigating how people access the news.
Fletcher draws a distinction between different types of personalisation:
Self-selected personalisation refers to the personalisations that we voluntarily do to ourselves, and this is particularly important when it comes to news use. People have always made decisions in order to personalise their news use. They make decisions about what newspapers to buy, what TV channels to watch, and at the same time which ones they would avoid
Academics call this selective exposure. We know that it’s influenced by a range of different things such as people’s interest levels in news, their political beliefs and so on. This is something that has pretty much always been true.
Pre-selected personalisation is the personalisation that is done to people, sometimes by algorithms, sometimes without their knowledge. And this relates directly to the idea of filter bubbles because algorithms are possibly making choices on behalf of people and they may not be aware of it.
The reason this distinction is particularly important is because we should avoid comparing pre-selected personalisation and its effects with a world where people do not do any kind of personalisation to themselves. We can’t assume that offline, or when people are self-selecting news online, they’re doing it in a completely random way. People are always engaging in personalisation to some extent and if we want to understand the extent of pre-selected personalisation, we have to compare it with the realistic alternative, not hypothetical ideals.
Dr Richard Fletcher
Read the article for the details, but the takeaways for me were twofold. First, that we might be blaming social media for wider and deeper divisions within society, and second, that teaching people to search for information (rather than stumble across it via feeds) might be the best strategy:
People who use search engines for news on average use more news sources than people who don’t. More importantly, they’re more likely to use sources from both the left and the right. People who rely mainly on self-selection tend to have fairly imbalanced news diets. They either have more right-leaning or more left-leaning sources. People who use search engines tend to have a more even split between the two.
Dr Richard Fletcher
Useful as it is, what I think this research misses out is the ‘black box’ algorithms that seek to keep people engaged and consuming content. YouTube is the poster child for this. As Jarche comments:
We are left in a state of constant doubt as conspiratorial content becomes easier to access on platforms like YouTube than accessing solid scientific information in a journal, much of which is behind a pay-wall and inaccessible to the general public.
This isn’t an easy problem to solve.
We might like to pretend that human beings are rational agents, but this isn’t actually true. Take something like climate change: we’re not arguing about the facts here, we’re arguing about politics. As Adrian Bardon explains in Fast Company:
In theory, resolving factual disputes should be relatively easy: Just present evidence of a strong expert consensus. This approach succeeds most of the time, when the issue is, say, the atomic weight of hydrogen.
But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.
This is pretty obvious when we stop to think about it for a moment; beliefs are bound up with identity, and that’s not something that’s so easy to change.
In ideologically charged situations, one’s prejudices end up affecting one’s factual beliefs. Insofar as you define yourself in terms of your cultural affiliations, information that threatens your belief system—say, information about the negative effects of industrial production on the environment—can threaten your sense of identity itself. If it’s part of your ideological community’s worldview that unnatural things are unhealthful, factual information about a scientific consensus on vaccine or GM food safety feels like a personal attack.
So how do we change people’s minds when they’re objectively wrong? Brian Resnick, writing for Vox, suggests the best approach might be ‘deep canvassing’:
Giving grace. Listening to a political opponent’s concerns. Finding common humanity. In 2020, these seem like radical propositions. But when it comes to changing minds, they work.
The new research shows that if you want to change someone’s mind, you need to have patience with them, ask them to reflect on their life, and listen. It’s not about calling people out or labeling them fill-in-the-blank-phobic. Which makes it feel like a big departure from a lot of the current political dialogue.
This approach, it seems, works.
So it seems there is some hope of fixing the world’s problems. It’s just that the solutions point towards doing the hard work of talking to people, and not just treating them as containers for opinions to shoot down at a distance.