
Securing your digital life

Usually, guides to securing your digital life are very introductory and basic. This one from Ars Technica, however, is a bit more advanced. I particularly appreciate the advice to use authenticator apps for 2FA.

Remember, if it’s inconvenient for you it’s probably orders of magnitude more inconvenient for would-be attackers. To get into one of my cryptocurrency accounts, for example, I’ve set it so I need a password and three other forms of authentication.

Overkill? Probably. But it dramatically reduces the likelihood that someone else will make off with my meme stocks…

Security measures vary. I discovered after my Twitter experience that setting up 2FA wasn’t enough to protect my account—there’s another setting called “password protection” that prevents password change requests without authentication through email. Sending a request to reset my password and change the email account associated with it disabled my 2FA and reset the password. Fortunately, the account was frozen after multiple reset requests, and the attacker couldn’t gain control.

This is an example of a situation where “normal” risk mitigation measures don’t stack up. In this case, I was targeted because I had a verified account. You don’t necessarily have to be a celebrity to be targeted by an attacker (I certainly don’t think of myself as one)—you just need to have some information leaked that makes you a tempting target.

For example, earlier I mentioned that 2FA based on text messages is easier to bypass than app-based 2FA. One targeted scam we see frequently in the security world is SIM cloning—where an attacker convinces a mobile provider to send a new SIM card for an existing phone number and uses the new SIM to hijack the number. If you’re using SMS-based 2FA, a quick clone of your mobile number means that an attacker now receives all your two-factor codes.

Additionally, weaknesses in the way SMS messages are routed have been used in the past to send them to places they shouldn’t go. Until earlier this year, some services could hijack text messages, and all that was required was the destination phone number and $16. And there are still flaws in Signaling System 7 (SS7), a key telephone network protocol, that can result in text message rerouting if abused.

Source: Securing your digital life, part two: The bigger picture—and special circumstances | Ars Technica

On the dangers of CBDCs

I can’t remember the last time I used cash. Or rather, I can (for my son’s haircut) because it was so unusual; it’s been about 18 months since my default wasn’t paying via the Google Pay app on my smartphone.

As a result, and because I have also played around with buying, selling, and holding cryptocurrencies, I had assumed that a Central Bank Digital Currency (CBDC) would be a benign thing. Sadly, as Edward Snowden explains, they really are not. His latest article is well worth a read in its entirety.

Rather, I will tell you what a CBDC is NOT—it is NOT, as Wikipedia might tell you, a digital dollar. After all, most dollars are already digital, existing not as something folded in your wallet, but as an entry in a bank’s database, faithfully requested and rendered beneath the glass of your phone.

Neither is a Central Bank Digital Currency a State-level embrace of cryptocurrency—at least not of cryptocurrency as pretty much everyone in the world who uses it currently understands it.

Instead, a CBDC is something closer to being a perversion of cryptocurrency, or at least of the founding principles and protocols of cryptocurrency—a cryptofascist currency, an evil twin entered into the ledgers on Opposite Day, expressly designed to deny its users the basic ownership of their money and to install the State at the mediating center of every transaction.

Source: Your Money and Your Life – by Edward Snowden – Continuing Ed — with Edward Snowden

Software ate the world, so all the world’s problems get expressed in software

Benedict Evans recently posted his annual ‘macro trends’ slide deck. It’s incredibly insightful, and a work of (minimalist) art. This article’s title comes from his conclusion, and you can see below which of the 128 slides jumped out at me from the deck:

For me, what the deck as a whole does is place some of the issues I’ve been thinking about in a wider context.


My team is building a federated social network for educators, so I’m particularly tuned-in to conversations about the effect social media is having on society. A post by Harold Jarche where he writes about his experience of Twitter as a rage machine caught my attention, especially the part where he talks about how people are happy to comment based on the ‘preview’ presented to them in embedded tweets:

Research on the self-perception of knowledge shows how viewing previews without going to the original article gives an inflated sense of understanding on the subject, “audiences who only read article previews are overly confident in their knowledge, especially individuals who are motivated to experience strong emotions and, thus, tend to form strong opinions.” Social media have created a worldwide Dunning-Kruger effect. Our collective self-perception of knowledge acquired through social media is greater than it actually is.

Harold Jarche

I think our experiment with general-purpose social networks is slowly coming to an end, or at least will do over the next decade. What I mean is that, while we’ll still have places where you can broadcast anything to anyone, the digital environments in which we’ll spend more time will be what Venkatesh Rao calls the ‘cozyweb’:

Unlike the main public internet, which runs on the (human) protocol of “users” clicking on links on public pages/apps maintained by “publishers”, the cozyweb works on the (human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams. Much of this content is poorly addressable, poorly searchable, and very vulnerable to bitrot. It lives in a high-gatekeeping slum-like space comprising slacks, messaging apps, private groups, storage services like dropbox, and of course, email.

Venkatesh Rao

That’s on a personal level. I should imagine organisational spaces will be a bit more organised. Back to Jarche:

We need safe communities to take time for reflection, consideration, and testing out ideas without getting harassed. Professional social networks and communities of practice help us make sense of the world outside the workplace. They also enable each of us to bring to bear much more knowledge and insight than we could do on our own.

Harold Jarche

…or to use Rao’s diagram which is so-awful-it’s-useful:

Image by Venkatesh Rao

Of course, blockchain/crypto could come along and solve all of our problems. Except it won’t. Humans are humans (are humans).


Ever since Eli Pariser’s TED talk urging us to beware online “filter bubbles”, people have been wringing their hands about ensuring we have ‘balance’ in our networks.

Interestingly, some recent research by the Reuters Institute at Oxford University paints a slightly different picture. The researcher, Dr Richard Fletcher, begins by investigating how people access the news.

Preferred access to news
Diagram via the Reuters Institute, Oxford University

Fletcher draws a distinction between different types of personalisation:

Self-selected personalisation refers to the personalisations that we voluntarily do to ourselves, and this is particularly important when it comes to news use. People have always made decisions in order to personalise their news use. They make decisions about what newspapers to buy, what TV channels to watch, and at the same time which ones they would avoid.

Academics call this selective exposure. We know that it’s influenced by a range of different things such as people’s interest levels in news, their political beliefs and so on. This is something that has pretty much always been true.

Pre-selected personalisation is the personalisation that is done to people, sometimes by algorithms, sometimes without their knowledge. And this relates directly to the idea of filter bubbles because algorithms are possibly making choices on behalf of people and they may not be aware of it.

The reason this distinction is particularly important is because we should avoid comparing pre-selected personalisation and its effects with a world where people do not do any kind of personalisation to themselves. We can’t assume that offline, or when people are self-selecting news online, they’re doing it in a completely random way. People are always engaging in personalisation to some extent and if we want to understand the extent of pre-selected personalisation, we have to compare it with the realistic alternative, not hypothetical ideals.

Dr Richard Fletcher

Read the article for the details, but the takeaways for me were twofold. First, that we might be blaming social media for wider and deeper divisions within society, and second, that teaching people to search for information (rather than stumble across it via feeds) might be the best strategy:

People who use search engines for news on average use more news sources than people who don’t. More importantly, they’re more likely to use sources from both the left and the right. 
People who rely mainly on self-selection tend to have fairly imbalanced news diets. They either have more right-leaning or more left-leaning sources. People who use search engines tend to have a more even split between the two.

Dr Richard Fletcher

Useful as it is, what I think this research misses out is the ‘black box’ algorithms that seek to keep people engaged and consuming content. YouTube is the poster child for this. As Jarche comments:

We are left in a state of constant doubt as conspiratorial content becomes easier to access on platforms like YouTube than accessing solid scientific information in a journal, much of which is behind a pay-wall and inaccessible to the general public.

Harold Jarche

This isn’t an easy problem to solve.


We might like to pretend that human beings are rational agents, but this isn’t actually true. Let’s take something like climate change. We’re not arguing about the facts here, we’re arguing about politics. Adrian Bardon, writing in Fast Company, explains:

In theory, resolving factual disputes should be relatively easy: Just present evidence of a strong expert consensus. This approach succeeds most of the time, when the issue is, say, the atomic weight of hydrogen.

But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.

Adrian Bardon

This is pretty obvious when we stop to think about it for a moment; beliefs are bound up with identity, and that’s not something that’s so easy to change.

In ideologically charged situations, one’s prejudices end up affecting one’s factual beliefs. Insofar as you define yourself in terms of your cultural affiliations, information that threatens your belief system—say, information about the negative effects of industrial production on the environment—can threaten your sense of identity itself. If it’s part of your ideological community’s worldview that unnatural things are unhealthful, factual information about a scientific consensus on vaccine or GM food safety feels like a personal attack.

Adrian Bardon

So how do we change people’s minds when they’re objectively wrong? Brian Resnick, writing for Vox, suggests the best approach might be ‘deep canvassing’:

Giving grace. Listening to a political opponent’s concerns. Finding common humanity. In 2020, these seem like radical propositions. But when it comes to changing minds, they work.

[…]

The new research shows that if you want to change someone’s mind, you need to have patience with them, ask them to reflect on their life, and listen. It’s not about calling people out or labeling them fill-in-the-blank-phobic. Which makes it feel like a big departure from a lot of the current political dialogue.

Brian Resnick

This approach, it seems, works:

Diagram by Stanford University, via Vox

So it seems there is some hope of fixing the world’s problems. It’s just that the solutions point towards doing the hard work of talking to people, rather than treating them as containers for opinions to shoot down at a distance.


Enjoy this? Sign up for the weekly roundup and/or become a supporter!