Category: Digital self-defence (page 1 of 2)

The security guide as literary genre

I stumbled across this conference presentation from back in January by Jeffrey Moro, “a doctoral student in English at the University of Maryland, College Park, where [he studies] the textual and material histories of media technologies”.

It’s a short but very interesting one, taking a step back from the current state of play to ask what we’re actually doing as a society.

Over the past year, in an unsurprising response to a host of new geopolitical realities, we’ve seen a cottage industry of security recommendations pop up in venues as varied as The New York Times, Vice, and even Teen Vogue. Together, these recommendations form a standard suite of answers to some of the most messy questions of our digital lives. “How do I stop advertisers from surveilling me?” “How do I protect my internet history from the highest bidder?” And “how do I protect my privacy in the face of an invasive or authoritarian government?”

It’s all very well having a plethora of guides to secure ourselves against digital adversaries, but this isn’t something we really need to think about in physical settings in the developed world. When I pop down to the shops, I don’t think about the route I take in case someone robs me at gunpoint.

So Moro is thinking about these security guides as a kind of ‘literary genre’:

I’m less interested in whether or not these tools are effective as such. Rather, I want to ask how these tools in particular orient us toward digital space, engage imaginaries of privacy and security, and structure relationships between users, hackers, governments, infrastructures, or machines themselves? In short: what are we asking for when we construe security as a browser plugin?

There’s a wider issue here about the pace of digital interactions, security theatre, and most of us getting news from an industry hyper-focused on online advertising. A recent article in the New York Times was thought-provoking in that sense, comparing what it’s like going back to (or in some cases, getting for the first time) all of your news from print media.

We live in a digital world where everyone’s seemingly agitated and angry, all of the time:

The increasing popularity of these guides evinces a watchful anxiety permeating even the most benign of online interactions, a paranoia that emerges from an epistemological collapse of the categories of “private” and “public.” These guides offer a way through the wilderness, techniques by which users can harden that private/public boundary.

The problem with this ‘genre’ of security guide, says Moro, is that even the good ones from groups like the EFF (of which I’m a member) make you feel like locking down everything. The problem with that, of course, is that it’s very limiting.

Communication, by its very nature, demands some dimension of insecurity, some material vector for possible attack. Communication is always already a vulnerable act. The perfectly secure machine, as Chun notes, would be unusable: it would cease to be a computer at all. We can then only ever approach security asymptotically, always leaving avenues for attack, for it is precisely through those avenues that communication occurs.

I’m a great believer in serendipity, but the problem with that from a technical point of view is that it increases my attack surface. It’s a source of tension that I actually feel most days.

There is no room, or at least less room, in a world of locked-down browsers, encrypted messaging apps, and verified communication for qualities like serendipity or chance encounters. Certainly in a world chock-full with bad actors, I am not arguing for less security, particularly for those of us most vulnerable to attack online… But I have to wonder how our intensive speculative energies, so far directed toward all possibility for attack, might be put to use in imagining a digital world that sees vulnerability as a value.

At the end of the day, this kind of article serves to show just how different our online, digital environment is from our physical reality. It’s a fascinating sideways look at the security guide as a ‘genre’. A recommended read in its entirety — and I really like the look of his blog!

Source: Jeffrey Moro

GDPR, blockchain, and privacy

I’m taking an online course about the impending General Data Protection Regulation (GDPR), which I’ve been writing about on my personal blog. An article in WIRED talks about the potential it will have, along with technologies such as blockchain.

People have talked for years about everyone having ‘private data accounts’ which they then choose to hook up to service providers. GDPR might just force that to happen:

A new generation of apps and websites will arise that use private-data accounts instead of conventional user accounts. Internet applications in 2018 will attach themselves to these, gaining access to a smart data account rich with privately held contextual information such as stress levels (combining sleep patterns, for example, with how busy a user’s calendar is) or motivation to exercise (comparing historical exercise patterns to infer about the day ahead). All of this will be possible without the burden on the app supplier of undue sensitive data liability or any violation of consumers’ personal rights.

As the article points out, when we know what’s going to happen with our data, we’re probably more likely to share it. For example, I’m much more likely to invest in voice-assisted technologies once GDPR hits in May:

Paradoxically, the internet will become more private at a moment when we individuals begin to exchange more data. We will then wield a collective economic power that could make 2018 the year we rebalance the digital economy.

This will have a huge effect on our everyday information landscape:

The more we share data on our terms, the more the internet will evolve to emulate the physical domain where private spaces, commercial spaces and community spaces can exist separately, but side by side. Indeed, private-data accounts may be the first step towards the internet as a civil society, paving the way for a governing system where digital citizens, in the form of their private micro-server data account, do not merely have to depend on legislation to champion their private rights, but also have the economic power to enforce them as well.

I have to say, the more I discover about the provisions of GDPR, the more excited and optimistic I am about the future.

Source: WIRED

More haste, less speed

In the last couple of years, there’s been a move to give names to security vulnerabilities that would be otherwise too arcane to discuss in the mainstream media. For example, back in 2014, Heartbleed, “a security bug in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol”, had not only a name but a logo.

The recent media storm around the so-called ‘Spectre’ and ‘Meltdown’ vulnerabilities shows how effective this approach is. It also helps that they sound a little like James Bond science fiction.

In this article, Zeynep Tufekci argues that the security vulnerabilities are built on our collective desire for speed:

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.

Helpfully, she gives a layperson’s explanation of what went wrong with these two security vulnerabilities:

Almost all modern microprocessors employ tricks to squeeze more performance out of a computer program. A common trick involves having the microprocessor predict what the program is about to do and start doing it before it has been asked to do it — say, fetching data from memory. In a way, modern microprocessors act like attentive butlers, pouring that second glass of wine before you knew you were going to ask for it.

But what if you weren’t going to ask for that wine? What if you were going to switch to port? No problem: The butler just dumps the mistaken glass and gets the port. Yes, some time has been wasted. But in the long run, as long as the overall amount of time gained by anticipating your needs exceeds the time lost, all is well.

Except all is not well. Imagine that you don’t want others to know about the details of the wine cellar. It turns out that by watching your butler’s movements, other people can infer a lot about the cellar. Information is revealed that would not have been had the butler patiently waited for each of your commands, rather than anticipating them. Almost all modern microprocessors make these butler movements, with their revealing traces, and hackers can take advantage.
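The butler metaphor can be made concrete with a toy simulation. This is an illustration of the principle only, not real exploit code: the CPU’s speculative work is modelled as a Python set of “cache lines”, and the attacker’s timing measurements are modelled as simple membership checks.

```python
# Toy simulation of a speculative-execution side channel (the "butler").
# Not an actual exploit: we model the microarchitectural side effect
# (a cache line being loaded) as membership in a Python set.

SECRET = 7  # a value the victim program never outputs directly

cache = set()  # which memory "lines" are currently resident


def victim():
    # Architecturally, the out-of-bounds access is rejected and nothing
    # is returned to the caller...
    index = 1000
    if index < 10:
        return "never happens"
    # ...but on a real CPU the mispredicted path may already have loaded
    # a cache line whose address depends on the secret. We model that
    # lingering side effect directly:
    cache.add(SECRET)


def attacker():
    # Probe all 256 candidate lines; the resident one is "fast to access".
    for guess in range(256):
        if guess in cache:  # stands in for a timing measurement
            return guess


victim()
print(attacker())  # prints 7: the secret leaks via the cache's state
```

The point the toy makes is the same as Tufekci’s: the attacker never reads the secret directly, only the traces the speculative “butler” left behind.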

Right now, she argues, systems have to employ more and more tricks to squeeze performance out of hardware because the software we use is riddled with surveillance and spyware.

But the truth is that our computers are already quite fast. When they are slow for the end-user, it is often because of “bloatware”: badly written programs or advertising scripts that wreak havoc as they try to track your activity online. If we were to fix that problem, we would gain speed (and avoid threatening and needless surveillance of our behavior).

As things stand, we suffer through hack after hack, security failure after security failure. If commercial airplanes fell out of the sky regularly, we wouldn’t just shrug. We would invest in understanding flight dynamics, hold companies accountable that did not use established safety procedures, and dissect and learn from new incidents that caught us by surprise.

And indeed, with airplanes, we did all that. There is no reason we cannot do the same for safety and security of our digital systems.

Major vendors have been pushing out patches over the past few weeks since the vulnerabilities came to light. For-profit companies have limited resources, of course, and proprietary, closed-source code. This means some devices won’t get the security updates at all, leaving end users in a tricky situation: their hardware is now almost worthless. So do they (a) keep on using it, crossing their fingers that nothing bad happens, or (b) bite the bullet and upgrade?

What I think the communities I’m part of could have done better at is shouting loudly that there’s an option (c): open source software. No matter how old your hardware, the chances are that someone, somewhere, with the requisite skills will want to fix the vulnerabilities on that device.

Source: The New York Times

DuckDuckGo moves beyond search

This is excellent news:

Today we’re taking a major step to simplify online privacy with the launch of fully revamped versions of our browser extension and mobile app, now with built-in tracker network blocking, smarter encryption, and, of course, private search – all designed to operate seamlessly together while you search and browse the web. Our updated app and extension are now available across all major platforms – Firefox, Safari, Chrome, iOS, and Android – so that you can easily get all the privacy essentials you need on any device with just one download.

I have a multitude of blockers installed, which makes it difficult to recommend just one to people. Hopefully this will simplify things:

For the last decade, DuckDuckGo has been giving you the ability to search privately, but that privacy was only limited to our search box. Now, when you also use the DuckDuckGo browser extension or mobile app, we will provide you with seamless privacy protection on the websites you visit. Our goal is to expand this privacy protection over time by adding even more privacy features into this single package. While not all privacy protection can be as seamless, the essentials available today and those that we will be adding will go a long way to protecting your privacy online, without compromising your Internet experience.
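At its core, tracker network blocking is domain matching: the hostname of each outgoing request is checked against a blocklist of known tracker domains, including their subdomains. Here’s a minimal sketch of that idea; the blocklist entries are illustrative placeholders, not DuckDuckGo’s actual list.

```python
# Minimal sketch of tracker blocking: match each request's hostname
# against a blocklist of tracker domains, including subdomains
# (so "sub.tracker.example" matches the entry "tracker.example").

from urllib.parse import urlparse

# Illustrative entries only -- a real extension ships a curated list.
BLOCKLIST = {"tracker.example", "ads.example"}


def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Check the hostname itself and every parent domain.
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))


print(is_blocked("https://sub.tracker.example/pixel.gif"))  # True
print(is_blocked("https://example.org/page"))               # False
```

Real blockers also classify trackers by network owner and handle cosmetic rules, but the domain-suffix match above is the essential mechanism.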

It looks like the code is all open source, too! 👏 👏 👏

Source: DuckDuckGo blog

Barely anyone uses 2FA

This is crazy.

In a presentation at Usenix’s Enigma 2018 security conference in California, Google software engineer Grzegorz Milka today revealed that, right now, less than 10 per cent of active Google accounts use two-step authentication to lock down their services. He also said only about 12 per cent of Americans have a password manager to protect their accounts, according to a 2016 Pew study.

Two-factor authentication (2FA), especially the kind where you use an authenticator app, is so effective that you could use a much weaker password than normal, should you wish. (I, however, stick to the 16-digit one created by a deterministic password manager.)

Please, if you haven’t already done so, just enable two-step authentication. This means that when you or someone else tries to log into your account, they need not only your password but also authorization from another device, such as your phone. So simply stealing your password isn’t enough – they need your unlocked phone, or similar, to get in.
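The most common app-based second factor is a time-based one-time password (TOTP, RFC 6238): your phone and the server share a secret, and both derive the same short-lived six-digit code from it and the current time. A minimal sketch using only the Python standard library:

```python
# Minimal TOTP (RFC 6238) sketch: phone and server share a secret and
# derive a six-digit code from it plus the current 30-second time window.

import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, timestamp=None, digits=6, step=30):
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", timestamp // step)  # current time window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The RFC 6238 test secret is "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, timestamp=59))  # prints "287082" (RFC test vector)
```

Because the code changes every 30 seconds and is derived from a secret that never leaves your device, a stolen password alone is useless to an attacker.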

I can’t understand people who basically live their lives permanently one step away from being hacked. And for what? A very slightly more convenient life? Mad.

Source: The Register

Choose your connected silo

The Verge reports back from CES, the yearly gathering where people usually get excited about shiny things. This year, however, people are a bit more wary…

And it’s not just privacy and security that people need to think about. There’s also lock-in. You can’t just buy a connected gadget, you have to choose an ecosystem to live in. Does it work with HomeKit? Will it work with Alexa? Will some tech company get into a spat with another tech company and pull its services from that hardware thing you just bought?

In other words, the kind of digital literacies required by the average consumer just went up a notch.

Here’s the thing: it’s unlikely that the connected toothpaste will go back in the tube at this point. Consumer products will be more connected, not less. Some day not long from now, the average person’s stroll down the aisle at Target or Best Buy will be just like our experiences at futuristic trade shows: everything is connected, and not all of it makes sense.

It won’t be long before we’ll be inviting techies around to debug our houses…

Source: The Verge

This isn’t the golden age of free speech

You’d think that with anyone, anywhere able to post anything to a global audience, this would be a golden age of free speech. Right?

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

The problem is not with free speech itself; it’s with the means by which it’s disseminated:

In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

It’s time to re-decentralise, people.

Source: WIRED

Attention is an arms race

Cory Doctorow writes:

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures.

Using a metaphor from virology, he notes that we become immune to certain types of manipulation over time:

When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

However, the thing I’m concerned about is the kind of AI-based manipulation that is forever shape-shifting. How do we become immune to a moving target?

Source: Locus magazine

Meltdown and Spectre explained by xkcd

There’s not much we mere mortals can do about the latest microprocessor-based vulnerabilities, except ensure we apply security patches immediately.

Source: xkcd

It doesn’t matter if you don’t use AI assistants if everyone else does

Email is an awesome system. It’s open and decentralised, and you can pick whoever you want to provide your email. The trouble is, of course, that if you decide you don’t want a certain company, say Google, to read your emails, you only have control of your half of the equation. In other words, it doesn’t matter if you don’t use GMail when most of your contacts do.

The same is true of AI assistants. You might not want an Amazon Echo device in your house, but you don’t spend all your life at home:

Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.

The future isn’t pre-ordained. We get to choose the society and culture in which we’d like to live. Huge, for-profit companies having listening devices everywhere sounds dystopian to me.

Source: TechCrunch