Tag: GDPR

Friday facings

This week’s links seem to share a theme: faces, and looking at them through screens. I’m not sure what that says about my network or my interests, but there we are…

As ever, let me know what resonates with you, and if you have any thoughts on what’s shared below!


The Age of Instagram Face

The human body is an unusual sort of Instagram subject: it can be adjusted, with the right kind of effort, to perform better and better over time. Art directors at magazines have long edited photos of celebrities to better match unrealistic beauty standards; now you can do that to pictures of yourself with just a few taps on your phone.

Jia Tolentino (The New Yorker)

People (especially women, though there’s increasing pressure on young men too) are literally going to see plastic surgeons with ‘Facetuned’ versions of themselves. It’s hard not to think that we’re heading for a kind of dystopia when people want to look like cartoonish versions of themselves.


What Makes A Good Person?

What I learned as a child is that most people don’t even meet the responsibilities of their positions (husband, wife, teacher, boss, politicians, whatever.) A few do their duty, and I honor them for it, because it is rare. But to go beyond that and actually be a man of honor is unbelievably rare.

Ian Welsh

This question, which I’ve been discussing with my therapist, is one I ask myself all the time. Recently, I’ve settled on Marcus Aurelius’ approach: “Waste no more time arguing about what a good man should be. Be one.”


Boredom is but a window to a sunny day beyond the gloom

Boredom can be our way of telling ourselves that we are not spending our time as well as we could, that we should be doing something more enjoyable, more useful, or more fulfilling. From this point of view, boredom is an agent of change and progress, a driver of ambition, shepherding us out into larger, greener pastures.

Neel Burton (Aeon)

As I’ve discussed before, I’m not so sure about the fetishisation of ‘boredom’. It’s good to be creative and let the mind wander. But boredom? Nah. There’s too much interesting stuff out there.


Resting Risk Face

Unlock your devices with a surgical mask that looks just like you.

I don’t usually link to products in this roundup, but I’m not sure this is 100% serious. Good idea, though!


The world’s biggest work-from-home experiment has been triggered by coronavirus

For some employees, like teachers who have conducted classes digitally for weeks, working from home can be a nightmare.
But in other sectors, this unexpected experiment has been so well received that employers are considering adopting it as a more permanent measure. For those who advocate more flexible working options, the past few weeks mark a possible step toward widespread — and long-awaited — reform.

Jessie Yeung (CNN)

Every cloud has a silver lining, I guess? Working from home is great, especially when you have a decent setup.


Setting Up Your Webcam, Lights, and Audio for Remote Work, Podcasting, Videos, and Streaming

Only you really know what level of clarity you want from each piece of your setup. Are you happy with what you have? Please, dear Lord, don’t spend any money. This is intended to be a resource if you want more and don’t know how to do it, not a stress or a judgment to anyone happy with their current setup

And while it’s a lot of fun to have a really high-quality webcam for my remote work, would I have bought it if I didn’t have a more intense need for high quality video for my YouTube stuff? Hell no. Get what you need, in your budget. This is just a resource.

This is a fantastic guide. I bought a great webcam when I saw it drop in price via CamelCamelCamel, and a decent mic when I recorded the TIDE podcast with Dai. It really does make a difference.


Large screen phones: a challenge for UX design (and human hands)

I know it might sound like I have more questions than answers, but it seems to me that we are missing out on a very basic solution for the screen size problem. Manufacturers did so much to increase the screen size, computational power and battery capacity whilst keeping phones thin, that switching the apps navigation to the bottom should have been the automatic response to this new paradigm.

Maria Grilo (Imaginary Cloud)

The struggle is real. I invested in a new phone this week (a OnePlus 7 Pro 5G) and, unlike the phone it replaced from 2017, it’s definitely a hold-with-two-hands device.


Society Desperately Needs An Alternative Web

What has also transpired is a web of unbridled opportunism and exploitation, uncertainty and disparity. We see increasing pockets of silos and echo chambers fueled by anxiety, misplaced trust, and confirmation bias. As the mainstream consumer lays witness to these intentions, we notice a growing marginalization that propels more to unplug from these communities and applications to safeguard their mental health. However, the addiction technology has produced cannot be easily remedied. In the meantime, people continue to suffer.

Hessie Jones (Forbes)

Another call to re-decentralise the web, this time based on arguments about centralised services not being able to handle the scale of abuse and fraudulent activity.


UK Google users could lose EU GDPR data protections

It is understood that Google decided to move its British users out of Irish jurisdiction because it is unclear whether Britain will follow GDPR or adopt other rules that could affect the handling of user data.

If British Google users have their data kept in Ireland, it would be more difficult for British authorities to recover it in criminal investigations.

The recent Cloud Act in the US, however, is expected to make it easier for British authorities to obtain data from US companies. Britain and the US are also on track to negotiate a broader trade agreement.

Samuel Gibbs (The Guardian)

I’m sure this is a business decision as well, but I guess it makes sense given post-Brexit uncertainty about privacy legislation. It’s a shame, though, and a little concerning.





Friday featherings

Behold! The usual link round-up of interesting things I’ve read in the last week.

Feel free to let me know if anything particularly resonated with you via the comments section below…


Part I – What is a Weird Internet Career?

Weird Internet Careers are the kinds of jobs that are impossible to explain to your parents, people who somehow make a living from the internet, generally involving a changing mix of revenue streams. Weird Internet Career is a term I made up (it had no google results in quotes before I started using it), but once you start noticing them, you’ll see them everywhere. 

Gretchen McCulloch (All Things Linguistic)

I love this phrase, which I came across via Dan Hon’s newsletter. This is the first in a whole series of posts, which I am yet to explore in its entirety. My aim in life is now to make my career progressively more (internet) weird.


Nearly half of Americans didn’t go outside to recreate in 2018. That has the outdoor industry worried.

While the Outdoor Foundation’s 2019 Outdoor Participation Report showed that while a bit more than half of Americans went outside to play at least once in 2018, nearly half did not go outside for recreation at all. Americans went on 1 billion fewer outdoor outings in 2018 than they did in 2008. The number of adolescents ages 6 to 12 who recreate outdoors has fallen four years in a row, dropping more than 3% since 2007 

The number of outings for kids has fallen 15% since 2012. The number of moderate outdoor recreation participants declined, and only 18% of Americans played outside at least once a week. 

Jason Blevins (The Colorado Sun)

One of Bruce Willis’ lesser-known films is Surrogates (2009). It’s a short, pretty average film with a really interesting central premise: most people stay at home and send their surrogates out into the world. Over a decade after the film was released, a combination of things (including virulent viruses, screen-focused leisure time, and safety fears) seem to suggest it might be a predictor of our medium-term future.


I’ll Never Go Back to Life Before GDPR

It’s also telling when you think about what lengths companies have had to go through to make the EU versions of their sites different. Complying with GDPR has not been cheap. Any online business could choose to follow GDPR by default across all regions and for all visitors. It would certainly simplify things. They don’t, though. The amount of money in data collection is too big.

Jill Duffy (OneZero)

This is a strangely-titled article, but a decent explainer on what the web looks and feels like to those outside the EU. The author is spot-on when she talks about how GDPR and the recent California Privacy Law could be applied everywhere, but they’re not. Because surveillance capitalism.


You Are Now Remotely Controlled

The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.

Shoshana Zuboff (The New York Times)

I fear that the length of Zuboff’s (excellent) book on surveillance capitalism, her use in this article of terms such as ‘epistemic inequality’, and the subtlety of her arguments may mean that she’s preaching to the choir here.


How to Raise Media-Savvy Kids in the Digital Age

The next time you snap a photo together at the park or a restaurant, try asking your child if it’s all right that you post it to social media. Use the opportunity to talk about who can see that photo and show them your privacy settings. Or if a news story about the algorithms on YouTube comes on television, ask them if they’ve ever been directed to a video they didn’t want to see.

Meghan Herbst (WIRED)

There’s some useful advice in this WIRED article, especially that given by my friend Ian O’Byrne. The difficulty I’ve found is when one of your kids becomes a teenager and companies like Google contact them directly telling them they can have full control of their accounts, should they wish…


Control-F and Building Resilient Information Networks

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment)

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f.

Roll the animated GIF!

Mike Caulfield (Hapgood)

I find it incredible, but absolutely believable, that only around 10% of internet users know how to use Ctrl-F to find something within a web page. On mobile, it’s just as easy, as there’s an option within most (all?) browsers to ‘search within page’. I like Mike’s work: not only is it academic, it’s also incredibly practical.


EdX launches for-credit credentials that stack into bachelor’s degrees

The MicroBachelors also mark a continued shift for EdX, which made its name as one of the first MOOC providers, to a wider variety of educational offerings 

In 2018, EdX announced several online master’s degrees with selective universities, including the Georgia Institute of Technology and the University of Texas at Austin.

Two years prior, it rolled out MicroMasters programs. Students can complete the series of graduate-level courses as a standalone credential or roll them into one of EdX’s master’s degrees.

That stackability was something EdX wanted to carry over into the MicroBachelors programs, Agarwal said. One key difference, however, is that the undergraduate programs will have an advising component, which the master’s programs do not. 

Natalie Schwartz (Education Dive)

This is largely a rewritten press release with a few extra links, but I found it interesting as it’s a concrete example of a couple of things. First, the ongoing shift in Higher Education towards students-as-customers. Second, the viability of microcredentials as a ‘stackable’ way to build a portfolio of skills.

Note that, as a graduate of degrees in the Humanities, I’m not saying this approach can be used for everything, but for those using Higher Education as a means to an end, this is exactly what’s required.


How much longer will we trust Google’s search results?

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

Dieter Bohn (The Verge)

I’ve used DuckDuckGo as my go-to search engine for years now. It used to be that I’d have to switch to Google for around 10% of my searches. That’s now down to zero.


Coaching – Ethics

One of the toughest situations for a product manager is when they spot a brewing ethical issue, but they’re not sure how they should handle the situation.  Clearly this is going to be sensitive, and potentially emotional. Our best answer is to discover a solution that does not have these ethical concerns, but in some cases you won’t be able to, or may not have the time.

[…]

I rarely encourage people to leave their company, however, when it comes to those companies that are clearly ignoring the ethical implications of their work, I have and will continue to encourage people to leave.

Marty Cagan (SVPG)

As someone with a sensitive radar for these things, I’ve chosen to work with ethical people and for ethical organisations. As Cagan says in this post, if you’re working for a company that ignores the ethical implications of their work, then you should leave. End of story.



The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings in order to improve them.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

You can almost feel Kalev Leetaru’s eye-roll when, discussing ‘fake news’ and our information literacy problem in Forbes, he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons for suspicion if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) to share recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) is an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.

For years, techies have talked about ‘personal data vaults’ where you could share information in a granular way without being tracked. The BBC is currently trialling BBC Box, which could potentially help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC

It’s an interesting concept and, if they can get the user experience right, a potentially groundbreaking one. Eventually, of course, it will be in your smartphone, which means that device really will be a ‘digital self’.

You can absolutely opt out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or to live their lives under the rules of those who act as though they are one.

Opting in and out of algorithms

It’s now over seven years since I submitted my doctoral thesis on digital literacies. Since then (almost the entire time my daughter has been alive), the world has changed a lot.

Writing in The Conversation, Anjana Susarla explains her view that digital literacy goes well beyond functional skills:

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle. This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.

Digital literacies are plural, context-dependent and always evolving. Right now, I think Susarla is absolutely correct to be focusing on algorithms and the way they interact with society. Ben Williamson is definitely someone to follow and read up on in that regard.

Over the past few years I’ve been trying (both directly and indirectly) to educate people about the impact of algorithms on everything from fake news to privacy. It’s one of the reasons I don’t use Facebook, for example, and go out of my way to explain to others why they shouldn’t either:

A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

[…]

However, a vast majority of platforms do not provide either such flexibility to their end users or the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.

Although I’m still not going to join Facebook, one reason I’m a little more chilled out about algorithms and privacy these days is because of the GDPR. If it’s regulated effectively (as I think it will be) then it should really keep Big Tech in check:

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

[…]

But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

I agree that it’s not enough to just tell people that they’re being tracked without them being able to do something about it. That leads to technological defeatism. What we need are simple, easy-to-use tools that enable user privacy and security. These aren’t going to come through tech industry self-regulation, but through regulatory frameworks like GDPR.

Source: The Conversation


Also check out:

Nobody is ready for GDPR

As a small business owner and co-op founder, GDPR applies to me as much as everyone else. It’s a massive ballache, but I support the philosophy behind what it’s trying to achieve.

After four years of deliberation, the General Data Protection Regulation (GDPR) was officially adopted by the European Union in 2016. The regulation gave companies a two-year runway to get compliant, which is theoretically plenty of time to get shipshape. The reality is messier. Like term papers and tax returns, there are people who get it done early, and then there’s the rest of us.

I’m definitely in “the rest of us” camp, meaning that, over the last week or so, my wife and I have spent time figuring stuff out. The main thing is getting things in order so that you’ve got a process in place. Different things are going to affect different organisations, well, differently.

But perhaps the GDPR requirement that has everyone tearing their hair out the most is the data subject access request. EU residents have the right to request access to review personal information gathered by companies. Those users — called “data subjects” in GDPR parlance — can ask for their information to be deleted, to be corrected if it’s incorrect, and even get delivered to them in a portable form. But that data might be on five different servers and in god knows how many formats. (This is assuming the company even knows that the data exists in the first place.) A big part of becoming GDPR compliant is setting up internal infrastructures so that these requests can be responded to.

A data subject access request isn’t going to affect our size of business very much. If someone does make a request, we’ve got a list of places from which to manually export the data. That’s obviously not a viable option for larger enterprises, who need to automate.

To be fair, GDPR as a whole is a bit complicated. Alison Cool, a professor of anthropology and information science at the University of Colorado, Boulder, writes in The New York Times that the law is “staggeringly complex” and practically incomprehensible to the people who are trying to comply with it. Scientists and data managers she spoke to “doubted that absolute compliance was even possible.”

To my mind, GDPR is like a much more far-reaching version of the Freedom of Information Act that came into force in the year 2000. That changed the nature of what citizens could expect from public bodies. I hope that the GDPR similarly changes what we all can expect from organisations who process our personal data.

Source: The Verge

Microcast #002



What’s Doug working on this week?


GDPR, blockchain, and privacy

I’m taking an online course about the impending General Data Protection Regulation (GDPR), which I’ve been writing about on my personal blog. An article in WIRED talks about the potential it will have, along with technologies such as blockchain.

For years, people have talked about everyone having ‘private data accounts’ which they then choose to hook up to service providers. GDPR might just force that to happen:

A new generation of apps and websites will arise that use private-data accounts instead of conventional user accounts. Internet applications in 2018 will attach themselves to these, gaining access to a smart data account rich with privately held contextual information such as stress levels (combining sleep patterns, for example, with how busy a user’s calendar is) or motivation to exercise (comparing historical exercise patterns to infer about the day ahead). All of this will be possible without the burden on the app supplier of undue sensitive data liability or any violation of consumers’ personal rights.

As the article points out, when we know what’s going to happen with our data, we’re probably more likely to share it. For example, I’m much more likely to invest in voice-assisted technologies once GDPR hits in May:

Paradoxically, the internet will become more private at a moment when we individuals begin to exchange more data. We will then wield a collective economic power that could make 2018 the year we rebalance the digital economy.

This will have a huge effect on our everyday information landscape:

The more we share data on our terms, the more the internet will evolve to emulate the physical domain where private spaces, commercial spaces and community spaces can exist separately, but side by side. Indeed, private-data accounts may be the first step towards the internet as a civil society, paving the way for a governing system where digital citizens, in the form of their private micro-server data account, do not merely have to depend on legislation to champion their private rights, but also have the economic power to enforce them as well.

I have to say, the more I discover about the provisions of GDPR, the more excited and optimistic I am about the future.

Source: WIRED

Venture Communism?

As part of my Moodle work, I’ve been looking at GDPR and decentralised technologies, so I found the following interesting.

It’s worth pointing out that ‘disintermediation’ is the removal of intermediaries from a supply chain. Google, Amazon, Facebook, Microsoft, and Apple specialise in ‘anti-disintermediation’, or plain old vendor lock-in. So ‘counter-anti-disintermediation’ is working against that in a forward-thinking way.

Central to the counter-anti-disintermediationist design is the End-to-End principle: platforms must not depend on servers and admins, even when cooperatively run, but must, to the greatest degree possible, run on the computers of the platform’s users. The computational capacity and network access of the users’ own computers must collectively make up the resources of the platform, such that, on average, each new user adds net resources to the platform. By keeping the computational capacity in the hands of the users, we prevent the communication platform from becoming capital, and we prevent the users from being instrumentalized as an audience commodity.

The great thing about that, of course, is that solutions such as ZeroNet allow for this, in a way similar to BitTorrent networks, where more popular content becomes more available.

The linked slides from that article describe ‘venture communism’, an approach characterised by co-operative control, open federated systems, and commons ownership. Now that’s something I can get behind!

Source: P2P Foundation

GDPR could break the big five’s monopoly stranglehold on our data

Almost everyone has one or more accounts with the following companies: Apple, Amazon, Facebook, Google, and Microsoft. Between them they know more about you than your family and the state apparatus of your country, combined.

However, 2018 could be the year that changes all that, all thanks to the General Data Protection Regulation (GDPR), as this article explains.

There is legitimate fear that GDPR will threaten the data-profiling gravy train. It’s a direct assault on the surveillance economy, enforced by government regulators and an army of class-action lawyers. “It will require such a rethinking of the way Facebook and Google work, I don’t know what they will do,” says Jonathan Taplin, author of Move Fast and Break Things, a book that’s critical of the platform economy. Companies could still serve ads, but they would not be able to use data to target someone’s specific preferences without their consent. “I saw a study that talked about the difference in value of an ad if platforms track information versus do not track,” says Reback. “If you just honor that, it would cut the value Google could charge for an ad by 80 percent.”

If it were any other industry, these monolithic companies would already have been broken up. However, there may be another, technical, way of restricting their dominance: forcing them to be interoperable so that users can move their data between platforms.

Portability would break one of the most powerful dynamics cementing Big Tech dominance: the network effect. People want to use the social media site their friends use, forcing startups to swim against a huge tide. Competition is not a click away, as Google’s Larry Page once said; the costs of switching are too high. But if you could use a competing social media site with the confidence that you’ll reach all your friends, suddenly the Facebook lock gets jimmied open. This offers the opportunity for competition on the quality and usability of the service rather than the presence of friends.

Source: The American Prospect