
Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “There’s so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on people’s lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while someone ‘friendly’ was in the White House, but asking what comes next.

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit, mean that there’s no attention or cash left.

If we stop and think about things for a second, we probably wouldn’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me most is people in dire need being subjected to impersonal technology in order to receive vital, life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that doing so merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again in The Guardian (which is a good source of information for those concerned with surveillance), writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us have, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces that we have to stop this are collective action and governmental action. My concern is that we don’t have the digital savvy to do the former, and there’s definitely a lack of will in respect of the latter. Troubling times.

Friday fluctuations

Have a quick skim through these links that I came across this week and found interesting:

  • Overrated: Ludwig Wittgenstein (Standpoint) — “Wittgenstein’s reputation for genius did not depend on incomprehensibility alone. He was also “tortured”, rude and unreliable. He had an intense gaze. He spent months in cold places like Norway to isolate himself. He temporarily quit philosophy, because he believed that he had solved all its problems in his 1922 Tractatus Logico-Philosophicus, and worked as a gardener. He gave away his family fortune. And, of course, he was Austrian, as so many of the best geniuses are.”
  • EdTech Resistance (Ben Williamson) ⁠— “We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself.”
  • The Guardian view on machine learning: a computer cleverer than you? (The Guardian) — “The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster? Governments need to pause and take stock of the societal repercussions of allowing machines over a few decades to replicate human skills that have been evolving for millions of years.”
  • A nerdocratic oath (Scott Aaronson) — “I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems. Moral implications are outside my scope”.”
  • Privacy is power (Aeon) — “The power that comes about as a result of knowing personal details about someone is a very particular kind of power. Like economic power and political power, privacy power is a distinct type of power, but it also allows those who hold it the possibility of transforming it into economic, political and other kinds of power. Power over others’ privacy is the quintessential kind of power in the digital age.”
  • The Symmetry and Chaos of the World’s Megacities (WIRED) — “Koopmans manages to create fresh-looking images by finding unique vantage points, often by scouting his locations on Google Earth. As a rule, he tries to get as high as he can—one of his favorite tricks is talking local work crews into letting him shoot from the cockpit of a construction crane.”
  • Green cities of the future – what we can expect in 2050 (RNZ) — “In their lush vision of the future, a hyperloop monorail races past in the foreground and greenery drapes the sides of skyscrapers that house communal gardens and vertical farms.”
  • Wittgenstein Teaches Elementary School (Existential Comics) ⁠— “And I’ll have you all know, there is no crying in predicate logic.”
  • Ask Yourself These 5 Questions to Inspire a More Meaningful Career Move (Inc.) — “Introspection on the right things can lead to the life you want.”

Image from Do It Yurtself

Friday feudalism

Check out these things I discovered this week, and wanted to pass along:

  • Study shows some political beliefs are just historical accidents (Ars Technica) — “Obviously, these experiments aren’t exactly like the real world, where political leaders can try to steer their parties. Still, it’s another way to show that some political beliefs aren’t inviolable principles—some are likely just the result of a historical accident reinforced by a potent form of tribal peer pressure. And in the early days of an issue, people are particularly susceptible to tribal cues as they form an opinion.”
  • Please, My Digital Archive. It’s Very Sick. (Lapham’s Quarterly) — “An archivist’s dream is immaculate preservation, documentation, accessibility, the chance for our shared history to speak to us once more in the present. But if the preservation of digital documents remains an unsolvable puzzle, ornery in ways that print materials often aren’t, what good will our archiving do should it become impossible to inhabit the world we attempt to preserve?”
  • So You’re 35 and All Your Friends Have Already Shed Their Human Skins (McSweeney’s) — “It’s a myth that once you hit 40 you can’t slowly and agonizingly mutate from a human being into a hideous, infernal arachnid whose gluttonous shrieks are hymns to the mad vampire-goddess Maggorthulax. You have time. There’s no biological clock ticking. The parasitic worms inside you exist outside of our space-time continuum.”
  • Investing in Your Ordinary Powers (Breaking Smart) — “The industrial world is set up to both encourage and coerce you to discover, as early as possible, what makes you special, double down on it, and build a distinguishable identity around it. Your specialness-based identity is in some ways your Industrial True Name. It is how the world picks you out from the crowd.”
  • Browser Fingerprinting: An Introduction and the Challenges Ahead (The Tor Project) — “This technique is so rooted in mechanisms that exist since the beginning of the web that it is very complex to get rid of it. It is one thing to remove differences between users as much as possible. It is a completely different one to remove device-specific information altogether.”
  • What is a Blockchain Phone? The HTC Exodus explained (giffgaff) — “HTC believes that in the future, your phone could hold your passport, driving license, wallet, and other important documents. It will only be unlockable by you which makes it more secure than paper documents.”
  • Debate rages in Austria over enshrining use of cash in the constitution (EURACTIV) — “Academic and author Erich Kirchler, a specialist in economic psychology, says in Austria and Germany, citizens are aware of the dangers of an overmighty state from their World War II experience.”
  • Cory Doctorow: DRM Broke Its Promise (Locus magazine) — “We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.”
  • Five Books That Changed Me In One Summer (Warren Ellis) — “I must have been around 14. Rayleigh Library and the Oxfam shop a few doors down the high street from it, which someone was clearly using to pay things forward and warp younger minds.”

Friday fathomings

I enjoyed reading these:


Image via Indexed

Wretched is a mind anxious about the future

So said one of my favourite non-fiction authors, the 16th century proto-blogger Michel de Montaigne. There’s plenty of writing about how we need to be anxious because of the drift towards a future of surveillance states. Eventually, because it’s not currently affecting us here and now, we become blasé. We forget that it’s already the lived experience for hundreds of millions of people.

Take China, for example. In The Atlantic, Derek Thompson writes about the Chinese government’s brutality against the Muslim Uyghur population in the western province of Xinjiang:

[The] horrifying situation is built on the scaffolding of mass surveillance. Cameras fill the marketplaces and intersections of the key city of Kashgar. Recording devices are placed in homes and even in bathrooms. Checkpoints that limit the movement of Muslims are often outfitted with facial-recognition devices to vacuum up the population’s biometric data. As China seeks to export its suite of surveillance tech around the world, Xinjiang is a kind of R&D incubator, with the local Muslim population serving as guinea pigs in a laboratory for the deprivation of human rights.

Derek Thompson

As Ian Welsh points out, talk of surveillance states usually involves us in the West pointing towards places like China and shaking our heads. However, if you step back a moment and remember that societies like the US and UK are becoming more unequal over time, then perhaps we’re the ones who should be worried:

The endgame, as I’ve been pointing out for years, is a society in which where you are and what you’re doing, and have done, is always known, or at least knowable. And that information is known forever, so the moment someone with power wants to take you out, they can go back thru your life in minute detail. If laws or norms change so that what was OK 10 or 30 years ago isn’t OK now, well they can get you on that.

Ian Welsh

As the world becomes more unequal, the position of elites becomes more perilous, hence Silicon Valley billionaires preparing boltholes in New Zealand. Ironically, they’re looking for places where they can’t be found, while making serious money from providing surveillance technology. Instead of solving the inequality, they attempt to insulate themselves from the effect of that inequality.

A lot of the crazy amounts of money earned in Silicon Valley comes at the price of infringing our privacy. I’ve spent a long time thinking about what is quite a nebulous concept. It’s not the easiest thing to understand when you examine it more closely.

Privacy is usually considered a freedom from rather than a freedom to, as in “freedom from surveillance”. The trouble is that there are many kinds of surveillance, and some of these we actively encourage. A quick example: I know of at least one family that share their location with one another all of the time. At the same time, of course, they’re sharing it with the company that provides that service.

There’s a lot of power in the ‘default’ privacy settings devices and applications come with. People tend to go with whatever comes as standard. Sidney Fussell writes in The Atlantic that:

Many apps and products are initially set up to be public: Instagram accounts are open to everyone until you lock them… Even when companies announce convenient shortcuts for enhancing security, their products can never become truly private. Strangers may not be able to see your selfies, but you have no way to untether yourself from the larger ad-targeting ecosystem.

Sidney Fussell

Some of us (including me) are willing to trade some of that privacy for more personalised services that somehow make our lives easier. The tricky thing is when it comes to employers and state surveillance. In these cases there are coercive power relationships at play, rather than just convenience.

Ellen Sheng, writing for CNBC, explains how employees in the US are at huge risk from workplace surveillance:

In the workplace, almost any consumer privacy law can be waived. Even if companies give employees a choice about whether or not they want to participate, it’s not hard to force employees to agree. That is, unless lawmakers introduce laws that explicitly state a company can’t make workers agree to a technology…

One example: Companies are increasingly interested in employee social media posts out of concern that employee posts could reflect poorly on the company. A teacher’s aide in Michigan was suspended in 2012 after refusing to share her Facebook page with the school’s superintendent following complaints about a photo she had posted. Since then, dozens of similar cases prompted lawmakers to take action. More than 16 states have passed social media protections for individuals.

Ellen Sheng

It’s not just workplaces, though. Schools are hotbeds for new surveillance technologies, as Benjamin Herold notes in an article for Education Week:

Social media monitoring companies track the posts of everyone in the areas surrounding schools, including adults. Other companies scan the private digital content of millions of students using district-issued computers and accounts. Those services are complemented with tip-reporting apps, facial-recognition software, and other new technology systems.

[…]

While schools are typically quiet about their monitoring of public social media posts, they generally disclose to students and parents when digital content created on district-issued devices and accounts will be monitored. Such surveillance is typically done in accordance with schools’ responsible-use policies, which students and parents must agree to in order to use districts’ devices, networks, and accounts.
Hypothetically, students and families can opt out of using that technology. But doing so would make participating in the educational life of most schools exceedingly difficult.

Benjamin Herold

In China, of course, a social credit system makes all of this a million times worse, but we in the West aren’t heading in a great direction either.

We’re entering an era in which, by the time my children are my age, companies, employers, and the state could have decades of data on them, from when they entered the school system through to finding jobs and becoming parents themselves.

There are upsides to all of this data, obviously. But I think that in the midst of privacy-focused conversations about Amazon’s smart speakers and Google location-sharing, we might be missing the bigger picture around surveillance by educational institutions, employers, and governments.

Returning to Ian Welsh to finish up, remember that it’s the coercive power relationships that make surveillance a bad thing:

Surveillance societies are sterile societies. Everyone does what they’re supposed to do all the time, and because we become what we do, it affects our personalities. It particularly affects our creativity, and is a large part of why Communist surveillance societies were less creative than the West, particularly as their police states ramped up.

Ian Welsh

We don’t want to think about all of this, though, do we?


Also check out:

The drawbacks of Artificial Intelligence

It’s really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option?

With the advent of autonomous vehicles, these are no longer idle questions. The vehicles, which have to make split-second decisions, may have to decide whether to hit a pram containing a baby, or swerve and hit a couple of pensioners. Due to cultural differences, even that’s not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are at opposite ends of the spectrum when it comes to saving passengers or pedestrians!

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.

This is why we need more people with a background in the Humanities in tech, and why we should be having a real conversation about ethics and AI.

Of course, that’s easier said than done, particularly when the companies that are in a position to make significant strides in this regard have near-monopolies in their field and are pulling in eye-watering amounts of money. In one recent example, Google convened an AI ethics committee that was attacked as a smokescreen:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

 […]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw with privacy, it takes a trusted multi-national body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, they’ve started that process by releasing guidelines containing seven requirements for trustworthy AI:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn’t that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.

Facial recognition might be a cool way to unlock your phone, but the kind of micro-expressions that made for great television in the series Lie to Me are now easily exploited in what is expected to become a $20bn industry.

The trouble with all of this is that it’s very difficult for us as individuals to make a difference here. The problem needs to be tackled at a much higher level, as with GDPR. That will take time, and meanwhile the use of AI is exploding. Be careful out there.


Also check out:

Location data in old tweets

What use are old tweets? Do you look back through them? If not, then they’re only useful to others, who are able to data-mine you using a new tool:

The tool, called LPAuditor (short for Location Privacy Auditor), exploits what the researchers call an “invasive policy” Twitter deployed after it introduced the ability to tag tweets with a location in 2009. For years, users who chose to geotag tweets with any location, even something as geographically broad as “New York City,” also automatically gave their precise GPS coordinates. Users wouldn’t see the coordinates displayed on Twitter. Nor would their followers. But the GPS information would still be included in the tweet’s metadata and accessible through Twitter’s API.
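To make the risk concrete, here’s a minimal sketch in Python of the kind of check you could run yourself. It is not LPAuditor: the script and the filename tweets.jsonl are hypothetical, and it assumes tweet objects in Twitter’s classic v1.1 JSON format, where precise GPS data sits in the coordinates field (GeoJSON, longitude first) or the legacy geo field (latitude first), neither of which is displayed in the interface.

import json
from typing import Optional, Tuple

def extract_coordinates(tweet: dict) -> Optional[Tuple[float, float]]:
    """Return (latitude, longitude) if a tweet object carries precise GPS data."""
    # The classic v1.1 tweet object can expose location in two places:
    # 'coordinates' is GeoJSON, ordered [longitude, latitude];
    # 'geo' is the older, deprecated field, ordered [latitude, longitude].
    coords = tweet.get("coordinates")
    if coords and coords.get("type") == "Point":
        lon, lat = coords["coordinates"]
        return (lat, lon)
    geo = tweet.get("geo")
    if geo and geo.get("type") == "Point":
        lat, lon = geo["coordinates"]
        return (lat, lon)
    return None

# Hypothetical usage: audit a local file of tweet JSON objects, one per line.
if __name__ == "__main__":
    with open("tweets.jsonl", encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            point = extract_coordinates(tweet)
            if point:
                print(f"{tweet.get('id_str')}: {point[0]:.5f}, {point[1]:.5f}")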

I deleted around 77,500 tweets in 2017 for exactly this kind of reason.

Source: WIRED

Confusing tech questions

Today is the first day of the Consumer Electronics Show, or CES, in Las Vegas. Each year, tech companies showcase their latest offerings and concepts. Nilay Patel, Editor-in-Chief for The Verge, comments that, increasingly, the tech industry is built on a number of assumptions about consumers and human behaviour:

[T]hink of the tech industry as being built on an ever-increasing number of assumptions: that you know what a computer is, that saying “enter your Wi-Fi password” means something to you, that you understand what an app is, that you have the desire to manage your Bluetooth device list, that you’ll figure out what USB-C dongles you need, and on and on.

Lately, the tech industry is starting to make these assumptions faster than anyone can be expected to keep up. And after waves of privacy-related scandals in tech, the misconceptions and confusion about how things work are both greater and more reasonable than ever.

I think this is spot-on. At Mozilla, and now at Moodle, I spend a good deal of my time among people who are more technically-minded than me. And, in turn, I’m more technically-minded than the general population. So what’s ‘obvious’ or ‘easy’ to developers feels like magic to the man or woman on the street.

Patel keeps track of the questions his friends and family ask him, and has listed them in the post. The number one thing everyone asks about, he says, is the assumption that their phones are listening to them and then serving up advertising based on what they hear. They don’t get that Facebook (and other platforms) use multiple data points to make inferences.

I’ll not reproduce his list here, but here are three questions which I, too, get a lot from friends and family:

“How do I make sure deleting photos from my iPhone won’t delete them from my computer?”

“How do I keep track of what my kid is watching on YouTube?”

“Why do I need to make another username and password?”

As I was discussing with the MoodleNet team just yesterday, there’s a difference between treating users as ‘stupid’ (which they’re not) and ensuring that they don’t have to think too much when they’re using your product.

Source: The Verge (via Orbital Operations)

Configuring your iPhone for productivity (and privacy, security?)

At an estimated read time of 70 minutes, this article is the longest I’ve seen on Medium! It includes a bunch of advice from ‘Coach Tony’, the CEO of Coach.me, about how he uses his iPhone, and perhaps how you should too:

The iPhone could be an incredible tool, but most people use their phone as a life-shortening distraction device.

However, if you take the time to follow the steps in this article you will be more productive, more focused, and — I’m not joking at all — live longer.

Practically every iPhone setup decision has tradeoffs. I will give you optimal defaults and then trust you to make an adult decision about whether that default is right for you.

As an aside, I appreciate the way he sets up different ways to read the post, from skimming the headlines through to reading the whole thing in-depth.

However, the problem is that for a post that the author describes as a ‘very very complete’ guide to configuring your iPhone to ‘work for you, not against you’, it doesn’t go into enough depth about privacy and security for my liking. I’m kind of tired of people thinking that using a password manager and increasing your lockscreen password length is enough.

For example, Coach Tony talks about going all-in on Google Cloud. When people point out the privacy concerns of doing this, he basically uses the tinfoil hat defence in response:

Moving to the Google cloud does trade privacy for productivity. Google will use your data to advertise to you. However, this is a productivity article. If you wish it were a privacy article, then use Protonmail. Last, it’s not consistent that I have you turn off Apple’s ad tracking while then making yourself fully available to Google’s ad tracking. This is a tradeoff. You can turn off Apple’s tracking with zero downside, so do it. With Google, I think it’s worthwhile to use their services and then fight ads in other places. The Reader feature in Safari basically hides most Google ads that you’d see on your phone. On your computer, try an ad blocker.

It’s all very well saying that it’s a productivity article rather than a privacy article. But it’s 2018; you need to do both. Don’t recommend things to people that give them gains in one area but cause them new problems in others.

That being said, I appreciate Coach Tony’s focus on what I would call ‘notification literacy’. Perhaps read his article, ignore the bits where he suggests compromising your privacy, and follow his advice on configuring your device for a calmer existence.

 

Source: Better Humans