Tag: Facebook

Cory Doctorow on the corruption at the heart of Facebook

I like Cory Doctorow. He’s a gifted communicator who wears his heart on his sleeve. In this article, he talks about Facebook and how what it’s wrought is a result of the corruption at its very heart.

It’s great that the privacy-matters message is finally reaching a wider audience, and it’s exciting to think that we’re approaching a tipping point for indifference to privacy and surveillance.

But while the acknowledgment of the problem of Big Tech is most welcome, I am worried that the diagnosis is wrong.

The problem is that we’re confusing automated persuasion with automated targeting. Laughable lies about Brexit, Mexican rapists, and creeping Sharia law didn’t convince otherwise sensible people that up was down and the sky was green.

Rather, the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

So, for example, people seem to think that Facebook advertisements caused people to vote for Trump. As if they were going to vote for someone else, and then changed their mind as a direct result of viewing ads. That’s not how it works.

Companies such as Cambridge Analytica might claim that they can rig elections and change people’s minds, but they’re not actually that sophisticated.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.

This isn’t to say that persuasion is impossible. Automated disinformation campaigns can flood the channel with contradictory, seemingly plausible accounts for the current state of affairs, making it hard for a casual observer to make sense of events. Long-term repetition of a consistent narrative, even a manifestly unhinged one, can create doubt and find adherents – think of climate change denial, or George Soros conspiracies, or the anti-vaccine movement.

These are long, slow processes, though, that make tiny changes in public opinion over the course of years, and they work best when there are other conditions that support them – for example, fascist, xenophobic, and nativist movements that are the handmaidens of austerity and privation. When you don’t have enough for a long time, you’re ripe for messages blaming your neighbors for having deprived you of your fair share.

Advertising and influencing works best when you provide a message that people already agree with in a way that they can easily share with others. The ‘long, slow processes’ that Doctorow refers to have been practised offline as well (think of Nazi propaganda, for example). Dark adverts on Facebook are tapping into feelings and reactions that aren’t peculiar to the digital world.

Facebook has thrived by providing ways for people to connect and communicate with one another. Unfortunately, because they’re so focused on profit over people, they’ve done a spectacularly bad job at making sure that the spaces in which people connect are healthy spaces that respect democracy.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

That last phrase is right on the money.

Source: Locus magazine

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[…]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with the digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it’s going to take tightly-integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

Why NASA is better than Facebook at writing software

Facebook’s motto, until recently, was “move fast and break things”. This chimed with a wider Silicon Valley brogrammer mentality of “f*ck it, ship it”.

NASA’s approach, as this (long-ish) Fast Company article explains, couldn’t be more different to the Silicon Valley narrative. The author, Charles Fishman, explains that the group who write the software for space shuttles are exceptional at what they do. And they don’t even start writing code until they’ve got a complete plan in place.

This software is the work of 260 women and men based in an anonymous office building across the street from the Johnson Space Center in Clear Lake, Texas, southeast of Houston. They work for the “on-board shuttle group,” a branch of Lockheed Martin Corp.’s space mission systems division, and their prowess is world renowned: the shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government’s Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. In fact, the SEI based its standards in part on watching the on-board shuttle group do its work.

There’s an obvious impact, both in terms of financial and human cost, if something goes wrong with a shuttle. Imagine if we had these kinds of standards for the impact of social networks on the psychological health of citizens and democratic health of nations!

NASA knows how good the software has to be. Before every flight, Ted Keller, the senior technical manager of the on-board shuttle group, flies to Florida where he signs a document certifying that the software will not endanger the shuttle. If Keller can’t go, a formal line of succession dictates who can sign in his place.

Bill Pate, who’s worked on the space flight software over the last 22 years, says the group understands the stakes: “If the software isn’t perfect, some of the people we go to meetings with might die.”

Software powers everything. It’s in your watch, your television, and your car. Yet the quality of most software is pretty poor.

“It’s like pre-Sumerian civilization,” says Brad Cox, who wrote the software for Steve Jobs’ NeXT computer and is a professor at George Mason University. “The way we build software is in the hunter-gatherer stage.”

John Munson, a software engineer and professor of computer science at the University of Idaho, is not quite so generous. “Cave art,” he says. “It’s primitive. We supposedly teach computer science. There’s no science here at all.”

The NASA team can sum up their process in four propositions:

  1. The product is only as good as the plan for the product.
  2. The best teamwork is a healthy rivalry.
  3. The database is the software base.
  4. Don’t just fix the mistakes — fix whatever permitted the mistake in the first place.

They don’t pull all-nighters. They don’t switch to the latest JavaScript library because it’s all over Hacker News. Everything is documented, and the genealogy of the whole code is available to everyone working on it.

The most important things the shuttle group does — carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code — are not expensive. The process isn’t even rocket science. It’s standard practice in almost every engineering discipline except software engineering.

I’m going to be bearing this in mind as we build MoodleNet. We’ll have to be a bit more agile than NASA, of course. But planning and process are important.

Source: Fast Company

Designing for privacy

Someone described the act of watching Mark Zuckerberg, CEO of Facebook, testifying before Congress as “low-level self-harm”. In this post, Joe Edelman explains why:

Zuckerberg and the politicians—they imagine privacy as if it were a software feature. They imagine a system has “good privacy” if it’s consensual and configurable; that is, if people explicitly agree to something, and understand what they agree to, that’s somehow “good for privacy”. Even usually-sophisticated-analysts like Zeynep Tufekci are missing all the nuance here.

Giving the example of a cocktail party where you’re talking to a friend about something confidential and someone else you don’t know comes along, Edelman introduces this definition of privacy:

Privacy, n. Maintaining a sense of what to show in each environment; Locating social spaces for aspects of yourself which aren’t ready for public display, where you can grow those parts of yourself until they can be more public.

I really like this definition, especially the part around “locating social spaces for aspects of yourself which aren’t ready for public display”. I think educators in particular should note this.

Referencing his HSC1 Curriculum which is the basis for workshops he runs for staff from major tech companies, Edelman includes a graphic on the structural features of privacy. I’ll type this out here for the sake of legibility:

  • Relational depth (close friends / acquaintances / strangers / anonymous / mixed)
  • Presentation (crafted / basic / disheveled)
  • Connectivity (transient / pairwise / whole-group)
  • Stakes (high / low)
  • Status levels (celebrities / rank / flat)
  • Reliance (interdependent / independent)
  • Time together (none / brief / slow)
  • Audience size (big / small / unclear)
  • Audience loyalty (loyal / transient / unclear)
  • Participation (invited / uninvited)
  • Pretext (shared goal / shared values / shared topic / many goals (exchange) / emergent)
  • Social Gestures (like / friend / follow / thank / review / comment / join / commit / request / buy)

The post is, of course, both an expert response to the zeitgeist, and a not-too-subtle hint that people should take his course. I’m sure Edelman goes into more depth about each of these structural features in his workshops.

Nevertheless, and even without attending his sessions (which I’m sure are great) there’s value in thinking through each of these elements for the work I’m doing around the MoodleNet project. I’ve probably done some thinking around 70% of these, but it’s great to have a list that helps me organise my thinking a little more.
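One way to do that organising is to turn the list into something checkable. Here’s a purely illustrative Python sketch (my own, not part of Edelman’s HSC1 curriculum) that encodes the structural features above and flags which ones a given social space hasn’t yet made an explicit design decision about:

```python
# Edelman's structural features of privacy, encoded as a checklist.
# The feature names and values come from the list above; everything
# else here is my own illustrative sketch.
FEATURES = {
    "relational_depth": {"close friends", "acquaintances", "strangers", "anonymous", "mixed"},
    "presentation": {"crafted", "basic", "disheveled"},
    "connectivity": {"transient", "pairwise", "whole-group"},
    "stakes": {"high", "low"},
    "status_levels": {"celebrities", "rank", "flat"},
    "reliance": {"interdependent", "independent"},
    "time_together": {"none", "brief", "slow"},
    "audience_size": {"big", "small", "unclear"},
    "audience_loyalty": {"loyal", "transient", "unclear"},
    "participation": {"invited", "uninvited"},
    "pretext": {"shared goal", "shared values", "shared topic",
                "many goals (exchange)", "emergent"},
    "social_gestures": {"like", "friend", "follow", "thank", "review",
                        "comment", "join", "commit", "request", "buy"},
}

def undecided(space: dict) -> list:
    """Return the features a design hasn't yet made a choice about."""
    return [name for name in FEATURES if name not in space]

# A hypothetical public forum thread, with only three decisions made so far:
forum = {
    "relational_depth": "mixed",
    "audience_size": "unclear",
    "participation": "uninvited",
}
print(undecided(forum))  # the features still to think through
```

Working through a feature list like this, space by space, is a quick way to see which 70% you’ve already covered and which you haven’t.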

Source: Joe Edelman

The death of the newsfeed (is much exaggerated)

Benedict Evans is a venture capitalist who focuses on technology companies. He’s a smart guy with some important insights, and I thought his recent post about the ‘death of the newsfeed’ on social networks was particularly useful.

He points out that it’s pretty inevitable that the average person will, over the course of a few years, add a few hundred ‘friends’ to their connections on any given social network. Let’s say you’re connected with 300 people, and they all share five things each day. That’s 1,500 things you’ll be bombarded with, unless the social network does something about it.

This overload means it now makes little sense to ask for the ‘chronological feed’ back. If you have 1,500 or 3,000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up, which can only be 10% or 20% of what’s actually there. This will be sorted by no logical order at all except whether your friends happened to post them within the last hour. It’s not so much chronological in any useful sense as a random sample, where the randomizer is simply whatever time you yourself happen to open the app. ’What did any of the 300 people that I friended in the last 5 years post between 16:32 and 17:03?’ Meanwhile, giving us detailed manual controls and filters makes little more sense – the entire history of the tech industry tells us that actual normal people would never use them, even if they worked. People don’t file.

So we end up with algorithmic feeds, which is an attempt by social networks to ensure that you see the stuff that you deem important. It is, of course, an almost impossible mission.
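A back-of-the-envelope simulation makes Evans’ arithmetic concrete. The numbers are the ones from his example; the simulation itself is just my sketch of why a partially-scrolled chronological feed behaves like a random sample keyed to when you happen to open the app:

```python
import random

FRIENDS = 300
POSTS_PER_FRIEND = 5
ITEMS_PER_DAY = FRIENDS * POSTS_PER_FRIEND   # 1,500 items per day
SCROLL_BUDGET = round(ITEMS_PER_DAY * 0.15)  # you scroll ~15% before giving up

# Stamp each of the day's posts with a random minute (0-1439) and sort
# them chronologically, newest last.
posts = sorted(random.randrange(24 * 60) for _ in range(ITEMS_PER_DAY))

# Open the app at some arbitrary minute: you see only the most recent
# posts that fit your scroll budget. Which posts those are depends on
# the clock, not on what matters to you.
open_time = random.randrange(24 * 60)
visible = [t for t in posts if t <= open_time][-SCROLL_BUDGET:]

print(f"{len(visible)} of {ITEMS_PER_DAY} items seen, "
      f"all posted in the window ending at minute {open_time}")
```

Run it a few times and the “sample” shifts with every run, which is exactly Evans’ point about the randomiser being the moment you open the app.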

[T]here are a bunch of problems around getting the algorithmic newsfeed sample ‘right’, most of which have been discussed at length in the last few years. There are lots of incentives for people (Russians, game developers) to try to manipulate the feed. Using signals of what people seem to want to see risks over-fitting, circularity and filter bubbles. People’s desires change, and they get bored of things, so Facebook has to keep changing the mix to try to reflect that, and this has made it an unreliable partner for everyone from Zynga to newspapers. Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.

Evans then goes on to raise the problem that what you want to see may be different from what your friends want you to see. So people solve the problem of algorithmic feeds not showing them what they really want by using messaging apps such as WhatsApp and Telegram to interact individually with people or small groups.

The problem with that, though?

The catch is that though these systems look like they reduce sharing overload, you really want group chats. And lots of groups. And when you have 10 WhatsApp groups with 50 people in each, then people will share to them pretty freely. And then you think ‘maybe there should be a screen with a feed of the new posts in all of my groups. You could call it a ‘news feed’. And maybe it should get some intelligence, to show the posts you care about most…

So, to Evans’ mind (and I’m tempted to agree with him) we’re in a never-ending spiral. The only way I can see out of it is user education, particularly around owning one’s own data and IndieWeb approaches.

Source: Benedict Evans

No-one wants a single identity, online or offline

It makes sense for companies reliant on advertising to not only get as much data as they can about you, but to make sure that you have a single identity on their platform to which to associate it.

This article by Cory Doctorow in BoingBoing reports on some research around young people and social media. As Doctorow states:

Social media has always had a real-names problem. Social media companies want their users to use their real names because it makes it easier to advertise to them. Users want to be able to show different facets of their identities to different people, because only a sociopath interacts with their boss, their kids, and their spouse in the same way.

I was talking to one of my Moodle colleagues about how, in our mid-thirties, we’re a ‘bridging’ generation between those who only went online in adulthood, and those who have only ever known a world with the internet. I got online for the first time when I was about fourteen or fifteen.

Those younger than me are well aware of the perils and pitfalls of a single online identity:

Amy Lancaster from the Journalism and Digital Communications school at the University of Central Lancashire studies the way that young people resent “the way Facebook ties them into a fixed self…[linking] different areas of a person’s life, carrying over from school to university to work.”

I think Doctorow has made an error with Amy’s surname: it’s given as ‘Binns’, not ‘Lancaster’, in both the journal article and the original post.

Binns writes:

Young people know their future employers, parents and grandparents are present online, and so they behave accordingly. And it’s not only older people that affect behaviour.

My research shows young people dislike the way Facebook ties them into a fixed self. Facebook insists on real names and links different areas of a person’s life, carrying over from school to university to work. This arguably restricts the freedom to explore new identities – one of the key benefits of the web.

The desire for escapable transience over damning permanence has driven Snapchat’s success, precisely because it’s a messaging app that allows users to capture videos and pictures that are quickly removed from the service.

This is important for the work I’m leading around Project MoodleNet. It’s not just teenagers who want “escapable transience over damning permanence”.

Source: BoingBoing

Survival in the age of surveillance

The Guardian has a list of 18 tips to ‘survive’ (i.e. be safe) in an age where everyone wants to know everything about you — so that they can package up your data and sell it to the highest bidder.

On the internet, the adage goes, nobody knows you’re a dog. That joke is only 15 years old, but seems as if it is from an entirely different era. Once upon a time the internet was associated with anonymity; today it is synonymous with surveillance. Not only do modern technology companies know full well you’re not a dog (not even an extremely precocious poodle), they know whether you own a dog and what sort of dog it is. And, based on your preferred category of canine, they can go a long way to inferring – and influencing – your political views.

Mozilla has pointed out in a recent blog post that the containers feature in Firefox can increase your privacy and prevent ‘leakage’ between tabs as you navigate the web. But there’s more to privacy and security than just that.

Here’s the Guardian’s list:

  1. Download all the information Google has on you.
  2. Try not to let your smart toaster take down the internet.
  3. Ensure your AirDrop settings are dick-pic-proof.
  4. Secure your old Yahoo account.
  5. 1234 is not an acceptable password.
  6. Check if you have been pwned.
  7. Be aware of personalised pricing.
  8. Say hi to the NSA guy spying on you via your webcam.
  9. Turn off notifications for anything that’s not another person speaking directly to you.
  10. Never put your kids on the public internet.
  11. Leave your phone in your pocket or face down on the table when you’re with friends.
  12. Sometimes it’s worth just wiping everything and starting over.
  13. An Echo is fine, but don’t put a camera in your bedroom.
  14. Have as many social-media-free days in the week as you have alcohol-free days.
  15. Retrain your brain to focus.
  16. Don’t let the algorithms pick what you do.
  17. Do what you want with your data, but guard your friends’ info with your life.
  18. Finally, remember your privacy is worth protecting.

A bit of a random list in places, but useful all the same.

Source: The Guardian

Alternatives to all of Facebook’s main features

Over on a microcast at Patreon (subscribers only, I’m afraid) I referenced an email conversation I’ve been having about getting people off Facebook.

WIRED has a handy list of apps that replicate the functionality of the platform. It’s important to bear in mind that no other platform has the same feature set as Facebook. Of course it doesn’t, because no other platform has the dollars and support of the military-industrial complex quite like Facebook.

Nevertheless, here’s what WIRED suggests:

(Note: I haven’t included ‘birthday reminders’ as that would have involved linking to a Facebook help page, and I don’t link to Facebook. Full stop.)

I’ve used, and like, all of the apps on that list, with the exception of Paperless Post, which looks like it’s iOS-only.

OK, so it’s not easy getting people off a site that provides so much functionality, but it’s certainly possible. Lead by example, people.

Source: WIRED

The only privacy policy that matters is your own

Dave Pell writes NextDraft, a daily newsletter that’s one of the most popular on the web. I used to subscribe, and it’s undeniably brilliant, but a little US-centric for my liking.

My newsletter, Thought Shrapnel, doesn’t track you. In fact, I have to keep battling MailChimp (the platform I use to send it out) as it thinks I’ve made a mistake. Tracking is so pervasive, but I have no need to know exactly how many people clicked on a particular link. It’s an inexact science, anyway.

Pell has written a great post about online privacy:

The story of Cambridge Analytica accessing your personal data on Facebook, supposedly creating a spot-on psychographic profile, and then weaponizing your own personality against you with a series of well-worded messages is now sweeping the media. And it will get louder. And it will pass. And then, I promise, there will be another story about your data being stolen, borrowed, hacked, misused, shared, bought, sold and on and on.

He points out the disconnect between rich people such as Mark Zuckerberg, CEO of Facebook, going to “great lengths” to protect his privacy, whilst simultaneously depriving Facebook users of theirs.

They are right to want privacy. They are right to want to keep their personal lives walled off from anyone from nosy neighbors to potential thieves to, well, Matt Richtel. They should lock their doors and lock down their information. They are right not to want you to know where they live, with whom they live, or how much they spend. They’re right to want to plug a cork in the social media champagne bottle we’ve shaken up in our blind celebration of glass houses.

They are right not to want to toss the floor planks that represent their last hint of personal privacy into the social media wood chipper. They are right in their unwillingness to give in to the seeming inevitability of the internet sharing machine. Do you really think it’s a coincidence that most of the buttons you press on the web are labeled with the word submit?

A Non-Disclosure Agreement (NDA) is something that’s been in the news recently as Donald Trump has taken his shady business practices to the White House. Pell notes that the principle behind NDAs is nevertheless sound: you don’t get to divulge my personal details without my permission.

So you should follow their lead. Don’t do what they say. Do what they do. Better yet, do what they NDA.

[…]

There’s a pretty simple rule: never share anything on any site anywhere on the internet regardless of any privacy settings unless you are willing to accept that the data might one day be public.

The only privacy policy that matters is your own.

Source: Dave Pell

Derek Sivers has quit Facebook (hint: you should, too)

I have huge respect for Derek Sivers, and really enjoyed his book Anything You Want. His book reviews are also worth trawling through.

In this post, which made its way to the Hacker News front page, Sivers talks about his relationship with Facebook, and why he’s finally decided to quit the platform:

When people would do their “DELETE FACEBOOK!” campaigns, I didn’t bother because I wasn’t using it anyway. It was causing me no harm. I think it’s net-negative for the world, and causing many people harm, but not me, so why bother deleting it?

But today I had a new thought:

Maybe the fact that I use it to share my blog posts is a tiny tiny reason why others are still using it. It’s like I’m still visiting friends in the smoking area, even though I don’t smoke. Maybe if I quit going entirely, it will help my friends quit, too.

Last year, I wrote a post entitled Friends don’t let friends use Facebook. The problem is, it’s difficult. Despite efforts to suggest alternatives, most of the clubs our children are part of (for activities such as swimming and karate) use Facebook. I don’t have an account, but my wife has to if we’re to keep up-to-date. It’s a vicious circle.

Like Sivers, I’ve considered just being on Facebook to promote my blog posts. But I don’t want to be part of the problem:

I had a selfish business reason to keep it. I’m going to be publishing three different books over the next year, and plan to launch a new business, too. But I’m willing to take that small loss in promotion, because it’s the right thing to do. It always feels good to get rid of things I’m not using.

So if you’ve got a Facebook account and the Cambridge Analytica revelations concern you, then try to wean yourself off Facebook. It’s literally for the good of democracy.

Ultimately, as Sivers notes, Facebook will go away because of the adoption lifecycle of platforms and products. It’s difficult to think of that, but I’ll leave the last word to the late, great Ursula Le Guin:

We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.

Source: Sivers.org