Tag: advertising

Big Tech companies may change their names but they will not voluntarily change their economics

I based a good deal of Truth, Lies, and Digital Fluency, a talk I gave in NYC in December 2019, on the work of Shoshana Zuboff. Writing in The New York Times, she gets a bit more practical about what we can actually do about surveillance capitalism.

As Zuboff points out, Big Tech didn’t set out to cause the harms it has caused, any more than fossil fuel companies set out to destroy the earth. The problem is that they are following economic incentives. They’ve found a metaphorical goldmine in hoovering up personal data and selling it to advertisers.

Legislating against that core practice looks like it could be more fruitful over the long term. Other calls, such as “breaking up Big Tech”, are the equivalent of rearranging the deckchairs on the Titanic.

Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency, and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

[…]

We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

Source: You Are the Object of Facebook’s Secret Extraction Operation | The New York Times

Tracking vs advertising

We tend to use a word to denote something right up to the point at which that term becomes untenable and someone has to invent a better one. Take mobile phones, for example: they’re literally named after one of the least-used apps on them, so we’re crying out for a different way to refer to them. Perhaps a better name would be ‘trackers’.

These days, most people use mobile devices for social networking. Social networks are free at the point of access, funded by what we’re currently calling ‘advertising’. However, as this author notes, it’s nothing of the sort:

What we have today is not advertising. The amount of personally identifiable information companies have about their customers is absolutely perverse. Some of the world’s largest companies are in the business of selling your personal information for use in advertising. This might sound innocuous but the tracking efforts of these companies are so accurate that many people believe that Facebook listens to their conversations to serve them relevant ads. Even if it’s true that the microphone is not used, the sum of all other data collected is still enough to show creepily relevant advertising.

Unfortunately, the author doesn’t yet seem to have come to the conclusion that it’s the logic of capitalism that got us here. Instead, he just points out that people’s privacy is being abused.

[P]eople now get most of their information from social networks, yet these networks dictate the order in which content is served to the user. Google makes the world’s most popular mobile operating system, and its purpose is to drive the company’s bottom line (ad blocking is forbidden). “Smart” devices are everywhere and companies are jumping over each other to put more shit in your house so they can record your movements and sell the information to advertisers. This is all a blatant abuse of privacy that is completely toxic to society.

Agreed, and it’s easy to feel a little helpless against this onslaught. While it’s great to have a list of things that users can do, if those things are difficult to implement and/or hard to understand, then it’s an uphill battle.

That being said, the three suggestions he makes are useful:

To combat this trend, I have taken the following steps and I think others should join the movement:

  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
  • Not every device needs to be “smart”

I feel I’m already way ahead of the author in this regard:

  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
    • I quit Facebook years ago, haven’t got an Instagram account, and pretty much only post links to my own spaces on Twitter and LinkedIn.
  • Not every device needs to be “smart”
    • I don’t really use my Philips Hue lights, and don’t have an Amazon Alexa (or even the Google Assistant on my phone).

It’s not easy to stand up to Big Tech. The amount of money they pour into things makes their ‘innovations’ seem inevitable. They can afford to make things cheap and frictionless so that you get hooked.

As an aside, it’s interesting to note that those who previously defended Apple as somehow ‘different’ on privacy, despite it being the world’s most profitable company, are starting to backtrack.

Source: Nicholas Rempel

How to build a consensual social network

Here’s another article, linked from the source of a post I shared recently. The paragraph quoted below is from the section entitled ‘Consent-Oriented Architecture’:

Corporations built to maximize profits are unable to build consensual platforms. Their business model depends fundamentally on surveillance and behavioral control. To build consensual platforms requires that privacy, security, and anonymity be built into the platforms as core features. The most effective way to secure consent is to ensure that all user data and control of all user interaction resides with the software running on the user’s own computer, not on any intermediary servers.

Earlier in that section, the author makes the obvious (but nevertheless alarming) point that audiences are sorted and graded as commodities to be bought and sold:

Audiences, like all commodities, are sold by measure and grade. Eggs are sold in dozens as grade A, for example. An advertiser might buy a thousand clicks from middle-aged white men who own a car and have a good credit rating.
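
To make that ‘measure and grade’ framing concrete, here’s a rough sketch of how such an audience purchase might be represented as a data structure. The type names and fields are entirely my own invention, not anything from the article or a real ad platform:

```typescript
// Hypothetical sketch of an audience "grade" as a purchasable commodity.
// All names, fields, and figures here are illustrative assumptions.

interface AudienceSegment {
  ageRange: [number, number];                            // who is in the audience (the "grade")
  gender?: "male" | "female" | "other";
  ownsCar?: boolean;
  creditRating?: "poor" | "fair" | "good" | "excellent";
}

interface AudienceOrder {
  segment: AudienceSegment;   // the grade
  clicks: number;             // the measure
  pricePerClick: number;      // made-up unit price in USD
}

// The example from the quoted passage: a thousand clicks from middle-aged
// men who own a car and have a good credit rating.
const order: AudienceOrder = {
  segment: {
    ageRange: [45, 60],
    gender: "male",
    ownsCar: true,
    creditRating: "good",
  },
  clicks: 1000,
  pricePerClick: 0.35,
};

console.log(`Buying ${order.clicks} clicks for $${(order.clicks * order.pricePerClick).toFixed(2)}`);
```

The point being: the person behind each click has already been reduced to a bundle of extracted attributes long before any ‘advertising’ happens.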

In a previous section, the author notes that those who use social networks are subjects of an enclosed system:

The profits of the media monopolies are formed after surplus value has already been extracted. Their users are not exploited, but subjected, captured as an audience, and instrumentalized to extract surplus profits from other sectors of the ownership class.

I had to read some sections twice, but I’m glad I did. Great stuff, and very thought-provoking.

In short, to make Project MoodleNet a consensual social network, we need full transparency and, if possible, for the majority of the processing of personal data to happen on the user’s own device.
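
As a concrete illustration of that principle, and purely as a sketch of my own (not a description of how MoodleNet actually works), here’s roughly what ‘processing personal data on the user’s own device’ could look like in a browser client:

```typescript
// Illustrative sketch only: one way a browser client could derive something
// useful from personal data without that data ever leaving the device.
// This is an assumption about architecture, not MoodleNet's actual design.

interface Activity {
  resourceId: string;
  tags: string[];
  secondsSpent: number;
}

// Build a simple interest profile entirely on the client. The raw activity
// log and the derived profile both stay in the user's own storage.
function buildLocalProfile(activities: Activity[]): Record<string, number> {
  const profile: Record<string, number> = {};
  for (const activity of activities) {
    for (const tag of activity.tags) {
      profile[tag] = (profile[tag] ?? 0) + activity.secondsSpent;
    }
  }
  return profile;
}

// Persist locally (browser localStorage here); no intermediary server sees it.
function saveProfile(profile: Record<string, number>): void {
  localStorage.setItem("interest-profile", JSON.stringify(profile));
}

const profile = buildLocalProfile([
  { resourceId: "res-1", tags: ["open-education", "privacy"], secondsSpent: 120 },
  { resourceId: "res-2", tags: ["privacy"], secondsSpent: 300 },
]);
saveProfile(profile);
// The client can now rank or filter content against `profile` locally,
// with no tracking server involved.
```

The raw behavioural data never needs to leave the device for the client to do something useful with it; a server would only ever see what the user explicitly chooses to share.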

Source: P2P Foundation