Tag: Big Tech

The rise of first-party online tracking

In a startling example of the Matthew effect of accumulated advantage, the incumbent advertising giants are actually being strengthened by legislation aimed at curbing their influence. Because, of course.

For years, digital businesses relied on what is known as “third party” tracking. Companies such as Facebook and Google deployed technology to trail people everywhere they went online. If someone scrolled through Instagram and then browsed an online shoe store, marketers could use that information to target footwear ads to that person and reap a sale.

[…]

Now tracking has shifted to what is known as “first party” tracking. With this method, people are not being trailed from app to app or site to site. But companies are still gathering information on what people are doing on their specific site or app, with users’ consent. This kind of tracking, which companies have practiced for years, is growing.
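The contrast between the two models can be sketched as toy code. Everything below is illustrative (the class names, cookie values, and site names are invented for this sketch, not any real tracking API): a third-party tracker embedded on many sites can join one person's visits into a single cross-site profile, while a first-party site only ever sees activity on its own pages.

```python
class ThirdPartyTracker:
    """One tracker embedded on many sites can join a user's visits."""
    def __init__(self):
        self.profiles = {}  # tracker cookie -> list of (site, action)

    def record(self, tracker_cookie, site, action):
        self.profiles.setdefault(tracker_cookie, []).append((site, action))


class FirstPartySite:
    """Each site sees only activity on its own pages, tied to its own cookie."""
    def __init__(self, name):
        self.name = name
        self.visits = {}  # site cookie -> list of actions

    def record(self, site_cookie, action):
        self.visits.setdefault(site_cookie, []).append(action)


# The same person browses two sites. The third-party tracker links both
# visits into one profile; each first-party site keeps its own silo.
tracker = ThirdPartyTracker()
shoes = FirstPartySite("shoes.example")
social = FirstPartySite("social.example")

tracker.record("cookie-abc", "social.example", "viewed sneakers post")
tracker.record("cookie-abc", "shoes.example", "browsed sneakers")
shoes.record("shoe-cookie-1", "browsed sneakers")
social.record("social-cookie-9", "viewed sneakers post")

print(len(tracker.profiles["cookie-abc"]))  # 2: a cross-site profile
print(len(shoes.visits["shoe-cookie-1"]))   # 1: only its own slice
```

The asymmetry in the article follows directly: a platform with millions of first-party users accumulates a large silo of its own, while smaller brands hold almost nothing and must buy access to those silos.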

[…]

The rise of this tracking has implications for digital advertising, which has depended on user data to know where to aim promotions. It tilts the playing field toward large digital ecosystems such as Google, Snap, TikTok, Amazon and Pinterest, which have millions of their own users and have amassed information on them. Smaller brands have to turn to those platforms if they want to advertise to find new customers.

Source: How You’re Still Being Tracked on the Internet | The New York Times

Hamiltonians and Jeffersonians

Cory Doctorow quite rightly calls out that Big Tech’s “too big to fail” status has created “oligopolistic power” which limits our choice over how we’re connected to the people we want to interact with.

I like his reference to Frank Pasquale’s two approaches to regulation. I guess I’m a Jeffersonian, too…

Every community has implicit and explicit rules about what kinds of speech are acceptable, and metes out punishments to people who violate those rules, ranging from banishment to shaming to compelling the speaker to silence. You’re not allowed to get into a shouting match at a funeral, you’re not allowed to use slurs when addressing your university professor, you’re not allowed to explicitly describe your sex-life to your work colleagues. Your family may prohibit swear-words at Christmas dinner or arguments about homework at the breakfast table.

One of the things that defines a community is its speech norms. In the online world, moderators enforce those “house rules” by labeling or deleting rule-breaking speech, and by cautioning or removing users.

Doing this job well is hard even when the moderator is close to the community and understands its rules. It’s much harder when the moderator is a low-waged employee following company policy at a frenzied pace. Then it’s impossible to do well and consistently.

[…]

It’s not that we value the glorious free speech of our harassers, nor that we want our views “fact-checked” or de-monetized by unaccountable third parties, nor that we want copyright filters banishing the videos we love, nor that we want juvenile sensationalism rammed into our eyeballs or controversial opinions buried at the bottom of an impossibly deep algorithmically sorted pile.

We tolerate all of that because the platforms have taken hostages: the people we love, the communities we care about, and the customers we rely upon. Breaking up with the platform means breaking up with those people.

It doesn’t have to be this way. The internet was designed on protocols, not platforms: the principle of running lots of different, interconnected services, each with its own “house rules” based on its own norms and goals. These services could connect to one another, but they could also block one another, allowing communities to isolate themselves from adversaries who wished to harm or disrupt their fellowship.
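That "protocols, not platforms" idea can be sketched in a few lines of toy code (all the names here are illustrative, not any real federation protocol): independent services exchange messages, but each applies its own house rules before accepting anything, including a blocklist of services it has chosen to cut off.

```python
class Service:
    """An independent, interconnected community with its own house rules."""
    def __init__(self, name, blocked=()):
        self.name = name
        self.blocked = set(blocked)  # services this community refuses
        self.inbox = []

    def deliver(self, sender_name, message):
        # Each service applies its own rules before accepting a message
        # from another service; blocking is a local, community decision.
        if sender_name in self.blocked:
            return False
        self.inbox.append((sender_name, message))
        return True


# Two communities federate; one has blocked a disruptive third service.
knitting = Service("knitting.example")
quiet = Service("quiet.example", blocked={"troll.example"})

print(quiet.deliver("knitting.example", "hello"))  # True: accepted
print(quiet.deliver("troll.example", "spam"))      # False: blocked at the door
```

The point of the design is that moderation decisions stay close to each community: no single operator decides the rules for everyone, and a community that wants nothing to do with an adversary simply stops accepting its traffic.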

[…]

Frank Pasquale’s Tech Platforms and the Knowledge Problem poses two different approaches to tech regulation: “Hamiltonians” and “Jeffersonians” (the paper was published in 2018, and these were extremely zeitgeisty labels!).

Hamiltonians favor “improving the regulation of leading firms rather than breaking them up,” while Jeffersonians argue that the “very concentration (of power, patents, and profits) in megafirms” is itself a problem, making them both unaccountable and dangerous.

That’s where we land. We think that technology users shouldn’t have to wait for Big Tech platform owners to have a moment of enlightenment that leads to their moral reform, and we understand that the road to external regulation is long and rocky, thanks to the oligopolistic power of cash-swollen, too-big-to-fail tech giants.

Source: To Make Social Media Work Better, Make It Fail Better | Electronic Frontier Foundation

Big Tech companies may change their names but they will not voluntarily change their economics

I based a good deal of Truth, Lies, and Digital Fluency, a talk I gave in NYC in December 2019, on the work of Shoshana Zuboff. Writing in The New York Times, she starts to get a bit more practical as to what we do about surveillance capitalism.

As Zuboff points out, Big Tech didn’t set out to cause the harms it has caused, any more than fossil fuel companies set out to destroy the earth. The problem is that these companies are following economic incentives: they’ve found a metaphorical goldmine in hoovering up and selling personal data to advertisers.

Legislating for that core issue looks like it could be more fruitful in terms of long-term consequences. Other calls like “breaking up Big Tech” are the equivalent of rearranging the deckchairs on the Titanic.

Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency, and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

[…]

We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

Source: You Are the Object of Facebook’s Secret Extraction Operation | The New York Times