Paying to avoid ads is paying to avoid tracking

    This article is the standard way of reporting Meta’s announcement that, to comply with a new EU ruling, they will allow users to pay not to be shown adverts. Given the size of the charge, only privacy-minded and better-off people are likely to do so.

    What isn’t mentioned in this type of article, but which TechCrunch helpfully notes, is that the issue is really about tracking. By introducing a charge, Meta hopes to gain legitimate consent for tracking: users who don’t want to pay the monthly fee will ‘agree’ to be tracked instead.

    X, formerly Twitter, is also trialling a monthly subscription. Of course, if you’re going to pay for your social media, why not set up your own Fediverse instance, or donate to a friendly admin who runs one for you? I do the latter with social.coop.

    Icon that looks like the Meta logo
    Meta is responding to "evolving European regulations" by introducing a premium subscription option for Facebook and Instagram from Nov. 1.

    Anyone over the age of 18 who resides in the European Union (EU), European Economic Area (EEA), or Switzerland will be able to pay a monthly subscription in order to stop seeing ads. Meta states that “while people are subscribed, their information will not be used for ads.”

    […]

    Subscribing via the web costs around $10.50 per month, but subscribing on an Android or iOS device pushes the cost up to almost $14 per month. The difference in price is down to the commission Apple and Google charge for in-app payments.

    The monthly charge covers all linked accounts in a user’s Accounts Center. However, that only applies until March 1 next year. After that, an extra $6 per month will be payable for each additional account listed in a user’s Accounts Center. That extra charge increases to $8.50 per month on Android and iOS.

    Source: Meta Introduces Ad-Free Subscription for Facebook, Instagram | PC Magazine
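    The fee structure above amounts to a small bit of arithmetic. Here is a hypothetical sketch using the article’s figures (the `monthly_fee` helper is invented for illustration; this is not Meta’s actual billing logic):

```python
# Hypothetical sketch of the fee structure described in the article:
# ~$10.50/month on the web vs ~$14/month via Android/iOS in-app purchase,
# plus, from March 1, $6 (web) or $8.50 (mobile) per additional linked account.

def monthly_fee(platform: str, linked_accounts: int = 1, after_march: bool = True) -> float:
    """Estimate the monthly USD cost of Meta's ad-free subscription."""
    base = 10.50 if platform == "web" else 14.00
    extra_rate = 6.00 if platform == "web" else 8.50
    extras = (linked_accounts - 1) if after_march else 0
    return base + max(extras, 0) * extra_rate

# Two linked accounts, paid through iOS, after March 1: 14.00 + 8.50
print(monthly_fee("ios", linked_accounts=2))  # 22.5
```

    So, on these figures, a user with three linked accounts paying through the app stores would be looking at $14 + 2 × $8.50 = $31 a month.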

    Image: Unsplash

    A lonely and surveilled landscape

    Kyle Chayka, writing in The New Yorker, points to what many of us have felt over the past decade or so: the internet just isn’t fun any more. This makes me sad, as my kids will never experience what it was like.

    Instead of discovery and peer-to-peer relationships, we’ve got algorithms and influencer broadcasts. It’s an increasingly lonely and surveilled landscape. Thankfully, places of joy still exist, but they feel like pockets of resistance rather than mainstream hangouts.

    The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over. The precipitous decline of X is the bellwether for a new era of the Internet that simply feels less fun than it used to be. Remember having fun online? It meant stumbling onto a Web site you’d never imagined existed, receiving a meme you hadn’t already seen regurgitated a dozen times, and maybe even playing a little video game in your browser. These experiences don’t seem as readily available now as they were a decade ago. In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems. When those platforms decay, as Twitter has under Elon Musk, there is no other comparable platform in the ecosystem to replace them. A few alternative sites, including Bluesky and Discord, have sought to absorb disaffected Twitter users. But like sproutlings on the rain-forest floor, blocked by the canopy, online spaces that offer fresh experiences lack much room to grow.

    […]

    The Internet today feels emptier, like an echoing hallway, even as it is filled with more content than ever. It also feels less casually informative. Twitter in its heyday was a source of real-time information, the first place to catch wind of developments that only later were reported in the press. Blog posts and TV news channels aggregated tweets to demonstrate prevailing cultural trends or debates. Today, they do the same with TikTok posts—see the many local-news reports of dangerous and possibly fake “TikTok trends”—but the TikTok feed actively dampens news and political content, in part because its parent company is beholden to the Chinese government’s censorship policies. Instead, the app pushes us to scroll through another dozen videos of cooking demonstrations or funny animals. In the guise of fostering social community and user-generated creativity, it impedes direct interaction and discovery.

    According to Eleanor Stern, a TikTok video essayist with nearly a hundred thousand followers, part of the problem is that social media is more hierarchical than it used to be. “There’s this divide that wasn’t there before, between audiences and creators,” Stern said. The platforms that have the most traction with young users today—YouTube, TikTok, and Twitch—function like broadcast stations, with one creator posting a video for her millions of followers; what the followers have to say to one another doesn’t matter the way it did on the old Facebook or Twitter. Social media “used to be more of a place for conversation and reciprocity,” Stern said. Now conversation isn’t strictly necessary, only watching and listening.

    Source: Why the Internet Isn’t Fun Anymore | The New Yorker

    AI = surveillance

    Social networks are surveillance systems. Loyalty cards are surveillance systems. AI language models are surveillance systems.

    We live in a panopticon.

    Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

    Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. (Her remarks lightly edited for clarity.)

    “It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said. “The Venn diagram is a circle.”

    “And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

    Source: Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology’ | TechCrunch

    Facial recognition and the morality police

    As this article points out, before 1979 the removal of the traditional hijab was encouraged as part of Iran’s modernisation agenda. Once a theocracy came to power, however, the ‘morality police’ started using any means at their disposal to repress women.

    Things have come to a head recently with high-profile women, for example athletes, removing the hijab. It would seem that the Iranian state is responding to this not with discussion, debate, or compassion, but rather with more repression: this time in the form of facial recognition and increasing levels of surveillance.

    We should be extremely concerned about this, as once there is no semblance of anonymity anywhere, then repression by bad actors (of which governments are some of the worst) will increase exponentially.

    After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.

    Two weeks later, a 22-year-old Kurdish woman named Jina Mahsa Amini died after being taken into custody by Iran’s morality police for not wearing a hijab tightly enough. Her death sparked historic protests against women’s dress rules, resulting in an estimated 19,000 arrests and more than 500 deaths. Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven’t been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”

    Although there are other ways women could have been identified, Shajarizadeh and others fear that the pattern indicates face recognition is already in use—perhaps the first known instance of a government using face recognition to impose dress law on women based on religious belief.

    Source: Iran to use facial recognition to identify women without hijabs | Ars Technica

    Assume that your devices are compromised

    I was in Catalonia in 2017 during the independence referendum. I still believe the way people were treated for trying to exercise democratic power was shameful.

    These days, I run the most secure version of an open operating system on my mobile device that I can. And yet I still need to assume it's been compromised.

    In Catalonia, more than sixty phones—owned by Catalan politicians, lawyers, and activists in Spain and across Europe—have been targeted using Pegasus. This is the largest forensically documented cluster of such attacks and infections on record. Among the victims are three members of the European Parliament, including Solé. Catalan politicians believe that the likely perpetrators of the hacking campaign are Spanish officials, and the Citizen Lab’s analysis suggests that the Spanish government has used Pegasus. A former NSO employee confirmed that the company has an account in Spain. (Government agencies did not respond to requests for comment.) The results of the Citizen Lab’s investigation are being disclosed for the first time in this article. I spoke with more than forty of the targeted individuals, and the conversations revealed an atmosphere of paranoia and mistrust. Solé said, “That kind of surveillance in democratic countries and democratic states—I mean, it’s unbelievable.”

    [...]

    [T]here is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”

    [...]

    The Citizen Lab’s researchers concluded that, on July 7, 2020, Pegasus was used to infect a device connected to the network at 10 Downing Street, the office of Boris Johnson, the Prime Minister of the United Kingdom. A government official confirmed to me that the network was compromised, without specifying the spyware used. “When we found the No. 10 case, my jaw dropped,” John Scott-Railton, a senior researcher at the Citizen Lab, recalled. “We suspect this included the exfiltration of data,” Bill Marczak, another senior researcher there, added. The official told me that the National Cyber Security Centre, a branch of British intelligence, tested several phones at Downing Street, including Johnson’s. It was difficult to conduct a thorough search of phones—“It’s a bloody hard job,” the official said—and the agency was unable to locate the infected device. The nature of any data that may have been taken was never determined.

    Source: How Democracies Spy On Their Citizens | The New Yorker

    Spatial Finance

    Using real-time satellite imagery to verify that people are building (or not building) what they say they’re going to.

    ‘Spatial finance’ is the integration of geospatial data and analysis into financial theory and practice. Earth observation and remote sensing combined with machine learning have the potential to transform the availability of information in our financial system. It will allow financial markets to better measure and manage climate-related risks, as well as a vast range of other factors that affect risk and return in different asset classes.

    Source: Spatial Finance Initiative - Greening Finance and Investment

    Health surveillance

    It’s possible to be entirely in favour of mass vaccination (as I am) while also being concerned about the over-reach of states with our personal health data.

    As this article discusses, based on a report from a German non-profit called AlgorithmWatch, such health surveillance is being normalised due to the requirements of responding to a global pandemic.

    The idea that technology can be used to solve complex social issues, including public health, is not a new one. But the pandemic strongly influenced how technology is applied, with much of the push coming from public health policymaking and public perceptions, said the report.

    The report also highlighted the growing divide between people who fervently defend the schemes and those who staunchly oppose them - and how fear and misinformation have influenced both sides.

    Source: Pandemic Exploited To Normalise Mass Surveillance? | The ASEAN Post

    Big Tech companies may change their names but they will not voluntarily change their economics

    I based a good deal of Truth, Lies, and Digital Fluency, a talk I gave in NYC in December 2019, on the work of Shoshana Zuboff. Writing in The New York Times, she starts to get a bit more practical as to what we do about surveillance capitalism.

    As Zuboff points out, Big Tech didn’t set out to cause the harms it has any more than fossil fuel companies set out to destroy the earth. The problem is that they are following economic incentives. They’ve found a metaphorical goldmine in hoovering up and selling personal data to advertisers.

    Legislating for that core issue looks like it could be more fruitful in terms of long-term consequences. Other calls like “breaking up Big Tech” are the equivalent of rearranging the deckchairs on the Titanic.

    Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency, and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

    […]

    We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

    Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

    No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

    Source: You Are the Object of Facebook’s Secret Extraction Operation | The New York Times

    Surveillance vs working openly

    Austin Kleon is famous for his book Show Your Work, something that our co-op references from time to time, as it backs up our belief in working openly.

    However, as Kleon points out in this post, it doesn’t mean you need to livestream your creative process! For me, this is another example of the tension between being able to be a privacy advocate at the same time as a believer in sharing your work freely and openly.

    It’s bad enough trying to create something when nobody’s watching — the worst trolls are the ones that live in your head!

    The danger of sharing online is this ambient buildup of a feeling of being surveilled.

    The feeling of being watched, or about to be watched.

    You have to disconnect from that long enough to connect with yourself and what you’re working on.

    Source: You can’t create under surveillance | Austin Kleon

    Singapore is turning into a dystopian surveillance state

    Well, this is concerning. Especially given governments' love for authoritarian technologies and copying one another’s surveillance practices.

    Singapore surveillance robot

    Singapore has trialled patrol robots that blast warnings at people engaging in “undesirable social behaviour”, adding to an arsenal of surveillance technology in the tightly controlled city-state that is fuelling privacy concerns.

    From vast numbers of CCTV cameras to trials of lampposts kitted out with facial recognition tech, Singapore is seeing an explosion of tools to track its inhabitants.

    […]

    The government’s latest surveillance devices are robots on wheels, with seven cameras, that issue warnings to the public and detect “undesirable social behaviour”.

    This includes smoking in prohibited areas, improperly parking bicycles, and breaching coronavirus social-distancing rules.

    During a recent patrol, one of the “Xavier” robots wove its way through a housing estate and stopped in front of a group of elderly residents watching a chess match.

    “Please keep one-metre distancing, please keep to five persons per group,” a robotic voice blared out, as a camera on top of the machine trained its gaze on them.

    Source: ‘Dystopian world’: Singapore patrol robots stoke fears of surveillance state | Singapore | The Guardian

    Slowly-boiling frogs in Facebook's surveillance panopticon

    I can't think of a worse company than Facebook to be creating an IRL surveillance panopticon. But, I have to say, it's entirely on-brand.

    On Wednesday, the company announced a plan to map the entire world, beyond street view. The company is launching a set of glasses that contains cameras, microphones, and other sensors to build a constantly updating map of the world in an effort called Project Aria. That map will include the inside of buildings and homes and all the objects inside of them. It’s Google Street View, but for your entire life.

    Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

    We're like slowly-boiling frogs with this stuff. Everything seems fine. Until it's not.

    The company insists any faces and license plates captured by Aria glasses wearers will be anonymized. But that won’t protect the data from Facebook itself. Ostensibly, Facebook will possess a live map of your home, pictures of your loved ones, pictures of any sensitive documents or communications you might be looking at with the glasses on, passwords — literally your entire life. The employees and contractors who have agreed to wear the research glasses are already trusting the company with this data.

    Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

    With Amazon cosying up to police departments in the US with its Ring cameras, we really are hurtling towards surveillance states in the West.

    Who has access to see the data from this live 3D map, and what, precisely, constitutes private versus public data? And who makes that determination? Faces might be blurred, but people can be easily identified without their faces. What happens if law enforcement wants to subpoena a day’s worth of Facebook’s LiveMap? Might Facebook ever build a feature to try to, say, automatically detect domestic violence, and if so, what would it do if it detected it?

    Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

    Judges already requisition Fitbit data to solve crimes. No matter what Facebook say their intentions are for Project Aria, this data will end up in the hands of law enforcement, too.



    If you have been put in your place long enough, you begin to act like the place

    Man is equally incapable of seeing the nothingness from which he emerges and the infinity in which he is engulfed

    Biometric surveillance in a post-pandemic future

    I woke up today to the news that, in the UK, the police will get access to the data on people told to self-isolate on a 'case-by-case basis'. As someone pointed out on Mastodon, this was entirely predictable.

    They pointed to this article by Yuval Noah Harari from March of this year, which also feels like a decade ago. In it, he talks about post-pandemic society being a surveillance nightmare:

    You could, of course, make the case for biometric surveillance as a temporary measure taken during a state of emergency. It would go away once the emergency is over. But temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon. My home country of Israel, for example, declared a state of emergency during its 1948 War of Independence, which justified a range of temporary measures from press censorship and land confiscation to special regulations for making pudding (I kid you not). The War of Independence has long been won, but Israel never declared the emergency over, and has failed to abolish many of the “temporary” measures of 1948 (the emergency pudding decree was mercifully abolished in 2011). 

    Yuval Noah Harari: the world after coronavirus (The Financial Times)

    Remember the US 'war on terror'? That led to an incredible level of domestic and foreign surveillance that was revealed by Edward Snowden a few years ago.

    The trouble, though, is that health is a clear and visible thing, a clear and present danger. Privacy is more nebulous with harms often being in the future, so the trade-off is between the here and now and, well, the opposite.

    Even when infections from coronavirus are down to zero, some data-hungry governments could argue they needed to keep the biometric surveillance systems in place because they fear a second wave of coronavirus, or because there is a new Ebola strain evolving in central Africa, or because . . . you get the idea. A big battle has been raging in recent years over our privacy. The coronavirus crisis could be the battle’s tipping point. For when people are given a choice between privacy and health, they will usually choose health.

    Yuval Noah Harari: the world after coronavirus (The Financial Times)

    For me, just like Harari, the way that governments choose to deal with the pandemic shows their true colours.

    The coronavirus epidemic is thus a major test of citizenship. In the days ahead, each one of us should choose to trust scientific data and healthcare experts over unfounded conspiracy theories and self-serving politicians. If we fail to make the right choice, we might find ourselves signing away our most precious freedoms, thinking that this is the only way to safeguard our health.

    Yuval Noah Harari: the world after coronavirus (The Financial Times)

    Using WhatsApp is a (poor) choice that you make

    People often ask me about my stance on Facebook products. They can understand that I don't use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.

    Given that I don't usually have a single place to point people who want to read about the problems with WhatsApp, I thought I'd create one.


    WhatsApp is a messaging app that was acquired by Facebook for the eye-watering sum of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the run-up to the acquisition, obtained by the UK's Department for Culture, Media, and Sport. They show the threat WhatsApp posed to Facebook at the time.

    US mobile messenger apps (iPhone) graph from August 2012 to March 2013
    A document obtained by the DCMS as part of their investigations

    As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).

    So Facebook bought WhatsApp. But what did they buy? If, as we're led to believe, WhatsApp is 'end-to-end encrypted' then Facebook don't have access to the messages of users. So what's so valuable?


    Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users' privacy to Facebook.

    Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

    Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.

    Parmy Olson (Forbes)

    The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give "analytical insights" on how users interacted with WhatsApp.

    Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?

    In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn't harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they're peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.

    Cory Doctorow (EFF)

    So Facebook is harvesting metadata from its various platforms, tracking people around the web (even if they don't have an account), and buying up data about offline activities.

    All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you've started messaging a particular user account around midnight every night. They know that you've started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.


    Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem, it's always the responsibility of the user to be 'vigilant'.

    Man reading a newspaper
    A WhatsApp advert aimed at 'fighting false information' (via The Guardian)

    So you might think that you're just messaging your friend or colleague on a platform that 'everyone' uses. But your decision to go with the flow has consequences. It has implications for democracy. It has implications for creating a de facto monopoly over our digital information. And it has implications for the dissemination of false information.

    The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.

    [...]

    A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

    William Davies (The Guardian)
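    The forwarding arithmetic in Davies’ piece is worth spelling out. A quick sanity check (a toy calculation; `reach` is an invented helper, and real-world spread would be smaller because group memberships overlap):

```python
# Upper bound on message reach via WhatsApp group forwarding: each full group
# holds 256 members, and if every member forwards to another full group, the
# audience multiplies by 256 per hop (ignoring overlapping memberships).

GROUP_LIMIT = 256

def reach(hops: int) -> int:
    """Maximum audience after `hops` rounds of forwarding to full groups."""
    return GROUP_LIMIT ** hops

print(reach(1))  # 256   - one full group
print(reach(2))  # 65536 - the figure quoted above
```

    Even as an upper bound, two hops from a single group already exceeds the circulation of many local newspapers, with none of the public visibility.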

    I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I'm happy with taking.

    Just don't say I didn't warn you.


    Header image by Rachit Tank

    Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say

    Post-pandemic surveillance culture

    Today's title comes from Edward Snowden, and is a pithy overview of the 'nothing to hide' argument that I guess I've struggled to answer over the years. I'm usually so shocked that an intelligent person would say something to that effect that I'm not sure how to reply.

    When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights.

    Edward Snowden

    This, then, is the fifth article in my ongoing blogchain about post-pandemic society, which already includes:

    1. People seem not to see that their opinion of the world is also a confession of character
    2. We have it in our power to begin the world over again
    3. There is no creature whose inward being is so strong that it is not greatly determined by what lies outside it
    4. The old is dying and the new cannot be born

    It does not surprise me that those with either a loose grip on how the world works, or a need to believe that someone, somewhere has 'a plan', believe in conspiracy theories around the pandemic.

    What is true, and what can easily be mistaken for 'planning', is the readiness of those with a strong ideology to double down on it during a crisis. People and organisations reveal their true colours under stress. What was previously a long game now becomes a short-term priority.

    For example, this week, the US Senate "voted to give law enforcement agencies access to web browsing data without a warrant", reports VICE. What's interesting, and concerning to me, is that Big Tech and governments are acting like they've already won the war on harvesting our online life, and now they're after our offline life, too.


    I have huge reservations about the speed at which Covid-19 contact-tracing apps are being launched, given that they're ultimately likely to be largely ineffective.

    [twitter.com/holden/st...](https://twitter.com/holden/status/1260813197402968071?s=20)

    We already know how to do contact tracing well and to train people how to do it. But, of course, it costs money and is an investment in people instead of technology, and privacy instead of surveillance.

    There are plenty of articles out there on the different types of contact-tracing apps being developed, and this BBC News article has a useful diagram showing the differences between the two main approaches.

    TL;DR: there is no way that kind of app is going on my phone, and I can't imagine anyone I know who understands tech even a little installing it either.


    Whatever the mechanics, the whole point of a contact-tracing app is to alert you and the authorities when you have been in contact with someone who has the virus. Depending on the wider context, that may or may not be useful to you and to society.

    However, such apps are more widely applicable. One of the most important things to do with any technology is to think through the effects it could have. What other effects might an app like this have, especially if it's baked into the operating systems of devices used by 99% of smartphone users worldwide?

    Image: CC BY-SA 3.0

    The above diagram is Marshall McLuhan's tetrad of media effects, which is a useful frame for thinking about the impact of technology on society.

    Big Tech and governments have our online social graphs, a global map of how everyone relates to everyone else in digital spaces. Now they're going after our offline social graphs too.


    Exhibit A

    [twitter.com/globaltim...](https://twitter.com/globaltimesnews/status/1223257710033960960)

    When this was reported back in January, the general reaction seemed to be one of eye-rolling and a certain amount of Chinese exceptionalism.

    Exhibit B

    [www.youtube.com/watch](https://www.youtube.com/watch?v=viuR7N6E2LA)

    Today, this Boston Dynamics robot is trotting around parks in Singapore, reminding everyone about social distancing. What will these robots be doing in five years' time?

    Exhibit C

    [twitter.com/thehill/s...](https://twitter.com/thehill/status/1246592135358484480?s=20)

    Drones in different countries are disinfecting the streets. What's their role by 2030?


    I think it's drones that concern me most of all. Places like Baltimore were already planning overhead surveillance pre-pandemic, and our current situation has only accelerated and exacerbated that trend.

    In that case, US Predator drones, previously used to monitor and bomb targets in the Middle East, are being deployed against the civilian population. These drones operate from a great height, unlike the kind of consumer drones anyone can buy.

    However, as was reported last year, we're on the cusp of thermophotovoltaic-powered drones that can fly for days at a time:

    This breakthrough has big implications for technologies that currently rely on heavy batteries for power. Thermophotovoltaics are an ultralight alternative power source that could allow drones and other unmanned aerial vehicles to operate continuously for days. It could also be used to power deep space probes for centuries and eventually an entire house with a generator the size of an envelope.

    Linda Vu (TechXplore)

    Not only will the government be able to fly thousands of low-cost drones to monitor the population, but they can also buy technology, like this example from DefendTex, to take down other drones.

    That is, of course, if civilian drones continue to be allowed, especially given the 'security risk' of Chinese-made drones flying around.

    These are interesting times for those who keep a watchful eye on civil liberties and government invasions of privacy. Bear that in mind when tech bros tell you not to fear robots because they're dumb. The people behind them aren't, and they have an agenda.


    Header image via Pixabay

    Saturday scramblings

    I've spent a lot more time on Twitter recently, where my feed seems to be equal parts anger and indignation (especially at Andrew Adonis) on the one hand, and jokes, funny anecdotes, and re-posted TikToks on the other.

    In amongst all of that, and via Other Sources™, I've also found the following, some of which I think will resonate with you. Let me know on Twitter, Mastodon, or in the comments if that's the case!


    School Work and Surveillance

    So, what happens now that we're all doing school and work from home?

    Well, for one thing, schools are going to be under even more pressure to buy surveillance software — to prevent cheating, obviously, but also to fulfill all sorts of regulations and expectations about "compliance." Are students really enrolled? Are they actually taking classes? Are they doing the work? Are they logging into the learning management system? Are they showing up to Zoom? Are they really learning anything? How are they feeling? Are they "at risk"? What are teachers doing? Are they holding class regularly? How quickly do they respond to students' messages in the learning management system?

    Audrey Watters (Hack Education)

    Good stuff, as always, by Audrey Watters, who has been warning about this stuff for a decade.


    We're knee-deep in shit and drinking cups of tea

    Of course this government are failing to deal with a pandemic. At the fag end of neoliberalism, they don’t exist to do much more than transfer public assets into private hands. What we’re living through is exactly what would happen if we’d elected a firm of bailiffs to cure polio.  That’s not to say that they won’t use this crisis, as they would any other, to advance a profoundly reactionary agenda. The austerity they’ll tell us they need to introduce to pay for this will make the last decade seem like Christmas at Elton John’s house.

    There’s an old joke about a guy going to hell. The Devil shows him round all the rooms where people are being tortured in a variety of brutal ways. Eventually, they come to a room where everybody is standing knee-deep in shit and drinking cups of tea. The guy chooses this as the place to spend eternity, and the Devil shouts “Tea break’s over lads, back on your heads!” That, I suppose, is how I feel when I hear people crowing about how the government are being forced to implement socialist policies. Pretty soon, we’ll all be back on our heads.

    Frankie Boyle (The Overtake)

    As comedy has become more political over the last decade, one of the most biting commentators has been the Scottish comedian Frankie Boyle. I highly recommend following him on Twitter.


    Novel adventures: 12 video games for when you’re too restless to read

    A few keen readers have turned to essay collections, short stories or diaries, which are less demanding on the memory and attention, but video games may also offer a way back into reading during these difficult times. Here are 12 interesting puzzle and adventure games that play with words, text and narratives in innovative ways, which may well guide you back into a reading frame of mind.

    Keith Stuart (The Guardian)

    I hadn't heard of any of the games on this list (mobile/console/PC) and I think this is a great idea. Also check out the Family Video Game Database.


    Career advice for people with bad luck

    The company is not your family. Some of the people in the company are your friends in the current context. It’s like your dorm in college. Hopefully some of them will still be your friends after. But don’t stay because you’re comfortable.

    [...]

    When picking a job, yes, your manager matters. But if you have an amazing manager at a shit company you’ll still have a shit time. In some ways, it’ll actually be worse. If they’re good at their job (including retaining you), they’ll keep you at a bad company for too long. And then they’ll leave, because they’re smart and competent.

    Chief of Stuff (Chief's newsletter)

    Most of this advice is focused on the tech sector, but I wanted to highlight the above, about 'friends' at work and the relative importance of having a good boss.


    Are we too busy to enjoy life?

    “You cannot step into the same river twice, for other waters are continually flowing on,” supposedly said Heraclitus. Time is like a river. If you’re too busy to enjoy life—too busy to spend time with friends and family, too busy to learn how to paint or play the guitar, too busy to go on that hike, too busy to cook something nice for yourself—these moments will be gone, and you will never get that time back.

    You may think it’s too late. It’s not. Like many people, I personally experience time anxiety—the recurring thought that it’s too late to start or accomplish something new—but the reality is you probably still have many years in front of you. Defining what “time well spent” means to you and making space for these moments is one of the greatest gifts you can make to your future self.

    Anne-Laure Le Cunff (Ness Labs)

    Quality not quantity. Absolutely, and the best way to do that is to be in control of every area of your life, not beholden to someone else's clock.


    Labour HQ used Facebook ads to deceive Jeremy Corbyn during election campaign

    Labour officials ran a secret operation to deceive Jeremy Corbyn at last year’s general election, micro-targeting Facebook adverts at the leader and his closest aides to convince them the party was running the campaign they demanded.

    Campaign chiefs at Labour HQ hoodwinked their own leader because they disapproved of some of Corbyn’s left-wing messages.

    They convinced him they were following his campaign plans by spending just £5,000 on adverts solely designed to be seen by Corbyn, his aides and their favourite journalists, while pouring far more money into adverts with a different message for ordinary voters.

    Tim Shipman (The Times)

    This article by the political editor of The Times is behind a paywall. However, the above is all you need to get the gist of the story, which reminds me of a story about the CEO of AT&T, the mobile phone network.

    At a time when AT&T were known for patchy coverage, technicians mapped where the CEO frequently went (home, work, golf club, etc.) and ensured that those locations had full signal. Incredible.


    We can’t grow our way out of poverty

    Poverty isn’t natural or inevitable. It is an artifact of the very same policies that have been designed to syphon the lion’s share of global income into the pockets of the rich. Poverty is, at base, a problem of distribution.

    Jason Hickel (New Internationalist)

    There's some amazing data in this article, along with some decent suggestions on how we can make society work for the many, and not just the few. Also see this: wealth shown to scale.


    On Letting Go of Certainty in a Story That Never Ends

    Possessed of no such capacity for superior force, fairytale characters are given tasks that are often unfair verging on impossible, imposed by the more powerful—climb the glass mountain, sort the heap of mixed grain before morning, gather a feather from the tail of the firebird. They are often mastered by alliances with other overlooked and undervalued players—particularly old women (who often turn out to be possessed of supernatural powers) and small animals, the ants who sort the grain, the bees who find the princess who ate the honey, the birds who sing out warnings. Those tasks and ordeals and quests mirror the difficulty of the task of becoming faced by the young in real life and the powers that most of us have, alliance, persistence, resistance, innovation. Or the power to be kind and the power to listen—to name two powers that pertain to storytelling and to the characters these particular stories tell of.

    Rebecca Solnit (Literary Hub)

    What was it Einstein said? “If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.”


    Private gain must no longer be allowed to elbow out the public good

    The term ‘commons’ came into widespread use, and is still studied by most college students today, thanks to an essay by a previously little-known American academic, Garrett Hardin, called ‘The Tragedy of the Commons’ (1968). His basic claim: common property such as public land or waterways will be spoiled if left to the use of individuals motivated by self-interest. One problem with his theory, as he later admitted himself: it was mostly wrong.

    Our real problem, instead, might be called ‘the tragedy of the private’. From dust bowls in the 1930s to the escalating climate crisis today, from online misinformation to a failing public health infrastructure, it is the insatiable private that often despoils the common goods necessary for our collective survival and prosperity. Who, in this system based on the private, holds accountable the fossil fuel industry for pushing us to the brink of extinction? What happens to the land and mountaintops and oceans forever ravaged by violent extraction for private gain? What will we do when private wealth has finally destroyed our democracy?

    Dirk Philipsen (Aeon)

    Good to see more pushback on the notion of 'the tragedy of the commons'. Instead of metaphorically allowing everyone to graze their own cows on the common, we need to socialise all the cows.


    Header image by Jaymantri. Gifs via Giphy.
