Category: 21st Century Society

Problems with the present and future of work are of our own making

This is a long essay in which the RSA announces that, along with its partners (one of which, inevitably, is Google), it’s launching the Future Work Centre. I’ve only selected quotations from the first section here.

From autonomous vehicles to cancer-detecting algorithms, and from picking and packing machines to robo-advisory tools used in financial services, every corner of the economy has begun to feel the heat of a new machine age. The RSA uses the term ‘radical technologies’ to describe these innovations, which stretch from the shiny and much talked about, including artificial intelligence and robotics, to the prosaic but equally consequential, such as smartphones and digital platforms.

I highly recommend reading Adam Greenfield’s book Radical Technologies: The Design of Everyday Life, if you haven’t already. Greenfield isn’t beholden to corporate partners, and lets rip.

What is certain is that the world of work will evolve as a direct consequence of the invention and adoption of radical technologies — and in more ways than we might imagine. Alongside eliminating and creating jobs, these innovations will alter how workers are recruited, monitored, organised and paid. Companies like HireVue (video interviewing), Percolata (schedule setting) and Veriato (performance monitoring) are eager to reinvent all aspects of the workplace.

Indeed, and a lot of what’s going on is compliance and surveillance of workers smuggled in through the back door while people focus on ‘innovation’.

The main problems outlined with the current economy, which is being ‘disrupted’ by technology, are:

  1. Declining wages (in real terms)
  2. Economic insecurity (gig economy, etc.)
  3. Working conditions
  4. Bullshit jobs
  5. Work-life balance

Taken together, these findings paint a picture of a dysfunctional labour market — a world of work that offers little in the way of material security, let alone satisfaction. But that may be going too far. Overall, most workers enjoy what they do and relish the careers they have established. The British Social Attitudes survey found that twice as many people in 2015 as in 1989 strongly agreed they would enjoy having a job even if their financial circumstances did not require it.

The problem is not with work per se but rather with how it is orchestrated in the modern economy, and how rewards are meted out. As a society we have a vision of what work could and should look like — well paid, protective, meaningful, engaging — but the reality too often falls short.

I doubt the RSA would ever say it without huge caveats, but the problem is neoliberalism. It’s all very well looking to the past for examples of technological disruption, but that was qualitatively different from what’s going on now. Organisations can run on a skeleton staff and make obscene profits for a very few people.

I feel like warnings such as ‘the robots are coming’ and ‘be careful not to choose an easily automated occupation!’ are a smokescreen for decisions that people are making about the kind of society they want to live in. It seems like that’s one where most of us (the ‘have-nots’) are expendable, while the 0.01% (the ‘haves’) live in historically unparalleled luxury.

In summary, the lives of workers will be shaped by more technologies than AI and robotics, and in more ways than through the loss of jobs.

Fears surrounding automation should be taken seriously. Yet anxiety over job losses should not distract us from the subtler impacts of radical technologies, including on recruitment practices, employee monitoring and people’s work-life balance. Nor should we become so fixated on AI and robotics that we lose sight of the conventional technologies bringing about change in the present moment.

Exactly. Let’s fix 2018 before we start thinking about 2040, eh?

Source: The RSA

Cory Doctorow on the corruption at the heart of Facebook

I like Cory Doctorow. He’s a gifted communicator who wears his heart on his sleeve. In this article, he talks about Facebook and how what it’s wrought is a result of the corruption at its very heart.

It’s great that the privacy-matters message is finally reaching a wider audience, and it’s exciting to think that we’re approaching a tipping point for indifference to privacy and surveillance.

But while the acknowledgment of the problem of Big Tech is most welcome, I am worried that the diagnosis is wrong.

The problem is that we’re confusing automated persuasion with automated targeting. Laughable lies about Brexit, Mexican rapists, and creeping Sharia law didn’t convince otherwise sensible people that up was down and the sky was green.

Rather, the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

So, for example, people seem to think that Facebook advertising caused people to vote for Trump. As if they were going to vote for someone else, and then changed their minds as a direct result of viewing ads. That’s not how it works.

Companies such as Cambridge Analytica might claim that they can rig elections and change people’s minds, but they’re not actually that sophisticated.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.

This isn’t to say that persuasion is impossible. Automated disinformation campaigns can flood the channel with contradictory, seemingly plausible accounts for the current state of affairs, making it hard for a casual observer to make sense of events. Long-term repetition of a consistent narrative, even a manifestly unhinged one, can create doubt and find adherents – think of climate change denial, or George Soros conspiracies, or the anti-vaccine movement.

These are long, slow processes, though, that make tiny changes in public opinion over the course of years, and they work best when there are other conditions that support them – for example, fascist, xenophobic, and nativist movements that are the handmaidens of austerity and privation. When you don’t have enough for a long time, you’re ripe for messages blaming your neighbors for having deprived you of your fair share.

Advertising and influencing work best when you provide a message that people already agree with in a way that they can easily share with others. The ‘long, slow processes’ that Doctorow refers to have been practised offline as well (think of Nazi propaganda, for example). Dark adverts on Facebook are tapping into feelings and reactions that aren’t peculiar to the digital world.

Facebook has thrived by providing ways for people to connect and communicate with one another. Unfortunately, because they’re so focused on profit over people, they’ve done a spectacularly bad job at making sure that the spaces in which people connect are healthy spaces that respect democracy.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

That last phrase is right on the money.

Source: Locus magazine

Attention scarcity as an existential threat

This post is from Albert Wenger, a partner at a New York-based early-stage VC firm focused on investing in disruptive networks. It’s taken from his book World After Capital, currently in draft form.

In this section, Wenger is concerned with attention scarcity, which he believes to be both a threat to humanity, and an opportunity for us.

On the threat side, for example, we are not working nearly hard enough on how to recapture CO2 and other greenhouse gases from the atmosphere. Or on monitoring asteroids that could strike earth, and coming up with ways of deflecting them. Or containing the outbreak of the next avian flu: we should have a lot more collective attention dedicated to early detection and coming up with vaccines and treatments.

The world’s population is as high as it is almost entirely because of the technological progress we’ve made. We’re simply better at keeping human beings alive.

On the opportunity side, far too little human attention is spent on environmental cleanup, free educational resources, and basic research (including the foundations of science), to name just a few examples. There are so many opportunities we could dedicate attention to that over time have the potential to dramatically improve quality of life here on Earth not just for humans but also for other species.

Interestingly, he comes up with a theory as to why we haven’t heard from any alien species yet:

I am proposing this as a (possibly new) explanation for the Fermi Paradox, which famously asks why we have not yet detected any signs of intelligent life elsewhere in our rather large universe. We now even know that there are plenty of goldilocks planets available that could harbor life forms similar to those on Earth. Maybe what happens is that all civilizations get far enough to where they generate huge amounts of information, but then they get done in by attention scarcity. They collectively take their eye off the ball of progress and are not prepared when something really bad happens such as a global pandemic.

Attention scarcity, then, has the potential to become an existential threat to our species. Pay attention to the wrong things and we could either fail to avert a disaster, or cause one of our own making.

Source: Continuations

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[…]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with the digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it’s going to take tightly integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

Inequality, anarchy, and the course of human history

Sometimes I’m reminded of the fact that I haven’t checked in with someone’s work for a few weeks, months, or even years. I’m continually impressed with the work of my near-namesake Dougald Hine. I hope to meet him in person one day.

Going back through his recent work led me to a long article in Eurozine by David Graeber and David Wengrow about how we tend to frame history incorrectly.

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public­ – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Graeber and Wengrow essentially argue that most people start from the assumption that we have a choice between a life that is ‘nasty, brutish, and short’ (i.e. most of human history) and one that is more civilised (i.e. today). If we want the latter, we have to put up with inequality.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society.

But inequality is not the inevitable result of living in a civilised society, as they point out with some in-depth examples. I haven’t got space to go through them here, but suffice to say that it seems a classic case of historians cherry-picking their evidence.

As Claude Lévi-Strauss often pointed out, early Homo sapiens were not just physically the same as modern humans, they were our intellectual peers as well. In fact, most were probably more conscious of society’s potential than people generally are today, switching back and forth between different forms of organization every year. Rather than idling in some primordial innocence, until the genie of inequality was somehow uncorked, our prehistoric ancestors seem to have successfully opened and shut the bottle on a regular basis, confining inequality to ritual costume dramas, constructing gods and kingdoms as they did their monuments, then cheerfully disassembling them once again.

If so, then the real question is not ‘what are the origins of social inequality?’, but, having lived so much of our history moving back and forth between different political systems, ‘how did we get so stuck?’

Definitely worth a read, particularly if you think that ‘anarchy’ is the opposite of ‘civilisation’.

Source: Eurozine (via Dougald Hine)


Image CC BY-NC-SA xina

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor— together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks’ octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

[…]

A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

[…]

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.

[…]

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As I quote Adam Greenfield as saying in Microcast #021 (supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solving the problems posed by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

[…]

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It’s a difficult balance. However, just as GDPR has put in place mechanisms to prevent over-reach by both governments and companies, I think we should consider whether organisations with non-profit status and community ownership could provide some of the infrastructure currently being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it’s currently doing, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Teasing things apart a bit, Nguyen gives some definitions:

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like towards the end of my decade as an active user of Twitter there was a definite shift from it being an ‘epistemic bubble’ towards being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to be a bit more militant in its beliefs.

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.

That puts us in a double bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in the fanatical evangelism of blockchain solutions to the ‘problem’ of operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust are near the top of the list of things we should punish most in a society.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.

So the way forward is for people to develop empathy and to show trust. Not present people with evidence that they’re wrong. That’s never worked in the past, and it won’t work now. Our problem isn’t a deficit in access to information, it’s a deficit in trust.

Source: Aeon (via Ian O’Byrne)

Alexa for Kids as babysitter?

I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last minute faffing’, I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of “the funny things kids say to Alexa” is pretty well documented as well as the earlier “Alexa is teaching my kid to be rude” observation. This isn’t to say that Amazon haven’t done *any* work thinking about how Alexa works in a kid context (Honan’s article shows that they’ve demonstrably thought about how Alexa might work and that they’ve made changes to the product to accommodate children as a specific class of user) but the overwhelming impression I had after reading Honan’s piece was that, as a parent, I still don’t think Amazon have gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids’ content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a *very* personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line – it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms and those who can get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds’ worth of thinking and, based on how Alexa gets used in our house (two parents, a five year old and a 19 month old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere *thought* of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She’s got some interesting insights into systems thinking and artificial intelligence.

We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

That’s such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not really nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don’t really work that well. This is obviously problematic when dealing with AI that you want to do some work for you, hence the bias (racism, sexism) which has plagued the field.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.

Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.

You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.

I like the way that Bateson talks about the difference between computer science and systems theory. It’s a bit like the argument I gave about why kids need to learn to code back in 2013: it’s more about algorithmic thinking than it is about syntax.

The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Automated Chinese jaywalking fines are a foretaste of so-called ‘smart cities’

Given the choice between living in a so-called ‘smart city’ and living in rural isolation, I think I’d prefer the latter. This opinion has been strengthened by reading about what’s going on in China at the moment:

Last April, the industrial capital of Shenzhen installed anti-jaywalking cameras that use facial recognition to automatically identify people crossing without a green pedestrian light; jaywalkers are shamed on a public website and their photos are displayed on large screens at the intersection.

Nearly 14,000 people were identified by the system in its first ten months of operation. Now Intellifusion, which created the system, is planning to send warnings by WeChat and Sina Weibo messages; repeat offenders will get their social credit scores docked.

Yes, that’s right: social credit. Much more insidious than a fine, having a low social credit rating means that you can’t travel.

Certainly something to think about when you hear people talking about ‘smart cities of the future’.

Source: BoingBoing

(related: 99% Invisible podcast on the invention of ‘jaywalking’)