Tag: robots

What can dreams of a communist robot utopia teach us about human nature?

This article in Aeon by Victor Petrov posits that, in the post-industrial age, we no longer see human beings as primarily manual workers, but as thinkers using digital screens to get stuff done. What does that do to our self-image?

The communist parties of eastern Europe grappled with this new question, too. The utopian social order they were promising from Berlin to Vladivostok rested on the claim that proletarian societies would use technology to its full potential, in the service of all working people. Bourgeois information society would alienate workers even more from their own labour, turning them into playthings of the ruling classes; but a socialist information society would free Man from drudgery, unleash his creative powers, and enable him to ‘hunt in the morning … and criticise after dinner’, as Karl Marx put it in 1845. However, socialist society and its intellectuals foresaw many of the anxieties that are still with us today. What would a man do in a world of no labour, and where thinking was done by machines?

Bulgaria was a communist country that, after the Second World War, went from producing cigarettes to being one of the world’s largest producers of computers. This had a knock-on effect on what people wrote about in the country.

The Bulgarian reader was increasingly treated to debates about what humanity would be in this new age. Some, such as the philosopher Mityu Yankov, argued that what set Man apart from the animals was his ability to change and shape nature. For thousands of years, he had done this through physical means and his own brawn. But the Industrial Revolution had started a change of Man’s own nature, which was culminating with the Information Revolution – humanity now was becoming not a worker but a ‘governor’, a master of nature, and the means of production were not machines or muscles, but the human brain.

Lyuben Dilov, a popular sci-fi author, focused on “the boundaries between man and machine, brain and computer”. His books were full of societies obsessed with technology.

Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction to roboticists’ wish to give their creations ever more human qualities and appearance, rather than letting form follow function – which often means copying animal or insect shapes. Dilov’s protagonist Zenon muses on human interactions with robots that start from a young age, giving the child power over the machine from the outset; this undermines our trust in the very machines on which we depend. Humans need a distinction from the robots: they need to know that they are always in control and cannot be lied to. For Dilov, the anxiety was about the limits of humanity, at least in its current stage – fearful, humans could not yet treat anything else, including their machines, as equals.

This all seems very pertinent at a time when deepfakes make us question what is real online. We’re perhaps less worried about a Blade Runner-style dystopia and more concerned about digital ‘reality’ but, nevertheless, questions about what it means to be human persist.

Bulgarian robots were both feared and heralded as the future. Socialism promised to end meaningless labour, yet it reproduced many of the anxieties that are still with us in our ever-automating world. What Man can do that a machine cannot is a question we still haven’t answered. But, like the sci-fi writer Nikola Kesarovski, perhaps we need not fear this new world so much, nor give up our reservations for the promise of a better, easier world.

Source: Aeon

The spectrum of work autonomy

Some companies offer ‘unlimited vacation’ policies, and advertise them as a huge perk. That sounds amazing. Except, of course, that there’s a reason why companies are so benevolent.

I can think of at least two:

  1. Your peers will exert downward pressure on the number of holidays you actually take.
  2. If there’s no set holiday entitlement, then when you leave, the company doesn’t have to pay for unused holiday days.

This article by Gaby Hinsliff in The Guardian uses the unlimited vacation policy as an example of the difference between two ends of the spectrum when it comes to jobs.

And that, increasingly, is the dividing line in modern workplaces: trust versus the lack of it; autonomy versus micro-management; being treated like a human being or programmed like a machine. Human jobs give the people who do them chances to exercise their own judgment, even if it’s only deciding what radio station to have on in the background, or set their own pace. Machine jobs offer at best a petty, box-ticking mentality with no scope for individual discretion, and at worst the ever-present threat of being tracked, timed and stalked by technology – a practice reaching its nadir among gig economy platforms controlling a resentful army of supposedly self-employed workers.

Never mind robots coming to steal our jobs; that’s just a symptom of a wider trend of neoliberal, late-stage capitalism:

There have always been crummy jobs, and badly paid ones. Not everyone gets to follow their dream or discover a vocation – and for some people, work will only ever be a means of paying the rent. But the saving grace of crummy jobs was often that there was at least some leeway for goofing around; for taking a fag break, gossiping with your equally bored workmates, or chatting a bit longer than necessary to lonely customers.

The ‘contract’ with employers these days goes way beyond the piece of paper you sign that states such mundanities as how much you will be paid or how much holiday you get. It’s about trust, as Hinsliff comments:

The mark of human jobs is an increasing understanding that you don’t have to know where your employees are and what they’re doing every second of the day to ensure they do it; that people can be just as productive, say, working from home, or switching their hours around so that they are working in the evening. Machine jobs offer all the insecurity of working for yourself without any of the freedom.

Embedded in this are huge diversity issues. I purposely chose a photo of a young white guy to go with the post, as they’re disproportionately likely to do well from this ‘trust-based’ workplace approach. People of colour, women, and those with disabilities are more likely to suffer from implicit bias and other forms of discrimination.

The debate about whether robots will soon be coming for everyone’s jobs is real. But it shouldn’t blind us to the risk right under our noses: not so much of people being automated out of jobs, as automated while still in them.

I consume a lot of what I post to Thought Shrapnel online, but I originally read this one in the dead-tree version of The Guardian. Interestingly, in the same issue there was a letter from a doctor by the name of Jonathan Shapiro, who wrote that he divides his colleagues into three different types:

  1. Passionate
  2. Dispassionate
  3. Compassionate

The first group suffer burnout, he said. The second group survive but are “lousy”. It’s the third group that cope, as they “care for patients without sacrificing themselves on the altar of professional vocation”.

What we need to be focusing on in education is preparing young people to be compassionate human beings, not cogs in the capitalist machine.

Source: The Guardian

Robo-advisors are coming for your job (and that’s OK)

Algorithms and artificial intelligence are an increasingly normal part of our everyday lives, notes this article, so the workplace is the next step:

Each one of us is becoming increasingly more comfortable being advised by robots for everything from what movie to watch to where to put our retirement. Given the groundwork that has been laid for artificial intelligence in companies, it’s only a matter of time before the $60 billion consulting industry in the U.S. is going to be disrupted by robotic advisors.

I remember years ago being told that by 2020 it would be normal to have an algorithm on your team. It sounded fanciful at the time, but now we just take it for granted:

Robo-advisors have the potential to deliver a broader array of advice and there may be a range of specialized tools in particular decision domains. These robo-advisors may be used to automate certain aspects of risk management and provide decisions that are ethical and compliant with regulation. In data-intensive fields like marketing and supply chain management, the results and decisions that robotic algorithms provide is likely to be more accurate than those made by human intuition.

I’m kind of looking forward to this becoming a reality, to be honest. ‘Let machines do what machines are good at, and humans do what humans are good at’ would be my mantra.

Source: Harvard Business Review

Is it pointless to ban autonomous killing machines?

The authors do have a point:

Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

Source: Aeon