Tag: technology (page 1 of 7)

Don Norman on human-centred technologies

In this article, Don Norman (famous for his seminal work The Design of Everyday Things) takes to task our technology-centric view of the world:

We need to switch from a technology-centric view of the world to a people-centric one. We should start with people’s abilities and create technology that enhances people’s capabilities: Why are we doing it backwards?

Instead of focusing on what we as humans require, we start with what technology is able to provide. Norman argues that it is us serving technology rather than the other way around:

Just think about your life today, obeying the dictates of technology–waking up to alarm clocks (even if disguised as music or news); spending hours every day fixing, patching, rebooting, inventing work-arounds; answering the constant barrage of emails, tweets, text messages, and instant this and that; being fearful of falling for some new scam or phishing attack; constantly upgrading everything; and having to remember an unwieldy number of passwords and personal inane questions for security, such as the name of your least-liked friend in fourth grade. We are serving the wrong masters.

I particularly like his example of car accidents. We’re fed the line that autonomous vehicles will dramatically cut the number of accidents on our roads, but is that right?

Over 90% of industrial and automobile accidents are blamed on human error with distraction listed as a major cause. Can this be true? Look, if 5% of accidents were caused by human error, I would believe it. But when it is 90%, there must be some other reason, namely, that people are asked to do tasks that people should not be doing. Tasks that violate fundamental human abilities.

Consider the words we use to describe the result: human error, distraction, lack of attention, sloppiness–all negative terms, all implying the inferiority of people. Distraction, in particular, is the byword of the day–responsible for everything from poor interpersonal relationships to car accidents. But what does the term really mean?

It’s a good article, particularly at a time when we’re thinking about robots and artificial intelligence replacing humans in the jobs market. It certainly made me think about my technology choices.

Source: Fast Company


Blogging in the Fediverse with Write.as

I couldn’t be happier about this news. Write.as is a service that allows you to connect multiple blogs to one online editor. You compose your post and then decide where to send it.

Matt Baer, the guy behind Write.as, has announced some exciting new functionality:

After much trial and error, I’ve finished basic ActivityPub support on Write.as! (Though it’s not live yet.) I’m very, very excited about reaching this point so I can try out some new ideas.

So far, most developers in the fediverse have been remaking centralized web services with ActivityPub support. There’s PeerTube for video, PixelFed for social photos, Plume or Microblog.pub for blogging, and of course Mastodon and Pleroma for microblogging — among many others. I’ve loved watching the ecosystem grow over the past several months, but I also think more can be done, and getting AP support in Write.as was the first step to making this happen.

Baer references one of his previous posts where, like the main developer of Mastodon, he takes a stand against some things that people have come to expect from centralised services:

If we’re going to build the web world we want, we have to constantly evaluate the pieces we bring with us from the old to the new. With each iteration of an idea on the web we need to question the very nature of certain aspects’ existence in the first place, and determine whether or not every single old thing unimproved should still be with us. It’s the only way we can be sure we’re moving — if not in the right direction, at least in some direction that will teach us something.

In Baer’s case, it’s not having public ‘likes’ and in Mastodon’s case it’s not providing the ability to quote toots. Either way, I applaud them for taking a stand.

Baer is planning a new product called Read.as:

Today my idea is to split reading and writing across two ActivityPub-enabled products, Write.as and Read.as. The former will stay focused on writing and publishing; AP support will be almost invisible. Blogs can be followed via the web, RSS, email (soon), or ActivityPub-speaking services (for example, I can follow blogs with my Mastodon account, and then share any posts with my followers there). Then Read.as would be the read-only counterpart; you go there when you want to stare at your screen for a while and read something interesting. It would be minimally social, avoid interrupting your life, and preserve your privacy — just like Write.as.

Great, great news!

Source: Write.as

Problems with the present and future of work are of our own making

This is a long essay in which the RSA announces that, along with its partners (one of which, inevitably, is Google) it’s launching the Future Work Centre. I’ve only selected quotations from the first section here.

From autonomous vehicles to cancer-detecting algorithms, and from picking and packing machines to robo-advisory tools used in financial services, every corner of the economy has begun to feel the heat of a new machine age. The RSA uses the term ‘radical technologies’ to describe these innovations, which stretch from the shiny and much talked about, including artificial intelligence and robotics, to the prosaic but equally consequential, such as smartphones and digital platforms.

I highly recommend reading Adam Greenfield’s book Radical Technologies: the design of everyday life, if you haven’t already. Greenfield isn’t beholden to corporate partners, and lets rip.

What is certain is that the world of work will evolve as a direct consequence of the invention and adoption of radical technologies — and in more ways than we might imagine. Alongside eliminating and creating jobs, these innovations will alter how workers are recruited, monitored, organised and paid. Companies like HireVue (video interviewing), Percolata (schedule setting) and Veriato (performance monitoring) are eager to reinvent all aspects of the workplace.

Indeed, and a lot of what’s going on is compliance and surveillance of workers smuggled in through the back door while people focus on ‘innovation’.

The main problems outlined with the current economy, which is being ‘disrupted’ by technology, are:

  1. Declining wages (in real terms)
  2. Economic insecurity (gig economy, etc.)
  3. Working conditions
  4. Bullshit jobs
  5. Work-life balance

Taken together, these findings paint a picture of a dysfunctional labour market — a world of work that offers little in the way of material security, let alone satisfaction. But that may be going too far. Overall, most workers enjoy what they do and relish the careers they have established. The British Social Attitudes survey found that twice as many people in 2015 as in 1989 strongly agreed they would enjoy having a job even if their financial circumstances did not require it.

The problem is not with work per se but rather with how it is orchestrated in the modern economy, and how rewards are meted out. As a society we have a vision of what work could and should look like — well paid, protective, meaningful, engaging — but the reality too often falls short.

I doubt the RSA would ever say it without huge caveats, but the problem is neoliberalism. It’s all very well looking to the past for examples of technological disruption, but that was qualitatively different from what’s going on now. Organisations can run on a skeleton staff and make obscene profits for a very few people.

I feel like warnings such as ‘the robots are coming’ and ‘be careful not to choose an easily-automated occupation!’ are a smokescreen for decisions that people are making about the kind of society they want to live in. It seems that’s a society in which most of us (the ‘have nots’) are expendable, while the 0.01% (the ‘haves’) live in historically-unparalleled luxury.

In summary, the lives of workers will be shaped by more technologies than AI and robotics, and in more ways than through the loss of jobs.

Fears surrounding automation should be taken seriously. Yet anxiety over job losses should not distract us from the subtler impacts of radical technologies, including on recruitment practices, employee monitoring and people’s work-life balance. Nor should we become so fixated on AI and robotics that we lose sight of the conventional technologies bringing about change in the present moment.

Exactly. Let’s fix 2018 before we start thinking about 2040, eh?

Source: The RSA

Attention scarcity as an existential threat

This post is from Albert Wenger, a partner at a New York-based early-stage VC firm focused on investing in disruptive networks. It’s taken from his book World After Capital, currently in draft form.

In this section, Wenger is concerned with attention scarcity, which he believes to be both a threat to humanity, and an opportunity for us.

On the threat side, for example, we are not working nearly hard enough on how to recapture CO2 and other greenhouse gases from the atmosphere. Or on monitoring asteroids that could strike earth, and coming up with ways of deflecting them. Or containing the outbreak of the next avian flu: we should have a lot more collective attention dedicated to early detection and coming up with vaccines and treatments.

The reason the world’s population is so high is almost entirely due to the technological progress we’ve made. We’re simply better at keeping human beings alive.

On the opportunity side, far too little human attention is spent on environmental cleanup, free educational resources, and basic research (including the foundations of science), to name just a few examples. There are so many opportunities we could dedicate attention to that over time have the potential to dramatically improve quality of life here on Earth not just for humans but also for other species.

Interestingly, he comes up with a theory as to why we haven’t heard from any alien species yet:

I am proposing this as a (possibly new) explanation for the Fermi Paradox, which famously asks why we have not yet detected any signs of intelligent life elsewhere in our rather large universe. We now even know that there are plenty of goldilocks planets available that could harbor life forms similar to those on Earth. Maybe what happens is that all civilizations get far enough to where they generate huge amounts of information, but then they get done in by attention scarcity. They collectively take their eye off the ball of progress and are not prepared when something really bad happens such as a global pandemic.

Attention scarcity, then, has the potential to become an existential threat to our species. Pay attention to the wrong things and we could either fail to avert a disaster, or cause one of our own making.

Source: Continuations

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[…]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it’s going to take tightly-integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

Higher Education and blockchain

I’ve said it before, and I’ll say it again: the most useful applications of blockchain technologies are incredibly boring. That goes for education, too.

This post by Chris Fellingham considers blockchain in the context of Higher Education, and in particular credentialing:

The short pitch is that as jobs and education go digital, we need digital credentials for our education and those need to be trustworthy and automatable. Decentralised trust systems may well be the future but I don’t see that it solves a core problem. Namely that the main premium market for Higher Education Edtech is geared towards graduates in developed countries and that market does not have a problem of trust in its credentials — it has a problem of credibility in its courses. People don’t know what it means to have done a MOOC/Specialization/MicroMasters in X, which undermines the market system for it. Shoring up the credential is a second order problem to proving the intrinsic value of the course itself.

“Decentralised trust systems” is the term blockchain aficionados use, but what they actually mean is removing trust from the equation. In hiring decisions, for example, trust is replaced by cryptographic proof.
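To make that idea concrete, here’s a minimal sketch of Blockcerts-style verification: the issuer anchors a hash of the credential somewhere tamper-evident (e.g. a blockchain), and anyone can later recompute the hash to check the document hasn’t changed. This is an illustration of the principle only, not the actual Blockcerts standard; the field names and helper functions are hypothetical.

```python
import hashlib
import json

def credential_hash(credential: dict) -> str:
    """Hash the credential's canonical JSON form (sorted keys, no whitespace)."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(credential: dict, anchored_hash: str) -> bool:
    """A credential 'verifies' if its hash matches the one the issuer anchored."""
    return credential_hash(credential) == anchored_hash

# Hypothetical credential; in practice the issuer would also sign it.
cred = {"recipient": "alice", "course": "Example MOOC", "issued": "2018-06-01"}
on_chain = credential_hash(cred)   # what the issuer would write to the chain

assert verify(cred, on_chain)      # untampered: verifies
cred["course"] = "Another Course"
assert not verify(cred, on_chain)  # any edit breaks the proof
```

Note that this checks integrity, not value: the hash proves the credential wasn’t altered, which is exactly Fellingham’s point — it says nothing about whether the course behind it was any good.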

Fellingham mentions someone called ‘Smolenski’ who, after a little bit of digging, must be Natalie Smolenski, who works for Learning Machine. That organisation is a driving force, with MIT, behind the Blockcerts standard for blockchain-based digital credentialing.

Smolenski, however, is a believer, and in numerous elegant essays has argued blockchain is the latest paradigm shift in trust-based technologies. The thesis puts trust-based technologies as a central driver of human development. Kinship was the first ‘trust technology’, followed by language and cultural development. Things really got going with organised religion, which was the early modern driver — enabling proto-legal systems and financial systems to emerge. Total strangers could now conduct economic transactions by putting their trust in local laws (a mutually understood system for transactions) in the knowledge that it would be enforced by a trusted third party — the state. Out of this emerged market economies and currencies.

Like Fellingham, I’m not particularly enamoured with this teleological ‘grand narrative’ approach to history, of which blockchain believers do tend to be overly-fond. I’m pretty sure that human history hasn’t been ‘building’ in any way towards anything, particularly something that involves less trust between human beings.

Blockchain at this moment is a kind of religion. It’s based on a hope of things to come:

Blockchain — be it in credential or currency form… could well be a major — if not paradigmatic — technology, but it has its own logic and fundamentally suits those who use it best — much as social networks turned out to be fertile grounds for fake news. For that reason alone, we should be far more cautious about a shift to blockchain in Higher Education — lest, like fake news, it takes an imperfect system and makes it worse.

Indeed. Who on earth wants to hard-code the way things are right now in Higher Education? If your answer is ‘blockchain-based credentials’, then I’m not sure you really understand what the question is.

Source: Chris Fellingham (via Stephen Downes)

F*** off Google

This is interesting, given that Google was welcomed with open arms in London:

Google plans to implant a “Google Campus” in Kreuzberg, Berlin. We, as a decentralized network of people are committed to not letting our beloved city be taken over by this law- and tax-evading company that is building a dystopian future. Let’s kick Google out of our neighborhood and lives!

What I find interesting is that not only are people organising against Google, they’ve also got a wiki to inform people and help wean them off Google services.

The problem that I have with ‘replacing’ Google services is that it’s usually non-trivial for less technical users to achieve. As the authors of the wiki point out:

It is though dangerous to think in terms of “alternatives”, like the goal was to reach equivalence to what Google offers (and risk to always lag behind). In reality what we want is *better* services than the ones of Google, because they would rest on *better* principles, such as decentralization/distribution of services, end-to-end encryption, uncompromising free/libre software, etc.

While presenting these “alternatives” or “replacements” here, we must keep in mind that the true goal is to achieve proper distribution/decentralization of information and communication, and empower people to understand and control where their information goes.

The two biggest problems with the project of removing big companies such as Google from our lives are: (i) using web services is a social thing, and (ii) they provide such high quality services for so little financial cost.

Whether you’re using a social network to connect with friends or working with colleagues on a collaborative document, your choices aren’t solely yours. We negotiate the topography of the web at the same time as weaving the social fabric of society. It’s not enough to give people alternatives, there has to be some leadership to go with it.

Source: Fuck off Google wiki


Blockchain was just a stepping stone

I’m reading Adam Greenfield’s excellent book Radical Technologies: the design of everyday life at the moment. He says:

And for those of us who are motivated by commitment to a specifically participatory politics of the commons, it’s not at all clear that any blockchain-based infrastructure can support the kind of flexible assemblies we imagine. I myself come from an intellectual tradition that insists that any appearance of the word “potential” needs to be greeted with skepticism. There is no such thing as potential, in this view: there are merely states of a system that have historically been enacted, and those that have not yet been enacted. The only way to assess whether a system is capable of assuming a given state is to do the work of enacting it.

Back in 2015, I wrote about the potential of badges and blockchain. However, these days I’m more likely to agree that it’s a futuristic integrity wand.

The problem with blockchain technologies is that they tend to get lumped together as if they’re one thing. For example, some use blockchain technologies to prop up neoliberalism, whereas others seek to use them to destroy it.

As part of my research for a presentation I gave in Barcelona last year about decentralised technologies, I came across MaidSafe (“the world’s first autonomous data network”). I admit to being at the edge of my understanding here, but the idea is that the SAFE network can store data safely in an autonomous, decentralised way.

Last week, MaidSafe announced a new protocol called PARSEC (Protocol for Asynchronous, Reliable, Secure and Efficient Consensus). It solves the Byzantine Generals’ problem without recourse to the existing blockchain approach.

PARSEC solves a well-known problem in decentralised, distributed computer networks: how can individual computers (nodes) in a system reliably communicate truths (in other words, events that have taken place on the network) to each other where a proportion of the nodes are malicious (Byzantine) and looking to disrupt the system. Or to put it another way: how can a group of computers agree on which transactions have correctly taken place and in which order?
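PARSEC’s actual algorithm is set out in MaidSafe’s paper and video; as a toy illustration of the classical bound such protocols work within (not PARSEC itself), here’s a sketch of the well-known result that agreement among n nodes is only possible when fewer than a third are Byzantine, with a supermajority of 2f + 1 votes needed to accept a value:

```python
def quorum(n_nodes, n_faulty):
    """Classic BFT bound: agreement needs n > 3f.
    Returns the supermajority size (2f + 1), or None if agreement is impossible."""
    if n_nodes <= 3 * n_faulty:
        return None
    return 2 * n_faulty + 1

def agree(votes, n_faulty):
    """Accept a value only if a supermajority of the votes back it."""
    q = quorum(len(votes), n_faulty)
    if q is None:
        return None
    for value in set(votes):
        if votes.count(value) >= q:
            return value
    return None

# Four nodes tolerate one Byzantine node: three matching votes suffice.
assert agree(["tx-a", "tx-a", "tx-a", "tx-b"], n_faulty=1) == "tx-a"
# Three nodes cannot tolerate one: no quorum is possible.
assert agree(["tx-a", "tx-a", "tx-b"], n_faulty=1) is None
```

The hard part, which this sketch ignores entirely and which PARSEC claims to solve asynchronously, is reaching that agreement when messages arrive out of order and malicious nodes vote inconsistently to different peers.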

This protocol is GPL v3 licensed, meaning that it is “free for anyone to build upon and likely prove to be of immense value to other decentralised projects facing similar challenges”. The Bitcoin blockchain network is S-L-O-W and is getting slower. It’s also steadily pushing up the computing power required to achieve consensus across the network, meaning that a huge amount of electricity is being used worldwide. This is bad for our planet.

If you’re building a secure, autonomous, decentralised data and communications network for the world like we are with the SAFE Network, then the limitations of blockchain technology when it comes to throughput (transactions-per-second), ever-increasing storage challenges and lack of encryption are all insurmountable problems for any system that seeks to build a project of this magnitude.

[…]

So despite being big fans of blockchain technology for many reasons here at MaidSafe, the reality is that the data and communications networks of the future will see millions or even billions of transactions per second taking place. No matter which type of blockchain implementation you take — tweaking the quantity and distribution of nodes across the network or how many people are in control of these across a variety of locations — at the end of the day, the blockchain itself remains, by definition, a single centralised record. And for the use cases that we’re working on, blockchain technology comes with limitations of transactions-per-second that simply makes that sort of centralisation unworkable.

I confess to not having watched the hour-long YouTube video embedded in the post but, if PARSEC works, it’s another step towards a post-nation state world — for better or worse.

Source: MaidSafe blog

Finding friends and family without smartphones, maps, or GPS

When I was four years old we moved to the North East of England. Soon after, my parents took my grandmother, younger sister (still in a pushchair) and me to the Quayside market in Newcastle-upon-Tyne.

There’s still some disagreement as to how exactly it happened, but after buying a toy monkey that wrapped around my neck using velcro, I got lost. It’s a long time ago, but I can vaguely remember my decision that, if I couldn’t find my parents or grandmother, I’d probably better head back to the car. So I did.

45 minutes later, and after the police had been called, my parents found me and my monkey sitting on the bonnet of our family car. I can still remember the registration number of that orange Ford Escort: MAT 474 V.

Now, 33 years later, we’re still not great at ensuring children don’t get lost. Yes, we have more of a culture of keeping children within sight, and we give kids smartphones at increasingly young ages, but we can do much better.

That’s why I thought this Lynq tracker, currently being crowdfunded via Indiegogo was such a great idea. You can get the gist by watching the promo video:

Our family is off for two weeks around Europe this summer. While we’ve been a couple of times before, both involved taking our car and camping. This time, we’re interrailing and Airbnbing our way around, which increases the risk that one of our children gets lost.

Lynq looks really simple and effective to use, but isn’t going to ship until November — otherwise I would have backed this in an instant.

Source: The Verge

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor— together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks’ octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

[…]

A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

[…]

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.

[…]

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As I quote Adam Greenfield as saying in Microcast #021 (supporters only!) this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solving the problems faced by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

[…]

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It’s a difficult balance. However, just as GDPR has put in place mechanisms to prevent the over-reaching of governments and companies, we could look to organisations with non-profit status and community ownership to provide some of the infrastructure currently being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it’s currently doing, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)