
Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “There’s so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on people’s lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed

Microcast #078 — Values-based organisations

I’ve decided to post these microcasts, which I previously made available only through Patreon, here instead.

Microcasts focus on what I’ve been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I’ve made.

Show notes

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked both that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings in order to improve them.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

Discussing ‘fake news’ and our information literacy problem in Forbes, you can almost feel Kalev Leetaru’s eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially getting the top result of a Google search, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.
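The standard design is worth sketching to show why. What follows is a toy illustration with invented helper names, not any vendor’s actual code: audio cycles through a small on-device buffer, and nothing leaves the device until a local model spots the wake word.

    from collections import deque

    SAMPLE_RATE = 16_000                  # samples per second
    ring = deque(maxlen=2 * SAMPLE_RATE)  # ~2 seconds of rolling audio, on-device only

    def detect_wake_word(buffer) -> bool:
        """Stand-in for the tiny local model that matches 'Hey Google' etc."""
        return False

    def record_until_silence() -> bytes:
        """Stand-in: capture the user's query once the wake word has fired."""
        return b""

    def send_to_cloud(audio: bytes) -> None:
        """Stand-in: only this post-wake-word snippet is ever uploaded."""

    def on_audio_frame(frame) -> None:
        """Runs continuously; the rolling buffer is overwritten, never stored."""
        ring.extend(frame)
        if detect_wake_word(ring):
            send_to_cloud(record_until_silence())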

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you improve voice recognition in real-world situations? What VRT NWS did was get a sub-contractor to break a Non-Disclosure Agreement (and violate the GDPR) by sharing recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this advice is available via all types of voice assistant, it’s great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits for certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) are an order of magnitude beyond what is realistic for the general population.

It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.

For years, techies have talked about ‘personal data vaults’ which would let you share information in a granular way without being tracked. The BBC is currently trialling BBC Box, which could help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participant’s explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example, be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC
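To make the architecture concrete, here is a minimal sketch of the consent model being described. The class and method names are my own invention, not the actual Databox API: data lives locally, derived profiles are computed locally, and nothing crosses the boundary without an explicit grant.

    class PersonalDataVault:
        """Toy model of a BBC Box-style store: local data, consent-gated sharing."""

        def __init__(self):
            self._data = {}       # never leaves the device by default
            self._grants = set()  # (recipient, key) pairs the owner has approved

        def store(self, key, value):
            self._data[key] = value

        def derive(self, key, fn):
            """Apps on the box compute new data (e.g. a TV-taste profile)
            from raw data, without the raw data ever leaving the device."""
            self._data["derived:" + key] = fn(self._data[key])

        def grant(self, recipient, key):
            self._grants.add((recipient, key))

        def share(self, recipient, key):
            """No third party gets anything without the owner's say-so."""
            if (recipient, key) not in self._grants:
                raise PermissionError("owner has not authorised this share")
            return self._data[key]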

It’s an interesting concept and, if they can get the user experience right, a potentially groundbreaking one. Eventually, of course, it will be in your smartphone, which means that device really will be a ‘digital self’.

You can absolutely opt out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.

Friday ferretings

These things jumped out at me this week:

  • Deepfakes will influence the 2020 election—and our economy, and our prison system (Quartz) ⁠— “The problem doesn’t stop at the elections, however. Deepfakes can alter the very fabric of our economic and legal systems. Recently, we saw a deepfake video of Facebook CEO Mark Zuckerberg bragging about abusing data collected from users circulated on the internet. The creators of this video said it was produced to demonstrate the power of manipulation and had no malicious intent—yet it revealed how deceptively realistic deepfakes can be.”
  • The Slackification of the American Home (The Atlantic) — “Despite these tools’ utility in home life, it’s work where most people first become comfortable with them. ‘The membrane that divides work and family life is more porous than it’s ever been before,’ says Bruce Feiler, a dad and the author of The Secrets of Happy Families. ‘So it makes total sense that these systems built for team building, problem solving, productivity, and communication that were invented in the workplace are migrating to the family space’.”
  • You probably don’t know what your coworkers think of you. Here’s how to change that (Fast Company) — “[T]he higher you rise in an organization, the less likely you are to get an accurate picture of how other people view you. Most people want to be viewed favorably by others in a position of power. Once you move up to a supervisory role (or even higher), it is difficult to get people to give you a straight answer about their concerns.”
  • Sharing, Generosity and Gratitude (Cable Green, Creative Commons) — “David is home recovering and growing his liver back to full size. I will be at the Mayo Clinic through the end of July. After the Mayo surgeons skillfully transplanted ⅔ of David’s liver into me, he and I laughed about organ remixes, if he should receive attribution, and wished we’d have asked for a CC tattoo on my new liver.”
  • Flexibility as a key benefit of open (The Ed Techie) — “As I chatted to Dames and Lords and fiddled with my tie, I reflected on that what is needed for many of these future employment scenarios is flexibility. This comes in various forms, and people often talk about personalisation but it is more about institutional and opportunity flexibility that is important.”
  • Abolish Eton: Labour groups aim to strip elite schools of privileges (The Guardian) — “Private schools are anachronistic engines of privilege that simply have no place in the 21st century,” said Lewis. “We cannot claim to have an education system that is socially just when children in private schools continue to have 300% more spent on their education than children in state schools.”
  • I Can’t Stop Winning! (Pinboard blog) – “A one-person business is an exercise in long-term anxiety management, so I would say if you are already an anxious person, go ahead and start a business. You’re not going to feel any worse. You’ve already got the main skill set of staying up and worrying, so you might as well make some money.”
  • How To Be The Remote Employee That Proves The Stereotypes Aren’t True (Trello blog) — “I am a big fan of over-communicating in general, and I truly believe that this is a rule all remote employees should swear by.”
  • I Used Google Ads for Social Engineering. It Worked. (The New York Times) — “Ad campaigns that manipulate searchers’ behavior are frighteningly easy for anyone to run.”
  • Road-tripping with the Amazon Nomads (The Verge) — “To stock Amazon’s shelves, merchants travel the backroads of America in search of rare soap and coveted toys.”

Image from Guillermo Acuña fronts his remote Chilean retreat with large wooden staircase (Dezeen)

Friday frustrations

I couldn’t help but notice these things this week:

  • Don’t ask forgiveness, radiate intent (Elizabeth Ayer) ⁠— “I certainly don’t need a reputation as being underhanded or an organizational problem. Especially as a repeat behavior, signalling builds me a track record of openness and predictability, even as I take risks or push boundaries.”
  • When will we have flying cars? Maybe sooner than you think. (MIT Technology Review) — “An automated air traffic management system in constant communication with every flying car could route them to prevent collisions, with human operators on the ground ready to take over by remote control in an emergency. Still, existing laws and public fears mean there’ll probably have to be pilots at least for a while, even if only as a backup to an autonomous system.”
  • For Smart Animals, Octopuses Are Very Weird (The Atlantic) — “Unencumbered by a shell, cephalopods became flexible in both body and mind… They could move faster, expand into new habitats, insinuate their arms into crevices in search of prey.”
  • Cannabidiol in Anxiety and Sleep: A Large Case Series. (PubMed) — “The final sample consisted of 72 adults presenting with primary concerns of anxiety (n = 47) or poor sleep (n = 25). Anxiety scores decreased within the first month in 57 patients (79.2%) and remained decreased during the study duration. Sleep scores improved within the first month in 48 patients (66.7%) but fluctuated over time. In this chart review, CBD was well tolerated in all but 3 patients.”
  • 22 Lessons I’m Still Learning at 82 (Coach George Raveling) — “We must always fill ourselves with more questions than answers. You should never retire your mind. After you retire mentally, then you are just taking up residence in society. I do not ever just want to be a resident of society. I want to be a contributor to our communities.”
  • How Boris Johnson’s “model bus hobby” non sequitur manipulated the public discourse and his search results (BoingBoing) — “Remember, any time a politician deliberately acts like an idiot in public, there’s a good chance that they’re doing it deliberately, and even if they’re not, public idiocy can be very useful indeed.”
  • It’s not that we’ve failed to rein in Facebook and Google. We’ve not even tried. (The Guardian) — “Surveillance capitalism is not the same as digital technology. It is an economic logic that has hijacked the digital for its own purposes. The logic of surveillance capitalism begins with unilaterally claiming private human experience as free raw material for production and sales.”
  • Choose Boring Technology (Dan McKinley) — “The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.”
  • What makes a good excuse? A Cambridge philosopher may have the answer (University of Cambridge) — “Intentions are plans for action. To say that your intention was morally adequate is to say that your plan for action was morally sound. So when you make an excuse, you plead that your plan for action was morally fine – it’s just that something went awry in putting it into practice.”
  • Your Focus Is Priceless. Stop Giving It Away. (Forge) — “To virtually everyone who isn’t you, your focus is a commodity. It is being amassed, collected, repackaged and sold en masse. This makes your attention extremely valuable in aggregate. Collectively, audiences are worth a whole lot. But individually, your attention and my attention don’t mean anything to the eyeball aggregators. It’s a drop in their growing ocean. It’s essentially nothing.”

Image via @EffinBirds

There’s no viagra for enlightenment

This quotation from the enigmatic Russell Brand seemed appropriate for the subject of today’s article: the impact of so-called ‘deepfakes’ on everything from porn to politics.

First, what exactly are ‘deepfakes’? Mark Wilson explains in an article for Fast Company:

In early 2018, [an anonymous Reddit user named Deepfakes] uploaded a machine learning model that could swap one person’s face for another face in any video. Within weeks, low-fi celebrity-swapped porn ran rampant across the web. Reddit soon banned Deepfakes, but the technology had already taken root across the web–and sometimes the quality was more convincing. Everyday people showed that they could do a better job adding Princess Leia’s face to The Force Awakens than the Hollywood special effects studio Industrial Light and Magic did. Deepfakes had suddenly made it possible for anyone to master complex machine learning; you just needed the time to collect enough photographs of a person to train the model. You dragged these images into a folder, and the tool handled the convincing forgery from there.

Mark Wilson

As you’d expect, deepfakes bring up huge ethical issues, as Jessica Lindsay reports for Metro. It’s a classic case of our laws not being able to keep up with what’s technologically possible:

With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera.

Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.

[…]

This can be done to make a fake speech to misrepresent a politician’s views, or to create porn videos featuring people who did not star in them.

Jessica Lindsay

It’s not just video, either: Google’s AI is now able to translate speech from one language to another while keeping the same voice. Karen Hao embeds examples in an article for MIT Technology Review demonstrating where this is all headed.

The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

Karen Hao
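The architectural difference Hao describes is easy to sketch. The stubs below are stand-ins for what would really be large neural models; the point is simply where information about the speaker’s voice gets thrown away.

    # Stubs so the sketch runs; real systems would use neural models here.
    def speech_to_text(audio): return "hello"
    def translate(text): return "bonjour"
    def synthesise_speech(text): return b"generic-voice-audio"
    def speech_to_speech_model(audio): return b"original-voice-audio"

    def cascade_translate(audio: bytes) -> bytes:
        """Traditional pipeline: audio -> text -> translated text -> audio.
        The speaker's voice, tone and prosody are discarded at the first step."""
        return synthesise_speech(translate(speech_to_text(audio)))

    def direct_translate(audio: bytes) -> bytes:
        """End-to-end: one model maps source audio straight to target audio,
        so characteristics of the original voice can be preserved."""
        return speech_to_speech_model(audio)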

The impact on democracy could be quite shocking, with the ability to create video and audio that feels real but is actually completely fake.

However, as Mike Caulfield notes, the technology doesn’t even have to be that sophisticated to create something that can be used in a political attack.

There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.

[…]

In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here but it’s instructive. In double-tracking you record one part — a vocal or solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different, though, the track is “widened”, feeling deeper, richer. The trick is for them to be different enough that it widens the track but similar enough that they blend.

Mike Caulfield
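For the curious, double-tracking is easy to approximate in code. Here is a toy numpy sketch (the parameters are illustrative): mix a track with a copy of itself that is slightly late and slightly detuned, close enough to be heard as one voice but different enough to widen it.

    import numpy as np

    def double_track(signal, sample_rate, delay_ms=20.0, detune=1.01):
        """Mix a mono signal with a copy that is ~20ms late and ~1% detuned."""
        n = len(signal)
        # Detune the copy by resampling it at a slightly different rate.
        copy = np.interp(np.arange(n) * detune, np.arange(n), signal)
        # Delay the copy by a few milliseconds.
        pad = int(sample_rate * delay_ms / 1000)
        copy = np.concatenate([np.zeros(pad), copy])[:n]
        # Sum the two 'takes' and renormalise to avoid clipping.
        mixed = signal + 0.8 * copy
        return mixed / np.max(np.abs(mixed))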

This is where blockchain could actually be a useful technology. Caulfield often talks about the importance of ‘going back to the source’ — in other words, checking the provenance of what it is you’re reading, watching, or listening to. There’s potential here for verifying that something is actually the original document/video/audio.
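The simplest version of such a provenance check is a content hash, committed somewhere tamper-evident (a blockchain transaction, for instance) when the original is published. A minimal sketch, with illustrative names:

    import hashlib

    def fingerprint(path: str) -> str:
        """SHA-256 digest of a file: any edit, re-encode or slowdown changes it."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_original(path: str, published_digest: str) -> bool:
        """Compare a local copy against the digest committed at publication."""
        return fingerprint(path) == published_digest

A doctored clip like the slowed-down Pelosi video would fail this check immediately; the harder problem is persuading publishers to commit digests and viewers to actually check them.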

Ultimately, however, people believe what they want to believe. If they want to believe Donald Trump is an idiot, they’ll read and share things showing him in a negative light. It doesn’t really matter if it’s true or not.



What is no good for the hive is no good for the bee

So said Roman Emperor and Stoic philosopher Marcus Aurelius. In this article, I want to apply that to our use of technology as well as the stories we tell one another about that technology use.

Let’s start with an excellent post by Nolan Lawson who, around the time I started using Twitter less, deleted his account entirely and went all-in on the Fediverse. He maintains a Mastodon web client called Pinafore, and is a clear-headed thinker on all things open. The post, called Tech veganism, sums up the problem I have with holier-than-thou open advocates:

I find that there’s a bit of a “let them eat cake” attitude among tech vegan boosters, because they often discount the sheer difficulty of all this stuff. (“Let them use Linux” could be a fitting refrain.) After all, they figured it out, so why can’t you? What, doesn’t everyone have a computer science degree and six years experience as a sysadmin?

To be a vegan, all you have to do is stop eating animal products. To be a tech vegan, you have to join an elite guild of tech wizards and master their secret arts. And even then, you’re probably sneaking a forbidden bite of Google or Apple every now and then.

Nolan Lawson

It’s that second paragraph that’s the killer for me. I’m pescetarian, and probably about the equivalent of that, in Lawson’s lingo, when it comes to my tech choices. I definitely agree with him that the conversation is already shifting away from open source and free software towards what Mark Zuckerberg (shudder) calls “time well spent”:

I also suspect that tech veganism will begin to shift, if it hasn’t already. I think the focus will become less about open source vs closed source (the battle of the last decade) and more about digital well-being, especially in regards to privacy, addiction, and safety. So in this way, it may be less about switching from Windows to Linux and more about switching from Android to iOS, or from Facebook to more private channels like Discord and WhatsApp.

Nolan Lawson

This is reminiscent of Yancey Strickler’s notion of ‘dark forests’. I can definitely see more call for nuance around private and public spaces.

So much of this, though, depends on your worldview. Everyone likes the idea of ‘freedom’, but are we talking about ‘freedom from’ or ‘freedom to’? How important are different types of freedom? Should all information be available to everyone? Where do rights start and responsibilities stop (and vice-versa)?

One thing I’ve found fascinating is how the world changes and debates get left behind. For example, the idea (and importance) of Linux on the desktop is something people have been discussing for most of my adult life. At the same time, cloud computing has changed the game, with a lot of the data processing and heavy lifting being done by servers — most of which are powered by Linux!

Mark Shuttleworth, CEO of Canonical, the company behind Ubuntu Linux, said in a recent interview:

I think the bigger challenge has been that we haven’t invented anything in the Linux that was like deeply, powerfully ahead of its time… if in the free software community we only allow ourselves to talk about things that look like something that already exists, then we’re sort of defining ourselves as a series of forks and fragmentations.

Mark Shuttleworth

This is a problem that’s wider than just software. Those of us who are left-leaning are more likely to let small ideological differences dilute our combined power. That affects everything from opposing Brexit, to getting people to switch to Linux. There’s just too much noise, too many competing options.

Meanwhile, as the P2P Foundation notes, businesses swoop in and use open licenses to enclose the Commons:

[I]t is clear that these Commons have become an essential infrastructure without which the Internet could no longer function today (90% of the world’s servers run on Linux, 25% of websites use WordPress, etc.) But many of these projects suffer from maintenance and financing problems, because their development depends on communities whose means are unrelated to the size of the resources they make available to the whole world.

[…]

This situation corresponds to a form of tragedy of the Commons, but of a different nature from that which can strike material resources. Indeed, intangible resources, such as software or data, cannot by definition be over-exploited and they even increase in value as they are used more and more. But tragedy can strike the communities that participate in the development and maintenance of these digital commons. When the core of individual contributors shrinks and their strengths are exhausted, information resources lose quality and can eventually wither away.

P2P Foundation

So what should we do? One thing we’ve done with MoodleNet is to ensure that it has an AGPL license, one that Google really doesn’t like. They state perfectly the reasons why we selected it:

The primary risk presented by AGPL is that any product or service that depends on AGPL-licensed code, or includes anything copied or derived from AGPL-licensed code, may be subject to the virality of the AGPL license. This viral effect requires that the complete corresponding source code of the product or service be released to the world under the AGPL license. This is triggered if the product or service can be accessed over a remote network interface, so it does not even require that the product or service is actually distributed.

Google

So, in other words, if you run a server with AGPL code, or create a project with source code derived from it, you must make that code available to others. To me, it has the same ‘viral effect’ as the Creative Commons BY-SA license.

As Benjamin “Mako” Hill points out in a recent keynote, we need to be a bit wiser when it comes to ‘choosing a side’. Cory Doctorow, summarising Mako’s keynote, says:

[M]arkets discovered free software and turned it into “open source,” figuring out how to create developer communities around software (“digital sharecropping”) that lowered their costs and increased their quality. Then the companies used patents and DRM and restrictive terms of service to prevent users from having any freedom.

Mako says that this is usually termed “strategic openness,” in which companies take a process that would, by default, be closed, and open the parts of it that make strategic sense for the firm. But really, this is “strategic closedness” — projects that are born open are strategically enclosed by companies to allow them to harvest the bulk of the value created by these once-free systems.

[…]

Mako suggests that the time in which free software and open source could be uneasy bedfellows is over. Companies’ perfection of digital sharecropping means that when they contribute to “free” projects, all the freedom will go to them, not the public.

Cory Doctorow

It’s certainly an interesting time we live in, when the people who are pointing out all of the problems (the ‘tech vegans’) are seen as the problem, and the VC-backed companies as the disruptive champions of the people. Tech follows politics, though, I guess.


Also check out:

  • Is High Quality Software Worth the Cost? (Martin Fowler) — “I thus divide software quality attributes into external (such as the UI and defects) and internal (architecture). The distinction is that users and customers can see what makes a software product have high external quality, but cannot tell the difference between higher or lower internal quality.”
  • What the internet knows about you (Axios) — “The big picture: Finding personal information online is relatively easy; removing all of it is nearly impossible.”
  • Against Waldenponding II (ribbonfarm) — “Waldenponding is a search for meaning that is circumscribed by the what you might call the spiritual gravity field of an object or behavior held up as ineffably sacred. “

Remote work is a different beast

You might not work remotely right now, but the chances are that at some point in your career, and in some capacity, you will do. Remote work has its own challenges and benefits, which are alluded to in three articles in Fast Company that I want to highlight. The first is an article summarising a survey Google performed amongst 5,600 of its remote workers.

At the outset of the study, the team hypothesized that distributed teams might not be as productive as their centrally located counterparts. “We were a little nervous about that,” says [Veronica] Gilrane [manager of Google’s People Innovation Lab]. She was surprised to find that distributed teams performed just as well. Unfortunately, she also found that there is a lot more frustration involved in working remotely. Workers in other offices can sometimes feel burdened to sync up their schedules with the main office. They can also feel disconnected from the team.

That doesn’t surprise me at all. Even though I probably spend less time AFK (Away From Keyboard) as a remote worker than I would in an office, there’s not that performative element, where you have to look like you’re working. Sometimes work doesn’t look like work; it looks like going for a run to think about a problem, or bouncing an idea off a neighbour as you walk back to your office with a cup of tea.

The main thing, as this article points out, is that it’s really important to have an approach that focuses on results rather than time spent doing the work. You do have to have some process, though:

[I]t’s imperative that you stress disciplinary excellence; workers at home don’t have a manager peering over their shoulder, so they have to act as their own boss and maintain a strict schedule to get things done. Don’t try to dictate every aspect of their lives–remote work is effective because it offers workers flexibility, after all. Nonetheless, be sure that you’re requesting regular status updates, and that you have a system in place to measure productivity.

Fully-remote working is different to ‘working from home’ a day or two per week. It does take discipline, if only to stop raiding the biscuit tin. But it’s also a different mindset, including intentionally sharing your work much more than you’d do in a co-located setting.

Fundamentally, as Greg Galant, CEO of a fully-remote organisation, comments, it’s about trust:

“My friends always say to me, ‘How do you know if anyone is really working?’ and I always ask them, ‘How do you know if anybody is really working if they are at the office?’” says Galant. “Because the reality is, you can see somebody at their desk and they can stay late, but that doesn’t mean they’re really working.”

[…]

If managers are adhering to traditional management practices, they’re going to feel anxiety with remote teams. They’re going to want to check in constantly to make sure people are working. But checking in constantly prevents work from getting done.

Remote work is strange and difficult to describe to anyone who hasn’t experienced it. You can, for example, in the same day feel isolated and lonely, while simultaneously getting annoyed with all of the ‘pings’ and internal communication coming at you.

At the end of the day, companies need to set expectations, and remote workers need to set boundaries. It’s the only way to avoid burnout, and to ensure that what can be a wonderful experience doesn’t turn into a nightmare.


Also check out:

  • 5 Great Resources for Remote Workers (Product Hunt) — “If you’re a remote worker or spend part of your day working from outside of the office, the following tools will help you find jobs, discover the best cities for remote workers, and learn from people who have built successful freelance careers or location-independent companies.”
  • Stop Managing Your Remote Workers As If They Work Onsite (ThinkGrowth) — “Managers need to back away from their conventional views of what “working hard” looks like and instead set specific targets, explain what success looks like, and trust the team to get it done where, when, and however works best for them.”
  • 11 Tools That Allow us to Work from Anywhere on Earth as a Distributed Company (Ghost) — “In an office, the collaboration tools you use are akin to a simple device like a screwdriver. They assist with difficult tasks and lessen the amount of effort required to complete them. In a distributed team, the tools you use are more like life-support. Everything to do with distributed team tools is about clawing back some of that contextual awareness which you’ve lost by not being in the same space.”

Looking back and forward in tech

Looking back at 2018, Amber Thomas commented that, for her, a few technologies became normalised over the course of the year:

  1. Phone payments
  2. Voice-controlled assistants
  3. Drones
  4. Facial recognition
  5. Fingerprints

Apart from drones, I’ve spent the last few years actively avoiding the above. In fact, I spent most of 2018 thinking about decentralised technology, privacy, and radical politics.

However, December is always an important month for me. I come off social media, stop blogging, and turn another year older just before Christmas. It’s a good time to reflect and think about what’s gone before, and what comes next.

Sometimes, it’s possible to identify a particular stimulus to a change in thinking. For me, it was while I was watching Have I Got News For You and the panellists were shown a photo of a fashion designer who put a shoe in front of their face to avoid being recognisable. Paul Merton asked, “doesn’t he have a passport?”

Obvious, of course, but I’d recently been travelling and using the biometric features of my passport. I’ve also relented this year and use the fingerprint scanner to unlock my phone. I realised that the genie isn’t going back in the bottle here, and that everyone else was using my data — biometric or otherwise — so I might as well benefit, too.

Long story short, I’ve bought a Google Pixelbook and a Lenovo Smart Display over the Christmas period, which I’ll be using in 2019 to make my life easier. I’m absolutely trading privacy for convenience, but it’s been a somewhat frustrating couple of years trying to use nothing but Open Source tools.

I’ll have more to say about all of this in due course, but it’s worth saying that I’m still committed to living and working openly. And, of course, I’m looking forward to continuing to work on MoodleNet.

Source: Fragments of Amber

Asking Google philosophical questions

Writing in The Guardian, philosopher Julian Baggini reflects on a recent survey which asked people what they wish Google was able to answer:

The top 25 questions mostly fall into four categories: conspiracies (Who shot JFK? Did Donald Trump rig the election?); desires for worldly success (Will I ever be rich? What will tomorrow’s winning lottery numbers be?); anxieties (Do people like me? Am I good in bed?); and curiosity about the ultimate questions (What is the meaning of life? Is there a God?).

This is all hypothetical, of course, but I’m always amazed by what people type into search engines. It’s as if there’s some ‘truth’ in there, rather than just databases and algorithms. I suppose I can understand children asking voice assistants such as Alexa and Siri questions about the world, because they can’t really know how the internet works.

What Baggini points out, though, is that what we type into search engines can reflect our deepest desires. That’s why they trawl the search history of suspected murderers, and why the Twitter account Theresa May Googling is so funny.

A Google search, however, cannot give us the two things we most need: time and other people. For our day-to-day problems, a sympathetic ear remains the most powerful device for providing relief, if not a cure. For the bigger puzzles of existence, there is no substitute for long reflection, with help from the great thinkers of history. Google can lead us directly to them, but only we can spend time in their company. Search results can help us only if they are the start, not the end, of our intellectual quest.

Sadly, in the face of what has been, let’s face it, pretty amazing technological innovation over the last 25 years, we’ve forgotten the thing that makes us human: connections. Thankfully, some more progressive tech companies are beginning to realise the importance of the Humanities — including Philosophy.

Source: The Guardian