
Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “Theres so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on peoples lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed

The best way out is always through

So said Robert Frost, but I want to begin with the ending of a magnificent post from Kate Bowles. She expresses clearly how I feel sometimes when I sit down to write something for Thought Shrapnel:

[T]his morning I blocked out time, cleared space, and sat down to write — and nothing happened. Nothing. Not a word, not even a wisp of an idea. After enough time staring at the blankness of the screen I couldn’t clearly remember having had an idea, ever.

Along the way I looked at the sky, I ate a mandarin and then a second mandarin, I made a cup of tea, I watched a family of wrens outside my window, I panicked. I let email divert me, and then remembered that was the opposite of the plan. I stayed off Twitter. Panic increased.

Then I did the one thing that absolutely makes a difference to me. I asked for help. I said “I write so many stupid words in my bullshit writing job that I can no longer write and that is the end of that.” And the person I reached out to said very calmly “Why not write about the thing you’re thinking about?”

Sometimes what you have to do as a writer is sit in place long enough, and sometimes you have to ask for help. Whatever works for you, is what works.

Kate Bowles

There are so many things wrong with the world right now that sometimes I feel like I could stop working on all of the things I'm working on and spend my time just pointing them out to people.

But to what end? You don't change the world just by making people aware of things, not usually. For example, as tragic as the sentence 'the Amazon is on fire' is, it isn't in and of itself a call to action. These days, people argue about the facts themselves as well as the appropriate response.

The world is an inordinately complicated place that we seek to make sense of by not thinking as much as humanly possible. To aid and abet us in this task, we divide ourselves, either consciously or unconsciously, into groups who apply similar heuristics. The new (information) is then assimilated into the old (worldview).

I have no privileged position, no objective viewpoint from which to observe and judge the world's actions. None of us do. I'm as complicit in joining and forming in-groups and out-groups as the next person. I decide I'm going to delete my Twitter account and then end up rage-tweeting All The Things.

Thankfully, there are smart people, and not only academics, thinking about all this to figure out what we can and should do. Tim Urban, from the phenomenally successful Wait But Why, for example, has spent the last three years working on "a new language we can use to think and talk about our societies and the people inside of them". In the first chapter of a new series, he writes about the ongoing struggle between (what he calls) the 'Primitive Minds' and 'Higher Minds' of humans:

The never-ending struggle between these two minds is the human condition. It’s the backdrop of everything that has ever happened in the human world, and everything that happens today. It’s the story of our times because it’s the story of all human times.

Tim Urban

I think this is worth remembering when we spend time on social networks. And especially when we spend so much time that it becomes our default delivery method for the news of the day. Our Primitive Minds respond strongly to stimuli around fear and fornication.

When we reflect on our social media usage and the changing information landscape, the temptation is either to cut down, or to try a different information diet. Some people become the equivalent of Information Vegans, attempting to source the ‘cleanest’ morsels of information from the most wholesome, trusted, and traceable of places.

But where are those ‘trusted places’ these days? Are we as happy with the previously gold-standard news outlets such as the BBC and The New York Times as we once were? And if not, what’s changed?

The difference, I think, is the way we’ve decided to allow money to flow through our digital lives. Commercial news outlets, including those with which the BBC competes, are funded by advertising. Those adverts we see in digital spaces aren’t just showing things that we might happen to be interested in. They’ll keep on showing you that pair of shoes you almost bought last week in every space that is funded by advertising. Which is basically everywhere.

I feel like I’m saying obvious things here that everyone knows, but perhaps it bears repeating. If everyone is consuming news via social networks, and those news stories are funded by advertising, then the nature of what counts as ‘news’ starts to evolve. What gets the most engagement? How are headlines formed now, compared with a decade ago?

It's as if something hot-wires our brain when something non-threatening and potentially interesting is made available to us 'for free'. We never get to the stuff that we'd like to think defines us, because we're caught in never-ending cycles of titillation. We pay with our attention, that scarce and valuable resource.

Our attention, and more specifically how we react to our social media feeds when we're 'engaged', is valuable because it can be packaged up and sold to advertisers. But it's sold to governments too. Twitter just had to update their terms and conditions specifically because of the outcry over the Chinese government's propaganda around the Hong Kong protests.

Protesters involved in the 'umbrella revolution' in Hong Kong have recently been focusing on cutting down what we used to call CCTV cameras, but which are much more accurately described as 'facial recognition masts'.

We are living in a world where the answer to everything seems to be ‘increased surveillance’. Kids not learning fast enough in school? Track them more. Scared of terrorism? Add more surveillance into the lives of everyday citizens. And on and on.

In an essay earlier this year, Maciej Cegłowski riffed on all of this, reflecting on what he calls ‘ambient privacy’:

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

Maciej Cegłowski

One of the difficulties in resisting the 'Silicon Valley narrative' and Big Tech's complicity with governments is the danger of coming across as a neo-Luddite. Without looking very closely to understand what's going on (and having some time to reflect), it can all look like the inevitable march of progress.

So, without necessarily an answer to all this, I guess the best thing is, like Kate, to ask for help. What can we do here? What practical steps can we take? Comments are open.

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It's been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings to make them better.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

Discussing ‘fake news’ and our information literacy problem in Forbes, you can almost feel Kalev Leetaru‘s eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It's somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I'd just like to say that I've got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you're of the opinion that your voice assistant is 'always recording' and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS
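As a rough sketch of the kind of annotated record the article describes (field names are mine, not Google's), the human-review step might produce something like this:

```python
from dataclasses import dataclass

# Hypothetical sketch of the review record VRT NWS describes: an automatic
# transcript that a human reviewer corrects and annotates. Not a real schema.
@dataclass
class ReviewedSnippet:
    snippet_id: str          # anonymised ID, not tied to a user account
    auto_transcript: str     # what the speech recogniser produced
    human_transcript: str    # reviewer's correction, coughs and all
    speaker_type: str        # "man", "woman" or "child"

snippet = ReviewedSnippet(
    snippet_id="abc123",
    auto_transcript="do I need an umbrella to day",
    human_transcript="do I need an umbrella today [cough]",
    speaker_type="man",
)
print(snippet.speaker_type)  # → man
```

The point of the human pass is the delta between the two transcript fields: that correction is the training signal.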

Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) to share recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) is an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I'll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world's largest tech non-profits, is conspicuously absent from these conversations, and somehow I don't think people are going to trust governments to get involved.

For years, techies have talked about 'personal data vaults' that would let you share information in a granular way without being tracked. The BBC is currently trialling BBC Box, which could help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC
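The core idea, stripped right down, is a local store where nothing leaves without an explicit grant from the owner. Here's a minimal sketch of that permission-gated pattern; all the names are hypothetical and this is not the Databox or BBC Box API:

```python
# Hypothetical sketch of a permission-gated personal data store,
# in the spirit of BBC Box / Databox. Not the real API.
class PersonalDataVault:
    def __init__(self):
        self._data = {}          # stays on local hardware
        self._grants = set()     # (requester, key) pairs the owner approved

    def store(self, key, value):
        self._data[key] = value

    def grant(self, requester, key):
        """Owner explicitly authorises a third party to read one item."""
        self._grants.add((requester, key))

    def read(self, requester, key):
        # No grant, no data -- even the service that wrote the app can't peek.
        if (requester, key) not in self._grants:
            raise PermissionError(f"{requester} has no grant for {key}")
        return self._data[key]

vault = PersonalDataVault()
vault.store("tv_profile", ["drama", "documentary"])
vault.grant("bbc", "tv_profile")
vault.read("bbc", "tv_profile")           # allowed: owner granted access
# vault.read("advertiser", "tv_profile")  # would raise PermissionError
```

The inversion is the interesting bit: processing comes to the data, and the default answer to any third party is 'no'.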

It's an interesting concept and, if they can get the user experience right, potentially groundbreaking. Eventually, of course, it will be in your smartphone, which means that device really will be a 'digital self'.

You can absolutely opt-out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.

Friday fumblings

These were the things I came across this week that made me smile:


Image via Why WhatsApp Will Never Be Secure (Pavel Durov)

Anything invented after you’re thirty-five is against the natural order of things

I’m fond of the above quotation by Douglas Adams that I’ve used for the title of this article. It serves as a reminder to myself that I’ve now reached an age when I’ll look at a technology and wonder: why?

Despite this, I'm quite excited about the potential of two technologies that will revolutionise our digital world, both in our homes and offices and when we're out and about. Those technologies? Wi-Fi 6, as it's known colloquially, and 5G networks.

Let's take Wi-Fi 6 first, which, as Chuong Nguyen explains in an article for Digital Trends, isn't just about faster speeds:

A significant advantage for Wi-Fi 6 devices is better battery life. Though the standard promotes Internet of Things (IoT) devices being able to last for weeks, instead of days, on a single charge as a major benefit, the technology could even prove to be beneficial for computers, especially since Intel’s latest 9th-generation processors for laptops come with Wi-Fi 6 support.

Likewise, Alexis Madrigal, writing in The Atlantic, explains that mobile 5G networks bring benefits beyond streaming YouTube videos at ever-higher resolutions, but present quite a technological hurdle:

The fantastic 5G speeds require higher-frequency, shorter-wavelength signals. And the shorter the wavelength, the more likely it is to be blocked by obstacles in the world.

[…]

Ideally, [mobile-associated companies] would like a broader set of customers than smartphone users. So the companies behind 5G are also flaunting many other applications for these networks, from emergency services to autonomous vehicles to every kind of “internet of things” gadget.
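The wavelength point is easy to check with λ = c / f. The millimetre-wave bands proposed for 5G sit around 28 GHz, giving roughly centimetre-scale waves that walls and even rain block far more readily than today's ~15 cm mobile signals. (Exact band allocations vary by country; the frequencies below are just illustrative.)

```python
# Back-of-envelope: wavelength = speed of light / frequency.
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_hz):
    return C / freq_hz * 100

print(f"4G-era (~2 GHz):  {wavelength_cm(2e9):.1f} cm")   # roughly 15 cm
print(f"mmWave (~28 GHz): {wavelength_cm(28e9):.1f} cm")  # roughly 1 cm
```

A fourteen-fold drop in wavelength is why 5G needs so many more masts: each one covers far less ground.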

If you’ve been following the kerfuffle around the UK using Huawei’s technology for its 5G infrastructure, you’ll already know about the politics and security issues at stake here.

Sue Halpern, writing in The New Yorker, outlines the claimed benefits:

Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.
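The movie example is simple arithmetic, assuming (my illustrative numbers, not Halpern's) a roughly 5 GB file and a peak 5G rate of 10 Gbit/s:

```python
# Back-of-envelope check of the "two-hour movie in under four seconds" claim.
# Both figures below are illustrative assumptions, not quoted specs.
movie_gb = 5            # a typical HD feature film, in gigabytes
peak_gbit_per_s = 10    # an optimistic peak 5G rate

seconds = movie_gb * 8 / peak_gbit_per_s  # bytes -> bits, then divide by rate
print(f"{seconds:.0f} s")  # → 4 s
```

Of course, that's the hype-sheet peak rate; real-world throughput would be a good deal lower.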

But greater speeds and lower latency aren't all upside for all members of society, as I learned in this BBC Beyond Today podcast episode about Korean spy cam porn. Halpern explains:

In China, which has installed three hundred and fifty thousand 5G relays—about ten times more than the United States—enhanced geolocation, coupled with an expansive network of surveillance cameras, each equipped with facial-recognition technology, has enabled authorities to track and subordinate the country’s eleven million Uighur Muslims. According to the Times, “the practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”

Automated racism, now there’s a thing. It turns out that technologies amplify our existing prejudices. Perhaps we should be a bit more careful and ask more questions before we march down the road of technological improvements? Especially given 5G could affect our ability to predict major storms. I’m reading Low-tech Magazine: The Printed Website at the moment, and it’s pretty eye-opening about what we could be doing instead.



You need more daylight to sleep better

As an historian, I've often been fascinated by what life must have been like before the dawn of electricity. I have a love-hate relationship with artificial light. On the one hand, I use a lightbox to stave off Seasonal Affective Disorder. On the other hand, I've got (my optician tells me) not only pale blue irises but very thin corneas. That makes me photophobic, and subject on a regular basis to the kind of glare I can only imagine 'normal' people get after staring at a lightbulb for a while.

In this article, Linda Geddes describes an experiment in which she decided to forgo artificial light for a number of weeks to see what effect it had on her health and, most importantly, her sleep.

Working with sleep researchers Derk-Jan Dijk and Nayantara Santhi at the University of Surrey, I designed a programme to go cold-turkey on artificial light after dark, and to try to maximise exposure to natural light during the day – all while juggling an office job and busy family life in urban Bristol.

By the end of 2017, instead of my having to manually install something like f.lux, my devices all started to have it built in. There's a general realisation that blue light before bedtime is a bad idea. What this article points out, however, is another factor: how bright the light is that you're subjected to during the day.

Light enables us to see, but it affects many other body systems as well. Light in the morning advances our internal clock, making us more lark-like, while light at night delays the clock, making us more owlish. Light also suppresses a hormone called melatonin, which signals to the rest of the body that it’s night-time – including the parts that regulate sleep. “Apart from vision, light has a powerful non-visual effect on our body and mind, something to remember when we stay indoors all day and have lights on late into the night,” says Santhi, who previously demonstrated that the evening light in our homes suppresses melatonin and delays the timing of our sleep.

The important correlation here is between the strength of light Geddes experienced during her waking hours, and the quality of her sleep.

But when I correlated my sleep with the amount of light I was exposed to during the daytime, an interesting pattern emerged. On the brightest days, I went to bed earlier. And for every 100 lux increase in my average daylight exposure, I experienced an increase in sleep efficiency of almost 1% and got an extra 10 minutes of sleep.
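Taken literally (and it's one person's self-tracked correlation, not a clinical dose-response curve), Geddes's numbers describe a simple linear relationship:

```python
# Illustrative only: a linear reading of Geddes's self-reported correlation.
# Per +100 lux of average daylight exposure: ~1% sleep efficiency
# and ~10 minutes of extra sleep.
def estimated_sleep_gain(extra_daylight_lux):
    efficiency_gain_pct = extra_daylight_lux / 100 * 1.0
    extra_sleep_min = extra_daylight_lux / 100 * 10
    return efficiency_gain_pct, extra_sleep_min

# e.g. the difference a window seat might make over a dim office corner
pct, mins = estimated_sleep_gain(500)
print(f"+{pct:.0f}% efficiency, +{mins:.0f} min sleep")  # → +5% efficiency, +50 min sleep
```

Given that outdoor daylight is routinely tens of thousands of lux versus a few hundred indoors, even a modest shift in where you spend the day moves these numbers a lot.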

This isn’t just something that Geddes has experienced; studies have also found this kind of correlation.

In March 2007, Dijk and his colleagues replaced the light bulbs on two floors of an office block in northern England, housing an electronic parts distribution company. Workers on one floor of the building were exposed to blue-enriched lighting for four weeks; those on the other floor were exposed to white light. Then the bulbs were switched, meaning both groups were ultimately exposed to both types of light. They found that exposure to the blue-enriched white light during daytime hours improved the workers’ subjective alertness, performance, and evening fatigue. They also reported better quality and longer sleep.

So the key takeaway message?

It’s ridiculously simple. But spending more time outdoors during the daytime and dimming the lights in the evening really could be a recipe for better sleep and health. For millennia, humans have lived in synchrony with the Sun. Perhaps it’s time we got reacquainted.

Source: BBC Future