
Friday facings

This week’s links seem to have a theme about faces and looking at them through screens. I’m not sure what that says about either my network or my interests, but there we are…

As ever, let me know what resonates with you, and if you have any thoughts on what’s shared below!


The Age of Instagram Face

The human body is an unusual sort of Instagram subject: it can be adjusted, with the right kind of effort, to perform better and better over time. Art directors at magazines have long edited photos of celebrities to better match unrealistic beauty standards; now you can do that to pictures of yourself with just a few taps on your phone.

Jia Tolentino (The New Yorker)

People (especially women, though there’s increasing pressure on young men too) are literally going to see plastic surgeons with ‘Facetuned’ versions of themselves. It’s hard not to think that we’re heading for a kind of dystopia when people want to look like cartoonish versions of themselves.


What Makes A Good Person?

What I learned as a child is that most people don’t even meet the responsibilities of their positions (husband, wife, teacher, boss, politicians, whatever.) A few do their duty, and I honor them for it, because it is rare. But to go beyond that and actually be a man of honor is unbelievably rare.

Ian Welsh

This question, which I’ve been discussing with my therapist, is one I ask myself all the time. Recently, I’ve settled on Marcus Aurelius’ approach: “Waste no more time arguing about what a good man should be. Be one.”


Boredom is but a window to a sunny day beyond the gloom

Boredom can be our way of telling ourselves that we are not spending our time as well as we could, that we should be doing something more enjoyable, more useful, or more fulfilling. From this point of view, boredom is an agent of change and progress, a driver of ambition, shepherding us out into larger, greener pastures.

Neel Burton (Aeon)

As I’ve discussed before, I’m not so sure about the fetishisation of ‘boredom’. It’s good to be creative and let the mind wander. But boredom? Nah. There’s too much interesting stuff out there.


Resting Risk Face

Unlock your devices with a surgical mask that looks just like you.

I don’t usually link to products in this roundup, but I’m not sure this is 100% serious. Good idea, though!


The world’s biggest work-from-home experiment has been triggered by coronavirus

For some employees, like teachers who have conducted classes digitally for weeks, working from home can be a nightmare.
But in other sectors, this unexpected experiment has been so well received that employers are considering adopting it as a more permanent measure. For those who advocate more flexible working options, the past few weeks mark a possible step toward widespread — and long-awaited — reform.

Jessie Yeung (CNN)

Every cloud has a silver lining, I guess? Working from home is great, especially when you have a decent setup.


Setting Up Your Webcam, Lights, and Audio for Remote Work, Podcasting, Videos, and Streaming

Only you really know what level of clarity you want from each piece of your setup. Are you happy with what you have? Please, dear Lord, don’t spend any money. This is intended to be a resource if you want more and don’t know how to do it, not a stress or a judgment to anyone happy with their current setup

And while it’s a lot of fun to have a really high-quality webcam for my remote work, would I have bought it if I didn’t have a more intense need for high quality video for my YouTube stuff? Hell no. Get what you need, in your budget. This is just a resource.

This is a fantastic guide. I bought a great webcam when I saw it drop in price via CamelCamelCamel and bought a decent mic when I recorded the TIDE podcast with Dai. It really does make a difference.


Large screen phones: a challenge for UX design (and human hands)

I know it might sound like I have more questions than answers, but it seems to me that we are missing out on a very basic solution for the screen size problem. Manufacturers did so much to increase the screen size, computational power and battery capacity whilst keeping phones thin, that switching the apps navigation to the bottom should have been the automatic response to this new paradigm.

Maria Grilo (Imaginary Cloud)

The struggle is real. I invested in a new phone this week (a OnePlus 7 Pro 5G) and, unlike the phone it replaced from 2017, it’s definitely a hold-with-two-hands device.


Society Desperately Needs An Alternative Web

What has also transpired is a web of unbridled opportunism and exploitation, uncertainty and disparity. We see increasing pockets of silos and echo chambers fueled by anxiety, misplaced trust, and confirmation bias. As the mainstream consumer lays witness to these intentions, we notice a growing marginalization that propels more to unplug from these communities and applications to safeguard their mental health. However, the addiction technology has produced cannot be easily remedied. In the meantime, people continue to suffer.

Hessie Jones (Forbes)

Another call to re-decentralise the web, this time based on arguments about centralised services not being able to handle the scale of abuse and fraudulent activity.


UK Google users could lose EU GDPR data protections

It is understood that Google decided to move its British users out of Irish jurisdiction because it is unclear whether Britain will follow GDPR or adopt other rules that could affect the handling of user data.

If British Google users have their data kept in Ireland, it would be more difficult for British authorities to recover it in criminal investigations.

The recent Cloud Act in the US, however, is expected to make it easier for British authorities to obtain data from US companies. Britain and the US are also on track to negotiate a broader trade agreement.

Samuel Gibbs (The Guardian)

I’m sure this is a business decision as well, but I guess it makes sense given post-Brexit uncertainty about privacy legislation. It’s a shame, though, and a little concerning.


Enjoy this? Sign up for the weekly roundup, become a supporter, or download Thought Shrapnel Vol.1: Personal Productivity!


Header image by Luc van Loon

Friday featherings

Behold! The usual link round-up of interesting things I’ve read in the last week.

Feel free to let me know if anything particularly resonated with you via the comments section below…


Part I – What is a Weird Internet Career?

Weird Internet Careers are the kinds of jobs that are impossible to explain to your parents, people who somehow make a living from the internet, generally involving a changing mix of revenue streams. Weird Internet Career is a term I made up (it had no google results in quotes before I started using it), but once you start noticing them, you’ll see them everywhere. 

Gretchen McCulloch (All Things Linguistic)

I love this phrase, which I came across via Dan Hon’s newsletter. This is the first in a whole series of posts, which I have yet to explore in its entirety. My aim in life is now to make my career progressively more (internet) weird.


Nearly half of Americans didn’t go outside to recreate in 2018. That has the outdoor industry worried.

The Outdoor Foundation’s 2019 Outdoor Participation Report showed that while a bit more than half of Americans went outside to play at least once in 2018, nearly half did not go outside for recreation at all. Americans went on 1 billion fewer outdoor outings in 2018 than they did in 2008. The number of adolescents ages 6 to 12 who recreate outdoors has fallen four years in a row, dropping more than 3% since 2007.

The number of outings for kids has fallen 15% since 2012. The number of moderate outdoor recreation participants declined, and only 18% of Americans played outside at least once a week. 

Jason Blevins (The Colorado Sun)

One of Bruce Willis’ lesser-known films is Surrogates (2009). It’s a short, pretty average film with a really interesting central premise: most people stay at home and send their surrogates out into the world. Over a decade after the film was released, a combination of things (including virulent viruses, screen-focused leisure time, and safety fears) seems to suggest it might be a predictor of our medium-term future.


I’ll Never Go Back to Life Before GDPR

It’s also telling when you think about what lengths companies have had to go through to make the EU versions of their sites different. Complying with GDPR has not been cheap. Any online business could choose to follow GDPR by default across all regions and for all visitors. It would certainly simplify things. They don’t, though. The amount of money in data collection is too big.

Jill Duffy (OneZero)

This is a strangely-titled article, but a decent explainer on what the web looks and feels like to those outside the EU. The author is spot-on when she talks about how GDPR and the recent California Privacy Law could be applied everywhere, but they’re not. Because surveillance capitalism.


You Are Now Remotely Controlled

The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.

Shoshana Zuboff (The New York Times)

I fear that the length of Zuboff’s (excellent) book on surveillance capitalism, her use in this article of terms such as ‘epistemic inequality’, and the subtlety of her arguments may mean that she’s preaching to the choir here.


How to Raise Media-Savvy Kids in the Digital Age

The next time you snap a photo together at the park or a restaurant, try asking your child if it’s all right that you post it to social media. Use the opportunity to talk about who can see that photo and show them your privacy settings. Or if a news story about the algorithms on YouTube comes on television, ask them if they’ve ever been directed to a video they didn’t want to see.

Meghan Herbst (WIRED)

There’s some useful advice in this WIRED article, especially that given by my friend Ian O’Byrne. The difficulty I’ve found is when one of your kids becomes a teenager and companies like Google contact them directly telling them they can have full control of their accounts, should they wish…


Control-F and Building Resilient Information Networks

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment)

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted?

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f.

Roll the animated GIF!

Mike Caulfield (Hapgood)

I find it incredible, but absolutely believable, that only around 10% of internet users know how to use Ctrl-F to find something within a web page. On mobile, it’s just as easy, as there’s an option within most (all?) browsers to ‘search within page’. I like Mike’s work, as not only is it academic, it’s incredibly practical.


EdX launches for-credit credentials that stack into bachelor’s degrees

The MicroBachelors also mark a continued shift for EdX, which made its name as one of the first MOOC providers, to a wider variety of educational offerings 

In 2018, EdX announced several online master’s degrees with selective universities, including the Georgia Institute of Technology and the University of Texas at Austin.

Two years prior, it rolled out MicroMasters programs. Students can complete the series of graduate-level courses as a standalone credential or roll them into one of EdX’s master’s degrees.

That stackability was something EdX wanted to carry over into the MicroBachelors programs, Agarwal said. One key difference, however, is that the undergraduate programs will have an advising component, which the master’s programs do not. 

Natalie Schwartz (Education Dive)

This is largely a rewritten press release with a few extra links, but I found it interesting as it’s a concrete example of a couple of things. First, the ongoing shift in Higher Education towards students-as-customers. Second, the viability of microcredentials as a ‘stackable’ way to build a portfolio of skills.

Note that, as a graduate of degrees in the Humanities, I’m not saying this approach can be used for everything, but for those using Higher Education as a means to an end, this is exactly what’s required.


How much longer will we trust Google’s search results?

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

Dieter Bohn (The Verge)

I’ve used DuckDuckGo as my go-to search engine for years now. It used to be that I’d have to switch to Google for around 10% of my searches. That’s now down to zero.


Coaching – Ethics

One of the toughest situations for a product manager is when they spot a brewing ethical issue, but they’re not sure how they should handle the situation.  Clearly this is going to be sensitive, and potentially emotional. Our best answer is to discover a solution that does not have these ethical concerns, but in some cases you won’t be able to, or may not have the time.

[…]

I rarely encourage people to leave their company, however, when it comes to those companies that are clearly ignoring the ethical implications of their work, I have and will continue to encourage people to leave.

Marty Cagan (SVPG)

As someone with a sensitive radar for these things, I’ve chosen to work with ethical people and for ethical organisations. As Cagan says in this post, if you’re working for a company that ignores the ethical implications of their work, then you should leave. End of story.


Image via webcomic.name

Microcast #078 — Values-based organisations

I’ve decided to post these microcasts, which I previously made available only through Patreon, here instead.

Microcasts focus on what I’ve been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I’ve made.

Show notes

Friday floutings

Did you see these things this week? I did, and thought they were aces.

  1. Do you live in a ‘soft city’? Here’s why you probably want to (Fast Company) — “The benefits of taking a layered approach to building design—and urban planning overall—is that it also cuts down on the amount of travel by car that people need to do. If resources are assembled in a way that a person leaving their home can access everything they need by walking, biking, or taking transit, it frees up space for streets to also be layered to support these different modes.”
  2. YouTube should stop recommending garbage videos to users (Ars Technica) — “When a video finishes playing, YouTube should show the next video in the same channel. Or maybe it could show users a video selected from a list of high-quality videos curated by human YouTube employees. But the current approach—in which an algorithm tries to recommend the most engaging videos without worrying about whether they’re any good—has got to go.”
  3. Fairphone 3 is the ‘ethical’ smartphone you might actually buy (Engadget) — “Doing the right thing is often framed as giving up something. You’re not enjoying a vegetarian burger, you’re being denied the delights of red meat. But what if the ethical, moral, right choice was also the tastiest one? What if the smartphone made by the yurt-dwelling moralists was also good-looking, inexpensive and useful? That’s the question the Fairphone 3 poses.”
  4. Uh-oh: Silicon Valley is building a Chinese-style social credit system (Fast Company) — “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.”
  5. The Adults In The Room (Deadspin) — “The tragedy of digital media isn’t that it’s run by ruthless, profiteering guys in ill-fitting suits; it’s that the people posing as the experts know less about how to make money than their employees, to whom they won’t listen.”
  6. A brief introduction to learning agility (Opensource.com) — “One crucial element of adaptability is learning agility. It is the capacity for adapting to situations and applying knowledge from prior experience—even when you don’t know what to do. In short, it’s a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations.”
  7. Telegram Pushes Ahead With Plans for ‘Gram’ Cryptocurrency (The New York Times) — “In its sales pitch for the Gram, which was viewed by The New York Times, Telegram has said the new digital money will operate with a decentralized structure similar to Bitcoin, which could make it easier to skirt government regulations.”
  8. Don’t Teach Tools (Assorted Stuff) — “As Culatta notes, concentrating on specific products also locks teachers (and, by extension, their students) into a particular brand, to the advantage of the company, rather than helping them understand the broader concepts of using computing devices as learning and creative tools.”
  9. Stoic Reflections From The Gym (part 2) by Greg Sadler (Modern Stoicism) — “From a Stoic perspective, what we do or don’t make time for, particularly in relation to other things, reflects what Epictetus would call the price we actually place upon those things, on what we take to be goods or values, evils or disvalues, and the relative rankings of those in relation to each other.”

Calvin & Hobbes cartoon found via a recent post on tenpencemore

The drawbacks of Artificial Intelligence

It’s really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option?

With the advent of autonomous vehicles, these are no longer idle questions. The vehicles, which have to make split-second decisions, may have to decide whether to hit a pram containing a baby, or swerve and hit a couple of pensioners. Due to cultural differences, even that’s not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are on the opposite ends of the spectrum when it comes to saving passengers or pedestrians!
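
The hard part here isn’t writing the decision logic; it’s agreeing on the values that parameterise it. Here’s a deliberately naive Python sketch (entirely illustrative, and emphatically not how any real autonomous vehicle works; every name and number is an assumption of mine) in which the same crude ‘minimise harm’ rule flips its answer purely because of culturally contested weights:

    # Toy model only: illustrative assumptions, not a real control system.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        pedestrians_harmed: int
        passengers_harmed: int

    def harm_score(o: Outcome, ped_weight: float, pas_weight: float) -> float:
        """Lower is 'better' under this crude utilitarian rule."""
        return o.pedestrians_harmed * ped_weight + o.passengers_harmed * pas_weight

    swerve = Outcome("swerve into the barrier", pedestrians_harmed=0, passengers_harmed=2)
    stay = Outcome("stay on course", pedestrians_harmed=1, passengers_harmed=0)

    # Identical situation, opposite 'right answers', purely because of the weights:
    for label, ped_w, pas_w in [("pedestrian-sparing weights", 10.0, 1.0),
                                ("passenger-sparing weights", 1.0, 10.0)]:
        choice = min((swerve, stay), key=lambda o: harm_score(o, ped_w, pas_w))
        print(f"{label}: {choice.description}")

The code is trivial. Deciding what those weights should be, and who gets to decide, is the whole ethical problem this research surfaces.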

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.

This is why we need more people with a background in the Humanities in tech, and why we need to be having a real conversation about ethics and AI.

Of course, that’s easier said than done, particularly when those companies who are in a position to make significant strides in this regard have near-monopolies in their field and are pulling in eye-watering amounts of money. A recent example of this is Google convening an AI ethics committee, which was attacked as a smokescreen:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

 […]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw with privacy, it takes a trusted multinational body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, they’ve started that process by releasing guidelines containing seven requirements for trustworthy AI:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn’t that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.
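
Spotting this sort of bias is, at least to a first approximation, more tractable than fixing it. Here’s a minimal sketch (made-up numbers and a hypothetical CV-screening model; real audits use proper statistical toolkits) of the basic check, comparing a system’s favourable-outcome rate across groups:

    # Illustrative only: hypothetical decisions from an imaginary screening model.
    def selection_rates(decisions):
        """decisions: iterable of (group, outcome) pairs, outcome 1 = favourable."""
        totals = {}
        for group, outcome in decisions:
            totals.setdefault(group, []).append(outcome)
        return {group: sum(v) / len(v) for group, v in totals.items()}

    decisions = ([("men", 1)] * 60 + [("men", 0)] * 40
                 + [("women", 1)] * 35 + [("women", 0)] * 65)

    rates = selection_rates(decisions)
    print(rates)  # {'men': 0.6, 'women': 0.35}
    print("disparate impact ratio:", rates["women"] / rates["men"])  # ~0.58
    # Ratios well below 0.8 (the 'four-fifths rule' used in US employment
    # contexts) are a common red flag for exactly the biases described above.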

Facial recognition might be a cool way to unlock your phone, but the micro-expression analysis that made for great television in the series Lie to Me is now easily exploited in what is expected to become a $20bn industry.

The trouble with all of this is that it’s very difficult for us as individuals to make a difference. The problem needs to be tackled at a much higher level, as with GDPR. That will take time; meanwhile, the use of AI is exploding. Be careful out there.


Also check out:

Assassination markets now available on the blockchain

I first mentioned so-called ‘assassination markets’ in one of my weeknotes back in 2015, when reporting on a dinner party conversation. For those unfamiliar, the idea has been around for at least twenty years.

Here’s how Wikipedia defines them:

An assassination market is a prediction market where any party can place a bet (using anonymous electronic money and pseudonymous remailers) on the date of death of a given individual, and collect a payoff if they “guess” the date accurately. This would incentivise assassination of individuals because the assassin, knowing when the action would take place, could profit by making an accurate bet on the time of the subject’s death. Because the payoff is for accurately picking the date rather than performing the action of the assassin, it is substantially more difficult to assign criminal liability for the assassination.

Of course, the blockchain is a trustless system, so it’s perfect for this kind of thing. A new platform called Augur is a prediction market and, inevitably, one of the first things to appear on it was a set of ‘predictions’ about the death of Donald Trump in 2018:

Everyone knew that it was inevitable that assassination markets would quickly pop up on decentralized prediction market platform Augur, but that doesn’t make the fact that users are now betting on whether U.S. President Donald Trump will be assassinated by the end of the year any less jarring.

Yet this market exists, and, though not the most popular bet on Augur, more than 50 shares have been traded on it as of the time of writing. Similar markets, moreover, exist for a number of other public figures, allowing users to gamble on whether 96-year-old actress Betty White and U.S. Senator John McCain — who has been diagnosed with brain cancer — will survive until Jan. 1, 2019.

This is why ethics in technology are important. There is no such thing as a ‘neutral’ technology:

Now that assassination markets are here, a fierce debate has emerged in cryptocurrency circles over what — if anything — should be done about them, as well as who should be held responsible for these clearly-illegal death markets.

The core issue stems from the fact that, in addition to the pure revulsion that these markets should engender in any decent human being, they also create a financial incentive for someone to place a large bet that a public figure will be assassinated and then murder that person for profit. Consequently, the mere presence of these markets makes it more likely that these events will occur, however slim that increase may be.

Interesting times, indeed.

Source: CCN

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

Is it pointless to ban autonomous killing machines?

The authors do have a point:

Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

Source: Aeon
