
Friday flaggings

As usual, a mixed bag of goodies, just like you used to get from your favourite sweet shop as a kid. Except I don’t hold the bottom of the bag, so you get full value.

Let me know which you found tasty and which ones suck (if you’ll pardon the pun).


Andrei Tarkovsky’s Message to Young People: “Learn to Be Alone,” Enjoy Solitude

I don’t know… I think I’d like to say only that [young people] should learn to be alone and try to spend as much time as possible by themselves. I think one of the faults of young people today is that they try to come together around events that are noisy, almost aggressive at times. This desire to be together in order to not feel alone is an unfortunate symptom, in my opinion. Every person needs to learn from childhood how to spend time with oneself. That doesn’t mean he should be lonely, but that he shouldn’t grow bored with himself because people who grow bored in their own company seem to me in danger, from a self-esteem point of view.

Andrei Tarkovsky

This article in Open Culture quotes the film-maker Andrei Tarkovsky. Having just finished my first set of therapy sessions, I have to say that the metaphor of “putting on your own oxygen mask before helping others” would be a good takeaway from them. That sounds selfish, but as Tarkovsky points out here, other approaches can lead to the destruction of self-esteem.


Being a Noob

[T]here are two sources of feeling like a noob: being stupid, and doing something novel. Our dislike of feeling like a noob is our brain telling us “Come on, come on, figure this out.” Which was the right thing to be thinking for most of human history. The life of hunter-gatherers was complex, but it didn’t change as much as life does now. They didn’t suddenly have to figure out what to do about cryptocurrency. So it made sense to be biased toward competence at existing problems over the discovery of new ones. It made sense for humans to dislike the feeling of being a noob, just as, in a world where food was scarce, it made sense for them to dislike the feeling of being hungry.

Paul Graham

I’m not sure about the evolutionary framing, but there’s definitely something in this about having the confidence (and humility) to be a ‘noob’ and learn things as a beginner.


You Aren’t Communicating Nearly Enough

Imagine you were to take two identical twins and give them the same starter job, same manager, same skills, and the same personality. One competently does all of their work behind a veil of silence, not sharing good news, opportunities, or challenges, but just plugs away until asked for a status update. The other does the same level of work but communicates effectively, keeping their manager and stakeholders proactively informed. Which one is going to get the next opportunity for growth?

Michael Natkin

I absolutely love this post. As a Product Manager, I’ve been talking repeatedly recently about making our open-source project ‘legible’. As remote workers, that means over-communicating and, as pointed out in this post, being proactive in that communication. Highly recommended.


The Boomer Blockade: How One Generation Reshaped the Workforce and Left Everyone Behind

This is a profound trend. The average age of incoming CEOs for S&P 500 companies has increased about 14 years over the last 14 years.

From 1980 to 2001 the average age of a CEO dropped four years, and then from 2005 to 2019 the average incoming age of new CEOs increased 14 years!

This means that the average birth year of a CEO has not budged since 2005. The best predictor of becoming a CEO of our most successful modern institutions?

Being a baby boomer.

Paul Millerd

Wow. This, via Marginal Revolution, pretty much speaks for itself.


The Ed Tech suitcase

Consider packing a suitcase for a trip. It contains many different items – clothes, toiletries, books, electrical items, maybe food and drink or gifts. Some of these items bear a relationship to others, for example underwear, and others are seemingly unrelated, for example a hair dryer. Each brings their own function, which has a separate existence and relates to other items outside of the case, but within the case, they form a new category, that of “items I need for my trip.” In this sense the suitcase resembles the ed tech field, or at least a gathering of ed tech individuals, for example at a conference.

If you attend a chemistry conference and have lunch with strangers, it is highly likely they will nearly all have chemistry degrees and PhDs. This is not the case at an ed tech conference, where the lunch table might contain people with expertise in computer science, philosophy, psychology, art, history and engineering. This is a strength of the field. The chemistry conference suitcase then contains just socks (but of different types), but the ed tech suitcase contains many different items. In this perspective then the aim is not to make the items of the suitcase the same, but to find means by which they meet the overall aim of usefulness for your trip, and are not random items that won’t be needed. This suggests a different way of approaching ed tech beyond making it a discipline.

Martin Weller

At the start of this year, it became (briefly) fashionable among ageing (mainly North American) men to state that they had “never been an edtech guy”. Followed by something something pedagogy, or something something people. In this post, Martin Weller uses a handy metaphor to explain that edtech may not be a discipline, but it’s a useful field (or area of focus) nonetheless.


Why Using WhatsApp is Dangerous

Backdoors are usually camouflaged as “accidental” security flaws. In the last year alone, 12 such flaws have been found in WhatsApp. Seven of them were critical – like the one that got Jeff Bezos. Some might tell you WhatsApp is still “very secure” despite having 7 backdoors exposed in the last 12 months, but that’s just statistically improbable.

[…]

Don’t let yourself be fooled by the tech equivalent of circus magicians who’d like to focus your attention on one isolated aspect all while performing their tricks elsewhere. They want you to think about end-to-end encryption as the only thing you have to look at for privacy. The reality is much more complicated. 

Pavel Durov

Facebook products are bad for you, for society, and for the planet. Choose alternatives and encourage others to do likewise.


Why private micro-networks could be the future of how we connect

The current social-media model isn’t quite right for family sharing. Different generations tend to congregate in different places: Facebook is Boomer paradise, Instagram appeals to Millennials, TikTok is GenZ central. (WhatsApp has helped bridge the generational divide, but its focus on messaging is limiting.)

Updating family about a vacation across platforms—via Instagram stories or on Facebook, for example—might not always be appropriate. Do you really want your cubicle pal, your acquaintance from book club, and your high school frenemy to be looped in as well?

Tanya Basu

Some apps are just before their time. Take Path, for example, which my family used for almost the entire eight years it was around, from 2010 to 2018. The interface was great, the experience cosy, and the knowledge that you weren’t sharing with everyone outside of a close circle? Priceless.


‘Anonymized’ Data Is Meaningless Bullshit

While one data broker might only be able to tie my shopping behavior to something like my IP address, and another broker might only be able to tie it to my rough geolocation, that’s ultimately not much of an issue. What is an issue is what happens when those “anonymized” data points inevitably bleed out of the marketing ecosystem and someone even more nefarious uses it for, well, whatever—use your imagination. In other words, when one data broker springs a leak, it’s bad enough—but when dozens spring leaks over time, someone can piece that data together in a way that’s not only identifiable but chillingly accurate.

Shoshana Wodinsky

This idea of cumulative harm is a particularly difficult one to explain (and prove) not only in the world of data, but in every area of life.
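Wodinsky’s point about many small leaks adding up can be made concrete with a toy sketch. Assuming two hypothetical brokers who each hold “anonymized” rows keyed only by quasi-identifiers (here, a postcode prefix and birth year), a simple join shows how separately harmless datasets collapse into an identifiable profile:

```python
# Toy illustration: two "anonymized" datasets, each fairly harmless alone,
# become identifying once joined on shared quasi-identifiers.
# All broker names, fields, and values below are hypothetical.

# Broker A: shopping behaviour keyed only by (postcode prefix, birth year)
broker_a = [
    {"zip": "NE46", "birth_year": 1980, "purchases": ["running shoes"]},
    {"zip": "SW1A", "birth_year": 1975, "purchases": ["cigars"]},
]

# Broker B: rough geolocation pings, keyed the same way
broker_b = [
    {"zip": "NE46", "birth_year": 1980, "locations": ["gym", "school"]},
]

def link(a_rows, b_rows):
    """Join the two leaks on the quasi-identifier pair (zip, birth_year)."""
    merged = []
    for a in a_rows:
        for b in b_rows:
            if (a["zip"], a["birth_year"]) == (b["zip"], b["birth_year"]):
                merged.append({**a, **b})
    return merged

# Two leaks later, the "anonymous" rows collapse into one profile
profiles = link(broker_a, broker_b)
print(profiles)
```

With only a handful of quasi-identifiers, most people in a dataset are unique, which is why each extra leak narrows the candidate set so quickly.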


“Hey Google, stop tracking me”

Google recently invented a third way to track who you are and what you view on the web.

[…]

Each and every install of Chrome, since version 54, has generated a unique ID. Depending upon which settings you configure, the unique ID may be longer or shorter.

[…]

So every time you visit a Google web page or use a third party site which uses some Google resource, this ID is sent to Google and can be used to track which website or individual page you are viewing. As Google’s services such as scripts, captchas and fonts are used extensively on the most popular web sites, it’s likely that Google tracks most web pages you visit.

Magic Lasso

Use Firefox. Use multi-account containers and extensions that protect your privacy.
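The mechanism the Magic Lasso piece describes is simple enough to sketch. Assuming a hypothetical header name and ID value (the real header and format are Chrome implementation details), a stable per-install identifier sent with every request lets any server that receives it stitch individual page views into a browsing history:

```python
# Minimal sketch of ID-based tracking. The header name and ID here are
# hypothetical, standing in for any stable per-install identifier.
from collections import defaultdict

# Server-side log, keyed by the persistent client ID
history = defaultdict(list)

def handle_request(headers, page_url):
    """Record which page this client installation is viewing."""
    client_id = headers.get("X-Unique-Install-Id")  # hypothetical header
    if client_id:
        history[client_id].append(page_url)

# Because fonts, scripts, and captchas are embedded on unrelated sites,
# the same ID arrives from many different pages:
handle_request({"X-Unique-Install-Id": "abc123"}, "news.example/article")
handle_request({"X-Unique-Install-Id": "abc123"}, "health.example/symptoms")

print(history["abc123"])  # both visits now linked to one installation
```

The point is that no cookies are needed: any stable identifier sent automatically with requests does the same job.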


The Golden Age of the Internet and social media is over

In the last year I have seen more and more researchers like danah boyd suggesting that digital literacies are not enough. Given that some on the Internet have weaponized these tools, I believe she is right. Moving beyond digital literacies means thinking about the epistemology behind digital literacies and helping to “build the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their view firm” (boyd, March 9, 2018). We can still rely on social media for our news but we really owe it to ourselves to do better in further developing digital literacies, and knowing that just because we have discussions through screens that we should not be so narcissistic to believe that we MUST be right or that the other person is simply an idiot.

Jimmy Young

I’d argue, as I did recently in this talk, that what Young and boyd are talking about here is actually a central tenet of digital literacies.


Image via Introvert doodles



Friday festoonings

Check out these things I read and found interesting this week. Thanks to some positive feedback, I’ve carved out time for some commentary, and changed the way this link roundup is set out.

Let me know what you think! What did you find most interesting?


Maps Are Biased Against Animals

Critics may say that it is unreasonable to expect maps to reflect the communities or achievements of nonhumans. Maps are made by humans, for humans. When beavers start Googling directions to a neighbor’s dam, then their homes can be represented! For humans who use maps solely to navigate—something that nonhumans do without maps—man-made roads are indeed the only features that are relevant. Following a map that includes other information may inadvertently lead a human onto a trail made by and for deer.

But maps are not just tools to get from points A to B. They also relay new and learned information, document evolutionary changes, and inspire intrepid exploration. We operate on the assumption that our maps accurately reflect what a visitor would find if they traveled to a particular area. Maps have immense potential to illustrate the world around us, identifying all the important features of a given region. By that definition, the current maps that most humans use fall well short of being complete. Our definition of what is “important” is incredibly narrow.

Ryan Huling (WIRED)

Cartography is an incredibly powerful tool. We’ve known for a long time that “the map is not the territory” but perhaps this is another weapon in the fight against climate change and the decline in diversity of species?


Why Actually Principled People Are Difficult (Glenn Greenwald Edition)

Then you get people like Greenwald, Assange, Manning and Snowden. They are polarizing figures. They are loved or hated. They piss people off.

They piss people off precisely because they have principles they consider non-negotiable. They will not do the easy thing when it matters. They will not compromise on anything that really matters.

That’s breaking the actual social contract of “go along to get along”, “obey authority” and “don’t make people uncomfortable.” I recently talked to a senior activist who was uncomfortable even with the idea of yelling at powerful politicians. It struck them as close to violence.

So here’s the thing, people want men and women of principle to be like ordinary people.

They aren’t. They can’t be. If they were, they wouldn’t do what they do. Much of what you may not like about a Greenwald or Assange or Manning or Snowden is why they are what they are. Not just the principle, but the bravery verging on recklessness. The willingness to say exactly what they think, and do exactly what they believe is right even if others don’t.

Ian Welsh

Activists like Greta Thunberg and Edward Snowden are the closest we get to superheroes, to people who stand for the purest possible version of an idea. This is why we need them — and why we’re so disappointed when they turn out to be human after all.


Explicit education

Students’ not comprehending the value of engaging in certain ways is more likely to be a failure in our teaching than their willingness to learn (especially if we create a culture in which success becomes exclusively about marks and credentialization). The question we have to ask is if what we provide as ‘university’ goes beyond the value of what our students can engage with outside of our formal offer. 

Dave White

This is a great post by Dave, who I had the pleasure of collaborating with briefly during my stint at Jisc. I definitely agree that any organisation walks a dangerous path when it becomes overly fixated on the ‘how’ instead of the ‘what’ and the ‘why’.


What Are Your Rules for Life? These 11 Expressions (from Ancient History) Might Help

The power of an epigram or one of these expressions is that they say a lot with a little. They help guide us through the complexity of life with their unswerving directness. Each person must, as the retired USMC general and former Secretary of Defense Jim Mattis, has said, “Know what you will stand for and, more important, what you won’t stand for.” “State your flat-ass rules and stick to them. They shouldn’t come as a surprise to anyone.”

Ryan Holiday

Of the 11 expressions here, I have to say that other than memento mori (“remember you will die”) I particularly like semper anticus (“always forward”) which I’m going to print out in a fancy font and stick on the wall of my home office.


Dark Horse Discord

In a hypothetical world, you could get a Discord (or whatever is next) link for your new job tomorrow – you read some wiki and meta info, sort yourself into your role, and then are grouped with the people who you need to collaborate with on an as-needed basis. All wrapped in one platform. Maybe you have an HR complaint – drop it in #HR where you can’t read the messages but they can, so it’s a blind 1-way conversation. Maybe there is a #help channel, where you write your problems and a bot pings people who have expertise based on keywords. There’s a lot of things you can do with this basic design.

Mule’s Musings

What is described in this post is a bit of a stretch, but I can see it: a world where work is organised a bit like how gamers organise themselves in chat channels. Something to keep an eye on, as the interplay between what’s ‘normal’ and what’s possible with communications technology changes and evolves.


The Edu-Decade That Was: Unfounded Optimism?

What made the last decade so difficult is how education institutions let corporations control the definitions so that a lot of “study and ethical practice” gets left out of the work. With the promise of ease of use, low-cost, increased student retention (or insert unreasonable-metric-claim here), etc. institutions are willing to buy into technology without regard to accessibility, scalability, equity and inclusion, data privacy or student safety, in hope of solving problem X that will then get to be checked off of an accreditation list. Or worse, with the hope of not having to invest in actual people and local infrastructure.

Geoff Cain (Brainstorm in progress)

It’s nice to see a list of some positives that came out of the last decade, and for microcredentials and badging to be on that list.


When Is a Bird a ‘Birb’? An Extremely Important Guide

First, let’s consider the canonized usages. The subreddit r/birbs defines a birb as any bird that’s “being funny, cute, or silly in some way.” Urban Dictionary has a more varied set of definitions, many of which allude to a generalized smallness. A video on the YouTube channel Lucidchart offers its own expansive suggestions: All birds are birbs, a chunky bird is a borb, and a fluffed-up bird is a floof. Yet some tension remains: How can all birds be birbs if smallness or cuteness are in the equation? Clearly some birds get more recognition for an innate birbness.

Asher Elbein (Audubon magazine)

A fun article, but also an interesting one when it comes to ambiguity, affinity groups, and internet culture.


Why So Many Things Cost Exactly Zero

“Now, why would Gmail or Facebook pay us? Because what we’re giving them in return is not money but data. We’re giving them lots of data about where we go, what we eat, what we buy. We let them read the contents of our email and determine that we’re about to go on vacation or we’ve just had a baby or we’re upset with our friend or it’s a difficult time at work. All of these things are in our email that can be read by the platform, and then the platform’s going to use that to sell us stuff.”

Fiona Scott Morton (Yale School of Management), quoted by Peter Coy (Bloomberg Businessweek)

Regular readers of Thought Shrapnel know all about surveillance capitalism, but it’s good to see these explainers making their way to the more mainstream business press.


Your online activity is now effectively a social ‘credit score’

The most famous social credit system in operation is that used by China’s government. It “monitors millions of individuals’ behavior (including social media and online shopping), determines how moral or immoral it is, and raises or lowers their “citizen score” accordingly,” reported Atlantic in 2018.

“Those with a high score are rewarded, while those with a low score are punished.” Now we know the same AI systems are used for predictive policing to round up Muslim Uighurs and other minorities into concentration camps under the guise of preventing extremism.

Violet Blue (Engadget)

Some (more prudish) people will write this article off because it discusses sex workers, porn, and gay rights. But the truth is that all kinds of censorship start with marginalised groups. To my mind, we’re already on a trajectory away from Silicon Valley and towards Chinese technology. Will we be able to separate the tech from the morality?


Panicking About Your Kids’ Phones? New Research Says Don’t

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

Nathaniel Popper (The New York Times)

Kids and screen time is just the latest (extended) moral panic. Overuse of anything causes problems: smartphones, games consoles, and TV included. What we need to do is help our children find balance in all of this, which can be difficult for the first generation of parents navigating it on the frontline.


Gorgeous header art via the latest Facebook alternative, planetary.social

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk.

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while someone ‘friendly’ was in the White House, but asking what comes next.

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit means that there’s no attention or cash left.

If we stop and think about things for a second, we probably don’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains the risks, arguing that the risk is shifted to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us have, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces that we have to stop this are collective action, and governmental action. My concern is that we don’t have the digital savvy to do the former, and there’s definitely the lack of will in respect of the latter. Troubling times.

Friday fermentations

I boiled the internet and this was what remained:

  • I Quit Social Media for a Year and Nothing Magical Happened (Josh C. Simmons) — “A lot of social media related aspects of my life are different now – I’m not sure they’re better, they’re just different, but I can confidently say that I prefer this normal to last year’s. There’s a bit of rain with all of the sunshine. I don’t see myself ever going back to social media. I don’t see the point of it, and after leaving for a while, and getting a good outside look, it seems like an abusive relationship – millions of workers generating data for tech-giants to crunch through and make money off of. I think that we tend to forget how we were getting along pretty well before social media – not everything was idyllic and better, but it was fine.”
  • Face recognition, bad people and bad data (Benedict Evans) — “My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale – so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.”
  • Would the Internet Be Healthier Without ‘Like’ Counts? (WIRED) — “Online, value is quantifiable. The worth of a person, idea, movement, meme, or tweet is often based on a tally of actions: likes, retweets, shares, followers, views, replies, claps, and swipes-up, among others. Each is an individual action. Together, though, they take on outsized meaning. A YouTube video with 100,000 views seems more valuable than one with 10, even though views—like nearly every form of online engagement—can be easily bought. It’s a paradoxical love affair. And it’s far from an accident.”
  • Are Platforms Commons? (On The Horizon) — “[W]hat if ecosystems were constructed so that they were governed by the participants, rather by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well?”
  • Depression and anxiety threatened to kill my career. So I came clean about it (The Guardian) — “To my surprise, far from rejecting me, students stayed after class to tell me how sorry they were. They left condolence cards in my mailbox and sent emails to let me know they were praying for my family. They stopped by my office to check on me. Up to that point, I’d been so caught up in my despair that it never occurred to me that I might be worthy of concern and support. Being accepted despite my flaws touched me in ways that are hard to express.”
  • Absolute scale corrupts absolutely (apenwarr) — “Here’s what we’ve lost sight of, in a world where everything is Internet scale: most interactions should not be Internet scale. Most instances of most programs should be restricted to a small set of obviously trusted people. All those people, in all those foreign countries, should not be invited to read Equifax’s PII database in Argentina, no matter how stupid the password was. They shouldn’t even be able to connect to the database. They shouldn’t be able to see that it exists. It shouldn’t, in short, be on the Internet.”
  • The Automation Charade (Logic magazine) — “The problem is that the emphasis on technological factors alone, as though “disruptive innovation” comes from nowhere or is as natural as a cool breeze, casts an air of blameless inevitability over something that has deep roots in class conflict. The phrase “robots are taking our jobs” gives technology agency it doesn’t (yet?) possess, whereas “capitalists are making targeted investments in robots designed to weaken and replace human workers so they can get even richer” is less catchy but more accurate.”
  • The ambitious plan to reinvent how websites get their names (MIT Technology Review) — “The system would be based on blockchain technology, meaning it would be software that runs on a widely distributed network of computers. In theory, it would have no single point of failure and depend on no human-run organization that could be corrupted or co-opted.”
  • O whatever God or whatever ancestor that wins in the next life (The Main Event) — “And it begins to dawn on you that the stories were all myths and the epics were all narrated by the villains and the history books were written to rewrite the histories and that so much of what you thought defined excellence merely concealed grift.”
  • A Famous Argument Against Free Will Has Been Debunked (The Atlantic) — “In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.”

Educational institutions are at a crossroads of relevance

One of the things that attracted me to the world of Open Badges and digital credentialing back in 2011 was the question of relevance. As a Philosophy graduate, I’m absolutely down with the idea of a broad, balanced education, and learning as a means of human flourishing.

However, in a world where we measure schools, colleges, and universities through an economic lens, it’s inevitable that learners do so too. As I’ve said in presentations and to clients many times, I want my children to choose to go to university because it’s the right choice for them, not because they have to.

In an article in Forbes, Brandon Busteed notes that we’re on the verge of a huge change in Higher Education:

This shift will go down as the biggest disruption in higher education whereby colleges and universities will be disintermediated by employers and job seekers going direct. Higher education won’t be eliminated from the model; degrees and other credentials will remain valuable and desired, but for a growing number of young people they’ll be part of getting a job as opposed to college as its own discrete experience. This is already happening in the case of working adults and employers that offer college education as a benefit. But it will soon be true among traditional age students. Based on a Kaplan University Partners-QuestResearch study I led and which was released today, I predict as many as one-third of all traditional students in the next decade will “Go Pro Early” in work directly out of high school with the chance to earn a college degree as part of the package.

This is true to some degree in the UK as well, through Higher Apprenticeships. University study becomes a part-time deal with the ‘job’ paying for fees. It’s easy to see how this could quickly become a two-tier system for rich and poor.

A “job-first, college included model” could well become one of the biggest drivers of both increasing college completion rates in the U.S. and reducing the cost of college. In the examples of employers offering college degrees as benefits, a portion of the college expense will shift to the employer, who sees it as a valuable talent development and retention strategy with measurable return on investment benefits. This is further enhanced through bulk-rate tuition discounts offered by the higher educational institutions partnering with these employers. Students would still be eligible for federal financial aid, and they’d be making an income while going to college. To one degree or another, this model has the potential to make college more affordable for more people, while lowering or eliminating student loan debt and increasing college enrollments. It would certainly help bridge the career readiness gap that many of today’s college graduates encounter.

The ‘career readiness’ that Busteed discusses here is an interesting concept, and one that I think has been invented by employers who don’t want to foot the bill for training. Certainly, my parents’ generation weren’t supposed to be immediately ready for employment straight after their education — and, of course, they weren’t saddled with student debt, either.

Related, in my mind, is the way that we treat young people as data to be entered on a spreadsheet. This is managerialism at its worst. Back when I was a teacher and a form tutor, I remember how sorry I felt for the young people in my charge, who were effectively moved around a machine for ‘processing’ them.

Now, in an article for The Guardian, Jeremy Hannay tells it like it is for those who don’t have an insight into the Kafkaesque world of schools:

Let me clear up this edu-mess for you. It’s not Sats. It’s not workload. The elephant in the room is high-stakes accountability. And I’m calling bullshit. Our education system actively promotes holding schools, leaders and teachers at gunpoint for a very narrow set of test outcomes. This has long been proven to be one of the worst ways to bring about sustainable change. It is time to change this educational paradigm before we have no one left in the classroom except the children.

Just like our dog-eat-dog society in the UK could be much more collaborative, so our education system badly needs remodelling. We’ve deprofessionalised teaching, and introduced a managerial culture. Things could be different, as they are elsewhere in the world.

In such systems – and they do exist in some countries, such as Finland and Canada, and even in some brave schools in this country – development isn’t centred on inspection, but rather professional collaboration. These schools don’t perform regular observations and monitoring, or fire out over-prescriptive performance policies. Instead, they discuss and design pedagogy, engage in action research, and regularly perform activities such as learning and lesson study. Everyone understands that growing great educators involves moments of brilliance and moments of mayhem.

That’s the key: “moments of brilliance and moments of mayhem”. Ironically, bureaucratic, hierarchical systems cannot cope with amazing teachers, because they’re to some extent unpredictable. You can’t put them in a box (on a spreadsheet).

Actually, perhaps it’s not the hierarchy per se, but the power dynamics, as Richard D. Bartlett points out in this post.

Yes, when a hierarchical shape is applied to a human group, it tends to encourage coercive power dynamics. Usually the people at the top are given more importance than the rest. But the problem is the power, not the shape. 

What we’re doing is retro-fitting the worst forms of corporate power dynamics onto education and expecting everything to be fine. Newsflash: learning is different to work, and always will be.

Interestingly, Bartlett defines three different forms of power dynamics, which I think is enlightening:

Follett coined the terms “power-over” and “power-with” in 1924. Starhawk adds a third category “power-from-within”. These labels provide three useful lenses for analysing the power dynamics of an organisation. With apologies to the original authors, here’s my definitions:

  • power-from-within or empowerment — the creative force you feel when you’re making art, or speaking up for something you believe in
  • power-with or social power — influence, status, rank, or reputation that determines how much you are listened to in a group
  • power-over or coercion — power used by one person to control another

The problem with educational institutions, I feel, is that we’ve largely done away with empowerment and social power, and put all of our eggs in the basket of coercion.


Also check out:

  • Working collaboratively and learning cooperatively (Harold Jarche) — “Two types of behaviours are necessary in the network era workplace — collaboration and cooperation. Cooperation is not the same as collaboration, though they are complementary.”
  • Learning Alignment Model (Tom Barrett) – “It is not a step by step process to design learning, but more of a high-level thinking model to engage with that uncovers some interesting potential tensions in our classroom work.”
  • A Definition of Academic Innovation (Inside Higher Ed) – “What if academic innovation was built upon the research and theory of our field, incorporating social constructivist, constructionist and activity theory?”

Intimate data analytics in education

The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

[…]

A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

As Williamson points out, the collection of ‘intimate data’ is especially concerning in the wake of the Cambridge Analytica revelations.

Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.

There’s a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.

The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.

Thankfully, Williamson cites the work of academics who are proposing a different way forward. Something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.

One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.

The trouble, of course, is that it’s not enough for academics to write papers about things. Or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of ‘precision education’ have their way, I wonder who will actually create something meaningful to oppose their technocratic worldview.

Source: Code Acts in Education

Data-driven society: utopia or dystopia?

Good stuff from (Lord) Jim Knight, who cites part of his speech in the House of Lords about data privacy:

The use of data to fuel our economy is critical. The technology and artificial intelligence it generates has a huge power to enhance us as humans and to do good. That is the utopia we must pursue. Doing nothing heralds a dystopian outcome, but the pace of change is too fast for us legislators, and too complex for most of us to fathom. We therefore need to devise a catch-all for automated or intelligent decisioning by future data systems. Ethical and moral clauses could and should, I argue, be forced into terms of use and privacy policies.

Jim’s a great guy, and went out of his way to help me in 2017. It’s great to have someone with his ethics and clout in a position of influence.

Source: Medium

Commit to improving your security in 2018

We don’t live in a cosy world where everyone hugs fluffy bunnies who shoot rainbows out of their eyes. Hacks and data breaches affect everyone:

If you aren’t famous enough to be a target, you may still be a victim of a mass data breach. Whereas passwords are usually stored in hashed or encrypted form, answers to security questions are often stored — and therefore stolen — in plain text, as users entered them. This was the case in the 2015 breach of the extramarital encounters site Ashley Madison, which affected 32 million users, and in some of the Yahoo breaches, disclosed over the past year and a half, which affected all of its three billion accounts.
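The distinction the quote draws, hashed passwords versus security answers stored as entered, is worth making concrete. Here’s a minimal Python sketch using the standard library; the function names and the iteration count are illustrative only (real deployments should follow current guidance on iteration counts), and it shows why a breach exposes a plain-text security answer verbatim while a properly hashed password still has to be cracked:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; production systems should use a higher count

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted, slow hash. A breach leaks only salt + digest, not the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

# By contrast, a security answer stored "as users entered it" needs no cracking at all:
security_answer = "Smith"  # plain text: an attacker with the database reads it directly

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The salt ensures two users with the same password get different digests, and the deliberately slow key-derivation function makes brute-forcing a stolen database expensive; none of those protections apply to an answer stored in the clear.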

Some of it isn’t our fault, however. For example, you can bypass PayPal’s two-factor authentication by opting to answer questions about your place of birth and mother’s maiden name. This is not difficult information for hackers to obtain:

According to Troy Hunt, a cybersecurity expert, organizations continue to use security questions because they are easy to set up technically, and easy for users. “If you ask someone their favorite color, that’s not a drama,” Mr. Hunt said. “They’ll be able to give you a straight answer. If you say, ‘Hey, please download this authenticator app and point the camera at a QR code on the screen,’ you’re starting to lose people.” Some organizations have made a risk-based decision to retain this relatively weak security measure, often letting users opt for it over two-factor authentication, in the interest of getting people signed up.

Remaining secure online is a constantly moving target, and one that we would all do well to spend a bit more time thinking about. These principles by the EFF are a good starting point for conversations we should be having this year.

Source: The New York Times

GDPR could break the big five’s monopoly stranglehold on our data

Almost everyone has one or more accounts with the following companies: Apple, Amazon, Facebook, Google, and Microsoft. Between them they know more about you than your family and the state apparatus of your country combined.

However, 2018 could be the year that changes all that, all thanks to the General Data Protection Regulation (GDPR), as this article explains.

There is legitimate fear that GDPR will threaten the data-profiling gravy train. It’s a direct assault on the surveillance economy, enforced by government regulators and an army of class-action lawyers. “It will require such a rethinking of the way Facebook and Google work, I don’t know what they will do,” says Jonathan Taplin, author of Move Fast and Break Things, a book that’s critical of the platform economy. Companies could still serve ads, but they would not be able to use data to target someone’s specific preferences without their consent. “I saw a study that talked about the difference in value of an ad if platforms track information versus do not track,” says Reback. “If you just honor that, it would cut the value Google could charge for an ad by 80 percent.”

If this were any other industry, these monolithic companies would already have been broken up. However, there may be another, technical, way of restricting their dominance: forcing them to be interoperable so that users can move their data between platforms.

Portability would break one of the most powerful dynamics cementing Big Tech dominance: the network effect. People want to use the social media site their friends use, forcing startups to swim against a huge tide. Competition is not a click away, as Google’s Larry Page once said; the costs of switching are too high. But if you could use a competing social media site with the confidence that you’ll reach all your friends, suddenly the Facebook lock gets jimmied open. This offers the opportunity for competition on the quality and usability of the service rather than the presence of friends.

Source: The American Prospect
