The supermarket is a panopticon

    My son’s now old enough to get ‘loyalty cards’ for supermarkets, coffee shops, and places to eat. He thinks this is great: free drinks! money-off vouchers! What’s not to like? On a recent car journey, I explained why the only loyalty card I use is the one for the Co-op, and introduced him to the murky world of data brokers.

    In this article for The Atlantic, Ian Bogost describes the extensive data collection retailers use to personalise marketing. This data collection not only predicts but also influences consumer behaviour, raising ethical concerns about the erosion of privacy and democratic ideals. Bogost argues that this data-driven approach shifts the balance of power, allowing companies to manipulate consumer preferences.

    In marketing, segmentation refers to the process of dividing customers into different groups, in order to make appeals to them based on shared characteristics. Though always somewhat artificial, segments used to correspond with real categories or identities—soccer moms, say, or gamers. Over decades, these segments have become ever smaller and more precise, and now retailers have enough data to create a segment just for you. And not even just for you, but for you right now: They customize marketing messages to unique individuals at distinct moments in time.

    You might be thinking, Who cares? If stores can offer the best deals on the most relevant products to me, then let them do it. But you don’t even know which products are relevant anymore. Customizing offerings and prices to ever-smaller segments of customers works; it causes people to alter their shopping behavior to the benefit of the stores and their data-greedy machines. It gives retailers the ability, in other words, to use your private information to separate you from your money. The reason to worry about the erosion of retail privacy isn’t only because stores might discover or reveal your secrets based on the data they collect about you. It’s that they can use that data to influence purchasing so effectively that they’re rewiring your desires.

    […]

    Ordinary people may not realize just how much offline information is collected and aggregated by the shopping industry rather than the tech industry. In fact, the two work together to erode our privacy effectively, discreetly, and thoroughly. Data gleaned from brick-and-mortar retailers get combined with data gleaned from online retailers to build ever-more detailed consumer profiles, with the intention of selling more things, online and in person—and to sell ads to sell those things, a process in which those data meet up with all the other information big Tech companies such as Google and Facebook have on you. “Retailing,” Joe Turow told me, “is the place where a lot of tech gets used and monetized.” The tech industry is largely the ad-tech industry. That makes a lot of data retail data. “There are a lot of companies doing horrendous things with your data, and people use them all the time, because they’re not on the public radar.” The supermarket, in other words, is a panopticon just the same as the social network.

    Source: You Should Worry About the Data Retailers Collect About You | The Atlantic
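
    It’s worth being concrete about what a ‘segment of one, right now’ means in practice. Below is a minimal sketch of per-person, per-moment offer targeting; every field, threshold, and message in it is hypothetical rather than anything from Bogost’s article.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Shopper:
        id: str
        avg_basket: float         # from loyalty-card purchase history
        price_sensitivity: float  # inferred score between 0 and 1
        last_visit: datetime

    def offer_for(shopper: Shopper, now: datetime) -> str:
        """Pick a marketing message for this one shopper, at this one moment."""
        days_away = (now - shopper.last_visit).days
        if days_away > 14:
            return "We miss you: 20% off your next shop"
        if shopper.price_sensitivity > 0.7:
            return "Price-match voucher on your usual items"
        return f"Premium picks to match your usual £{shopper.avg_basket:.0f} basket"
    ```

    Nothing in that sketch is a ‘segment’ in the old soccer-moms sense: the message is simply a function of one person’s data and the clock.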

    Calendars as data layers

    I run my life by Google Calendar, so I found this post about different data layers including both past and future data points really interesting.

    As someone who also pays attention to their stress level as reported by a Garmin smartwatch, and as someone who suffers from migraines, this kind of data juxtaposition would be super-interesting to me.

    Our digital calendars turned out to be just marginally better than their pen and paper predecessors. And since their release, neither Outlook nor Google Calendar have really changed in any meaningful way.

    […]

    Flights, for example, should be native calendar objects with their own unique attributes to highlight key moments such as boarding times or possible delays.

    This gets us to an interesting question: If our calendars were able to support other types of calendar activities, what else could we map onto them?

    […]

    Something I never really noticed before is that we only use our calendars to look forward in time, never to reflect on things that happened in the past. That feels like a missed opportunity.

    […]

    My biggest gripe with almost all quantified self tools is that they are input-only devices. They are able to collect data, but unable to return any meaningful output. My Garmin watch can tell my current level of stress based on my heart-rate variability, but not what has caused that stress or how I can prevent it in the future. It lacks context.

    Once I view the data alongside other events, however, things start to make more sense. Adding workouts or meditation sessions, for example, would give me even more context to understand (and manage) stress.

    […]

    Once you start to see the calendar as a time machine that covers more than just future plans, you’ll realize that almost any activity could live in your calendar. As long as it has a time dimension, it can be visualized as a native calendar layer.

    Source: Multi-layered calendars | julian.digital
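
    Julian’s idea is easy to prototype: anything with a start and an end can be an event on some layer, whether it came from a meeting invite or a smartwatch. Here’s a quick sketch; the layer names and sample data are mine, not from the post.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class LayerEvent:
        layer: str  # e.g. "meetings", "stress", "workouts"
        start: datetime
        end: datetime
        detail: str

    def timeline(events: list[LayerEvent]) -> list[LayerEvent]:
        """Merge every layer into one chronological view, past and future alike."""
        return sorted(events, key=lambda e: e.start)

    day = timeline([
        LayerEvent("meetings", datetime(2021, 11, 8, 9), datetime(2021, 11, 8, 10), "Team call"),
        LayerEvent("stress", datetime(2021, 11, 8, 9, 40), datetime(2021, 11, 8, 10, 5), "HRV spike (Garmin)"),
        LayerEvent("workouts", datetime(2021, 11, 8, 12), datetime(2021, 11, 8, 13), "5k run"),
    ])
    for e in day:
        print(e.start.time(), e.layer, "-", e.detail)
    ```

    Laid out like this, the stress reading lands right on top of the meeting that probably caused it, which is exactly the context the watch alone can’t give you.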

    Proving endemic racism and sexism in the world of football

    Anyone who follows football will perhaps be disappointed yet unsurprised that racism and sexism continue to be part of the beautiful game.

    This study is clever in the way that it shows that those watching football matches use coded language and are biased against women. Hopefully, it will help all of us figure out better ways forward.

    (I actually really enjoy watching women’s football with my family!)

    The resulting paper, “Pace and Power: Removing unconscious bias from soccer broadcasts,” caused a stir when they presented it at last month’s New England Symposium on Statistics in Sports. Of the 47 sports fans who watched a two-minute clip of the World Cup TV broadcast, 70 percent said that Senegal, whose players were all Black, was “more athletic or quick.” But of 58 others who saw an animation of the same two minutes without knowing which teams they were watching, 62 percent picked Poland, whose players were all white, as the more athletic side. The physical advantages that supposedly defined the African team’s style of play disappeared as soon as their skin color did.

    […]

    The athleticism flip-flop offers a new kind of evidence of a prejudice that affects how Black players of every nationality are perceived. For decades, researchers have documented media stereotypes of African players as “‘powerful,’ ‘big-thighed,’ ‘lithe of body,’ ‘big,’ ‘explosive,’ and like ‘lightning,’ attributes that were to be contrasted with ‘the know-how that England possess.’” As Belgian forward Romelu Lukaku, who is Black, told The New York Times, “It is never about my skill when I am compared to other strikers.” Now, for the first time, researchers have a way to isolate how race influences direct perceptions of the game.

    Interestingly, they looked at gender as well as race:

    The study also examined attitudes toward gender by showing viewers a pair of two-minute clips, one from the American top-flight National Women’s Soccer League and another from League Two, the English men’s fourth tier. Even though the NWSL draws more fans to games, its average player earns about a quarter as much as the average player in League Two. Gregory and Pleuler were curious whether this “clear gender pay gap” could be explained by a difference in the quality of the soccer shown on TV, as some have argued.

    People who watched the broadcasts said that the men’s game was “higher quality” by a 57 percent to 43 percent margin. Those who saw the renders with genderless stick figures preferred the women’s match, 59 percent to 41 percent. The results weren’t statistically significant across a small sample of 105 mostly male respondents, but Pleuler believes the line of research is promising. “I think these results are suggestive that your average soccer fan can’t tell the difference between something that does have a large investment level and the women’s game, which does not,” he said.

    Source: Soccer Looks Different When You Can’t See Who’s Playing | FiveThirtyEight
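
    That ‘not statistically significant’ caveat is easy to sanity-check. Assuming roughly half of the 105 respondents saw the stick-figure renders (the article doesn’t give the exact split), a 59/41 preference is well within what chance alone produces:

    ```python
    from math import sqrt, erf

    def two_sided_p(k: int, n: int, p0: float = 0.5) -> float:
        """Two-sided p-value for k preferences out of n, against a 50/50 null
        (normal approximation to the binomial)."""
        z = (k / n - p0) / sqrt(p0 * (1 - p0) / n)
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    # ~52 respondents saw the renders; 59% of them preferred the women's match
    print(two_sided_p(round(0.59 * 52), 52))  # ~0.17, nowhere near p < 0.05
    ```

    So Pleuler is right to call the result suggestive rather than conclusive; a larger sample would settle it.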

    UK government adviser warns against plans to force the NHS to share data with police forces

    It’s entirely unsurprising that governments should seek to use the pandemic as cover for hoovering up data about their citizens. However, it’s up to us to resist this.

    Plans to force the NHS to share confidential data with police forces across England are “very problematic” and could see patients giving false information to doctors, the government’s data watchdog has warned.

    […]

    Dr Nicola Byrne also warned that emergency powers brought in to allow the sharing of data to help tackle the spread of Covid-19 could not run on indefinitely after they were extended to March 2022.

    Dr Byrne, 46, who has had a 20-year career in mental health, also warned against the lack of regulation over the way companies were collecting, storing and sharing patient data via health apps.

    She told The Independent she had raised concerns with the government over clauses in the Police, Crime, Sentencing and Courts Bill which is going through the House of Lords later this month.

    The legislation could impose a duty on NHS bodies to disclose private patient data to police to prevent serious violence and crucially sets aside a duty of confidentiality on clinicians collecting information when providing care.

    Dr Byrne said doing so could “erode trust and confidence, and deter people from sharing information and even from presenting for clinical care”.

    She added that it was not clear what exact information would be covered by the bill: “The case isn’t made as to why that is necessary. These things need to be debated openly and in public.”

    Source: Plans to hand over NHS data to police sparks warning from government adviser | The Independent

    Sports data and GDPR

    This is really quite fascinating. The use of player data has absolutely exploded in the last decade, and that's now being challenged from a GDPR (i.e. data privacy) point of view.

    Some of it could be said to be reasonably innocuous, but when we get into the territory of players being compared against 'expected goals' things start to get tricky, I'd suggest.

    Slade's legal team said the fact players receive no payment for the unlicensed use of their data contravenes General Data Protection Regulation (GDPR) rules that were strengthened in 2018.

    Under Article 4 of the GDPR, "personal data" refers to a range of identifiable information, such as physical attributes, location data or physiological information.

    BBC News understands that an initial 17 major betting, entertainment and data collection firms have been targeted, but Slade's Global Sports Data and Technology Group has highlighted more than 150 targets it believes have misused data.

    [...]

    Former Wales international Dave Edwards, one of the players behind the move, said it was a chance for players to take more control of the way information about them is used.

    Having seen how data has become a staple part of the modern game, he believes players’ rights over how information about them is used should be at the forefront of any future use.

    "The more I've looked into it and you see how our data is used, the amount of channels its passed through, all the different organisations which use it, I feel as a player we should have a say on who is allowed to use it," he said.

    Source: Footballers demand compensation over 'data misuse' | BBC News

    UK government survey into climate change and net zero

    The UK government’s Department for Business, Energy & Industrial Strategy published a report today showing the results of an online survey into public perceptions of climate change and net zero.

    Broadly speaking, ‘net zero’ is supported, but most people think we’ll achieve that through energy efficiency.

    Climate change was perceived to be affecting other countries more than respondents’ local area within the UK although half of respondents (50%) felt that their local area had been affected to ‘at least some extent’.
    • Eighty-three percent of participants reported that climate change was a concern.
    • Fourteen percent of participants perceived climate change as affecting their local area by ‘a great deal’ compared to 42% of UK participants perceiving climate change as affecting other countries by ‘a great deal’.
    • Eighty-six percent of UK participants perceived other countries to be experiencing climate change effect to ‘at least some extent’.
    • Around half (54%) of participants perceived their local area to be experiencing climate change effect to ‘at least some extent’.

    Source: Climate change and net zero: public awareness and perceptions | GOV.UK

    Information means nothing by itself

    I had reason to reference this image today, which is an update of the classic gapingvoid cartoon. The point I was making is that a lot of organisations think that they revolutionise learning by connecting people to knowledge.

    However, as every educator should know, it’s the connections between bits of information, including context and application, that constitute the learning experience. The thing that gets missed most often, of course, is the “so what?” — i.e. the impact.

    PS- the above image is from the (seemingly) never-ending, information-knowledge meme, originally done as part of building a culture of innovation for our friends over at Genentech. They were happy, the idea lives on. This is how you turn change into movements 🙂

    Source: Want to know how to turn change into a movement? | Gapingvoid

    What kind of world do we want? (or, why regulation matters)

    I saw a thread on Mastodon recently, which included this image:

    Three images with the title 'Space required to Transport 48 People'. Each image is the same, with cars backed up down a road. The caption for each image is 'Car', 'Electric Car' and 'Autonomous Car', respectively.

    Someone else replied with a meme showing a series of images with the phrase "They feed us poison / so we buy their 'cures' / while they ban our medicine". The poison in this case being cars burning fossil fuels, the cures being electric and/or autonomous cars, and the medicine public transport.

    There's a similar kind of thinking in the world of tech, with at least one interviewee in the documentary The Social Dilemma saying that people should be paid for their data. I've always been uneasy about this, so it's good to see the EFF come out strongly against it:

    Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

    EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

    Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

    As the EFF points out, who would get to set the price of that data, anyway? Also, individual data is useful to companies, but so is data in aggregate. Is that covered by such plans?

    Facebook makes around $7 per user, per quarter: at most about $28 a year. Even if they gave you all of that, is that a fair exchange?

    Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

    All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should do, as a separate and more protective measure.

    Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

    As the rest of the article goes on to explain, we're already in a world of 'pay for privacy' which is exacerbating the gulf between the haves and the have-nots. We need regulation and legislation to curb this before it gallops away from us.

    Seeing through is rarely seeing into

    I am not fond of expecting catastrophes, but there are cracks in the universe

    So said Sydney Smith. Let's talk about surveillance. Let's talk about surveillance capitalism and surveillance humanitarianism. But first, let's talk about machine learning and algorithms; in other words, let's talk about what happens after all of that data is collected.

    Writing in The Guardian, Sarah Marsh investigates local councils using "automated guidance systems" in an attempt to save money.

    The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

    Sarah Marsh

    The trouble is, they're not particularly effective:

    It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

    Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

    Sarah Marsh

    When I was at Mozilla there were a number of colleagues who had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good with someone 'friendly' in the White House, but asking: what comes next?

    Well, we now know what comes next, on both sides of the Atlantic, and we can't put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit, mean that there's no attention or cash left.

    If we stop and think about things for a second, we probably don't want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn't 'inevitable' — nor does it constitute a process of 'natural selection':

    Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

    And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

    Rose Eveleth

    I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

    For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls 'surveillance humanitarianism':

    By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

    Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

    Mark Latonero

    It's easy to think that because this is an emergency, we should just do whatever is necessary. But Latonero argues that collecting such data merely shifts the risk to a later time:

    If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

    Mark Latonero

    There was a rather startling article in last weekend's newspaper, which I've found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

    [I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

    The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

    In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

    Hannah Devlin
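
    For what it's worth, the matching step Devlin describes is conceptually simple: a deep network turns each face into an embedding vector, and identification is a nearest-neighbour search over millions of such vectors — exactly the kind of bulk arithmetic GPUs excel at. Here's a toy sketch with random stand-in embeddings; it's nothing like NIST's actual test pipeline, just the shape of the search:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def normalise(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    # Stand-ins for what a trained network would output (sizes illustrative)
    database = normalise(rng.normal(size=(100_000, 128)))  # enrolled faces
    probe = normalise(rng.normal(size=(128,)))             # face to identify

    scores = database @ probe      # cosine similarity to every enrolled face
    best = int(np.argmax(scores))  # the closest match wins
    print(best, float(scores[best]))
    ```

    Scale that database up to 12 million rows and you can see why the social-media goldmine of labelled faces, plus cheap GPU matrix multiplication, changed the game.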

    Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don't have any 'answers' to these problems. I guess I've been guilty, like many of us have, of the kind of 'privacy nihilism' mentioned by Ian Bogost in The Atlantic:

    Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

    Ian Bogost

    The only forces that we have to stop this are collective action, and governmental action. My concern is that we don't have the digital savvy to do the former, and there's definitely the lack of will in respect of the latter. Troubling times.

    Educational institutions are at a crossroads of relevance

    One of the things that attracted me to the world of Open Badges and digital credentialing back in 2011 was the question of relevance. As a Philosophy graduate, I'm absolutely down with the idea of a broad, balanced education, and learning as a means of human flourishing.

    However, in a world where we measure schools, colleges, and universities through an economic lens, it's inevitable that learners do so too. As I've said in presentations and to clients many times, I want my children to choose to go to university because it's the right choice for them, not because they have to.

    In an article in Forbes, Brandon Busteed notes that we're on the verge of a huge change in Higher Education:

    This shift will go down as the biggest disruption in higher education whereby colleges and universities will be disintermediated by employers and job seekers going direct. Higher education won’t be eliminated from the model; degrees and other credentials will remain valuable and desired, but for a growing number of young people they’ll be part of getting a job as opposed to college as its own discrete experience. This is already happening in the case of working adults and employers that offer college education as a benefit. But it will soon be true among traditional age students. Based on a Kaplan University Partners-QuestResearch study I led and which was released today, I predict as many as one-third of all traditional students in the next decade will "Go Pro Early" in work directly out of high school with the chance to earn a college degree as part of the package.

    This is true to some degree in the UK as well, through Higher Apprenticeships. University study becomes a part-time deal with the 'job' paying for fees. It's easy to see how this could quickly become a two-tier system for rich and poor.

    A "job-first, college included model" could well become one of the biggest drivers of both increasing college completion rates in the U.S. and reducing the cost of college. In the examples of employers offering college degrees as benefits, a portion of the college expense will shift to the employer, who sees it as a valuable talent development and retention strategy with measurable return on investment benefits. This is further enhanced through bulk-rate tuition discounts offered by the higher educational institutions partnering with these employers. Students would still be eligible for federal financial aid, and they’d be making an income while going to college. To one degree or another, this model has the potential to make college more affordable for more people, while lowering or eliminating student loan debt and increasing college enrollments. It would certainly help bridge the career readiness gap that many of today’s college graduates encounter.

    The 'career readiness' that Busteed discusses here is an interesting concept, and one that I think has been invented by employers who don't want to foot the bill for training. Certainly, my parents' generation weren't supposed to be immediately ready for employment straight after their education — and, of course, they weren't saddled with student debt, either.

    Related, in my mind, is the way that we treat young people as data to be entered on a spreadsheet. This is managerialism at its worst. Back when I was a teacher and a form tutor, I remember how sorry I felt for the young people in my charge, who were effectively moved around a machine for 'processing' them.

    Now, in an article for The Guardian, Jeremy Hannay tells it like it is for those who don't have an insight into the Kafkaesque world of schools:

    Let me clear up this edu-mess for you. It’s not Sats. It’s not workload. The elephant in the room is high-stakes accountability. And I’m calling bullshit. Our education system actively promotes holding schools, leaders and teachers at gunpoint for a very narrow set of test outcomes. This has long been proven to be one of the worst ways to bring about sustainable change. It is time to change this educational paradigm before we have no one left in the classroom except the children.

    Just like our dog-eat-dog society in the UK could be much more collaborative, so our education system badly needs remodelling. We've deprofessionalised teaching, and introduced a managerial culture. Things could be different, as they are elsewhere in the world.

    In such systems – and they do exist in some countries, such as Finland and Canada, and even in some brave schools in this country – development isn’t centred on inspection, but rather professional collaboration. These schools don’t perform regular observations and monitoring, or fire out over-prescriptive performance policies. Instead, they discuss and design pedagogy, engage in action research, and regularly perform activities such as learning and lesson study. Everyone understands that growing great educators involves moments of brilliance and moments of mayhem.

    That's the key: "moments of brilliance and moments of mayhem". Ironically, bureaucratic, hierarchical systems cannot cope with amazing teachers, because they're to some extent unpredictable. You can't put them in a box (on a spreadsheet).

    Actually, perhaps it's not the hierarchy per se, but the power dynamics, as Richard D. Bartlett points out in this post.

    Yes, when a hierarchical shape is applied to a human group, it tends to encourage coercive power dynamics. Usually the people at the top are given more importance than the rest. But the problem is the power, not the shape. 

    What we're doing is retro-fitting the worst forms of corporate power dynamics onto education and expecting everything to be fine. Newsflash: learning is different to work, and always will be.

    Interestingly, Bartlett defines three different forms of power dynamics, which I think is enlightening:

    Follett coined the terms “power-over” and “power-with” in 1924. Starhawk adds a third category “power-from-within”. These labels provide three useful lenses for analysing the power dynamics of an organisation. With apologies to the original authors, here’s my definitions:

    • power-from-within or empowerment — the creative force you feel when you’re making art, or speaking up for something you believe in
    • power-with or social power — influence, status, rank, or reputation that determines how much you are listened to in a group
    • power-over or coercion — power used by one person to control another

    The problem with educational institutions, I feel, is that we've largely done away with empowerment and social power, and put all of our eggs in the basket of coercion.


    Also check out:

    • Working collaboratively and learning cooperatively (Harold Jarche) — "Two types of behaviours are necessary in the network era workplace — collaboration and cooperation. Cooperation is not the same as collaboration, though they are complementary."
    • Learning Alignment Model (Tom Barrett) - "It is not a step by step process to design learning, but more of a high-level thinking model to engage with that uncovers some interesting potential tensions in our classroom work."
    • A Definition of Academic Innovation (Inside Higher Ed) - "What if academic innovation was built upon the research and theory of our field, incorporating social constructivist, constructionist and activity theory?"

    Intimate data analytics in education

    The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

    In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

    […]

    A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

    As Williamson points out, the collection of ‘intimate data’ is particularly concerning, especially in the wake of the Cambridge Analytica revelations.

    Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.

    There's a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.

    The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.

    Thankfully, Williamson cites the work of academics who are proposing a different way forward: something that respects the social aspect of learning rather than a reductionist view focused on inputs and outputs.

    One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.

    The trouble, of course, is that it's not enough for academics to write papers about things, or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of 'precision education' have their way, I wonder who will actually create something meaningful that opposes their technocratic worldview?

    Source: Code Acts in Education

    Data-driven society: utopia or dystopia?

    Good stuff from (Lord) Jim Knight, who cites part of his speech in the House of Lords about data privacy:

    The use of data to fuel our economy is critical. The technology and artificial intelligence it generates has a huge power to enhance us as humans and to do good. That is the utopia we must pursue. Doing nothing heralds a dystopian outcome, but the pace of change is too fast for us legislators, and too complex for most of us to fathom. We therefore need to devise a catch-all for automated or intelligent decisioning by future data systems. Ethical and moral clauses could and should, I argue, be forced into terms of use and privacy policies.

    Jim’s a great guy, and went out of his way to help me in 2017. It’s great to have someone with his ethics and clout in a position of influence.

    Source: Medium

    Commit to improving your security in 2018

    We don’t live in a cosy world where everyone hugs fluffy bunnies who shoot rainbows out of their eyes. Hacks and data breaches affect everyone:

    If you aren’t famous enough to be a target, you may still be a victim of a mass data breach. Whereas passwords are usually stored in hashed or encrypted form, answers to security questions are often stored — and therefore stolen — in plain text, as users entered them. This was the case in the 2015 breach of the extramarital encounters site Ashley Madison, which affected 32 million users, and in some of the Yahoo breaches, disclosed over the past year and a half, which affected all of its three billion accounts.

    Some of it isn't our fault, however. For example, you can bypass PayPal's two-factor authentication by opting to answer questions about your place of birth and mother's maiden name. This is not difficult information for hackers to obtain:

    According to Troy Hunt, a cybersecurity expert, organizations continue to use security questions because they are easy to set up technically, and easy for users. “If you ask someone their favorite color, that’s not a drama,” Mr. Hunt said. “They’ll be able to give you a straight answer. If you say, ‘Hey, please download this authenticator app and point the camera at a QR code on the screen,’ you’re starting to lose people.” Some organizations have made a risk-based decision to retain this relatively weak security measure, often letting users opt for it over two-factor authentication, in the interest of getting people signed up.

    Remaining secure online is a constantly-moving target, and one that we would all do well to spend a bit more time thinking about. These principles by the EFF are a good starting point for conversations we should be having this year.
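
    The asymmetry in that first quote is worth spelling out: a breached password table still has to be cracked, while a breached security-answer table can be read directly. A minimal sketch of the difference (the iteration count and fields are illustrative, not any particular site's scheme):

    ```python
    import hashlib, os

    def store_password(password: str) -> tuple[bytes, bytes]:
        """Passwords are typically salted and hashed, so a breach leaks
        only digests that must be brute-forced guess by guess."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    # Security answers, by contrast, are often stored exactly as typed,
    # so a breach hands them straight to the attacker:
    stored_answer = "Smith"  # 'mother's maiden name', in plain text
    ```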

    Source: The New York Times
