It’s now over seven years since I submitted my doctoral thesis on digital literacies. Since then, almost the entire time my daughter has been alive, the world has changed a lot.
Writing in The Conversation, Anjana Susarla explains her view that digital literacy goes well beyond functional skills:
> In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle. This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.
Digital literacies are plural, context-dependent and always evolving. Right now, I think Susarla is absolutely correct to be focusing on algorithms and the way they interact with society. Ben Williamson is definitely someone to follow and read up on in that regard.
Over the past few years I’ve been trying (both directly and indirectly) to educate people about the impact of algorithms on everything from fake news to privacy. It’s one of the reasons I don’t use Facebook, for example, and go out of my way to explain to others why they shouldn’t either:
> A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.
>
> However, a vast majority of platforms do not provide either such flexibility to their end users or the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.
Although I’m still not going to join Facebook, one reason I’m a little more relaxed about algorithms and privacy these days is the GDPR. If it’s enforced effectively (as I think it will be), it should go a long way towards keeping Big Tech in check:
> As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.
>
> But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.
I agree that it’s not enough to just tell people that they’re being tracked if they can’t do anything about it. That leads to technological defeatism. What we need are simple, easy-to-use tools that give users real control over their privacy and security. These aren’t going to come through tech industry self-regulation, but through regulatory frameworks like the GDPR.
Source: The Conversation
Also check out:
- Platforms Want Centralized Censorship. That Should Scare You (WIRED) — “The risk of overbroad censorship from automated filtering tools has been clear since the earliest days of the internet”
- What to do if your boss is an algorithm (BBC Ideas) — “Digital sociologist Karen Gregory on how to cope when your boss isn’t actually human.”
- Optimize Algorithms to Support Kids Online, Not Exploit Them (WIRED) — “Children are exposed to risks at churches, schools, malls, parks, and anywhere adults and children interact. Even when harms and abuses happen, we don’t talk about shutting down parks and churches, and we don’t exclude young people from these intergenerational spaces.”