Two people are illustrated in a warm, cartoon style, one on the left and one on the right. The person on the left has their back to the viewer and is typing on a laptop sitting on a table. They are white, their hair is shoulder-length and dark, and they are wearing a green t-shirt. The computer screen is dark with rows of coloured squares representing programming. The person on the right looks similar, but their hair is now tied back in a ponytail and they are wearing a white lab coat and safety goggles. They are reaching down to lift an orange hazard label about the size of a book. The label is an orange square with a black exclamation mark in the middle. The person looks like they are being careful as they lift it.

I’m working on an AI Literacy project with the BBC at the moment. I haven’t given many details of this anywhere yet, as they first need to socialise internally that the work is happening. But I’m really enjoying getting my teeth back into the new literacies space.

For years now, I’ve included in my presentations the fact that when you define ‘literacy’ you’re making a power move. You’re either explicitly or implicitly saying what counts as “literate behaviour.”

That’s why I’m in agreement with James O’Hagan’s position in this article. It chimes with the point I made earlier this week that it’s a good thing young people are using AI for their own ends. They need a space to push back against simple ‘compliance training’ on how to use tools, in order to develop critical AI Literacies (plural!).

There is a reason why the dominant models of AI literacy being promoted to schools feel so hollow. They focus on functionality, not freedom. They train students to use the tools, not challenge the systems. They offer guardrails, not agency.

In one of my Medium pieces, I argued that we are designing AI literacy to make education compliant, not smarter. That is still the case. What often gets labeled “AI education” is really just exposure — watching a tool work, seeing a demo, reading a definition. For example, I completed all five MagicSchool AI certification courses in under 15 minutes — without ever logging in or using the platform. That says more about the training than it does about the tool.

Very little of this equips students to intervene. To resist. To build differently.

We need AI literacy that makes students dangerous thinkers, not docile users.

[…]

Let students ask who funds the tools. Who sets the limits. Who benefits. Let them critique the platforms that shape their school day. Let them design alternatives rooted in their experiences. And let us stop pretending that integration is progress if the terms are dictated from the outside.

We can — and must — teach the technical. But we should not stop there. We need to lift the hood, yes. But we also need to ask why the engine was built in the first place, who it leaves behind, and where it refuses to go.

The way we talk about AI in education will shape the way we teach it. If we treat it like magic, we will mystify. If we treat it like software, we will standardize. But if we treat it like a political, social, and ethical terrain, we will start to give students the tools to navigate it — and challenge it.

Source: James O’Hagan

Image: Yasmin Dwiputri & Data Hazards Project