The more we embed today’s norms into these systems, the harder it will be to course-correct later

WAO wrote a report on Harnessing AI for environmental justice, which serves as the background to this post by Christian Graham from Friends of the Earth. He points out that LLMs can be problematic for human progress in at least two important ways.
First, their tendency to reference existing norms and frameworks could make it more difficult for new, exciting, and innovative ideas to break through. Second, AI-driven enforcement, policing, and standardisation could make it difficult for us to correct course away from a problematic trajectory.
It’s well worth a read. Even though I’ve found LLMs to be extremely useful as a ‘thought partner’ for coming up with new angles on existing problems, I’m not sure the majority of people would use them in such a nuanced way. After all, the US Secretary of State is already threatening to use AI to revoke visas of foreign students who appear “pro-Hamas”.
(On a more prosaic level, can we expect much nuance when, last month, ‘Google’ was the sixth most-searched term… on Google? People, including politicians and policymakers, value convenience over everything else.)
No one built this to be unjust. It’s just an AI doing its job: optimising for carbon cuts, personal accountability and cold, hard data. Yet baked into it are early 21st-century blind spots: climate as your burden, not the system’s. And AI as some impartial oracle.
This is technological lock-in. Not a grand conspiracy, but a thousand quiet choices, hardening into a future we can’t easily unwrite.
[…]
[S]tability isn’t always a good thing. The same features that make AI useful could also make it inflexible, resistant to new ideas and blind to the possibility of better alternatives:
- Old biases risk becoming permanent – AI trained on today’s moral and economic frameworks might prioritise corporate-led sustainability initiatives over grassroots action or overvalue GDP growth as the main measure of success.
- Future breakthroughs could struggle to take hold – Just as a Victorian-trained AI might have discouraged Darwin from publishing, future scientists, activists, and policymakers could find themselves fighting against AI-driven inertia.
- AI governance might become self-referential – If AI models continually cite their own outputs as authoritative sources, they could create self-reinforcing knowledge loops, making early 21st-century assumptions feel like eternal truths.
- Technology stops being a tool for change – If AI systems shape environmental, legal, and economic policies based on past precedent, it becomes harder for movements that challenge the status quo to gain traction. Instead of being a force for progress, AI becomes a force for keeping things exactly as they are.
We can already see early signs of this happening. AI is being used in policing in ways that prioritise past data over future possibilities. The more we embed today’s norms into these systems, the harder it will be to course-correct later.
Source: Friends of the Earth
Image: Jamillah Knowles & We and AI