The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean

I’m working on an AI Literacy project at the moment which involves, in part, providing some guidance for the BBC. I’ve collected some frameworks which I’m going through with my WAO colleagues. Some are pretty useful, others are not.
We’re coming up with criteria to help guide our research, such as whether a framework includes:
- Definition of (generative) AI
- Defined target audience(s)
- Explanation of how it was created (decisions, tradeoffs, names of authors, etc.)
- List of skills and competencies
In addition, the framework should come from a reputable source.
Beyond those essentials, it would be nice to have:
- Examples of application to real-world situations and issues
- At least a mention of the difference between AI safety and AI ethics
- A visual representation of the framework
I bring this up by way of context as Rachel Coldicutt’s recent post helps problematise not only AI Literacy, but AI itself. I’m not sure I’d share her ‘social’ definition of AI as “a set of extractive tools used to concentrate power and wealth” as it ascribes too much intentionality. However, I do think that the quotation from her which I’ve used to title this post is an important insight.
As I’ve discussed at length elsewhere, there are different kinds of ambiguity and a lot of language around AI is what I would deem “unproductively ambiguous.”
“AI literacy” is not just a matter of getting to grips with data and algorithms and learning how Microsoft tools actually work, it also requires understanding power, myths, and money. This blog post explores some ways those two letters have come to stand for so many different things.
There are many reasons AI is an ambiguous and shifting set of concepts; some are due to the technical complexity of the field and the rapidly unfolding geopolitical AI arms race, but others are related to straightforward media manipulation and the fact that awe and wonder can be catching. However, a fundamental reason AI is a confusing term is that it’s not actually the right terminology for the thing it describes.
[…]
“[A]rtificial intelligence” is not a highly specific technical label, but a name given in haste by someone writing a funding proposal. The fact that the term AI has persisted for so long and expanded to include the broader field of related computer science clearly indicates that many people find it useful, but you don’t need to get hung up on that particular pairing of terms or look for a deeper meaning to understand what it is. “Artificial intelligence” is almost like a nickname or a brand name; something understood by many to stand for something, rather than a precise description of any particular qualities.
[…]
The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean – which in turn makes it harder to keep them accountable or to ask precise, difficult questions.
When heroic words are used to describe technologies that operate on the horizon of hope and ambition, it can feel awkward to ask practical questions such as “what are you actually proposing?” and “how will it work?”, but real knowledge requires detail and specificity rather than waves of shock and awe. AI technologies are not actually myths and should not be discussed as such; they are real technologies that use data, hardware, and human skills to achieve social, economic, environmental, political, and technological change.
Source: Careful Industries
Image: Teena Lalawat