Universities in the age of AI
Generative AI tools like ChatGPT, Claude, and Perplexity are now an integral part of my workflow. This is true of almost everything I produce these days, including this post (I used genAI to create the image alt text).
I use genAI in client work, and also in my academic studies. It’s incredibly useful as a kind of ‘thought partner’, and particularly handy for doing a RAG analysis of essays against their assignment briefs. Do I use it to fabricate answers to assessed questions which I then submit as my own work? No, of course not.
This article in The Guardian reports from the frontlines of the struggle in universities for academic rigour and against cheating. Different institutions are approaching the issue differently, as you would expect. The answer, I would suggest, is something akin to Cambridge University’s AI-positive approach, outlined in the quoted text below.
The whole point of higher education is to allow students to reflect on themselves and the world. In my experience, using genAI in appropriate ways is incredibly enriching. Given that my Systems Thinking modules focus on me as a practitioner in relation to a specific situation in my own life, what would it even mean to “cheat”?
I was notified this morning that I received a distinction for my latest module, as I did for the one before it. Would I have achieved those grades without using genAI? Maybe. Probably, even, given I’ve already got a doctorate. But the experience for me as a distance learner was so much better than being limited to interactions with my (excellent) tutor and fellow students in the online forum.
At the end of the day, I’m studying for my own benefit, and I know that studying with genAI is better than studying without it. I’m very much looking forward to using Google’s latest upgrade to Gemini Live for my next module; I recently found it very useful for conversationally preparing for interviews!
More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat. In November, Times Higher Education reported that, despite “patchy record keeping”, cases appeared to be soaring at Russell Group universities, some of which had reported a 15-fold increase in cheating. But confusion over how these tools should be used – if at all – has sown suspicion in institutions designed to be built on trust. Some believe that AI stands to revolutionise how people learn for the better, like a 24/7 personal tutor – Professor HAL, if you like. To others, it is an existential threat to the entire system of learning – a “plague upon education” as one op-ed for Inside Higher Ed put it – that stands to demolish the process of academic inquiry.
In the struggle to stuff the genie back in the bottle, universities have become locked in an escalating technological arms race, even turning to AI themselves to try to catch misconduct. Tutors are turning on students, students on each other and hardworking learners are being caught by the flak. It’s left many feeling pessimistic about the future of higher education. But is ChatGPT really the problem universities need to grapple with? Or is it something deeper?
[…]
What counts as cheating is determined, ultimately, by institutions and examiners. Many universities are already adapting their approach to assessment, penning “AI-positive” policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an “overview of new concepts”, “as a collaborative coach”, or “supporting time management”. The university warns against over-reliance on these tools, which could limit a student’s ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn’t seem to be taking academic misconduct seriously any more; she had received a “whispered warning” that she was no longer to refer cases where AI was suspected to the central disciplinary board.
If anything, the AI cheating crisis has exposed how transactional the process of gaining a degree has become. Higher education is increasingly marketised; universities are cash-strapped, chasing customers at the expense of quality learning. Students, meanwhile, are labouring under financial pressures of their own, painfully aware that secure graduate careers are increasingly scarce. Just as the rise of essay mills coincided with the rapid expansion of higher education in the noughties, ChatGPT has struck at a time when a degree feels more devalued than ever.
Source: The Guardian