Commentary: ChatGPT use in higher education assessment: Prospects and epistemic threats

Vic Benuyenah (Birmingham Business School, University of Birmingham, Birmingham, UK)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 30 March 2023

Issue publication date: 30 March 2023



Benuyenah, V. (2023), "Commentary: ChatGPT use in higher education assessment: Prospects and epistemic threats", Journal of Research in Innovative Teaching & Learning, Vol. 16 No. 1, pp. 134-135.



Emerald Publishing Limited

Copyright © 2023, Vic Benuyenah


Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at

Why has ChatGPT drawn so much attention?

The recent emergence of ChatGPT has shaken every academic institution, and while we continue to explore its full prospects and risks, it is worth providing an initial commentary. ChatGPT offers unprecedented prospects for academia owing to the chatbot's extraordinary, human-like capabilities, which surpass those of most recent tools we have seen (Illingworth, 2023). It has drawn unprecedented attention from the academic community and the press over the past few months (≈650,000,000 Google Search results as of 23/02/2023). It is doubtful that the chatbot was created deliberately as a proxy for academic writing; its application to academic writing is instead an incidental product of artificial intelligence (AI) ingenuity. Students worldwide will find a way around assessments if given the option, and so we are all concerned that, despite its benefits, some students might abuse it. Whilst academia is far from being engulfed in an assessment integrity crisis, the emergence of formidable AI tools that could aid cheating cannot be ignored. Some of us believe that the use of ChatGPT in assessments carries epistemic implications; nonetheless, potential threats need not mean the end of our resolve. So far, we know that some university programmes are at higher risk (for example, Management Studies and Information Technology), yet educators are no strangers to academic cheating – they simply do not yet fully understand ChatGPT.

Despite its unavoidable use in some academic scenarios, I see no compelling reason to endorse its use in assessments. Students are not taught to “copy and paste” but to “think and write critically”. It should therefore be of concern that ChatGPT has passed medical school exams (Purtill, 2023) and MBA assessments.

Why do AIs like ChatGPT pose only a limited epistemic threat?

With the current pandemonium, some may recommend a return to paper-based assessment, but would this not be a disproportionate and, frankly, premature reaction? Like anything else in academia, the discourse surrounding ChatGPT will always be split between utilitarian and consequentialist views; yet we do not seem to know enough at this stage to form a compelling opinion, so we can only speculate and adapt our practice. The resilience of the academic community is a significant reassurance, as organisations such as Turnitin have already released AI and ChatGPT detection tools to deal with potential malpractice. Although threats will occur, institutions can fend them off, if not immediately. Some 23 years ago, a study was undertaken to understand the impact of computer use on teachers (Lai, 2000), prompted by ergonomic concerns; yet university professors have continued to use computers safely ever since. Other studies raised mixed concerns about the use of iPads in education (Perry and Steck, 2015), but academics have learnt to embrace them – of course, with some caveats. Although new tools will emerge, they cannot pose an insurmountable hazard to university assessments for three main reasons: (1) universities are policy driven and will always set new policies to counter cheating; (2) students are reasonable and want to learn; and (3) there are tools and processes available to deal with intentional academic dishonesty (IAD).

Conclusion: consequences of AI on the future of assessments

Utilitarian ethicists will find no reason to reject the AI revolution even if it limits the veracity of higher education assessment; consequentialists, however, will argue that the spread of AI and the questions surrounding the ethics of its use will shape the future of research in many areas, including the long-term purpose and utility of higher education (Benuyenah and Boukareva, 2018). To suggest that AI will have no impact on the evolution of higher education is not only denial but existentially dangerous. Despite all the challenges of AI, we must acknowledge that higher education will thrive alongside any AI evolution as long as we learn to adapt our pedagogy and assessment strategies.


References

Benuyenah, V. and Boukareva, B. (2018), “Making HRM curriculum relevant – a hypothetical practitioners' guide”, Journal of Work-Applied Management, Vol. 10 No. 1, pp. 93-100, doi: 10.1108/JWAM-09-2017-0026.

Illingworth, S. (2023), “ChatGPT: students could use AI to cheat, but it's a chance to rethink assessment altogether”, The Conversation, available at: (accessed 6 February 2023).

Lai, K.W. (2000), “Health risks with teachers' computer use: some New Zealand observations”, Journal of Information Technology for Teacher Education, Vol. 9 No. 3, pp. 303-318, doi: 10.1080/14759390000200094.

Perry, D.R. and Steck, A.K. (2015), “Increasing student engagement, self-efficacy, and meta-cognitive self-regulation in the high school geometry classroom: do iPads help?”, Computers in the Schools, Vol. 32 No. 2, pp. 122-143, doi: 10.1080/07380569.2015.1036650.

Purtill, J. (2023), “ChatGPT appears to pass medical school exams. Educators are now rethinking assessments”, ABC Science, available at: (accessed 6 February 2023).
