Citation
Evans, S. (2023), "Did I write this?", Accounting, Auditing & Accountability Journal.
Publisher: Emerald Publishing Limited
Copyright © 2023, Emerald Publishing Limited
Did I write this?
I get a little thrill from the magic of serendipity when I come to write another Literature and Insights editorial. Typically, I do not already have a subject clearly in mind but, as the due date approaches, some occurrence connects strongly with a creative piece I have on hand, or it is of a nature that shouts out to be discussed. Getting both at once is a real bonus.
It might concern a small event, one personal to my life and what you would regard as minor — in which case, I had better make it relevant to you as readers. Or, perhaps it is on a larger scale, affecting whole communities, inside or outside our own professional ones. Today, it is a bit of both, and very topical.
Why so? On the “minor” side, it concerns an issue I thought might be raised in an interview due the day after I began to write this; and, yes, it was raised. It is also a matter occupying the minds of many of our colleagues lately. Enough teasing. It concerns the value we place on human thinking.
Two days ago, I read a new article about chatbots. As a result, I sent an email to its author in the USA and a brief correspondence followed. That was, for me, the start of a comet shower of illuminating commentaries on AI in an academic context. As you have no doubt guessed, it was all about the charade of computer-generated work being submitted as a student's own efforts and whether that could be detected. It was going way beyond paper mills and old-fashioned plagiarism.
As John Warner says:
Some are worried that this is “the end of writing assignments” because if a bot can churn out passable prose in seconds that would not trigger any plagiarism detector because it is unique to each request, why would students go through the trouble to do this work themselves, given the outcome in terms of a grade will be the same? (Warner, 2022, online)
I was quickly immersed in articles looking beyond academia too, e.g. about art experts being challenged to say whether one of a pair of images in several examples represented an original human work or that of a computer. Whatever the style in each offered pair, they were frequently wrong, by the way (Lawson-Tancred, 2023, online). Singer Nick Cave, he of the dark and broody lyrics, reacted dismissively to a fan's AI-generated imitation of his writing; though it seemed quite similar as far as I was concerned (Hyde, 2023, online).
AI is claimed to be a way to streamline journalism too, enabling faster production of material; allegedly a practice to be encouraged as long as there is editorial oversight before publication (though I do hear the rustle of redundancy notices here). Andreas Veglis and Theodora A. Maniou addressed similar issues in “Chatbots on the rise: A new narrative in journalism” (2019, online). What we get in this kind of journalism is similar to the situation with essays handed in by students using AI — shortcuts, delegation, and a regurgitation of information, albeit smoothed out with some predictive expression to make it seem like a human utterance, but which does not represent individual thought or critical analysis.
I do not hold out much hope regarding the claims made for effective final human screening by news editors either. Command of grammar and care for expression, for instance, are becoming woeful in this area; otherwise, who would accept such clangers as “infront”, “ontop”, “hospo” (meaning “hospitality”), “reco” (meaning “recommendation”) and “aswell”?
Only today, I read an article that said: “Scientists discover emperor penguin colony in Antarctica using satellite images. Colony of about 500 birds seen in remote region where they face existential threat due to global heating” (Devlin, 2023, online). So, do we now have technologically clever avians capable of using satellite technology, and global warming that is armed with philosophical conjecture? I think not. Maybe AI would have done a better job in this case.
Back to academia. If journalists can enlist AI, so too can academics who wish to delegate their work to software, using it to write articles and also grade student work. One aspect they should then be wary of is that the resulting quotes and references might look genuine but be false. Notionally, there is also the surreal situation in which academics can use AI software to grade assignments that were themselves produced by AI software. Perhaps we could all leave the room and allow the various programs to chat among themselves. It would mean that the students had not demonstrated learning and the teachers had not really assessed.
John Warner also says:
ChatGPT may be a threat to some of the things students are asked to do in school contexts, but it is not a threat to anything truly important when it comes to student learning. (Warner, 2022, online)
I disagree. Whether students learn is down to opportunity, ability and choice. On that last point, if ChatGPT makes it easier for students to hand up “passable prose” (as Warner perhaps inadvertently punned), i.e. without engaging in their own meaningful enquiry, critical thinking and writing, where is the learning? Computer programs filter and assemble texts; they do not think. More needs to be done by teachers and students on this front.
There is an optimistic outlook. Steven Mintz advances an approach that involves students employing AI in a structured and critical manner; even a creative one (Mintz, 2023, online). It is well worth reading his suggestions. Similarly, Susan D'Agostino imagines what might lie ahead and gathers others' suggestions about how teachers and universities could respond (D'Agostino, 2023a, online), though she acknowledges that trying to perfect detection of chatbot use would likely mean we will be chasing our tails in an infinite quest (D'Agostino, 2023b, online). Christopher Grobe writes about how he uses AI with students to demonstrate its limitations (Grobe, 2023, online). The list of possible articles on these issues is rapidly growing.
The approaches they and others mention in much of the writing on AI commonly incorporate one thing: a concern with getting students to tackle the question of how AI works, including its benefits and pitfalls. Yes, there were other considerations apparent, but intrinsically there is also an elephant in the room. Unlike Mintz's ideas, too few of those discussions acknowledge how such AI-centered activity displaces the time previously spent on core material belonging to the subjects themselves, material that was meant to help students achieve the original and overarching learning objectives. The study of technology instead of, say, management accounting principles would take precedence, at least for some time, but is that a good fit? If it is grounded in early studies, perhaps as foundational work on study skills more generally, it might well be desirable. I admit there is also a need to be aware of any key differences where different subjects are involved.
The Devil's Advocate says students will soon be out in a world beyond university where computers are “doing the work” anyway, and all that graduates will need to do then is use the right software, so who can blame them for anticipating that earlier? The underlying assumption in proposing a study of AI in non-IT courses is all about integrity. It says that, if we offer them a way to do so, students will see the worth of understanding how AI is best used to pursue real learning rather than simulating scholarship. I hope that is true.
What next? One article about chatbots had been written by a chatbot, with the nominal author withholding that fact until the closing paragraph. I guess someone had to do it — and, by that, I mean make such an admission in order to underline how we cannot take authorship for granted even when debating authorship. Machines can do that for us too. Mind you, they cannot yet decide whether or not to publish the result. People do that — so far.
Will readers care about such things in the future? We already have imitation in the world of creative design and literature, do we not, so what is the harm? It matters because it is about the ability to frame arguments and to evaluate; to engage in meaningful critical thinking and develop a better aptitude in that area; to nurture creativity: all of these to develop and exercise human skills. Importantly, it is about integrity too, and whether we prize truth in our contracts with each other — whether in an educational or artistic or journalistic setting.
Did I write this? Yes. I briefly toyed with the idea of submitting it to ChatGPT to see whether it was rated as the effort of a human being or, at least, to obtain the percentage probability of that. Unless I was specifically employing it as a teaching tool, however, I would feel using AI was cheating. I enjoy my writing and seeing ideas unfold. I do not mind editing what I have written either because it means I am refining my expressive abilities, i.e. my writing craft, and, hopefully, my thoughts. That is a learning process, which AI does not afford the person handing up work that is not really theirs.
Speaking of “really theirs”, Garry Carnegie certainly is the author of our featured piece in this issue in which he skewers notions of literally attaching a value to everything, including the invaluable. It is satire at its sharpest.
Your own creative contributions can be submitted via ScholarOne (see below), and your email correspondence is always welcome, of course, at: steve.evans@flinders.edu.au.
Literary editor
Accounting, Auditing & Accountability Journal (AAAJ) welcomes submissions of both research papers and creative writing. Creative writing in the form of poetry and short prose pieces is edited for the Literature and Insights Section only and does not undergo the refereeing procedures required for all research papers published in the main body of AAAJ.
Author guidelines for contributions to this section of the journal can be found at: http://www.emeraldgrouppublishing.com/products/journals/author_guidelines.htm?id=aaaj
References
Devlin, H. (2023), “Scientists discover emperor penguin colony in Antarctica using satellite images”, The Guardian, 20 January, available at: https://www.theguardian.com/environment/2023/jan/20/scientists-discover-emperor-penguin-colony-in-antarctica-using-satellite-images
D'Agostino, S. (2023a), “ChatGPT advice academics can use now”, Inside Higher Ed, 12 January, available at: https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt
D'Agostino, S. (2023b), “AI writing detection: a losing battle worth fighting”, Inside Higher Ed, 20 January, available at: https://www.insidehighered.com/news/2023/01/20/academics-work-detect-chatgpt-and-other-ai-writing
Grobe, C. (2023), “Why I'm not scared of ChatGPT”, Chronicle of Higher Education, 18 January, available at: https://www.chronicle.com/article/why-im-not-scared-of-chatgpt
Hyde, G. (2023), “‘Song sucks’: Nick Cave slams AI version of his work as concerns grow about ChatGPT”, The New Daily, 18 January, available at: https://thenewdaily.com.au/life/tech/2023/01/18/nick-cave-ai-copy-chatgpt/
Lawson-Tancred, J. (2023), “Is this by Rothko or a robot? We ask the experts to tell the difference between human and AI art”, The Guardian, 14 January, available at: https://www.theguardian.com/artanddesign/2023/jan/14/art-experts-try-to-spot-ai-works-dall-e-stable-diffusion
Mintz, S. (2023), “ChatGPT: threat or menace?”, Inside Higher Ed, 16 January, available at: https://www.insidehighered.com/blogs/higher-ed-gamma/chatgpt-threat-or-menace
Veglis, A. and Maniou, T.A. (2019), “Chatbots on the rise: A new narrative in journalism”, Studies in Media and Communication, Vol. 7 No. 1, 22 January, doi: 10.11114/smc.v7i1.3986.
Warner, J. (2022), “Freaking out about ChatGPT—Part I”, Inside Higher Ed, 5 December, available at: https://www.insidehighered.com/blogs/just-visiting/freaking-out-about-chatgpt%E2%80%94part-i
Further reading
Mills, A. (2023), “Seeing past the dazzle of ChatGPT”, Inside Higher Ed, 19 January, available at: https://www.insidehighered.com/advice/2023/01/19/academics-must-collaborate-develop-guidelines-chatgpt-opinion