Is ChatGPT Cheating?
The Pedagogical Ramifications of Generative AI
Anonymous       ⦿       October 23, 2024
Prelude
The future of writing is here. Ready or not. Like it or not. With notable yet niche exceptions, this emergent epoch is one in which the skill of writing will be fundamentally different from what it was at any point in the long history of this tekhne. The future of lay writing is not scrivening but crowdsourcing.
Confession
1983. On a snowy Friday in a Cincinnati suburb, I trudged home through the slush and sludge in a bewildered stupor. I recall the steadfast certainty that our teacher had grossly overestimated the skill sets of her charges. A Christmas poem?!? We were in 3rd grade. We could barely construct a complete sentence, never mind some sort of condensed and playful collection of tidily bound lines with syllabic meter, a rhyme scheme, coherence and meaning, a beginning-middle-and-end. That was elite level stuff. She might as well have asked us to draft the blueprints for a major metropolitan hospital. “Here’s some paper and a slide rule, kid. Get busy!”
I remember this assignment so well because it is my earliest memory of cheating on an assignment.
Plagiarism-adjacent Writing
As a 3rd grader, I’d never heard of plagiarism. Of course, I’d never heard of “robust” either, which turned out to be my undoing. If only I had ChatGPT, I could have asked it to stick to a 3rd-grade vocabulary.
I am happy to report that I came to deeply prize original thought, a strong vocabulary, and an ability to combine the two, if not in poetic verse, then in not-irremediably-terrible prose. Eventually I learned that plagiarism is not only unethical but unnecessary. There are countless acceptable plagiarism-adjacent alternatives: in-line quotes, block quotes, rephrasing, paraphrasing, summarizing, allusion, homage, intertextuality, cento, parody, pastiche, fan fiction, adaptation, and perhaps even convention itself in its many guises.
Of the plagiarism-adjacent forms of acceptable writing, quoted text is perhaps the most telling. Given the ubiquity of the form, it is beyond dispute that the act of copying the words of another verbatim is not in-and-of-itself a crime. No, it is author occlusion, especially as a matter of deception, that is the crime.
Generative AI
So what about text that was not copied from any other author, which has no duplicate anywhere in the world, but which was 0% written by the hand of the putative author?
Interestingly, texts generated by AI share many qualities with other plagiarism-adjacent forms of writing. The AI mimics writing styles and formats (pastiche) and produces text that is, by its very nature, elite-level rephrasing, paraphrasing, and summary.
Given the similarity, can the criteria used to govern appropriate use of someone else’s work (directly or indirectly) be repurposed for generative AI? At minimum, cited or excerpted text should adhere to the following:
- it should be transformative;
- it should be teleologically appropriate (intended to teach, illuminate, analyze, contrast, extend, demonstrate, evince, etc.);
- it should abide by accepted law and professional standards;
- it should have integrity (excerpted text should not be misrepresented or distorted).
While these criteria, when taken at face value, do not appear well-suited to generative AI, the spirit of each can perhaps offer some guidance.
Original thought. The first criterion is a matter of originality or novelty; there must be something about the manner in which the excerpt fits within the larger body of writing that communicates something unique, something different from that which it communicated in its original context, without running afoul of the fourth criterion.
Upright intention. The second criterion is a sort of virtue-based moral clause aimed at intention; when done properly, excerpts serve a noble cause and not selfish ends.
Standards. The third criterion is a matter of protocol: formal structures that set expectations and underwrite a sort of insurance for the first two criteria. Plagiarism-adjacent writing is only to be abided when it adheres to certain formal standards, standards that invariably mark it, set it apart, put it on display in such a way that it calls attention to itself (footnotes, in-line citations, block-quote formatting, etc.).
Intellectual honesty. The fourth criterion is also a sort of moral clause, one that connects back to the original sin of plagiarism: intellectual dishonesty. Distorting or misrepresenting an author’s work is the flip side of the plagiarism coin. Not crediting an author for what they have done and crediting an author for something they haven’t done are both types of intellectual dishonesty and not to be tolerated.
These four spirits are not bad starting points for creating guidelines for the appropriate incorporation of text-generative AI into the writer’s toolbox. However, given the wide-ranging utility of the AI models already available, these rules are perhaps too specific to writing. It seems worthwhile to aim at a more general set of guidelines that can incorporate these insights while achieving broader purchase across use cases.
Beyond Writing
Thus far I have focused on writing, but people are engaging with LLM-driven generative AI in myriad ways beyond help with academic composition: interpersonal communication; companionship; therapy; advice; idea sharpening/testing/expanding; legal or other professional guidance; learning; editing; feedback; outlining; satiation of curiosity; mining data and/or information; distilling information; coding assistance; help with school and/or job assignments; design ideas; fact-checking; resource creation; and more.
Clearly, several of these use cases raise thorny questions for educators and parents. Instead of attempting to offer moral guidance on “right” and “wrong” ways to engage with generative AI, it might be more helpful to speak in terms of healthy and unhealthy. Right and wrong, like good and evil, have more to do with moral trespass and the levying of punishments. Health is a matter of habit and lifestyle, of the cumulative and long-term effects of various behaviors. I think of it less as a more apt metaphor than as a more apt epistemology. And, as with just about anything in life, including revolutionary and disruptive technology, there are ways of engaging it that are beneficial, healthy, and ethical, and other ways that are diametrically opposed to these.
Healthy vs. Unhealthy Ways for Students to Use Text-Generative AI
LLM-driven generative AI models are powerful and can be an extraordinary boon to educators and students alike. To my mind, healthy, beneficial, and ethical ways of engaging this technology include the following:
- to learn about a topic
- to explore a question
- to conduct research
- to learn/practice a language
- to organize thoughts, writing, etc.
- to test ideas
- to improve a skill
- to engage a topic more deeply
- to debate a position
- to aid in study
- to simulate a conversation (like an interview)
Unhealthy, counterproductive, or unethical ways of engaging this technology include using it for the following:
- to dodge responsibility (which includes accepting the output uncritically)
- to bypass learning (e.g. to just get homework done, circumventing the learning process to merely check a box)
- to deceive
- to manipulate
- to avoid human collaboration and/or interaction
- to appropriate ideas
Best Practices
In order to maximize the benefit and minimize the harm of using LLM-driven generative AI models as an educational support, we can craft and regularly refine a sort of policy of best practices. I offer the following as a rough draft of sorts, as a starting point for a much more expansive conversation and effort:
- Students should be taught how to be proficient users, adept at generating the kind of responses they are looking for; prompt engineering skills might include:
- role assignment
- step-wise reasoning
- style guidance
- platform navigation
- chat naming and tracking
- project and/or notebook management
- eliciting citations
- Students should be taught to be perpetual skeptics, always in doubt about the veracity of the output.
- Students should be taught about available features and their use cases:
- text generation
- visual analysis
- speech recognition
- coding
- image generation
- data analysis
- textual analysis
- math capabilities
- language translation
- Students should be taught to be transparent about the use of these supporting tools to whatever extent required (we may eventually reach a point, as we did with calculators, that it is expected and thus assumed in particular situations and thus unnecessary to call attention to it).
- Students should be taught how to use these models and the internet at large to trace the genealogy of the ideas with which they are presented.
- Perhaps most importantly, students should be taught how to develop their own original lines of thought and how to inject those into their queries and prompts to elicit original, interesting, engaging, and compelling responses.
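To make the prompt-engineering skills in the first bullet concrete, here is a minimal sketch in Python of how role assignment, step-wise reasoning, style guidance, and citation elicitation might be combined into a single prompt. The helper function and every string in it are hypothetical teaching aids, not any platform’s API:

```python
def build_prompt(topic: str) -> str:
    """Assemble a prompt that layers several prompt-engineering techniques.

    Each piece below is a purely illustrative example of one technique
    from the list above; none of it reflects a real product's interface.
    """
    role = "You are a patient high-school writing tutor."  # role assignment
    steps = ("First restate the question, then outline the key ideas "
             "step by step, and only then draft an answer.")  # step-wise reasoning
    style = ("Use plain, ninth-grade vocabulary and cite any sources "
             "you rely on.")  # style guidance + eliciting citations
    return f"{role}\n{steps}\n{style}\nTopic: {topic}"

prompt = build_prompt("the causes of the printing revolution")
print(prompt)
```

A student fluent in this kind of layering can steer a model toward responses pitched at the right level, shown with their reasoning, and backed by checkable sources.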
The future is upon us. It is here. Instead of denial or panic, we need to adapt. Adapting, in this case, necessitates a new set of standards. Educators, parents, and scholars need to embark on this process—the process of crafting a sensible set of guidelines for the appropriate use of generative AI—as soon as possible. In a perfect world, we’d have had the guidelines prior to the technology. This is not a perfect world. But it is an exciting one!
Vestigial Skills
vestigial—degenerate, rudimentary, or atrophied, having become functionless in the course of evolution
In the more optimistic versions of conversations about LLMs and AI chatbots, I hear much ado about disruptive technology, about how there is always a period of discomfort and uncertainty as society adjusts to its brave new world. Granted. But what precisely is the relation between discomfort and disruption? A good way to get at this question is to examine the impact of disruptive technologies on society in terms of their ramifications: which sectors of society are transformed? Which norms are reconfigured? Which industries go extinct? And in relation to all of these, which skill sets become attenuated to the point of obsolescence?
Vestigial skills are those skills that, owing to their long tenure as indispensable bits of know-how, linger long after their utility has expired. Once upon a time, being able to write in cursive was a vital skill. Owing to its speed and efficiency, it had become the standard for handwritten documents and correspondence. Thus the ability to read it was at least as important as the ability to write it. Cursive survived the printing press, but eventually the advent of the typewriter, the word processor, and ultimately the home computer rendered cursive obsolete. And now, with the ubiquity and accessibility of digital text, combined with advancements in speech-to-text technology, text-generative AI, and even nascent thought-to-text interfaces, handwriting in general is increasingly impractical and unnecessary in a surprising number of contexts.
The persistence of vestigial skills presents a serious problem in education. Outmoded training regimens function like cholesterol in the pedagogical arteries. There are approximately 30 hours in a school week. When this limited resource is devoted to teaching skills that have little relevance in contemporary society, it prevents the allocation of that time for developing crucial 21st-century competencies. Cursive exemplifies this issue. Instead of being relegated to the realm of hobbyists along with calligraphy, it has enjoyed a resurgence. Several states have recently mandated its inclusion in elementary curricula, complete with proficiency standards. And so cursive still lingers, a quintessential vestigial skill, squandering precious classroom time.
Making Cuts
In the shadow of LLMs and generative AI, the debate over cursive seems almost quaint. All of a sudden we are faced with far more pressing and complex questions—questions for which we are woefully unprepared. While still quibbling over the merits of an 8th century technology, we have been caught off guard by the rapid advancements in artificial intelligence.
Debates over how and when to update our pedagogies have proceeded within a bubble of denial regarding the breakneck pace of technological progress. The evolution of technology alone should have compelled us to reassess our curricula, calling to account deeply entrenched vestigial skills that occupy an outsized space in education relative to their societal relevance. Calculators became ubiquitous in the mid-1970s: is long division now a vestigial skill? Spell-check and auto-correct have been commonplace since the 1990s—longer than our current students have been alive. When do we finally ask: has spelling become a vestigial skill?
If we are unable (or unwilling) to come to terms with cursive, how can we hope to tackle more contentious issues like long division and spelling? And now, without having grappled with the growing bloat of vestigial skills clogging our schools, we face a much more daunting and urgent question: with LLM-driven generative AI models, is writing itself on the precipice of becoming a vestigial skill?
If so, do we embrace this shift and adapt? Or do we resist it, gnashing our teeth all the way?
The Predictable Fate of the Christmas Poem Cheater
I am not sure that my knack for writing poetry has improved a whole lot since third grade. If push came to shove, I could pen something passable as a poem. But, in what world would there be such push, let alone shove? ChatGPT won’t make me a great poet. Nor a great writer. And that’s okay. The quality of my life has not suffered owing to the mediocrity of my writing skill. For most of us, a task that is as commonly tedious as it is, well, common, has just become a lot less of the former.
The assistance of LLM-driven, text-generative AI models can make most of us better, more efficient writers. Of course, there will always be a place in highly circumscribed sectors of society for those gifted few who have genuine talent in wordcraft. The ability to write, to truly write, will still be highly prized in literature, journalism, law, history, and the like. For the rest of us, this novel, plagiarism-adjacent form of acceptable writing can let the ideas themselves take center stage without our getting immobilized in the thicket of composition mechanics. Or, at worst, when properly used, it can give us insight into how to be better at intertextuality, critical thinking, and organizing our thoughts for the purpose of exchange.