Harvey J. Graff & Sean Kamperman
Posted June 29, 2025

[Against the Current is opening a discussion in our pages, and online, on the challenges and dangers as well as the potential represented by Artificial Intelligence (AI). Our July-August issue (ATC 237) will include an AI-for-beginners article by Peter Solenberger to initiate the coverage. Among the arenas of the AI debate is its controversial role in higher education. The authors of the following piece urge us not to fall into a binary pro-or-con attitude about AI in the academy.]
LIKE OTHER TECHNOLOGIES, Artificial Intelligence is never an independent variable. It is used — however well or poorly — by humans employing it appropriately and responsibly or not.
Reactions to generative AI, or Artificial Intelligence applied to education, writing, and all forms of communication, are predictably overwrought. To me (Graff) as a social and cultural historian of literacy, they parallel, in forms heightened by today’s social media, historical responses to all major communications and technological breakthroughs.
Examples begin with cave painting, followed by the invention of one alphabet after another, the different writing systems employing them, and block and then movable-type (Gutenberg) printing.
The debate echoes the responses to the invention of manual and then electric typewriters, and now one generation of computers after another. Almost completely positive-versus-negative reactions, seldom interacting or balanced (let alone dialectical), formed most responses to the telegraph, radio, and television.
Successive inventions were viewed as either the salvation or the death of civilization as it was then known. Historical, social, cultural, economic and political context, balanced sensibility, and comparison are rarely evident.
Today’s commentators seldom pause to define specifically what they mean when they refer to AI, or even generative AI. There are many kinds and uses. Even less often do they discuss the specific uses that they laud or fear and condemn.
Human Users
Why has so much of the conversation missed the points (plural)? Among the missing variables are the young or older human users, and how well or poorly they are introduced and taught to use the technologies.
This is the case, for example, in Zahid Naz’s “AI risks undermining the heart of higher education” (Times Higher Education, April 21, 2025). Of course, there is no single “heart of higher education.”
Similar is Juliette Rowsell’s “No student should graduate without being taught AI, leaders told” (Times Higher Education, April 3, 2025). Typically, she does not address which programs, and for what uses. Reception and uses differ across disciplines, a subject that is seldom addressed.
Writer after writer opines, “Is AI Enhancing Education or Replacing It?” (Clay Shirky, Chronicle of Higher Education, April 29, 2025). Another writes, even less knowledgeably, “AI May Ruin the University as We Know It: The existential threat of the new wave of ed-tech” (Chronicle of Higher Education, October 30, 2024).
“The university as we know it” is not defined; how “new wave” is related to that or amounts to “the existential threat” receives no attention.
The answer must always be: How is AI used? Which programs? Toward what ends? AI, in itself, cannot “replace education.”
To be sure, there’s Donald Trump’s temporary Secretary of Education, former World Wrestling Entertainment head Linda McMahon, who has difficulty telling AI and A1 steak sauce apart. This leads to an exodus of self-proclaimed “stakeholders.”
Fears, Hopes and Dangers
Certainly, like all major technologies AI can be dangerous depending on its uses and users. In “How the War in Gaza Drove Israel’s A.I. Experiments,” Sheera Frenkel and Natan Odenheimer detail the risks and sometimes fatal consequences of premature, faulty and inappropriate efforts. (New York Times, April 25, 2025)
Each new invention’s publicity foments a new “myth” of fears and hopes. For example, in 1979, one of us (Graff) coined “the literacy myth” for the exaggerated importance assigned to the individual acquisition of reading and/or writing in themselves, regardless of circumstances or other factors.
This not only unrealistically inflated assumptions about the possession of literacy regardless of actual life circumstances; it also unfairly denigrated those with lesser or different abilities to read, write, and count. These notions underlay centuries of inequalities and limits on opportunities for many. (See The Literacy Myth, 4th edition, WAC Clearinghouse Press, 2023.)
We now have the “AI myth.” A myth, to be sure, is not fiction or falsehood — it is a partial truth. Myths do not spread or become accepted, at least by some, if they are completely false.
Seldom mentioned today is that high school and college students, educators, and researchers were using all the elements that ChatGPT seemingly suddenly collected, assembled, and packaged before it debuted in November 2022. Tools for editing, summarization, outlining, and even analysis have been available for years. Graff’s young friends learned these separate applications in high school and used them legitimately and well through high school and college. That generation had CliffsNotes and printed encyclopedias. World Book and sometimes Encyclopedia Britannica were ubiquitous in middle-class households.
Before ChatGPT, college students and some of their professors used apps and programs to search, sort, organize, and outline. But they almost never submitted final papers derived from separate AI programs. They seldom “cheated” or plagiarized in their uses of AI.
In fact, many of them were unimpressed with the highly promoted debut of ChatGPT. They laughed at it, and at others’ sudden and often uninformed awe, and at some professors’ exaggerated fears of the end of academic honesty and student literacy in its wake.
In the professors’ defense, the unrelenting media hype surrounding these technologies, fueled by ed-tech companies, marketers, and journalists eager to strike it rich in the generative AI gold rush, was bound to take its toll. Institutional, professional, and public responses are increasingly unknowledgeable and often contradict the best interests of students and of their own departments and colleges.
Consider these article titles and headlines in the daily and education press from Inside Higher Ed:
“Eliminate the Required First-Year Writing Course: Students no longer need a required first-year writing course if AI can write for them” by Melissa Nicholas, a professor of English, Washington State University (Inside Higher Ed, Nov. 14, 2023)
She suggests that generative AI “will take care of students’ biggest writing problems” — presumably, by enabling them to produce clear, edited standard American English prose that their engineering and accounting professors won’t scoff at, in all the appropriate genres.
But first-year composition courses do far more than teach these skills. As Nicholas concedes, they are crash courses in critical thinking, analytical reasoning and academic discourse. For many students, these are the only courses that provide opportunities to critically interrogate the messages from state, corporate, cultural and religious authorities that bombard them throughout their entire lives.
Learning to Think and Write
Sweeping aside first-year composition is not the antidote to a world that, since ChatGPT exploded onto the scene, is probably awash in more bullshit than ever before. (Not to mention what the loss of comp courses would do to professor Nicholas’ own department’s enrollment and budgets.)
On the other hand, there is “Things to Consider Before All in Favor Say ‘AI’: Graduate students and postdocs shouldn’t use ChatGPT to help write first drafts, says Jovana Milosavljevic Ardeljan, as it robs them of an important opportunity.” (Inside Higher Ed, Feb. 26, 2024).
Ardeljan is director of career, professional and community development in the Graduate School, University of New Hampshire. She urges students writing research papers, CVs, and cover letters to leave generative AI out of their drafting process, on the grounds it could deprive them of “the opportunity to go through the creative process of writing and producing something that’s authentic and written in our [their] own voice.”
AI, Ardeljan writes, is “a triple-A issue,” presenting questions of “authorship, authenticity, and audience.”
Such blanket advice elides the fact that drafting is not a single or simple process. It is not the same for all writers. It’s a system of interconnected processes involving multiple tasks of organizing, outlining, ideating and revising (something routinely taught in first-year composition).
Discouraging the use of generative AI for such tasks is overly hasty and would likely do many students more harm than good, including students with particular kinds of cognitive, sensory and motor disabilities. It also distracts students from learning to revise.
We suspect that what Ardeljan worries about is students stifling their thinking by using generative AI too early and too often in the drafting process. But knowledgeable graduate and undergraduate students already are using generative AI for a variety of purposes. If we want to truly understand the uses and abuses of these technologies, we should talk to them. As usual, it is never one way or another.
Seductive Marketeering
While writing and AI pontificators continue to oversimplify, others line up to slip-slide away.
Ray Schroeder makes an all-too-common conflation, confusing “more features” with “AI Is Getting Smarter,” without asking about their uses and abuses (Inside Higher Ed, Feb. 28, 2024).
This is uninformed marketeering, a major accompaniment of all new communications technologies. He begins by quoting OpenAI CEO Sam Altman, whose company makes ChatGPT, hardly an objective source.
To Ardeljan’s three A’s, Schroeder adds that we can assess generative AI’s improvements by using the “‘four C’s’ of holistic student improvement”: “Critical thinking: Encouraging analytical and thoughtful decision-making; Communication: Developing effective interpersonal and expressive skills; Collaboration: Fostering teamwork, empathy and cooperation; Creativity: Cultivating innovation and problem-solving abilities.”
Marketing moves to sloganeering. We must take it on faith that ChatGPT is actually getting better at all these things. Schroeder makes no effort to interrelate AI with his lists, apart from noting an expanded feature of GPT-4 that lets users hold conversations with multiple GPTs at once.
This is presumably a step toward collaboration. But collaboration of whom with whom?
As Schroeder demonstrates, much of the discourse of AI, ChatGPT, Generative AI, or GenAI is marketing. That is among the many reasons why the basic definitions of plagiarism and cheating, drafting and revising, general and personal style are absent. It is among the reasons why generative AI is always claimed to be either unlocking or imperiling mass literacy, never doing well or poorly at very specific writing tasks in specific situations.
No wonder higher ed professionals are rushing to pronouncements like Schroeder’s: “Those institutions that are prepared to take full advantage of the expanded abilities [of AI] will surge ahead of their competition in efficiency, effectiveness and student outcomes, especially in preparation for the workplace, where AI skills are increasingly valued.”
Indeed, why should teachers bother using AI detection software at all, he wonders, if “the expectation to embed the product of these tools within daily work will be assumed?”
Or just the opposite?
Moving Beyond Hype
The on-the-ground reality, of course, is that better detection tools are direly needed by English and humanities instructors whose students — as well as their own peers and they themselves — are increasingly using generative AI in haphazard, unethical and uninformed ways. These instructors’ concerns about plagiarism are frequently met with exasperated handwaving by overwhelmed administrators, now armed with the excuse that “AI detection tools are inaccurate and potentially problematic, so why bother?”
This is why workshops and institutes teaching critical AI literacy to teachers and administrators at all levels are urgently needed.
Educators, who are typically underpaid and overworked, need time, space and support to talk to experts, to students, and to each other; to experiment with tools; to learn how large language models and chatbots work and what they actually do; and, most importantly, to slow down, plan, and think about how best to integrate — or not — generative AI technologies into their classrooms.
Until we learn to look differently at large language models and chatbots — as distinct entities, not accurate or distorted mirrors of ourselves — we will fail to truly understand their affordances and limitations, uses and abuses.
As Noam Chomsky observed in an interview with technology reporter Craig Smith on the “Eye on A.I.” podcast, asking whether the way computers learn and use language can explain or approximate how humans use language is about as useful as using a plane to explain how birds fly (Episode #126: Noam Chomsky: Decoding the Human Mind & Neural Nets, Eye on A.I., June 6, 2023).
Let’s understand what we have built on its own terms, and shelve the premature talk of revolutions, whether pro or con.