AI thinkers for English teachers

A page that gathers together the thinkers and writers I am finding most useful in considering the implications of AI for my subject, English. I plan to update it whenever I find another helpful voice. There are no mindless boosters here, because you can easily find those everywhere else.



Dr Abeba Birhane is the founder and leader of Trinity College Dublin’s Artificial Intelligence Accountability Lab. On Bluesky. Particularly valuable for exposing the ethical underpinnings of the technology:

AI researcher whose pioneering work drew mainstream attention to the toxic contents of datasets powering gen AI, (and) is leading a new research lab aiming to counteract regulatory capture by tech corporations by offering evidence-driven and research-based policy advocacy.


Josh Brake in The Absent-Minded Professor, ‘a weekly outlet for manifestos on technology, education, and human flourishing.’ Brake teaches Engineering at Harvey Mudd College in California.

As we think about AI, we must ask: to what degree does AI help us achieve our educational goals? And not just the learning goals of a specific assignment, but the broader overarching goals of our programs—the character traits and virtues we want to instill in our students. Are the major selling points of enhanced “efficiency” and “productivity” offered by AI assistance really the values we want to instill in our students? I think not.


Daisy Christodoulou has been thinking forensically about education for some years, including technology (see her book Teachers vs Tech? The Case for an Ed Tech Revolution). Her Substack at No More Marking with Chris Wheadon regularly addresses issues like assessment and writing.


Leon Furze has lots of guidelines and practical suggestions on his site.

The problem is, OpenAI clearly doesn’t want to help writers. It wants to replace them. It wants to suck the joy out of writing and replace it with efficient, easy to understand microprose that goes down reeeeeal easy. Fast food for readers. Slop.

As a writer, a teacher of writing, and someone that actually gets a lot of use out of these technologies, it stings. I’ve written plenty of posts about how AI can be used as part of the writing process, teaching writing with and against AI, and how I use Generative AI myself as an author. But the way I use AI and the way I encourage students and teachers to think about AI is at odds with the message coming from the loudest AI developer of them all.

And I’m worried that the message will ultimately get lost in a sea of efficient, readable, emoji-riddled goo.


Ethan Mollick is far more positive than most on this page about AI. A Professor at the Wharton School at the University of Pennsylvania, he is right at the cutting edge of the technology. But he is rigorously evidence-based, and at One Useful Thing he always writes with clarity and coherence. Sample: 15 Times to use AI, and 5 Not to:

Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others.


Benjamin Riley founded Cognitive Resonance and is a vigorous critic of many uses of AI. He writes a Substack, and on the Cognitive Resonance website you can download for free an excellent short guide called ‘Education Hazards of Generative AI.’

We see AI as a useful tool, but one with predictable strengths and limitations. Long term, Cognitive Resonance will be successful if generative AI is no longer treated as an inexplicable “black box,” and is used in ways that are socially beneficial and in harmony with human cognition.


Jane Rosenzweig regularly considers writing in the AI age in Writing Hacks, and she is also launching The Important Work:

I chose the name as a nod to the fact that we’re often told AI tools will free us up to do “the important work”—but for those of us who teach writing, the writing itself has always been the important work, which raises questions about what writing instruction looks like in this new era … Each newsletter will be a dispatch from someone’s classroom—a reflection on an assignment that incorporates AI or one that actively doesn’t, a reckoning with what we’re gaining and losing, a call for advice or feedback from others who are experimenting in the classroom.

The first edition of The Important Work is by writer and teacher Spencer Lane Jones on ‘What are students using AI for?’, and she refers to the ways AI is being used to short-circuit human reading as

perhaps the most concerning set of responses. The short-form survey responses suggest that students are using AI to read for them as much as, if not more than, they are using it to write for them. This is a particular phenomenon that may not yet be getting the attention it deserves in mainstream media.


L.M. (Michael) Sacasas in The Convivial Society sends out ‘a newsletter exploring the relationship between technology and culture. It’s grounded in the history and philosophy of technology, with a sprinkling of media ecology.’ Sacasas writes in a profoundly well-considered way, and I believe is one of the most valuable voices of all at this time in the development of technology. Highly recommended.

[Life Cannot be Delegated] I am inviting us to critically consider at the outset where the thresholds of delegation might be for each of us. And these will, in fact, vary person to person, which is why I tend to traffic in questions rather than prescriptions. I am convinced that these are matters of practical wisdom. No one can set out a list of precise and universal rules applicable to every person under all circumstances. Indeed, the temptation to wish for such is likely a symptom of the general malaise. We must all think for ourselves, and in conversation with each other, so that we can arrive at sound judgments under our particular circumstances and given our particular aims.


Eryk Salvaggio is at Cybernetic Forests. His Bluesky Critical AI Starter Pack is a handy way to follow many more useful voices.

Aside from the ethics of the technology, AI raises significant concerns over our conceptual frameworks. These are highly seductive technologies, prone to be trusted without evidence and relied upon in lieu of human reasoning. Misunderstanding how they function increases the risk of being deployed in contexts where these functions create, rather than remedy, harms. But technical expertise in a system is limited to the system — it often fails to account for what happens beyond the scope of the tool's most direct use case.


John Warner in The Biblioracle is one of the most eloquent AI sceptics around. His background is deeply bookish, and his own forthcoming book More Than Words: How to Think About Writing in the Age of AI looks likely to be essential reading. A vigorous, passionate, well-informed voice.

It’s possible that one of the things we (as in society collectively) will decide is that students don’t need to learn to write anymore, since we have technology that can do that for us.

I think this would be a shame because one of the things I value about writing is the act of writing itself. It is an embodied process that connects me to my own humanity, by putting me in touch with my mind, the same way a vigorous hike through the woods can put me in touch with my body.

For me, writing is simultaneously expression and exploration.

In a piece like this, writing is the expression and exploration of an idea (or collection of ideas). It is only through the writing that I can fully understand what I think.


Marc Watkins in Rhetorica is one of the most thoughtful and consistently helpful thinkers about AI and education. In 2024 he wrote a series called ‘Beyond ChatGPT’ which considered how

this emerging technology is transforming not simply writing, but many of the core skills we associate with learning. Educators must shift our discourse away from ChatGPT’s disruption of assessments and begin to grapple with what generative AI means for teaching and learning.


Audrey Watters has a long history of ferocious scepticism about technology, and after a break has now turned her attention back to it in Second Breakfast (‘Essays on engineering bodies and minds’):

Conveniently, generative AI promises now to both churn out prose – too much, too banal to read, let's be honest – and because it's so much and so banal, to summarize all this content in turn. A lot has been written about how generative AI undermines the practice and profession of writing; but clearly it does the same for reading too. Why write if only a robot will read it; and why read if a robot has written it?


Ben Williamson is Senior Lecturer and co-director at the Centre for Research in Digital Education at Edinburgh University. On Bluesky.

The huge excitement right now about AI in schools really needs tempering with some historical reality-checking. It won't be ‘different this time’ because the tech is only one part of a bundle of financial and political desires, as tech in schools always has been.