AI and the Humanities: First Questions

Like most middle-aged men, I have a few opinions that have only calcified as I've gotten older. Star Wars: The Last Jedi is a great film; wearing shorts in public should be reserved for children unless the temperature is over 92° Fahrenheit; and, most pressing for my career, all education should be seen as humanities education: anything else is job training (which is important but distinct).

Thus it's with that bias (and some hard-earned cynicism) that I approach any fad entering the classroom. The latest technology promised by many to change not just education but the entire world is, of course, generative AI. So, what is the role of AI in a humanities education?

I'm still collecting my thoughts on the topic, but my initial take is that generative AI is a practical disaster, a plagiarism machine that is destroying the environment and burning through huge amounts of capital while providing virtually no compelling use case for most anybody. That alone should be enough to dismiss it, but the evangelists will say that's all temporary, or at least that those problems pale in comparison to the benefits the world will someday reap as the fruits of AI's labor.

But beyond those practical issues with AI, I continue to wrestle with the philosophical questions raised by tools like ChatGPT in education. Is there a place for generative AI in a human-centered education? It's essentially a calculator, but for words, right? Anybody skeptical is an anti-technology Luddite who wishes students still used slide rules, or perhaps wrote in the dirt with their fingers.

Thoughtful people whom I respect hold variations of that view, and they may ultimately be correct! But I cannot shake the feeling that there's something fundamentally anti-human about the entire endeavor, and thus not only distracting or useless, but actively harmful in education. These dangers strike me as particularly stark in any setting that requires students to think critically, to write their own thoughts, and to respond to and be present with others.

I have dozens of saved blog posts and magazine articles and books on this topic, and I want to better understand my own reactions to this technology. Which of my prior assumptions are based on deep-seated convictions and values, and which persist simply because I'm old and misunderstand the possibilities?

In a recent edition of his newsletter The Convivial Society, titled "To Hell With Good Intentions, Silicon Valley Edition," L. M. Sacasas put forward a series of questions that have stuck with me since I first read them. The entire piece is worth reading, but these questions provide as clear a starting point for me as anything I've yet encountered (emphasis mine):

I think this is it. There is a vision of the good life, a vision of what it means to be human implicated in all of our tools, devices, apps, programs, systems, etc. There is a way of being in the world that they encourage. There is a perspective on the world that they subtly encourage their users to adopt. There is a form of life that they are designed to empower and support.

Is this way of life alive enough to be shared?

If I were to become the ideal user of the technology you would have me adopt, would I be more fully human as a result? Would my agency and skill be further developed? Would my experience of community and friendship be enriched? Would my capacity to care for others be enhanced? Would my delight in the world be deepened? Would you be inviting me into a way of life that was, well, alive?

Pondering these questions as they relate to generative AI and its early uses in and around education forces me to articulate clearly what my ultimate mission is when it comes to education. Individual teachers as well as institutions will need to address these big-picture questions (which can be challenging for some!) to have any hope of making good decisions about the practical uses of this technology.