Teaching Critical Thinking in the Age of AI
“In class, the professor was discussing binaries, but I didn't really understand what they were. I asked ChatGPT about it, and it helped me figure it out.”
This is probably one of the most unsettling reflections I have ever read from a student. Yet as we struggle in higher education to figure out how to integrate AI into our classrooms, it is also one of the most exciting. I want to tell you how I have finally – after three long years of experimentation and wanting to give up – figured it out: I’ve basically made AI my co-instructor.
Listen, for example, to what another one of my students wrote: “What I found most useful about the conversation was how ChatGPT kept asking me questions that pushed me to explain my thinking more clearly. Instead of just agreeing with me, it challenged me to think about counterexamples and differences. Isn’t that what real learning should look like?”
Before you start hate-bombing my email, let me provide some context.
Higher education has been stuck in its own binary thinking about whether or not we should use AI. But that is the wrong question, because all of us now face a crisis of purpose. (And please, spare me the nostalgic reminiscing about handwritten essays and “unplugging” from the technological matrix; those arguments misunderstand how AI has fundamentally and irrevocably broken the transmission model of education.) The correct (and urgent) question is: how do we use AI?
So, to my class: I teach a general education course entitled “Ethics, Society & Identity,” in which I help students grapple with complex and contested issues. Think of any hot-button topic – the “myth of meritocracy,” how race is a “social construct,” the implications of generational poverty – and you’ll realize how daunting such topics are to teach today. It doesn’t help that I have anywhere from 50 to 100 students in this course in any given semester; the research shows that large classes foster student disengagement.
That is why I require my students to talk to AI every week. For every single class and every single subject, I build a prompt that students must use to talk with AI, and then reflect on the experience and what they learned. The results have been transformative, far beyond my expectations. My students have been even more surprised: “I think that chatting with AI has helped me a lot to become more open-minded and think deeper into what we are learning about. When you first assigned these, I never thought I would get much out of it and that it was almost weird, but, after doing it a few times, now it has really helped push me to dig deeper in my learning.”
I want to be clear: this is not an abdication of my role as a professor; it is my realization that AI may actually be the perfect facilitator for fostering genuine habits of critical thinking.
There are two key components to this realization.
The first, and most important, is that it is my job to “academicize” any and all topics that I teach. Stanley Fish, for over twenty years, has beaten this singular drum: “The imperative [for instructors] is to ‘academicize’ the subject; that is, to remove it from whatever context of urgency it inhabits in the world and insert it into a context of academic urgency.” For Fish, this is the key tenet of academic freedom in the college classroom and the guiding principle of professors and students “jointly engaged in an intellectual effort to understand something.”
So here’s the thing: while I am pretty smart about the subjects I teach, AI is by now just as smart as I am and can – if prompted correctly – engage students in “academicizing” any topic at any time and at any level of sophistication. One of my students, reflecting on figuring out what a “social construct” meant, wrote that she used AI “because I feel it's the most helpful resource that makes me understand topics a lot easier. Especially since learning how to use it the correct way in this class.”
This gets to my second realization. Look, I pride myself on helping my students engage in “intellectual effort” by developing engaging lectures, encouraging students to come to my office hours, personally responding to every single student’s reflection, staying after class to talk, and creating Zoom meetings anytime a student wants to brainstorm an idea or doesn’t understand the class lecture.
But AI, I have to admit, may – with the right prompts – be better than me. It is always available, always patient, always clear, always ready to clarify any issue a student may bring up. There is literally no way for me to do that individually for 100 students, especially when one realizes that probably fewer than half of all students are engaged enough even to ask for help. “Talking with the AI felt like bouncing ideas off a friend who kept me grounded,” another student wrote about his preparation for the midterm paper. “The AI kept asking follow-ups that made me refine my thinking step by step.” It’s kind of like having a personalized mentor in your back pocket.
This is probably the most unsettling moment of my academic career. And, dear reader, I imagine that you, too, may feel a great deal of trepidation about this new reality. But for the first time in years, I feel hopeful. If my students can learn to think more carefully about some of our most contentious societal issues, and if they can learn that binary thinking forces us to oversimplify into problematic either/or extremes, I’m sure that we, too, can figure out the way forward.