AI and the Pedagogical Relationship
AI tools are being pushed as teachers, tutors, TAs, and guidance counsellors, not just by tech companies but by educators themselves. One prominent US administrator, Hollis Robbins, argues that AI should deliver all university-level Gen Ed content. She wants some teachers to stay on as “mentors,” guiding students through the AI system. But an AI-powered “Content Delivery Service” would deliver educational content, a “Project Recommendation Engine” would create projects for students to complete, and an “Assessment and Feedback Service” would assess and give feedback on those projects. (See also her debate with Anastasia Berg.)
These kinds of products are sometimes offered in a bureaucratic spirit: efficiency, individualization, etc. But they’re also given a revolutionary spin. AI tools are supposed to democratize education, making a high-quality education available to all. And, look, if language models stay cheap or open-source, they really will be available to anyone with an internet connection. And since they can already produce good advice and sound explanations, is it a stretch to think they’ll deliver a good education? Isn’t it obvious that they can already fill at least the narrower educational roles?
But as always, the question isn’t just what AI can do (can it educate?), but what education is in the first place. Here’s one answer: it’s a social relationship. We don’t always talk about education in those terms, but teachers and students are people, working together on shared goals, holding themselves accountable to each other’s expectations, needs, and relevant social norms. And this social context seems essential to our teaching.
Consider one aspect of pedagogy: modeling. We don’t just tell students how to construct an argument, disagree productively, or reason about new information. Good teachers do all that themselves, in front of their students (Pace, 2017, Chapter 3; Weimer, 2002, Chapter 3). We model the skills involved in constructing an argument, understanding a disagreement, or analyzing a dataset. But we don’t just model skills. We model the deeper aspects of our discipline: the standards to which we hold arguments; the commitment we make to engage with good faith objections; the care with which we handle new information. We model these qualities by embodying them — by showing students what it looks like to make those commitments, to hold ourselves to those standards (Eyler, 2018, Chapter 2). And we push students to internalize these qualities themselves. They feel our expectation that they will commit to the ideals of our disciplines, and the respect of being held to their high standards (Bain, 2004, Chapter 4). They feel the validation of living up to those standards, and the disappointment when they don't make an honest effort to do so. And they are encouraged in all this by our confidence in them, our trust that they will challenge themselves, and our interest in their developing ideas (Bain, 2004, Chapter 6).
To come to the point, AI does not seem to support this kind of social relationship. AI tools don’t have the mental states required to make commitments or hold themselves to standards. Until they have those qualities, they can model skills — like constructing an argument — but they cannot embody the deeper ideals and standards of our disciplines. And while AI tools can at least ask students to perform to a certain standard, it’s not clear that they are a source of expectations, respect, interest, validation, or any of the other qualities mentioned above. This is an old point. Almost 50 years ago, Weizenbaum cautioned against the use of his therapy chatbot in clinical settings because, as realistic as its text messages might be, patients knew that those messages did not express a real person’s acceptance, solidarity, or validation, and that knowledge undermined the point of the therapeutic encounter (Weizenbaum, 1977).
I don’t think this has changed. The most intentionally and thoroughly anthropomorphic AIs we have are chatbot companions, which mostly just manage to make users with a mental illness more anxious, depressed, and lonely (Laestadius et al., 2024; Skjuve et al., 2022; Xie et al., 2023). They are human-like enough to convince many people to treat them like mentors or friends or romantic partners. (This is not just saying “thanks” to Siri — it's bringing your chatbot girlfriend home for Thanksgiving.) But even if they elicit strong emotions and apparent social connection, these companions do not have the same effect as real social relationships. Real relationships, and the kind of validation, acceptance, and camaraderie we feel in them, typically make us less lonely and depressed, not more. Whatever we get out of these chatbots, it’s not what we get out of real relationships, and it doesn’t have the transformative effect that real relationships can have.
What this all suggests is that AI teachers, tutors, and guidance counsellors will not support the social relationship those roles require. They will not be a source of expectations, respect, or trust. Their confidence in you will not be the transformative kind of confidence a teacher can place in a student. You will not work as though your “Assessment and Feedback Service” will be proud when you give your best effort, or disappointed when you do not. And the issue isn’t just that AI can’t yet produce a good facsimile of human behavior. It’s already good enough to pass versions of the Turing Test. The issue, like with AI companions, is that we know there is nothing behind the text, nothing holding itself to a standard or expecting the same from us, nothing with the capacity to show us real respect or confidence. As with any social relationship, what is motivating and potentially transformative is not the behavior but what lies behind it. And as much as we might play our part in the back-and-forth of an AI tutoring session, we can’t work ourselves up into a real social relationship with the empty space behind a language model’s output.
* * *
There are lots of concerns about AI educational tools. Maybe they leave students unskilled and dependent. Maybe they’re sycophantic and encourage illusions of understanding. But even if we solve their technical problems, they still change the social nature of teaching and learning. They change the human relationship on which learning depends. This is not necessarily an argument against AI. There are situations where human factors and social relationships are hindrances. If a student is embarrassed to raise a taboo or controversial idea with a potentially judgmental teacher, we’re probably better off if they can consult an AI tutor instead of the 4chan board where that idea is typically discussed. But if I’m right about what education is, then these tools aren’t enough to democratize it — we need to democratize access to teachers, not just technology.
I don’t think I’ve said anything too surprising about the nature of teaching or the student-teacher relationship. If you’ve tried to learn Python but haven’t been able to stick with your Codecademy course, you understand what education looks like with the human factors removed. I don’t think I’ve said anything too controversial about AI or its capacity to replace social relationships, either. But, together, these considerations give us a useful way of thinking about AI in education. With every application of AI, we should ask: When I use AI for this part of my teaching, how am I changing the social dynamics that charge the classroom, motivate students, and drive learning?
The Challenge
Think of something you want to learn but haven’t. Not a curiosity — something that would be genuinely useful for you.
- What has kept you from learning it?
Ask a chatbot to start teaching it to you. Dedicate at least 15 minutes to learning from it.
- After the first session, do you think you’ll continue?
- Was the chatbot effective? What did it do well? What did it do poorly?
- Does the type of relationship you can have with a chatbot seem to make a difference to any of the above?
Think about a part of your own teaching — your syllabi, lesson plans, slides, assessments, feedback, …
- What do your students need from this part of your teaching?
- Does the way you meet that need depend on your relationship with your students?
- Given your answers above, what would happen if you used AI for this part of your teaching?
References
- Bain, K. (2004). What the best college teachers do. Harvard University Press. https://doi.org/10.2307/j.ctvjnrvvb
- Eyler, J. R. (2018). How humans learn: The science and stories behind effective college teaching. West Virginia University Press. https://wvupressonline.com/node/758
- Laestadius, L., Bishop, A., Gonzalez, M., Illenčík, D., & Campos-Castillo, C. (2024). Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society, 26(10), 5923–5941. https://doi.org/10.1177/14614448221142007
- Pace, D. (2017). The decoding the disciplines paradigm: Seven steps to increased student learning. Indiana University Press. https://doi.org/10.2307/j.ctt2005z1w
- Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903. https://doi.org/10.1016/j.ijhcs.2022.102903
- Weimer, M. (2002). Learner-centered teaching: Five key changes to practice. Jossey-Bass. https://www.wiley.com/en-us/Learner%2BCentered%2BTeaching%3A%2BFive%2BKey%2BChanges%2Bto%2BPractice-p-9780787966065
- Weizenbaum, J. (1977). Computers as “therapists.” Science, 198(4315), 354. https://doi.org/10.1126/science.198.4315.354
- Xie, T., Pentina, I., & Hancock, T. (2023). Friend, mentor, lover: Does chatbot engagement lead to psychological dependence? Journal of Service Management, 34(4), 806–828. https://doi.org/10.1108/JOSM-02-2022-0072
Disclosure
This post was written without AI assistance, because I think that to seriously engage with a challenge like this, you must trust that it is issued in good faith by someone who respects the time and effort they are asking from you. I suspect that certain uses of AI would undermine that trust.