When Miscommunication Is a Feature, Not a Bug
Kyle Young, Upper School French Teacher, Hathaway Brown School
A recent encounter with a minor AI glitch has changed my perspective on how much perfection we should expect from technology in an educational setting. A few weeks ago, I was listening to my French students’ submissions for a speaking activity, in which each student had carried on a brief spoken conversation with an AI chatbot. The AI platform had provided me with a convenient transcript of each conversation, and I was following along while listening when I encountered a problem: the transcript was not accurate. The student’s pronunciation was comprehensible (albeit not quite native-like), yet the chatbot had misunderstood what she was trying to say. As a result, its reply was a non sequitur, which broke the flow of the conversation.
My initial reaction to this misunderstanding was frustration: the conversation had been ruined by a flawed instructional tool. However, the student’s reply made me rethink that assessment. She confidently corrected the chatbot, explaining that it had misunderstood her and that she had actually said something completely different. After that, the conversation flowed normally and reached its natural conclusion. Instead of derailing the activity, this hiccup gave the student the chance to show off more of her poise and skill.
After this small mishap, it occurred to me that even when students are speaking with a chatbot, they are still negotiating meaning. This process of negotiation naturally requires verifying an interlocutor’s understanding and, at times, clarifying to recover from a breakdown in communication. Indeed, in speaking activities between students, this sort of misunderstanding is to be tolerated, or even celebrated, if the students are ultimately able to use their communication skills to move past it and accomplish the communicative task.
It is tempting, and in many cases justified, to expect perfection from the technology we use to make our lives easier. But when the intended purpose of a piece of software is to simulate human interaction, a certain number of mistakes can actually help it better serve that purpose. When using AI chatbots with world language students, it is worth taking a few moments to coach them on how to handle misunderstandings. Students need to understand that this kind of mistake is not a failure on their part, and that they should approach it as they would with a human conversation partner: by restating and rephrasing as necessary.
Working through this kind of mishap has several potential advantages. First, by realizing that not every difficulty they encounter in communication is their “fault,” students might actually build confidence, even when working through a partially broken conversation. Second, it encourages students to recognize the fallibility of AI and critically evaluate its output, a vital component of AI literacy. Ultimately, an experience like this can help students develop the patience and communication skills they will need to navigate real-world situations where miscommunication is all too common, especially (though certainly not exclusively) when using a second language.
There is, of course, a balance to be struck. If miscommunication occurs often enough to become a constant frustration or an insurmountable barrier when working with AI, it is worth rethinking the use of that tool, as I have done with some chatbots in the past. An occasional need for clarification, however, can actually provide a more authentic experience for learners. To err is human, but perhaps we can occasionally extend that privilege to some of our inhuman tools as well.