AI in Education – Can robots be trusted?

The researchers in the project aim to improve our knowledge about how AI can be used responsibly in education.

Education faces significant changes as language models such as ChatGPT are increasingly used by teachers and students alike to search for, present and test factual knowledge.

Humanoid robots are also being used more frequently in education because their interaction mimics that of a human teacher, potentially having a more positive impact on students’ learning than other educational media. As these robots start using language models to converse with students, there is also greater scope for natural conversations.

Yet it is well known that language models sometimes hallucinate, i.e. they present false information with great confidence. This raises two fundamental questions when language models power social robots that interact with students:

  • Can students trust the facts presented by the robot?
  • Can a robot use words, voice and facial expressions to signal how confident it is in the accuracy of its information, and thereby help students correctly evaluate assertions made by language models?

These questions are more complex than they might first appear, particularly since different groups of students are more or less inclined to trust facts presented by robots. It is therefore vital that teachers can identify students at risk of being misled and that students are trained in critical thinking to evaluate claims made by a robot or language model.

The researchers in the project will investigate how various student characteristics (e.g. gender, attitude toward and experience of AI technology, educational level) impact their ability to determine whether a robot’s statements are true or false. They will also test whether students’ critical thinking during conversations with AI can be strengthened through practice.

When assessing whether to trust another person’s assertions, we rely heavily on their choice of words, tone of voice and facial expressions. Current language models, however, present both true and false information with the same degree of conviction, making it hard to judge their trustworthiness. Humanoid robots, by contrast, could use voice and facial expressions to convey how confident they are in their claims, helping students to evaluate their accuracy.

The research team will study how students’ critical thinking is influenced by whether the robot appears confident or hesitant. The aim is to enhance knowledge about how AI can be responsibly used in education with diverse groups of students.

Project:
“Learning interactions with hallucinating robots”

Principal Investigator:
Professor Olov Engwall

Co-investigators:
Olga Viberg, KTH Royal Institute of Technology
Katie Winkle, Uppsala University

Institution:
KTH Royal Institute of Technology

Grant:
SEK 3.9 million