posted on 28 Feb 2025 by chek
The Quest for Immersive LLM-based Tutors
We aim to inform the design of modern LLM-based chatbot-style tutors for student learning.
We started by investigating how different visual and auditory representations of avatars in AI chatbot tutors influence students’ learning experiences.
Our current study focuses on a real-world educational environment, using a dedicated AI Tutor platform with three distinct avatar representations: a text-only chat interface, a real-time video deepfake of the course lecturer, and a neutral non-human 3D character.
The platform was used by students in an SIT design thinking course to explore the effects of the different avatar interfaces on learning. The study employed a mixed-methods within-subjects design, combining qualitative analysis of interviews and open-ended responses with quantitative analysis based on established questionnaires: the Technology Acceptance Model (TAM), the Situational Motivation Scale (SIMS), and the Basic Psychological Needs Satisfaction scale (BPNS).
Contrary to the negative connotations commonly associated with deepfakes, our most interesting findings concerned the deepfake mode of the AI Tutor.
This underscores the potential of leveraging pre-existing social relationships to make AI learning tools more effective. It also indicates that familiar avatars can effectively extend learning beyond the classroom, offering a more continuous and immersive educational experience.
Chek Tien Tan, Indriyati Atmosukarto, Budianto Tandianus, Songjia Shen, and Steven Wong. 2025. Exploring the Impact of Avatar Representations in AI Chatbot Tutors on Learning Experiences. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 12 pages. (preprint)
[WIP] An open-source, locally deployable, customizable AI tutor system that uses retrieval-augmented generation (RAG) to contextualize responses with course material.
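The core RAG idea can be sketched in a few lines: retrieve the course snippets most relevant to a student's question, then prepend them to the prompt sent to the LLM. The sketch below is illustrative only, assuming a toy corpus and bag-of-words cosine similarity; the actual system's retriever, corpus, and model are not specified here, and a production setup would use dense embeddings, a vector store, and a real LLM call in place of the final prompt string.

```python
# Minimal RAG sketch (illustrative, not the actual AI Tutor implementation).
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens, punctuation stripped.
    return re.findall(r"\w+", text.lower())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # Rank course snippets by similarity to the query and keep the top k.
    q = Counter(tokenize(query))
    ranked = sorted(corpus,
                    key=lambda doc: cosine(q, Counter(tokenize(doc))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    # Contextualize the LLM's response with the retrieved snippets;
    # a real system would send this prompt to the model.
    context = "\n".join(retrieve(query, corpus))
    return f"Use only this course context:\n{context}\n\nStudent question: {query}"

# Hypothetical course snippets for demonstration.
corpus = [
    "Design thinking begins with empathizing with users.",
    "Prototyping lets teams test ideas cheaply and quickly.",
    "The course final project is a group presentation.",
]
print(build_prompt("How do we start with design thinking?", corpus))
```

Because the retrieved context is injected into the prompt rather than baked into the model, the same tutor can be re-grounded on new course material by swapping the corpus.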
This research is supported by SIT’s Centre for Digital Enablement (CoDE).