Tutor In-sight: Guiding and Visualizing Students' Attention with Mixed Reality Avatar Presentation Tools
Authors
Thanyadit, Santawat; Heintz, Matthias; Law, Effie Lai-Chong
Abstract
Remote conferencing systems are increasingly used to supplement or even replace in-person teaching. However, prevailing conferencing systems restrict the teacher’s representation to a webcam live-stream, hamper the teacher’s use of body language, and result in students’ decreased sense of co-presence and participation. While Virtual Reality (VR) systems may increase student engagement, the teacher may not have the time or expertise to conduct the lecture in VR. To address this issue and bridge the requirements between students and teachers, we have developed Tutor In-sight, a Mixed Reality (MR) avatar augmented into the student’s workspace based on four design requirements derived from the existing literature, namely: integrated virtual with physical space, improved teacher’s co-presence through avatar, direct attention with auto-generated body language, and usable workflow for teachers. Two user studies were conducted from the perspectives of students and teachers to determine the advantages of Tutor In-sight in comparison to two existing conferencing systems, Zoom (video-based) and Mozilla Hubs (VR-based). The participants of both studies favoured Tutor In-sight. Among others, this main finding indicates that Tutor In-sight satisfied the needs of both teachers and students. In addition, the participants’ feedback was used to empirically determine the four main teacher requirements and the four main student requirements in order to improve the future design of MR educational tools.
Citation
Thanyadit, S., Heintz, M., & Law, E. L.-C. (2023, April). Tutor In-sight: Guiding and Visualizing Students' Attention with Mixed Reality Avatar Presentation Tools. Presented at CHI '23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany.
| Field | Value |
|---|---|
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | CHI '23: CHI Conference on Human Factors in Computing Systems |
| Start Date | Apr 23, 2023 |
| End Date | Apr 28, 2023 |
| Acceptance Date | Mar 1, 2023 |
| Online Publication Date | Apr 19, 2023 |
| Publication Date | Apr 2023 |
| Deposit Date | Mar 1, 2023 |
| Publicly Available Date | Mar 2, 2023 |
| Pages | 1-20 |
| DOI | https://doi.org/10.1145/3544548.3581069 |
| Public URL | https://durham-repository.worktribe.com/output/1134061 |
Files
Accepted Conference Proceeding (PDF, 24.3 MB)
Copyright Statement
© ACM 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3544548.3581069