Abstract
This paper introduces the Visual Inspection Tool (VIT), which supports researchers in annotating multimodal data and in processing and exploiting that data for learning purposes. While most existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT flexibly supports data annotation for different types of learning tasks captured with a customisable set of sensors. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time intervals and adding annotations to those intervals; and 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that has so far been missing from the available tools for MMLA research. In filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline, a toolkit for orchestrating the use and application of various MMLA tools.
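The three-step workflow from the abstract (triangulate sensor streams with a video recording, segment into annotated time intervals, export for analysis) can be sketched in a few lines. This is an illustrative assumption, not the VIT's actual API: the `Session`, `Interval`, `segment`, and `export` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Interval:
    """A time segment of a session, carrying researcher annotations."""
    start: float                 # seconds from session start
    end: float
    annotations: dict = field(default_factory=dict)

@dataclass
class Session:
    """One recorded learning session: a reference video plus sensor streams."""
    video_file: str              # recording used to triangulate the sensor data
    sensor_streams: dict         # sensor name -> list of (timestamp, value) pairs
    intervals: list = field(default_factory=list)

    def segment(self, start: float, end: float) -> Interval:
        # Step 2: carve out a time interval to annotate
        iv = Interval(start, end)
        self.intervals.append(iv)
        return iv

    def export(self) -> list:
        # Step 3: flatten annotated intervals into rows for downstream analysis
        return [{"start": iv.start, "end": iv.end, **iv.annotations}
                for iv in self.intervals]

session = Session("lesson.mp4", {"heart_rate": [(0.0, 72), (5.0, 80)]})
iv = session.segment(0.0, 5.0)
iv.annotations["activity"] = "presenting"
print(session.export())  # [{'start': 0.0, 'end': 5.0, 'activity': 'presenting'}]
```

The design choice here mirrors the abstract's separation of concerns: raw sensor streams stay untouched, while annotations attach only to time intervals, so the same segmentation can later be re-annotated or re-exported without reprocessing the sensor data.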
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 9th International Conference on Learning Analytics and Knowledge |
| Subtitle of host publication | Learning Analytics to Promote Inclusion and Success, LAK 2019 |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 51-60 |
| Number of pages | 10 |
| ISBN (Electronic) | 9781450362566 |
| DOIs | |
| Publication status | Published - 4 Mar 2019 |
| Externally published | Yes |
| Event | 9th International Conference on Learning Analytics and Knowledge, LAK 2019 - Tempe, United States. Duration: 4 Mar 2019 → 8 Mar 2019 |
Conference
| Conference | 9th International Conference on Learning Analytics and Knowledge, LAK 2019 |
|---|---|
| Country/Territory | United States |
| City | Tempe |
| Period | 4/03/19 → 8/03/19 |
Keywords
- Internet of things
- Learning analytics
- Multimodal data
- Sensors