Abstract
We summarize a recent article that studies the evaluation of a knowledge-based scheduling system. The article considers a user-adaptive personal assistant agent designed to help a busy knowledge worker manage their time. We examine the managerial and technical challenges of designing an adequate evaluation, and the tension inherent in collecting adequate data without a fully functional, deployed system. The PTIME agent was part of the CALO project, a seminal multi-institution effort to develop a personalized cognitive assistant; the project included a significant attempt to rigorously quantify learning capability in the context of automated scheduling assistance. Retrospection on positive and negative experiences over the six years of the project underscores best practices in evaluating user-adaptive systems. Through the lessons drawn from the case study, the article highlights how the development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the ICAPS'18 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS'18) |
| Place of Publication | Delft, Netherlands |
| Pages | 1-2 |
| Publication status | Published - Jun 2018 |
| Event | 28th International Conference on Automated Planning and Scheduling: KEPS 2018, Delft, Netherlands, 24 Jun 2018 → 29 Jun 2018 (Conference number: 28), http://www.icaps-conference.org |
Conference
| Conference | 28th International Conference on Automated Planning and Scheduling |
|---|---|
| Abbreviated title | ICAPS 2018 |
| Country/Territory | Netherlands |
| City | Delft |
| Period | 24/06/18 → 29/06/18 |
| Internet address | http://www.icaps-conference.org |