Evaluating Intelligent Knowledge Systems (Article Abstract)

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review



The article, published in Knowledge and Information Systems, examines the evaluation of a user-adaptive personal assistant agent designed to help a busy knowledge worker with time management. It discusses the managerial and technical challenges of designing an adequate evaluation, and the tension inherent in collecting adequate data without a fully functional, deployed system. The PTIME agent was part of the CALO project, a seminal multi-institution effort to develop a personalized cognitive assistant. The project included a significant attempt to rigorously quantify learning capability, which the article discusses for the first time, and it ultimately led to multiple spin-outs, including Siri. Reflection on positive and negative experiences over the six years of the project underscores best practices in evaluating user-adaptive systems. Through the lessons drawn from this case study of intelligent knowledge system evaluation, the article highlights how the development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.
Original language: English
Title of host publication: BNAIC 2017 pre-proceedings
Subtitle of host publication: 29th Benelux Conference on Artificial Intelligence
Editors: Bart Verheij, Marco Wiering
Number of pages: 2
ISBN (Electronic): 978-94-034-0299-4
Publication status: Published - Nov 2017
Event: 29th Benelux Conference on Artificial Intelligence - Groningen, Netherlands
Duration: 8 Nov 2017 - 9 Nov 2017
Conference number: 29


Conference: 29th Benelux Conference on Artificial Intelligence
Abbreviated title: BNAIC 2017


