Continuous improvement: How systems design can benefit the data-driven design community

J.D. Lomas, Jodi L. Forlizzi, Nirmal Patel

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Introduction

Currently, the learning science community is exploring the use of data-driven design to improve K-12 educational systems. These "continuous-improvement systems" aim to align strategic goals, outcome metrics and human-computer system processes to support improved learning outcomes. However, the community has only begun to apply systemic design to the practical implementation of these systems. In this paper, we present several examples of data-driven design in K-12 educational systems in order to identify aspects that can benefit from systemic design. Through these case studies, we focus on three concepts: 1) systemic designers can ensure that the system is capable of measuring successful outcomes; 2) systemic designers can ensure that system optimization will improve intended outcomes while minimizing unintended consequences; and 3) systemic designers can portray what a future with these continuous-improvement systems will be like to the educational community, before any resources are committed to building the technology.

Example #1: Ensure that the system is capable of measuring successful outcomes

Data can be used to inform system stakeholders about the success of designed systems; that is, how well outcome measures align with system intentions. For instance, after providing an instructional activity (lecture, small group, video, etc.) in class, a teacher might assign their students an "exit ticket" quiz to assess whether the instructional activity was successful. These quizzes support data-driven decisions about how to spend time and effort in the classroom. Variations in student performance show teachers which students and which learning objectives need greater attention. Further, digital data from exit tickets or other formative assessments can be aggregated across teachers to give school administrators continuous insight into areas of need, such as students or teachers who need additional help or learning objectives that pose special challenges. Providers of digital instruction can then aggregate usage and performance across many schools in order to identify successful and unsuccessful usage patterns. Data-driven continuous improvement can occur at multiple levels (i.e., teacher, school and software provider) when systems are designed to generate valid outcome metrics of success (goal achievement).

Example #2: Ensure that system optimization will improve intended outcomes while minimizing unintended consequences

Success metrics can be used by human teams and AI systems to drive continuous improvement. However, the optimization of metrics can produce unintended consequences when the chosen metrics are not fully aligned with intended outcomes and when feedback loops about metric suitability are impoverished. In this case study, an online educational game is designed with the goal of motivating students to practice math problems. After being deployed online, the game attracts several thousand students a day; these players are randomly assigned to different game design variations in order to observe the effects of different designs on key outcome metrics (e.g., duration of voluntary play). To investigate the role of AI in system design optimization, we implemented a UCB multi-armed bandit (a reinforcement learning AI/ML algorithm) to automatically test variations in the existing game parameter space (e.g., time limits). The algorithm is designed to optimally balance exploration of potential game designs with exploitation of the most successful designs: sometimes it randomly searches the game design space for configurations that maximize the metric (duration of voluntary play), and sometimes it deploys the most successful variations found so far, as sketched below.
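To make this setup concrete, the following is a minimal, hypothetical sketch of a UCB1 bandit choosing among game design variations, with simulated duration of voluntary play as the reward. The variation names, reward values and play-time simulator are illustrative assumptions, not the parameters or implementation of the deployed game.

```python
# Illustrative sketch only: UCB1 over hypothetical game design variations,
# rewarded by (simulated) minutes of voluntary play per session.
import math
import random

VARIATIONS = ["baseline", "short_time_limit", "long_time_limit", "very_easy_items"]

counts = {v: 0 for v in VARIATIONS}    # times each variation was deployed
totals = {v: 0.0 for v in VARIATIONS}  # cumulative observed play time (minutes)

def choose_variation(t):
    """UCB1: pick the variation with the highest optimistic estimate."""
    for v in VARIATIONS:               # deploy every variation once before using UCB
        if counts[v] == 0:
            return v
    def ucb(v):
        mean = totals[v] / counts[v]
        bonus = math.sqrt(2 * math.log(t) / counts[v])  # exploration bonus
        return mean + bonus
    return max(VARIATIONS, key=ucb)

def observe_play_time(variation):
    """Stand-in for real telemetry; assumes easier variations keep students longer."""
    base = 12.0 if variation == "very_easy_items" else 8.0
    return max(0.0, random.gauss(base, 3.0))

for t in range(1, 5001):               # one simulated student session per step
    v = choose_variation(t)
    reward = observe_play_time(v)
    counts[v] += 1
    totals[v] += reward

print({v: round(totals[v] / counts[v], 1) for v in VARIATIONS})
```

Because the reward in this sketch is purely play time, the bandit converges on whatever configuration students linger in longest, which anticipates the misalignment described next.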
While the algorithm worked as intended, the system "spun out of control" and primarily deployed malformed game designs that maximized the outcome metric but were misaligned with the original educational intent: the winning variations were likely played for long periods of time because they were absurdly easy. This shows the pitfalls of having AI systems engage in automatic optimization without humans in the loop as a governing feedback system. Systemic designers need to design feedback systems that monitor system AI to ensure that outputs remain meaningfully aligned with system intentions.

Example #3: Portray what the future will be like

Artificial intelligence has the potential to facilitate the work of teachers by reducing the effort required to use data to inform personalized instruction. However, AI can be intimidating or off-putting to teachers who do not understand its operation or intentions. In this case study, we deployed a teacher-facing recommendation system that uses reinforcement learning to continuously improve the usefulness of its recommendations to teachers. To design a reinforcement learning AI system, there must be data representations of the system state, the space of possible actions and a reward signal tied to a success metric. In our case, the system state is student digital performance on learning activities, the action possibilities are the different digital items that teachers can next assign to a student, and the reward signal occurs when teachers act upon a recommendation (i.e., when they assign the digital activities recommended by the system); a minimal sketch of this loop follows the conclusion. This system embodies two key elements that diverge from most existing work in "adaptive learning" or "intelligent tutoring systems." First, the system emphasizes human-technology teamwork, in contrast to human replacement, so that teachers are empowered by the assistance of the AI. Second, the artificial intelligence is deliberately constructed as an aggregation of human intelligence: the system learns from the activity-assignment decisions made by thousands of other human teachers and aggregates them into artificially intelligent recommendations. To promote adoption of this system, a key role for systemic design is making the intended future vision accessible and attractive to teachers and other stakeholders. Systemic designers can help engage humans in decision making by presenting a glimpse of what a data-driven future might be like in the classroom.

Conclusion

Across these case studies, we show how systemic design can aid diverse participants in the implementation of data-driven design and optimization. Systemic design insight can contribute to the negotiation of meaningful and robust metrics of success, to the construction of human-in-the-loop governance of AI systems and to the representation of potential futures. We expect designers to play a crucial role in taming the complexity of practical AI-human systems and aligning system outcomes with sustainable, humanistic values.
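As a companion to Example #3, here is a minimal, hypothetical sketch of such a teacher-facing recommendation loop: the state is a coarse band of student performance, the actions are candidate activities, and the reward is a teacher acting on the recommendation. The activity names, performance bands and epsilon-greedy exploration are assumptions for illustration, not the deployed system's actual design.

```python
# Illustrative sketch only: a recommender that aggregates teacher assignment
# decisions and treats an accepted recommendation as the reward signal.
from collections import defaultdict
import random

ACTIVITIES = ["fractions_intro", "fractions_practice", "fractions_game", "review_decimals"]

def performance_band(score):
    """Collapse a student's recent performance (0-1) into a coarse state."""
    if score < 0.4:
        return "struggling"
    if score < 0.8:
        return "progressing"
    return "mastered"

# Counts of (state, activity) pairs, seeded by observed teacher assignments
# across many classrooms and reinforced when teachers accept recommendations.
assignment_counts = defaultdict(lambda: defaultdict(int))

def record_teacher_assignment(state, activity):
    assignment_counts[state][activity] += 1

def recommend(state, epsilon=0.1):
    """Mostly recommend what teachers of similar students chose; explore a little."""
    if random.random() < epsilon or not assignment_counts[state]:
        return random.choice(ACTIVITIES)
    return max(assignment_counts[state], key=assignment_counts[state].get)

def update(state, recommended, teacher_assigned):
    """Reward = 1 if the teacher acted on the recommendation."""
    if teacher_assigned == recommended:
        assignment_counts[state][recommended] += 1  # reinforce accepted advice

# Example: seed with one observed assignment, then recommend for a new student.
record_teacher_assignment("struggling", "fractions_intro")
state = performance_band(0.35)
suggestion = recommend(state)
update(state, suggestion, teacher_assigned=suggestion)  # teacher accepted the suggestion
```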
Original language: English
Title of host publication: Proceedings of Relating Systems Thinking and Design 7 (RSD7)
Publication status: Published - 2018
Event: Relating Systems Thinking and Design (RSD7) - Turin, Italy
Duration: 23 Oct 2018 - 26 Oct 2018

Conference

Conference: Relating Systems Thinking and Design (RSD7)
Country/Territory: Italy
City: Turin
Period: 23/10/18 - 26/10/18
