TY - GEN
T1 - How can Explainability Methods be Used to Support Bug Identification in Computer Vision Models?
AU - Balayn, Agathe
AU - Rikalo, Natasa
AU - Lofi, Christoph
AU - Yang, Jie
AU - Bozzon, Alessandro
PY - 2022
Y1 - 2022
N2 - Deep learning models for image classification suffer from dangerous issues often discovered after deployment. The process of identifying bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification. Yet, current practice lacks an understanding of what kinds of explanations can best support the different steps of the bug identification process, and how practitioners could interact with those explanations. Through a formative study and an iterative co-creation process, we build an interactive design probe providing various potentially relevant explainability functionalities, integrated into interfaces that allow for flexible workflows. Using the probe, we perform 18 user studies with a diverse set of machine learning practitioners. Two-thirds of the practitioners engage in successful bug identification. They use multiple types of explanations, e.g., visual and textual ones, through non-standardized sequences of interactions including queries and exploration. Our results highlight the need for interactive, guiding interfaces with diverse explanations, shedding light on future research directions.
KW - computer vision
KW - machine learning explainability
KW - machine learning model debugging
KW - user interface
UR - http://www.scopus.com/inward/record.url?scp=85130564103&partnerID=8YFLogxK
U2 - 10.1145/3491102.3517474
DO - 10.1145/3491102.3517474
M3 - Conference contribution
AN - SCOPUS:85130564103
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery (ACM)
T2 - 2022 CHI Conference on Human Factors in Computing Systems, CHI 2022
Y2 - 30 April 2022 through 5 May 2022
ER -