TY - GEN
T1 - Explainable Cross-Topic Stance Detection for Search Results
AU - Draws, Tim
AU - Natesan Ramamurthy, Karthikeyan
AU - Baldini, Ioana
AU - Dhurandhar, Amit
AU - Padhi, Inkit
AU - Timmermans, Benjamin
AU - Tintarev, Nava
PY - 2023
Y1 - 2023
N2 - One way to help users navigate debated topics online is to apply stance detection in web search. Automatically identifying whether search results are against, neutral, or in favor could facilitate diversification efforts and support interventions that aim to mitigate cognitive biases. To be truly useful in this context, however, stance detection models not only need to make accurate (cross-topic) predictions but also be sufficiently explainable to users when applied to search results, an issue that is currently unclear. This paper presents a study into the feasibility of using current stance detection approaches to assist users in their web search on debated topics. We train and evaluate 10 stance detection models using a stance-annotated data set of 1204 search results. In a preregistered user study (N = 291), we then investigate the quality of stance detection explanations created using different explainability methods and explanation visualization techniques. The models we implement predict stances of search results across topics with satisfying quality (i.e., similar to the state of the art for other data types). However, our results reveal stark differences in explanation quality (i.e., as measured by users' ability to simulate model predictions and their attitudes towards the explanations) between different models and explainability methods. A qualitative analysis of textual user feedback further reveals potential application areas, user concerns, and improvement suggestions for such explanations. Our findings have important implications for the development of user-centered solutions surrounding web search on debated topics.
KW - bias
KW - explainability
KW - stance detection
KW - viewpoint
KW - web search
UR - http://www.scopus.com/inward/record.url?scp=85151157281&partnerID=8YFLogxK
DO - 10.1145/3576840.3578296
M3 - Conference contribution
AN - SCOPUS:85151157281
T3 - CHIIR 2023 - Proceedings of the 2023 Conference on Human Information Interaction and Retrieval
SP - 221
EP - 235
BT - CHIIR 2023 - Proceedings of the 2023 Conference on Human Information Interaction and Retrieval
PB - Association for Computing Machinery (ACM)
CY - New York, NY, USA
T2 - 8th ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR 2023
Y2 - 19 March 2023 through 23 March 2023
ER -