TY - JOUR
T1 - Aggregating value systems for decision support
AU - Lera-Leri, Roger X.
AU - Liscio, Enrico
AU - Bistaffa, Filippo
AU - Jonker, Catholijn M.
AU - Lopez-Sanchez, Maite
AU - Murukannaiah, Pradeep K.
AU - Rodriguez-Aguilar, Juan A.
AU - Salas-Molina, Francisco
PY - 2024
Y1 - 2024
N2 - We adopt an emerging and prominent vision of human-centred Artificial Intelligence that requires building trustworthy intelligent systems. Such systems should be capable of dealing with the challenges of an interconnected, globalised world by handling plurality and by abiding by human values. Within this vision, pluralistic value alignment is a core problem for AI, that is, the challenge of creating AI systems that align with a set of diverse individual value systems. So far, most literature on value alignment has considered alignment to a single value system. To address this research gap, we propose a novel method for estimating and aggregating multiple individual value systems. We rely on recent results in the social choice literature and formalise the value system aggregation problem as an optimisation problem. We then cast this problem as an ℓp-regression problem. Doing so provides a principled and general theoretical framework to model and solve the aggregation problem. Our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness). We illustrate the aggregation of value systems by considering real-world data from two case studies: the Participatory Value Evaluation process and the European Values Study. Our experimental evaluation shows how different consensus value systems can be obtained depending on the ethical principle of choice, leading to practical insights for a decision-maker on how to perform value system aggregation.
AB - We adopt an emerging and prominent vision of human-centred Artificial Intelligence that requires building trustworthy intelligent systems. Such systems should be capable of dealing with the challenges of an interconnected, globalised world by handling plurality and by abiding by human values. Within this vision, pluralistic value alignment is a core problem for AI, that is, the challenge of creating AI systems that align with a set of diverse individual value systems. So far, most literature on value alignment has considered alignment to a single value system. To address this research gap, we propose a novel method for estimating and aggregating multiple individual value systems. We rely on recent results in the social choice literature and formalise the value system aggregation problem as an optimisation problem. We then cast this problem as an ℓp-regression problem. Doing so provides a principled and general theoretical framework to model and solve the aggregation problem. Our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness). We illustrate the aggregation of value systems by considering real-world data from two case studies: the Participatory Value Evaluation process and the European Values Study. Our experimental evaluation shows how different consensus value systems can be obtained depending on the ethical principle of choice, leading to practical insights for a decision-maker on how to perform value system aggregation.
KW - AI & ethics
KW - Optimisation
KW - Value systems
UR - http://www.scopus.com/inward/record.url?scp=85184071686&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.111453
DO - 10.1016/j.knosys.2024.111453
M3 - Article
AN - SCOPUS:85184071686
SN - 0950-7051
VL - 287
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 111453
ER -