Abstract
Artificial Intelligence systems are increasingly being introduced into first response; however, this introduction needs to be done responsibly. While generic claims about what this entails already exist, more detail is required to understand the exact nature of responsible application of AI within the first response domain. The context in which AI systems are applied largely determines their ethical, legal, and societal impact and how to deal with this impact responsibly. For that reason, we empirically investigate the relevant human values that are affected by the introduction of a specific AI-based Decision Aid (AIDA), a decision support system under development for Fire Services in the Netherlands. We held 10 expert group sessions in which we discussed the impact of AIDA on different stakeholders. This paper presents the design and implementation of the study and, as we are still in the process of analyzing the sessions in detail, summarizes preliminary insights and next steps.
Original language | English |
---|---|
Title of host publication | Proceedings of the 21st ISCRAM Conference |
Number of pages | 9 |
Volume | 21 |
Publication status | Published - 2024 |
Event | 21st ISCRAM Conference: Information Systems for Crisis Response and Management - Münster, Germany Duration: 25 May 2024 → 29 May 2024 Conference number: 21 |
Conference
Conference | 21st ISCRAM Conference: Information Systems for Crisis Response and Management |
---|---|
Abbreviated title | ISCRAM 2024 |
Country/Territory | Germany |
City | Münster |
Period | 25/05/24 → 29/05/24 |
Keywords
- Values
- Decision-Support
- Responsible AI
- Fire Services