Whilst the application of artificial intelligence promises immense potential benefits to virtually all disciplines and activities, it also raises new risks, in both scope and scale. This duality is especially relevant to health applications, and to mental health in particular. Decisions about the application of AI in healthcare, based on an evaluation of risks and benefits, should be made according to ethics principles.

In practice, a complex framework of national and international regulatory bodies exists to establish and enforce regulations that define the conditions under which new technologies can be developed, validated, marketed and deployed for research, wellness or clinical use.

Our work identifies and defines the different modes under which current and emerging data-driven AI applications for mental health are regulated, and assesses how these modes address the main principles identified in the many AI ethics declarations published over the last six years by governments and international institutions.

More particularly, we evaluate how the concepts of trust, safety and transparency are embedded in these regulatory modes, whether these modes contribute to a more trusted adoption of AI tools into the clinician-driven decision-making process, and the mechanisms by which this adoption can affect the bias, accuracy and precision of the decisions thus made.
|26 Nov 2021
|2021 Symposium on Artificial Intelligence in Mental Health