Abstract
Conversational agents, such as chatbots, are artificial intelligence (AI)- or rule-based systems increasingly utilised in mental healthcare. These systems can address critical issues such as limited access to mental health services, high costs, and a shortage of qualified professionals, issues that particularly affect underserved populations. Recent advancements in large language models (LLMs) have amplified interest in conversational AI's (CAI) potential to alleviate these challenges by providing accessible, cost-effective, and often free mental health support.
CAI development focuses on simulating human abilities and characteristics. However, this raises concerns about the ethical implications of these simulations, especially in sensitive mental healthcare contexts. The PhD thesis aims to outline the epistemic, normative, and ethical challenges associated with CAI's simulation of humans in order to provide a holistic and interdisciplinary understanding of CAI's role in psychotherapy and mental healthcare. To this end, the thesis focuses on two applications of CAI: CAI as a digital therapist in the psychotherapeutic context and CAI for data collection purposes. The research employs a combination of theoretical and empirical methods, including conceptual, normative, and ethical analyses, a systematic review, and an empirical study. This approach underscores the importance of interdisciplinary research in addressing these complex challenges and ensuring the ethical integration of CAI into mental healthcare.
The collection of articles provides an overview of the challenges associated with CAI's simulation of human characteristics and abilities in mental healthcare, as well as argumentative strategies for understanding CAI's role therein. The first article develops a conceptual framework for understanding and critically analysing CAI's potential role in psychotherapy on a spectrum between a tool and a human agent, highlighting the need for novel ways of conceptualising and understanding CAI. The subsequent two articles build on this foundation. The second article offers philosophical reflections on the value of CAI's simulations in epistemic space, and the third article critically analyses the robustness and humanisation of LLM-enhanced CAI when used by patients with depression. The fourth article addresses the technical challenges of integrating unstructured data, such as data collected from CAI interactions, into health research. The fifth, an empirical study, examines how different chatbot personas asking mental health-related questions affect user experience, attitudes, and willingness to disclose information. The results underscore the need for ethically sensitive CAI design. Finally, the sixth article analyses the conceptual shift of epistemic trust towards CAI and proposes a novel conceptualisation of CAI as a fictional character to better capture its unique characteristics and normative complexities.
Taken together, these articles provide recommendations for diverse stakeholders to navigate the ethical landscape of human-like CAI in mental healthcare.