TY - JOUR
T1 - Transparency in large language model (LLM)-powered digital human twins
T2 - the AI ethics perspective
AU - Pigac, Tilen
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025.
PY - 2025
Y1 - 2025
N2 - Digital human twins (DHTs), powered by large language models (LLMs), are transforming industries such as healthcare and finance by mimicking human behaviors, preferences, and decision-making processes. While their adoption offers unprecedented personalization and engagement, it also raises significant ethical concerns, particularly regarding transparency. Ensuring users understand how these systems function is critical to fostering trust and accountability. This study explores transparency in LLM-powered DHTs through qualitative analysis of 30 semi-structured interviews with users across diverse sectors. The findings reveal critical challenges, including algorithmic opacity, data privacy vulnerabilities, and threats to user autonomy. Participants consistently expressed a need for clear disclosures about data practices and emphasized the importance of robust ethical safeguards to prevent misuse. The research highlights the tension between achieving transparency and maintaining the seamless functionality of DHT systems. It underscores the risks of oversimplifying algorithmic processes while pointing out the erosion of trust caused by opaque operations. To address these challenges, the study proposes actionable strategies, including tiered transparency models, enhanced regulatory oversight, and user-centric design principles. By bridging ethical principles with practical applications, this research provides a roadmap for fostering responsible AI innovation. It advances the discourse on ethical AI by addressing transparency challenges in LLM-powered DHTs, emphasizing the need for systems that uphold trust, accountability, and user autonomy.
KW - AI ethics
KW - Data privacy
KW - Digital human twins
KW - LLM
KW - Personalization
KW - Transparency
UR - https://www.scopus.com/pages/publications/105018327497
DO - 10.1007/s00146-025-02617-y
M3 - Article
AN - SCOPUS:105018327497
SN - 0951-5666
JO - AI and Society
JF - AI and Society
ER -