The risk of trust: AI narratives in breast cancer detection
Pronzato, Riccardo
2026-04-21
Abstract
This paper examines AI narratives in healthcare and, more specifically, how people attribute trust to the role of AI in breast cancer detection. To do so, it draws on a content analysis of 701 online user comments on a New York Times article, using Sztompka’s trust framework to identify and scrutinise the different forms of trust at play. The findings offer critical insights for digital health studies and social science studies of risk and uncertainty. At the analytical level, the article operationalises Sztompka’s trust framework as an interpretive lens to scrutinise how trust is negotiated and problematised within AI narratives on breast cancer detection. At the empirical level, we highlight that technological trust in AI does not emerge in a vacuum but is intertwined with other kinds of trust: positional trust in physicians, segmental trust in the U.S. healthcare system, and the related organisational trust. These trust domains act as interpretive frameworks through which individuals negotiate their trust in the role of AI. In this sense, technological trust in AI emerges as relational, context-dependent, and shaped by broader socio-institutional and political conditions. Moreover, we show that, rather than adopting polarised stances towards AI, several users expressed a moderate position, advocating for the use of AI under human supervision, given that the patient-doctor relationship was considered irreplaceable. This position emerged as a normative strategy to reduce uncertainty and to redistribute and control diagnostic risk. At the same time, it has the potential to undermine the diagnostic authority of doctors.



