Unpacking the Outputs of Generative AI Platforms and Revealing Gender and Social Re-Presentations
Risi, Elisabetta; Briziarelli, Marco
2024-01-01
Abstract
With the rise of generative AI tools like ChatGPT, Midjourney, and DALL-E, new questions emerge about their role in reproducing societal biases, including gender-related ones, and how these tools can be integrated into sociological research. This paper argues that generative AI platforms act as epistemological machines that reflect and perpetuate biases present in their training data. It examines how these technologies can analyze and challenge social biases through their generated content, critiquing the use of "hallucinations" to shift responsibility away from the tech companies involved in data management. By focusing on the dialogic nature of AI interactions and on prompting as procedural rhetoric, the paper underscores the connection between agency, platform design, and societal norms. Empirical examples show that investigating AI outputs can reveal implicit biases and how media representations can reinforce or challenge existing disparities, stressing the need for a critical approach to understanding AI-generated content and its impact on social dynamics.

File: IGI Chapter - 24_11_24.docx
Type: Pre-print document
Size: 84.98 kB
Format: Microsoft Word XML
Access: Not accessible (a copy may be requested)
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.