Detecting and Reducing Gender Bias in Spanish Texts Generated with ChatGPT and Mistral Chatbots: The Lovelace Project
Date: 2024-11-10
Status: published version (info:eu-repo/semantics/publishedVersion)

Abstract
Current Artificial Intelligence (AI) systems can effortlessly and instantaneously generate text, images, songs, and videos. This capability will lead to a future in which a significant portion of the available information is partially or wholly AI-generated. In this context, it is crucial to ensure that AI-generated texts and images do not perpetuate or exacerbate existing gender biases. We examined the behavior of two widely used AI chatbots, ChatGPT and Mistral, when generating text in Spanish, both in terms of language inclusiveness and the perpetuation of traditional male/female roles. Our analysis revealed that both tools showed relatively low gender bias in reinforcing traditional gender roles but higher gender bias with respect to language inclusiveness, at least in Spanish. Additionally, although ChatGPT showed lower overall gender bias than Mistral, Mistral gave users more control to modify its behavior through prompt modifiers. In conclusion, while both AIs exhibit some degree of gender bias in their responses, this bias is significantly lower than the gender bias present in their human-authored source materials.
Activity Type: Journal articles
ISSN: 2783-7777
Keywords: artificial intelligence, gender bias, inclusive language, ChatGPT, Mistral