Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/66702
Full metadata record
DC Field                  Value                               Language
dc.contributor.author     Villegas Galaviz, Carolina          es-ES
dc.contributor.author     Fernández Fernández, José Luis      es-ES
dc.date.accessioned       2022-03-16T18:18:35Z                -
dc.date.available         2022-03-16T18:18:35Z                -
dc.identifier.uri         http://hdl.handle.net/11531/66702   -
dc.description.abstract   -                                   es-ES
dc.description.abstract   AI is on the rise. Data analytics continues to be the main source for managerial decision-making. In some cases it might even be irresponsible not to use the power of AI to inform certain decisions: in the diagnosis of some diseases, for instance, if a tool exists that improves precision, it makes no sense not to use it. In other cases, however, the AI outcome can create an impact such that the use of the model exacerbates the harm: in the case of the COMPAS algorithm, the use of the tool disregards unfair discrimination and even creates a new problem of injustice. The purpose of this paper is to delimit the scope of AI, arguing that in some circumstances we should refrain from using this technology. First, based on previous literature, we defend that once a path is proposed, those who use AI are directly affected by the proposition. For example, in court, if an algorithm proposes to sentence someone as guilty, it is often difficult for judges to contradict the AI model. In this type of scenario, those who deploy AI defer decision-making to algorithms even when they are the ones responsible for deciding. Second, we argue that in some contexts the impact of the decision or action is so large that there should always be a human in the loop. For this second case, take the example of how Amazon algorithmically rates and fires its drivers without human intervention: the drivers only receive an email, sent by a bot, telling them they are fired. In this type of context, the impact of the action should be a reason to refrain from using AI; every employee should be treated with dignity, and the impact of losing a job is so disruptive that it should be handled in a certain way: giving voice to the persons affected and treating them in a respectful and attentive manner. Our two arguments are not exhaustive, but the purpose of our article is to start the conversation about when and where to put limits on the use of AI, identifying those scenarios in which society should refrain from the use of algorithms. At the same time, we do not want to overlook all the benefits of the good use of AI.  en-GB
dc.format.mimetype        application/pdf                     es_ES
dc.language.iso           en-GB                               es_ES
dc.rights                 Creative Commons Reconocimiento-NoComercial-SinObraDerivada España  es_ES
dc.rights.uri             http://creativecommons.org/licenses/by-nc-nd/3.0/es/  es_ES
dc.title                  The untouchables for AI             es_ES
dc.type                   info:eu-repo/semantics/workingPaper  es_ES
dc.description.version    info:eu-repo/semantics/draft        es_ES
dc.rights.holder          -                                   es_ES
dc.rights.accessRights    info:eu-repo/semantics/openAccess   es_ES
dc.keywords               AI ethics; Business Ethics          es-ES
dc.keywords               -                                   en-GB
Appears in collections: Documentos de Trabajo

Files in this item:
File                                                   Description  Size     Format
The untouchable for AI. Para Empresa y Humanismo.pdf                51,8 kB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.