Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/109233
Full metadata record
DC Field | Value | Language
dc.contributor.author | Herrera Triguero, Francisco | es-ES
dc.contributor.author | Calderón Cuadrado, María Reyes | es-ES
dc.date.accessioned | 2026-03-19T05:28:27Z | -
dc.date.available | 2026-03-19T05:28:27Z | -
dc.date.issued | 2026-03-14 | es_ES
dc.identifier.issn | 0160-791X | es_ES
dc.identifier.uri | https://doi.org/10.1016/j.techsoc.2026.103302 | es_ES
dc.identifier.uri | http://hdl.handle.net/11531/109233 | -
dc.description | Journal articles | es_ES
dc.description.abstract | This paper introduces the LoBOX (Lack of Belief: Opacity & eXplainability) ethics governance framework, a governance-centric approach to managing artificial intelligence (AI) opacity when full transparency is infeasible. While transparency-centric approaches treat transparency as the ideal and opacity as a design flaw, LoBOX treats opacity as a condition to be ethically governed through role-sensitive explanation and institutional accountability. The framework comprises a three-stage pathway: reduce accidental opacity, bound irreducible opacity, and delegate trust through institutional oversight. By integrating the stakeholder-sensitive explanation described in the RED/BLUE XAI model, which is aligned with emerging legal instruments such as the EU AI Act, LoBOX offers a scalable, context-aware alternative to transparency-centric approaches. LoBOX reframes trust as an outcome of institutional credibility, structured justification, and stakeholder-sensitive accountability, and it is designed to remain aligned with evolving technological contexts and stakeholder expectations while ethically governing opacity. Ultimately, to ensure responsible AI systems, LoBOX moves from transparency ideals to ethical governance, emphasizing that trustworthiness in AI must be institutionally grounded and contextually justified. | en-GB
dc.format.mimetype | application/pdf | es_ES
dc.language.iso | en-GB | es_ES
dc.source | Journal: Technology in Society, Period: 1, Volume: In press, Issue: , Start page: 0, End page: 0 | es_ES
dc.subject.other | Instituto de Investigación Tecnológica (IIT) | es_ES
dc.title | Opacity as a feature, not a flaw: Role-sensitive explainability, institutional trust, and the LoBOX ethics governance framework for AI | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.description.version | info:eu-repo/semantics/publishedVersion | es_ES
dc.rights.holder |  | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.keywords | Explainable artificial intelligence (XAI); Algorithmic opacity; AI governance; Institutional trust; Role-sensitive explainability; Accountability | en-GB
Appears in collections: Artículos

Files in this item:
File | Size | Format
IIT-26-068R_preview.pdf | 3.14 kB | Adobe PDF | View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.