Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/94136
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Vendrell Morancho, Mireia | es-ES
dc.contributor.author | Pedrosa Prats, Elena | es-ES
dc.contributor.other | Universidad Pontificia Comillas, Facultad de Ciencias Humanas y Sociales | es_ES
dc.date.accessioned | 2024-09-15T16:36:21Z |
dc.date.available | 2024-09-15T16:36:21Z |
dc.date.issued | 2025 | es_ES
dc.identifier.uri | http://hdl.handle.net/11531/94136 |
dc.description | Grado en Psicología | es_ES
dc.description.abstract | Background: The integration of artificial intelligence (AI) tools in higher education has accelerated rapidly, prompting growing interest in their impact on students’ critical thinking (CT) skills. While some studies suggest that AI can enhance analytical reasoning and reflective learning, others raise concerns about overreliance and cognitive offloading. Despite this growing literature, the conceptualization and measurement of “critical thinking” remain inconsistent across studies, complicating efforts to synthesize findings. Objectives: This systematic review examines how recent empirical studies (2022–2025) define, operationalize, and assess critical thinking in the context of AI-enhanced learning in higher education. Specifically, it investigates the theoretical frameworks employed, the assessment tools used, the types of AI tools integrated, and the reported outcomes on CT development. Methods: Following the PRISMA 2020 guidelines, we conducted a systematic search in Scopus, Web of Science, and ERIC for studies published between 2022 and February 2025 that involved higher education student populations, AI-based tools, and CT-related outcomes. A total of 22 eligible studies were identified and analyzed using narrative synthesis and thematic coding. Risk of bias was assessed using JBI and CASP tools. Results: Only 8 of the 22 studies provided a formal definition of critical thinking, and even fewer used dedicated CT assessment instruments. Most studies relied on mixed methods and domain-specific performance tasks. The findings indicate that AI tools can support CT development, particularly when embedded in human-facilitated learning environments that promote reflection, evaluation, and dialogue. However, studies also reported risks such as superficial learning and diminished metacognitive engagement when AI was used as a cognitive substitute.
Conclusions: AI’s impact on critical thinking in higher education is shaped by tool design, instructional context, and the clarity of CT conceptualization. This review highlights the need for consistent definitions, theoretically grounded assessments, and pedagogical models that combine AI affordances with reflective, instructor-guided learning. Future research should emphasize longitudinal designs and the development of CT-specific instruments aligned with validated frameworks. | es-ES
dc.description.abstract | | en-GB
dc.format.mimetype | application/pdf | es_ES
dc.language.iso | en-GB | es_ES
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | es_ES
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | es_ES
dc.subject | 61 Psicología | es_ES
dc.subject | 6104 Psicopedagogía | es_ES
dc.subject | 610402 Métodos educativos | es_ES
dc.subject.other | KP3 | es_ES
dc.title | Assessment of AI’s Impact on Critical Thinking in Higher Education: A Systematic Review of Evaluation Methods | es_ES
dc.type | info:eu-repo/semantics/bachelorThesis | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.keywords | Critical thinking, Artificial intelligence, Higher education, ChatGPT, Systematic review, AI in education, PRISMA 2020, Cognitive skills, Metacognition | es-ES
dc.keywords | | en-GB
Appears in collections: KP2-Trabajos Fin de Grado

Files in this item:
File | Size | Format
TFG - Pedrosa Prats, Elena.pdf | 1,17 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.