Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/78847
Title: Comparing BERT against Traditional Machine Learning Models in Text Classification
Authors: Garrido Merchán, Eduardo César
Gozalo Brizuela, Roberto
Gonzalez Carvajal, Santiago
Publication date: 21-Apr-2023
Abstract:
The BERT model has arisen as a popular state-of-the-art model in recent years. It is able to cope with NLP tasks such as supervised text classification without human supervision. Its flexibility to cope with any corpus while delivering strong results has made this approach very popular in academia and industry. However, other approaches have been used successfully before. We first present BERT and review classical NLP approaches. Then, we empirically test the behaviour of BERT against traditional TF-IDF vocabularies fed to machine learning models across a suite of different scenarios. The purpose of this work is to add empirical evidence supporting the use of BERT as a default for NLP tasks. Experiments show the superiority of BERT and its independence from features of the NLP problem, such as the language of the text, adding empirical evidence for using BERT as a default technique in NLP problems.
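The following is a minimal illustrative sketch, not taken from the paper, of the kind of TF-IDF baseline the abstract describes being compared against fine-tuned BERT. The toy corpus, labels, and choice of LogisticRegression are assumptions for demonstration only.

# Minimal sketch (NOT the authors' code): a TF-IDF vocabulary fed to a
# traditional machine learning model, as described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score

train_texts = ["great film, loved it", "terrible plot and acting",
               "wonderful performance", "boring and far too long"]
train_labels = [1, 0, 1, 0]          # toy binary sentiment labels
test_texts = ["loved the performance", "boring film"]
test_labels = [1, 0]

baseline = Pipeline([
    ("tfidf", TfidfVectorizer()),            # build the TF-IDF vocabulary
    ("clf", LogisticRegression(max_iter=1000)),  # traditional classifier
])
baseline.fit(train_texts, train_labels)
print("TF-IDF baseline accuracy:",
      accuracy_score(test_labels, baseline.predict(test_texts)))

# The BERT side of the comparison would instead fine-tune a pretrained model
# (e.g. via the Hugging Face transformers library) on the same labelled texts.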
Description: Journal articles
URI : https://doi.org/10.47852/bonviewJCCE3202838
ISSN : 2810-9503
Appears in collections: Artículos

Files in this item:
File            Size        Format
document.pdf    336.3 kB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.