
Collinearity-aware Explainability for Time-series Forecasting: Evidence from Synthetic Benchmarks

IIT-25-346C.pdf (992.2Kb)
Author
Pizarroso Gonzalo, Jaime
Status
info:eu-repo/semantics/draft

Abstract
Post-hoc explainability is routinely used to interpret machine-learning forecasters, yet in the common “lags-to-forecast” setting autocorrelation and cross-correlation induce severe multicollinearity that renders per-lag attributions statistically fragile. We study this phenomenon with controlled synthetic benchmarks where the ground-truth drivers are known, and evaluate three representative model families (Random Forest, LSTM, Transformer-style Informer). We introduce a collinearity-aware evaluation protocol that (i) respects temporal dependence via blocked permutation tests and (ii) aligns the unit of explanation with the unit of non-identifiability through group-wise (lag-block) attributions. Across models, per-lag SHAP rankings are unstable under small refits, whereas grouping markedly improves stability (e.g., Spearman rank correlation rises by up to +0.23 for tree models) with consistent gains in Top-k overlap. Ablation experiments show that removing a handful of top-ranked individual lags yields only minor AUROC changes, confirming redundancy among correlated lags; in contrast, dropping an entire lag group corresponding to a true driver produces large performance losses. Blocked permutation further yields more conservative and reliable reliance estimates than i.i.d. permutation and can alter driver rankings under seasonality. Taken together, the results clarify that, under autocorrelation, post-hoc explanations primarily reflect what the model relies on given the observed dependence, not process causality. We provide practical guidance: explain groups rather than isolated lags, respect serial structure in perturbations, and report stability metrics to distinguish robust insights from artefacts of collinearity.
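
The two ingredients of the protocol described in the abstract (blocked permutation that respects serial structure, and group-wise lag-block attributions) can be illustrated with a minimal Python sketch. It assumes numpy arrays, a scikit-learn-style model exposing predict, and a dict mapping group names to lag-column indices; the function names, default block length, and MSE metric are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def blocked_permutation_importance(model, X, y, groups, block_len=24,
                                   n_repeats=20, seed=None):
    """Reliance estimate per lag group: permute contiguous time blocks of the
    group's columns (preserving within-block serial dependence) and measure
    the increase in MSE relative to the unpermuted baseline."""
    rng = np.random.default_rng(seed)
    mse = lambda y_true, y_pred: float(np.mean((y_true - y_pred) ** 2))
    base = mse(y, model.predict(X))
    n = len(X)
    importances = {}
    for name, cols in groups.items():
        increases = []
        for _ in range(n_repeats):
            # Shuffle the order of contiguous blocks instead of individual rows.
            starts = np.arange(0, n, block_len)
            order = rng.permutation(len(starts))
            perm_idx = np.concatenate(
                [np.arange(s, min(s + block_len, n)) for s in starts[order]]
            )
            Xp = X.copy()
            Xp[:, cols] = X[perm_idx][:, cols]
            increases.append(mse(y, model.predict(Xp)) - base)
        importances[name] = float(np.mean(increases))
    return importances

def group_shap(shap_values, groups):
    """Aggregate per-lag SHAP attributions into lag-block attributions by
    summing absolute values over each group's columns."""
    return {name: float(np.abs(shap_values[:, cols]).sum())
            for name, cols in groups.items()}

Comparing per-lag versus grouped rankings across small refits (for example with scipy.stats.spearmanr and a Top-k overlap count) yields the kind of stability metrics the abstract recommends reporting alongside the attributions themselves.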
 
URI
http://hdl.handle.net/11531/107170
Keywords

Explainability, forecasting, correlation, XAI, interpretability
Collections
  • Documentos de Trabajo
