Bayesian goodness-of-fit tests: past, present and future

View/Open
IIT-15-097A.pdf (314.2Kb)
Date
04/06/2015
Author
Ciuiu, Daniel
Maté Jiménez, Carlos
Status
info:eu-repo/semantics/publishedVersion

Abstract
 
 
In this paper we build the Bayesian versions of the Chi^2 and Kolmogorov-Smirnov goodness-of-fit tests. Because the Kolmogorov-Smirnov test requires a fully specified theoretical distribution, we first divide the sample into two parts: the first part is used for inference and the second part for testing. The fully specified theoretical cdf for the second part of the sample is the Bayesian forecast cdf obtained from the first part; it is unique once the prior distribution is fixed. For the Chi^2 test we perform the same Bayesian inference on the first part and compute the Bayesian forecasts of the probabilities that X belongs to the involved intervals (the values of p_i). The parameters of the prior distribution are chosen so that the Chi^2 statistic is minimized, and the number of degrees of freedom is k - 1 - npar, where k is the number of intervals and npar is the number of parameters of the distribution of X. Alternatively, we can fix the prior distribution as in the Kolmogorov-Smirnov test, in which case the number of degrees of freedom is k - 1. For this last variant we can use the whole sample, and the parameters characterising the distribution of X are the Bayesian estimators; the number of degrees of freedom is the same as above, with npar again the number of parameters of the distribution of X. When estimating the forecast cdf, the forecast probabilities of the intervals, or the parameters for the Chi^2 test, we apply analytical formulae when they exist. Otherwise we generate a sample according to the forecast distribution of X|S (or the posterior distribution of theta|S) and then apply the Monte Carlo method. The values of X are generated by the mixture method: we generate theta according to the posterior distribution, and X is then generated for each theta.
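The procedure described in the abstract can be sketched in code. The following Python snippet is a minimal, hypothetical illustration, assuming a normal model with known variance and a conjugate normal prior on the mean; the names mu0, tau0 and sigma are illustrative choices, not taken from the paper. It splits the sample in two, computes the posterior predictive cdf from the first half, and applies the classical Kolmogorov-Smirnov test to the second half against that fully specified cdf. The mixture method mentioned at the end of the abstract (draw theta from the posterior, then X given theta) is shown as a Monte Carlo fallback for models where the predictive distribution has no closed form.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data standing in for the observed sample.
x = rng.normal(loc=1.0, scale=2.0, size=200)

# Step 1: split the sample; the first part is used for Bayesian inference,
# the second part for the test.
x_infer, x_test = x[:100], x[100:]

sigma = 2.0            # assumed known observation standard deviation
mu0, tau0 = 0.0, 10.0  # prior N(mu0, tau0^2) on the mean (illustrative)

# Conjugate update: the posterior of the mean is again normal.
n = len(x_infer)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + x_infer.sum() / sigma**2)

# Step 2: the Bayesian forecast (posterior predictive) cdf is fully specified,
# so the classical Kolmogorov-Smirnov test can be applied to the second part.
pred_sd = np.sqrt(post_var + sigma**2)

def predictive_cdf(t):
    return stats.norm.cdf(t, loc=post_mean, scale=pred_sd)

ks_stat, ks_pvalue = stats.kstest(x_test, predictive_cdf)
print(f"K-S against predictive cdf: statistic = {ks_stat:.4f}, p-value = {ks_pvalue:.4f}")

# Step 3 (mixture method): when the predictive has no closed form, approximate it
# by Monte Carlo: draw theta from the posterior, then X given theta.
m = 10_000
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=m)
x_draws = rng.normal(theta_draws, sigma)  # one X per theta draw
ks2_stat, ks2_p = stats.ks_2samp(x_test, x_draws)
print(f"K-S against mixture-method draws: statistic = {ks2_stat:.4f}, p-value = {ks2_p:.4f}")

With a conjugate prior the predictive cdf is available in closed form, so the Monte Carlo step is shown only for comparison; for a non-conjugate model the two-sample comparison against the mixture-method draws would take the place of the exact test.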
 
URI
http://hdl.handle.net/11531/7948
Activity type
Book chapters
Subjects / categories / SDGs
Instituto de Investigación Tecnológica (IIT)
Keywords

Bayesian forecasting, Bayesian estimators, goodness-of-fit tests.
Collections
  • Artículos
