The rule of thumb: AIC and BIC preference/significance levels
The Akaike information criterion (AIC), developed by Hirotsugu Akaike in 1974, is a measure of the relative goodness of fit of a statistical model. It was not much appreciated until the 21st century; now it is one of the most widely used fit statistics in statistical modeling. The preferred model is the one with the minimum AIC value; however, there is no statistical test for model choice based on this criterion. I found a table in Joseph M. Hilbe's book (Negative Binomial Regression, 2nd Ed., 2011) that may help us choose the better model. Below is the table, which I modified from his somewhat confusing original:
AIC = -2(log-likelihood) + 2(number of predictors including the intercept)
========================================
ΔAIC = AIC(A) - AIC(B), where AIC(A) > AIC(B)
----------------------------------------------------------------
< 2.5        No difference between the models
2.5 – 5.9    Prefer B if sample size n > 256
6.0 – 9.9    Prefer B if sample size n > 64
≥ 10.0       Prefer B
========================================
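To make the AIC rule concrete, here is a minimal sketch in Python. The helper names (aic, prefer_by_aic) and the log-likelihoods in the example are hypothetical, purely for illustration:

def aic(log_likelihood, n_params):
    # AIC = -2(log-likelihood) + 2(number of predictors including the intercept)
    return -2.0 * log_likelihood + 2.0 * n_params

def prefer_by_aic(aic_a, aic_b, n):
    # Rule-of-thumb table above; assumes AIC(A) > AIC(B), i.e. model B fits better
    delta = aic_a - aic_b
    if delta < 2.5:
        return "No difference between the models"
    if delta < 6.0:
        return "Prefer B" if n > 256 else "No clear preference"
    if delta < 10.0:
        return "Prefer B" if n > 64 else "No clear preference"
    return "Prefer B"

# Two hypothetical models fitted to n = 300 observations:
# A: log-likelihood -1234.5 with 4 parameters; B: -1230.1 with 6 parameters
print(prefer_by_aic(aic(-1234.5, 4), aic(-1230.1, 6), n=300))

Here ΔAIC = 4.8 falls in the 2.5 – 5.9 band, so B is preferred only because n = 300 > 256.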
Another criterion is the Bayesian information criterion (BIC). Raftery (1995) gave a scale for the relative preference between two models (again, I modified the table from Hilbe's book):
BIC = -2(log-likelihood) + (number of predictors including the intercept) × ln(sample size)
========================================
ΔBIC = BIC(A) - BIC(B), where BIC(A) > BIC(B)
----------------------------------------------------------------
< 2.0        Weak evidence
2.0 – 5.9    Positive evidence; prefer B
6.0 – 9.9    Strong evidence; prefer B
≥ 10.0       Very strong evidence; prefer B
========================================
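The same sketch for BIC, using Raftery's scale (hypothetical numbers again; the same two models as in the AIC example):

from math import log

def bic(log_likelihood, n_params, n):
    # BIC = -2(log-likelihood) + (number of predictors including the intercept) * ln(n)
    return -2.0 * log_likelihood + n_params * log(n)

def evidence_by_bic(bic_a, bic_b):
    # Raftery's (1995) scale; the model with the smaller BIC is favored
    favored = "B" if bic_b < bic_a else "A"
    delta = abs(bic_a - bic_b)
    if delta < 2.0:
        return "Weak evidence"
    if delta < 6.0:
        return "Positive evidence; prefer " + favored
    if delta < 10.0:
        return "Strong evidence; prefer " + favored
    return "Very strong evidence; prefer " + favored

print(evidence_by_bic(bic(-1234.5, 4, 300), bic(-1230.1, 6, 300)))

With these made-up numbers the two criteria disagree: AIC preferred the larger model B, while BIC charges ln(300) ≈ 5.7 per parameter instead of 2, which flips the preference to the simpler model A ("Positive evidence; prefer A").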
1 comment:
Do you have any idea about the reasoning behind these rules of thumb? I asked a question on Stack Exchange to this effect: https://stats.stackexchange.com/questions/349883/what-is-the-logic-behind-rule-of-thumb-for-meaningful-differences-in-aic. Could you help?