AIC model selection VS Likelihood Ratio Tests #119
-
Hello. First of all, thank you for this great package, which has been helping me a lot with the analysis of community data. I am currently analyzing community data from an experiment with a fully factorial design of two factors with three levels each (agrochemical treatment and spatial isolation), and I wish to test whether these two treatments have additive or interactive effects on the community. So I first ran the following models and compared them using AICc (I am omitting some arguments just to keep the code cleaner). Also, I am using zero latent variables because the AIC value for this model was by far the smallest compared to models with 1, 2, 3, 4, and 5 latent variables (all tested with the full model). I then compared those models using AICc and arranged the results in a cleaner model selection table (this is from a function that I designed myself).
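For concreteness, here is a minimal sketch of what I mean; the object names, the data, and the family below are placeholders rather than my actual code:

```r
library(gllvm)

# Placeholders: `abund` is the site-by-species abundance matrix, `env` a data
# frame with the two factors; "negative.binomial" is just an example family.
m_treat <- gllvm(y = abund, X = env, formula = ~ treatment,
                 family = "negative.binomial", num.lv = 0)
m_isol  <- gllvm(y = abund, X = env, formula = ~ isolation,
                 family = "negative.binomial", num.lv = 0)
m_add   <- gllvm(y = abund, X = env, formula = ~ treatment + isolation,
                 family = "negative.binomial", num.lv = 0)
m_full  <- gllvm(y = abund, X = env, formula = ~ treatment * isolation,
                 family = "negative.binomial", num.lv = 0)

# Compare the four fits by AICc (assuming the installed gllvm provides an
# AICc() method for these objects)
sapply(list(treatment = m_treat, isolation = m_isol,
            additive = m_add, interaction = m_full), AICc)
```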
It seems that the most plausible model is the one that only includes additive effects of both factors. Considering the rule of thumb that models with delta AICc < 2 are equally plausible, I would say that the most plausible model is actually the one that only includes the effect of treatment, since it has delta AICc < 2 and is a simpler model. Also, the model that includes the interaction is by far the least plausible model. However, if I do likelihood ratio tests with the `anova` function, comparing nested models to test the effects of treatment, of isolation, and of the interaction (sketched below), I get a different picture.
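In sketch form, with the same placeholder objects as above, the three comparisons are roughly:

```r
# Likelihood ratio tests via anova(), comparing nested gllvm fits
anova(m_isol, m_add)    # effect of treatment (isolation-only vs. additive)
anova(m_treat, m_add)   # effect of isolation (treatment-only vs. additive)
anova(m_add, m_full)    # interaction (additive vs. full)
```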
It basically says that all main effects and the interaction are significant, with very small p-values. I understand that model selection using AIC is not supposed to return exactly the same results (qualitatively) as likelihood ratio tests. I am also aware that the … Thank you!
-
Thanks for your question; there is no reason why `anova` and `AICc` should correspond here (though ideally they would, of course).

In general I would suggest caution in doing extensive model selection with either LRT or AIC in models with mixed effects (or generally, actually). LRT does not penalize for the number of parameters, AICc does, and a model is likely to improve if you throw in a number of parameters equal to the number of species! As the `anova` function says, please do not rely on it when the difference in the number of parameters is so large (as is the case here).

I am very hesitant to make a recommendation on what you should do, as there are many different opinions and thoughts on the subject of model selection (and how to do it in mixed-effects models). I would personally go with … As an aside, the …
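To put very rough numbers on that point (purely illustrative; the species count below is made up, and it assumes species-specific coefficients for two 3-level factors):

```r
# Back-of-the-envelope bookkeeping: with species-specific coefficients, the
# 3 x 3 interaction adds (3 - 1) * (3 - 1) = 4 parameters per species.
m_sp    <- 50               # hypothetical number of species
k_extra <- 4 * m_sp         # extra parameters brought in by the interaction

qchisq(0.95, df = k_extra)  # deviance improvement a LRT needs at alpha = 0.05 (~234)
2 * k_extra                 # plain AIC penalty for the same addition (400); AICc is larger still
```

So with many species, a term can clear the LRT threshold while still being penalized away by AIC(c), which is one way the two can point in different directions for the same fits.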