Description
In psychological research, measurement models often aggregate items under the assumption that the effects of other variables are constant across those items (i.e., effect invariance). Because this assumption ignores the random variability in item-specific effects, it can bias statistical conclusions, in particular by positively biasing the F statistic and inflating the Type I error rate.
This study advocates for adopting effect-variant models, which treat both participants and items as random factors, and demonstrates when these models are more suitable than traditional aggregation methods. We further develop this argument by introducing the Aggregation Bias Index (ABI), a new metric to quantify the bias associated with standard aggregation approaches.
Through a series of simulations and analyses of real experimental data, we show that ignoring item-specific variability in aggregated models leads to lower accuracy in detecting the condition effect and increases Type I error rates, especially in larger samples. The ABI provides an effective way to assess the extent of this bias.
These results emphasise the importance of using effect-variant models and accounting for random variability in both items and participants, ultimately enhancing the validity of statistical inferences in psychological research involving questionnaires and scales.
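The inflation mechanism can be illustrated with a minimal simulation sketch (this is an illustrative reconstruction with hypothetical parameter values, not the study's actual simulation code): when the condition effect varies across items with a true mean of zero, averaging over a fixed item set leaves a residual "effect" equal to the item sample's mean slope. That residual does not shrink as participants are added, so the aggregated t-test rejects the true null far above the nominal rate, and more so in larger samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def type1_rate(n_per_group=100, n_items=20, sigma_item=0.5,
               sigma_subj=0.3, n_reps=2000, alpha=0.05):
    """Monte Carlo Type I error rate of the aggregated two-sample t-test
    when the condition effect varies across items with a true mean of zero.
    All parameter values here are hypothetical, chosen for illustration."""
    rejections = 0
    for _ in range(n_reps):
        # item-specific condition effects; zero on average across items
        b = rng.normal(0.0, sigma_item, n_items)
        # item-mean (aggregated) scores per participant in each condition;
        # aggregation turns the random item slopes into a fixed-looking
        # shift of size b.mean() in the treatment group
        control = rng.normal(0.0, sigma_subj, n_per_group)
        treatment = b.mean() + rng.normal(0.0, sigma_subj, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        rejections += p < alpha
    return rejections / n_reps

print(type1_rate())                   # well above the nominal .05
print(type1_rate(n_per_group=400))    # inflation grows with sample size
```

An effect-variant analysis would instead model the item slopes as a random factor (e.g., a crossed random-effects model), which restores the nominal error rate by treating `b.mean()` as one draw from the item population rather than a fixed effect.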
| Question | Answer |
|---|---|
| If you're submitting a symposium talk, what's the symposium title? | Innovations in psychometric modelling: New approaches to understanding psychological measurement |
| If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | No |