Speaker
Description
The inclusion of negatively worded items in psychological scales often introduces wording effects: systematic response biases that distort factor structures and inflate dimensionality estimates, threatening structural validity. The random intercept item factor analysis (RIIFA) model offers a promising strategy for isolating systematic variance due to item wording by incorporating a latent method factor that explicitly models individual differences in response style. The present work empirically evaluates the effectiveness of RIIFA on the Short Grit Scale (Grit-S) across two complementary studies. Study 1 (N = 977) compared standard dimensionality assessment techniques, Exploratory Graph Analysis (EGA) and Parallel Analysis (PA), with their RIIFA-based counterparts (riEGA and riPA). The traditional methods overestimated the number of factors, identifying separate factors for positively and negatively worded items; in contrast, the RIIFA-based techniques yielded unidimensional and more stable solutions, as confirmed by bootstrap analyses. Study 2 (N = 496) tested four confirmatory factor analysis (CFA) models: unidimensional, two correlated factors, bifactor, and RIIFA-based CFA (riCFA). The riCFA model provided the best balance between parsimony and fit (RMSEA = .048 [.022–.072]; CFI = .984; TLI = .974; SRMR = .027), with satisfactory reliability (ω_H = .73). Overall, the findings support the application of RIIFA for mitigating method variance in both exploratory and confirmatory frameworks, enhancing the interpretability and structural integrity of psychological constructs measured with mixed-worded scales.
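As background on the model the abstract relies on, here is a minimal sketch of the random intercept measurement specification in generic RIIFA notation (the symbols below are standard and not taken from this submission): each item has its own content loading, while a single method factor with all loadings fixed to 1 absorbs individual differences in response style.

```latex
% Random intercept item factor analysis (RIIFA), generic notation:
%   x_{ij}     = observed response of person j to item i
%   \eta_j     = content factor (e.g., grit); loading \lambda_i varies by item
%   \theta_j   = random intercept (wording/acquiescence style); loading fixed to 1
x_{ij} = \mu_i + \lambda_i \eta_j + \theta_j + \varepsilon_{ij},
\qquad \theta_j \sim \mathcal{N}\!\left(0, \sigma^2_{\theta}\right),
\qquad \operatorname{Cov}\!\left(\eta_j, \theta_j\right) = 0
% Because \theta_j enters every item identically and is orthogonal to
% \eta_j, response-style variance is separated from the content factor
% instead of surfacing as a spurious second factor for the negatively
% worded items.
```

Roughly speaking, the riCFA in Study 2 fits this specification confirmatorily, while the exploratory counterparts (riEGA, riPA) remove the same random-intercept component before estimating dimensionality.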
| If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? |
|---|
| No |