Speaker
Description
While adaptive administration procedures are widely adopted in educational assessment, their application in psychological testing remains limited. A plausible reason for this delay lies in the rigid administration rules required by classical test theory: all individuals receive the same items in the same order and must respond to all of them. These constraints are incompatible with the principles of adaptive testing, in which the set and sequence of items vary across individuals. Nevertheless, adaptive procedures are an effective way to obtain shorter and more personalized assessments, while still producing an estimate of performance that refers to the full item pool. Adaptive testing algorithms have been successfully implemented within both Item Response Theory and Knowledge Space Theory (KST). The latter, introduced in 1985, was designed specifically to build an “efficient machine for assessment”. Despite numerous successful applications, few statistical indices have been developed to evaluate the reliability and accuracy of adaptive assessments. This contribution, grounded in the framework of KST, introduces a novel index for estimating the stability of an adaptive assessment, conceived as the consistency between the obtained evaluation and the one that would have been obtained through a full test administration. Moreover, this index provides the foundation for a new adaptive assessment algorithm that implements a dynamic termination criterion. The utility and validity of the proposed index are demonstrated through simulation studies and empirical applications based on a computerized test of fluid intelligence administered to both general and clinical populations.
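As an illustrative sketch only, and not the index or algorithm proposed above, the idea of consistency between an adaptive evaluation and a full-administration evaluation can be pictured in KST terms by comparing two knowledge states, i.e., subsets of the item domain. The agreement measure, names, and data below are assumptions made purely for demonstration.

```python
# Hypothetical sketch: agreement between an adaptively estimated knowledge state
# and the state obtained from a full administration, both modeled as item subsets.
# The normalized symmetric-difference agreement used here is an illustrative
# stand-in, not the stability index described in the abstract.

def state_agreement(adaptive_state: set[str], full_state: set[str], domain: set[str]) -> float:
    """Return 1 minus the normalized symmetric difference between two knowledge states."""
    if not domain:
        raise ValueError("The item domain must be non-empty.")
    mismatched = adaptive_state.symmetric_difference(full_state)
    return 1.0 - len(mismatched) / len(domain)

# Example: a 5-item domain where the two assessments disagree on a single item.
domain = {"q1", "q2", "q3", "q4", "q5"}
adaptive_state = {"q1", "q2", "q3"}
full_state = {"q1", "q2", "q3", "q4"}
print(state_agreement(adaptive_state, full_state, domain))  # 0.8
```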
| Question | Answer |
|---|---|
| If you're submitting a symposium talk, what's the symposium title? | New perspectives for developing short forms of tests |
| If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | No |