Speaker
Description
Unlike other measurement approaches, such as classical test theory and item response theory, the knowledge space theory (KST) framework has long lacked an overall index of reliability. Recent literature has proposed two such indices, both grounded in an information-theoretic framework. In this research, we investigate reliability in KST further and introduce two new key measures: the expected accuracy rate and the expected discrepancy. The expected accuracy rate quantifies the probability that the knowledge state estimated through an assessment matches the true state, while the expected discrepancy measures the average deviation between the estimated and true states when a misclassification occurs. These two indices are immediately interpretable as a probability and a distance, respectively, thus offering clear insight into the reliability of an assessment.
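As a rough illustration of the two quantities just described, the following sketch estimates them from paired true and estimated knowledge states (each a set of mastered items). The function names and the choice of the symmetric-difference distance are illustrative assumptions, not the definitions used in the talk.

```python
def symmetric_difference_distance(state_a, state_b):
    """Number of items on which two knowledge states disagree."""
    return len(set(state_a) ^ set(state_b))

def reliability_indices(true_states, estimated_states):
    """Return (accuracy rate, discrepancy) estimated from a sample.

    Accuracy: proportion of assessments whose estimated state equals
    the true state.  Discrepancy: mean distance between estimated and
    true states, averaged over the misclassified cases only.
    """
    matches = 0
    distances = []
    for true, est in zip(true_states, estimated_states):
        d = symmetric_difference_distance(true, est)
        if d == 0:
            matches += 1
        else:
            distances.append(d)
    accuracy = matches / len(true_states)
    discrepancy = sum(distances) / len(distances) if distances else 0.0
    return accuracy, discrepancy

# Toy example: three assessments, one misclassified by a single item
true_states = [{"a", "b"}, {"a"}, {"a", "b", "c"}]
est_states = [{"a", "b"}, {"a"}, {"a", "b"}]
acc, disc = reliability_indices(true_states, est_states)
print(acc, disc)  # → 0.6666666666666666 1.0
```

In the toy example, two of three estimated states match exactly, and the single misclassified state differs from the true one by one item, so the two indices read directly as a probability and a distance.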
In this talk, we briefly discuss these indices from a theoretical perspective and present two simulation studies that evaluate their performance under different conditions. The simulations show that smaller structures exhibit consistent accuracy across varying error levels, whereas larger structures show increasing discrepancies as error rates rise. Furthermore, in larger structures accuracy improves as the number of items grows, mitigating the impact of errors. Importantly, the results also show that, when a misclassification does occur, the estimated state is generally very close to the true one, indicating that the practical effect of misclassifications in the assessment is almost negligible. In summary, these new indices represent a significant advance for KST, equipping the framework with a robust method for estimating assessment reliability.