Description
Current psychological models of human reasoning rely on a small set of assumptions to explain and predict the patterns of competence and error observed when people solve deductive reasoning problems. Here, we test those theories on a large dataset derived from the CISIA TOLC-PSI test, which collected answers to a wide range of deductive problems from over 40,000 prospective university students.
The test included four categories of verbal reasoning problems: propositional logic, syllogisms, iterative reasoning, and the definition of necessary and sufficient conditions. Additionally, each problem can be characterized along several dimensions, some shared across the four categories (e.g., logical depth) and others category-specific (e.g., the figure of a quantified syllogism). Each participant solved ten problems selected from a pool of 200, yielding an average of about 2,000 responses per problem. Unlike many previous psychological studies, which have focused on a narrow range of problem types or relied on relatively small samples, the CISIA dataset spans the full spectrum of logical complexity, and this diversity offers a distinctive advantage.
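The per-problem response count follows directly from the sampling design. Assuming each student's ten problems are drawn roughly uniformly from the pool of 200 (a sampling scheme the abstract does not spell out), the expected number of responses per problem is

\[
\frac{40{,}000 \text{ students} \times 10 \text{ problems each}}{200 \text{ problems in the pool}} \approx 2{,}000 \text{ responses per problem.}
\]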
We present results from a two-step approach: first, we formalized theory-based predictions of problem difficulty (e.g., that problems involving a greater number of implication schemas, semantic models, or assumptions should be more challenging); second, we compared these theory-based predictions with empirical difficulty estimates obtained by fitting polytomous Item Response Theory models.
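A minimal Python sketch of the second step follows. Everything in it is assumed for illustration: the file names ("responses.csv", "problems.csv"), the column names ("student_id", "problem_id", "correct", "n_schemas", "n_models", "n_assumptions"), and, most importantly, the difficulty estimator. The study fits polytomous Item Response Theory models; this sketch substitutes the logit of each problem's error rate as a crude empirical difficulty score.

import pandas as pd
from scipy.special import logit
from scipy.stats import spearmanr

# Long-format responses (one row per student-problem pair) and per-problem
# theory-based complexity counts; both files and their columns are hypothetical.
responses = pd.read_csv("responses.csv")  # student_id, problem_id, correct (0/1)
problems = pd.read_csv("problems.csv")    # problem_id, n_schemas, n_models, n_assumptions

# Crude empirical difficulty: logit of the per-problem error rate (higher = harder).
# The actual analysis estimates difficulty by fitting polytomous IRT models instead.
error_rate = 1.0 - responses.groupby("problem_id")["correct"].mean()
difficulty = logit(error_rate.clip(0.01, 0.99)).rename("difficulty").reset_index()

# Rank-correlate each theory-based complexity predictor with empirical difficulty.
merged = problems.merge(difficulty, on="problem_id")
for predictor in ["n_schemas", "n_models", "n_assumptions"]:
    rho, p = spearmanr(merged[predictor], merged["difficulty"])
    print(f"{predictor}: Spearman rho = {rho:.2f} (p = {p:.3g})")

The comparison logic stays the same whichever estimator is plugged in; the crude logit score here would simply be replaced by the item difficulty parameters of the fitted polytomous IRT models.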