Key research themes
1. How can Item Response Tree models capture complex response processes beyond traditional IRT outcomes?
This research theme explores advances in Item Response Theory (IRT) that model the internal cognitive or psychological decision processes underlying item response selection. Rather than scoring only the final observed response, item response tree (IRTree) models characterize sequential, nested, and multidimensional decision-making pathways. This finer-grained modeling yields nuanced insights into psychological assessments, response omissions, and the structure of Likert-type scale responses, addressing a limitation of classical IRT models, which treat responses as flat outcome categories.
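As a minimal illustration of the tree idea (a generic sketch, not the specification used in any particular study), consider a hypothetical two-node IRTree for a four-point Likert item: a first node governs agreement versus disagreement and a second node governs the extremity of the chosen direction, each with its own Rasch-type submodel and, here, its own latent trait.

\[
P(Y_{pi} = \text{strongly agree} \mid \theta_{p1}, \theta_{p2})
  = \underbrace{\frac{e^{\theta_{p1} - b_{i1}}}{1 + e^{\theta_{p1} - b_{i1}}}}_{\text{node 1: agree vs.\ disagree}}
  \times
  \underbrace{\frac{e^{\theta_{p2} - b_{i2}}}{1 + e^{\theta_{p2} - b_{i2}}}}_{\text{node 2: extreme vs.\ mild}}
\]

Here \(\theta_{p1}\) and \(\theta_{p2}\) are separate direction and extremity traits for person \(p\) and \(b_{i1}, b_{i2}\) are node-specific item parameters, showing how one observed category is re-expressed as a product of sequential binary decisions.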
2. What advantages does Item Response Theory offer over Classical Test Theory in psychological test development and measurement precision?
This research area investigates the methodological and practical benefits of Item Response Theory (IRT) over Classical Test Theory (CTT) in psychological and educational assessment. It focuses on how IRT models provide invariant item and person parameters, permit quantification of measurement precision at varying trait levels, and support refined test development practices. These advantages are crucial for improving test validity, reliability, and interpretability, especially for scales with graded responses.
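The precision argument is typically made through the item information function. Under the two-parameter logistic model, for example, information varies with the latent trait, so the standard error of measurement is conditional on trait level rather than constant as in CTT; the expressions below are the standard forms.

\[
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
I_i(\theta) = a_i^2\, P_i(\theta)\,[1 - P_i(\theta)], \qquad
SE(\hat\theta) = \frac{1}{\sqrt{\sum_i I_i(\theta)}}
\]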
3. How can item-fit and model-data fit be accurately assessed in IRT to identify aberrant items and improve measurement validity?
This theme focuses on developing and evaluating statistical methods for assessing the fit of IRT models at the item level, which is crucial for accurate parameter estimation and valid test scores. It compares chi-square and entropy-based techniques, investigates problems with traditional fit statistics caused by model dependency and sample-specific grouping, and explores computational innovations that yield more precise diagnostics of item misfit, enabling better item selection and test calibration.
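A typical chi-square item-fit statistic of the kind this literature critiques groups examinees into \(G\) intervals on the estimated trait scale and compares observed with model-predicted proportions correct; a generic form, in the spirit of Yen's Q1 or Bock's chi-square, is

\[
\chi^2_i = \sum_{g=1}^{G} \frac{N_g \bigl(O_{ig} - E_{ig}\bigr)^2}{E_{ig}\bigl(1 - E_{ig}\bigr)},
\]

where \(O_{ig}\) and \(E_{ig}\) are the observed and expected proportions answering item \(i\) correctly in group \(g\) and \(N_g\) is the group size. Because the grouping depends on the sample and on the fitted model, the statistic's null distribution is only approximately chi-square, which is the sample-specific grouping problem noted above.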
4. What computational methods and software can enhance IRT parameter estimation for complex models and simulation studies?
This area addresses the challenges of estimating flexible IRT models with Bayesian methods and the computational resources they require. It covers implementation strategies using BUGS-language software for common and extended IRT models, enabling customization for longitudinal or multilevel data structures. It also examines automating large-scale simulation studies with R scripting that drives stand-alone software packages, streamlining the iterative model fitting and fit-metric extraction that psychometric simulation research requires.
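As a hedged sketch of the Bayesian estimation idea (written in Python with NumPy rather than BUGS or R, with made-up sample sizes, priors, and tuning constants), the snippet below simulates Rasch-model responses and recovers item difficulties with a simple random-walk Metropolis sampler. A production simulation study would instead script many replications against a dedicated package or stand-alone estimation program.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rasch(n_persons=200, n_items=10):
    """Generate binary responses from a Rasch model (illustrative settings)."""
    theta = rng.normal(0.0, 1.0, n_persons)          # person abilities
    b = np.linspace(-1.5, 1.5, n_items)              # item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random((n_persons, n_items)) < p).astype(int), b

def person_loglik(x, theta, b):
    """Bernoulli-logit log-likelihood summed over items, one value per person."""
    eta = theta[:, None] - b[None, :]
    return (x * eta - np.logaddexp(0.0, eta)).sum(axis=1)

def mh_rasch(x, n_iter=2000, step=0.4):
    """Random-walk Metropolis for the Rasch model; returns item-difficulty draws."""
    n, j = x.shape
    theta, b = np.zeros(n), np.zeros(j)
    draws_b = np.empty((n_iter, j))
    for it in range(n_iter):
        # Update person abilities; standard normal prior on theta.
        prop = theta + rng.normal(0.0, step, n)
        log_r = (person_loglik(x, prop, b) - 0.5 * prop**2) \
              - (person_loglik(x, theta, b) - 0.5 * theta**2)
        theta = np.where(np.log(rng.random(n)) < log_r, prop, theta)
        # Update item difficulties; vague N(0, sd=2) prior on b.
        prop_b = b + rng.normal(0.0, step, j)
        eta_old = theta[:, None] - b[None, :]
        eta_new = theta[:, None] - prop_b[None, :]
        ll_old = (x * eta_old - np.logaddexp(0.0, eta_old)).sum(axis=0)
        ll_new = (x * eta_new - np.logaddexp(0.0, eta_new)).sum(axis=0)
        log_rb = (ll_new - prop_b**2 / 8.0) - (ll_old - b**2 / 8.0)
        b = np.where(np.log(rng.random(j)) < log_rb, prop_b, b)
        draws_b[it] = b
    return draws_b

x, true_b = simulate_rasch()
draws = mh_rasch(x)
print("posterior mean difficulties:", draws[1000:].mean(axis=0).round(2))
print("true difficulties:          ", true_b.round(2))
```

The same loop structure, wrapped around calls to an external estimator and a parser for its output, is the pattern the R-scripting automation work in this theme formalizes.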
5. How can IRT be extended and applied to continuous and polytomous response data while accounting for measurement constraints?
This theme investigates the modeling of non-dichotomous responses in IRT, including continuous measurements such as response times and Likert-scale data. It explores latent trait models that incorporate distributional restrictions (e.g., bounded responses), extend traditional discrete IRT to continuous response domains, and develop threshold models suited to polytomous items. Attending to response-scale properties improves model appropriateness, measurement validity, and the treatment of response patterns across diverse assessment contexts.
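One standard threshold formulation for ordered polytomous items within this theme is Samejima's graded response model; the cumulative-probability form below is the usual presentation, with category probabilities obtained by differencing.

\[
P(X_{ij} \ge k \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_{jk})}}, \qquad
P(X_{ij} = k \mid \theta_i) = P(X_{ij} \ge k \mid \theta_i) - P(X_{ij} \ge k+1 \mid \theta_i),
\]

for ordered categories \(k = 0, 1, \dots, K_j\) with ordered thresholds \(b_{j1} < \dots < b_{jK_j}\) and the conventions \(P(X_{ij} \ge 0) = 1\) and \(P(X_{ij} \ge K_j + 1) = 0\). For bounded continuous responses, analogous models typically place the latent regression on a transformed (e.g., logit) scale so that the distributional restriction is respected.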