Key research themes
1. How do latent class models characterize discrete latent structures and improve classification accuracy?
This research area focuses on the development and extension of latent class (LC) models, which posit that latent variables are categorical and assign observations to mutually exclusive latent classes. These models provide a framework for explaining associations among observed variables through class membership, which is particularly useful for categorical data. Understanding how LC models are specified, how their parameters are interpreted, and how model fit is assessed is crucial for effective classification and clustering applications in the social sciences and related fields.
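To make the setup concrete, below is a minimal sketch of a latent class model for binary items, fit by EM under the standard conditional-independence assumption. The function name `fit_latent_class`, the number of classes, and the simulated data are illustrative, not drawn from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_latent_class(Y, K, n_iter=200, tol=1e-6):
    """EM for a latent class model with binary items.

    Y : (N, J) array of 0/1 responses
    K : number of latent classes
    Returns class weights pi (K,), item probabilities theta (K, J),
    and posterior class memberships resp (N, K).
    """
    N, J = Y.shape
    pi = np.full(K, 1.0 / K)                    # class prevalences
    theta = rng.uniform(0.3, 0.7, size=(K, J))  # P(item j = 1 | class k)
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: log P(y_i, z_i = k), items independent given the class
        log_p = (Y @ np.log(theta).T
                 + (1 - Y) @ np.log(1 - theta).T
                 + np.log(pi))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        resp = np.exp(log_p - log_norm)         # posterior P(z_i = k | y_i)
        # M-step: weighted maximum-likelihood updates
        nk = resp.sum(axis=0)
        pi = nk / N
        theta = np.clip(resp.T @ Y / nk[:, None], 1e-6, 1 - 1e-6)
        ll = log_norm.sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return pi, theta, resp

# Hypothetical usage: two classes, five binary items
Y = rng.binomial(1, 0.5, size=(500, 5))
pi, theta, resp = fit_latent_class(Y, K=2)
hard_labels = resp.argmax(axis=1)  # modal class assignment
```

The posterior memberships `resp` give the probabilistic classification that this theme is concerned with; `hard_labels` is the modal assignment often reported in applications.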
2. What advances exist in latent variable model estimation methods to overcome local maxima and computational challenges?
This theme addresses methodological innovations for estimating discrete latent variable (DLV) models, including latent class and hidden Markov models, whose likelihood surfaces are often multimodal and costly to maximize. Key methods include tempered expectation-maximization (EM) algorithms, which explore the parameter space more thoroughly to reach the global maximum, and dimension-reduction approximations that make likelihood inference tractable in generalized linear latent variable models (GLLVMs). Improving the robustness and scalability of estimation is critical for reliable inference from latent variable models, especially in high-dimensional and complex settings.
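The core idea of tempered EM can be sketched compactly: at a temperature T > 1 the E-step posteriors are flattened (raised to the power 1/T and renormalized), letting the algorithm move between basins of attraction before T is annealed to 1, where the updates coincide with standard EM. The sketch below applies this to a one-dimensional Gaussian mixture; the geometric annealing schedule and all tuning constants are assumptions for illustration, not the schedule of any particular published algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def tempered_em_gmm(x, K, temps=np.geomspace(5.0, 1.0, 10), n_inner=20):
    """Tempered EM for a 1-D Gaussian mixture (illustrative schedule)."""
    n = x.size
    w = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)
    sd = np.full(K, x.std())
    for T in temps:
        for _ in range(n_inner):
            # Tempered E-step: raise joint density to the power 1/T, renormalize
            log_p = np.log(w) + norm.logpdf(x[:, None], mu, sd)
            log_p /= T
            log_p -= np.logaddexp.reduce(log_p, axis=1, keepdims=True)
            r = np.exp(log_p)
            # M-step: standard weighted updates
            nk = r.sum(axis=0)
            w = nk / n
            mu = (r * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-8
    return w, mu, sd

# Hypothetical data: two well-separated components
x = np.concatenate([rng.normal(-3, 0.7, 400), rng.normal(3, 0.7, 400)])
w, mu, sd = tempered_em_gmm(x, K=2)
```

At high T the responsibilities `r` are nearly uniform, so early iterations average over many configurations rather than committing to the nearest local mode; as T falls to 1 the updates sharpen into ordinary EM.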
3. How are latent variable models compared and integrated with network models and neural architectures for dimensionality reduction and data representation?
This research theme investigates the equivalences, distinctions, and hybrids between latent variable models and network models, especially in psychological and machine learning applications. It examines interpretive differences that persist despite statistical equivalence, proposes procedures for comparing the two model classes, and develops models that integrate latent variable frameworks with modern neural network techniques to improve latent representations, supervised and semi-supervised learning, and dimensionality reduction. Such integration advances the understanding of latent structures while leveraging flexible nonlinear mappings.
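One common form of this integration replaces the linear measurement model of a classical latent variable model with neural mappings, as in a variational autoencoder. The sketch below, assuming PyTorch and with purely illustrative layer sizes, shows the structural parallel: the decoder plays the role of the measurement model from latent z to observed data, while the encoder amortizes posterior inference.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal variational autoencoder: a neural latent variable model."""
    def __init__(self, d_in=20, d_latent=2, d_hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.log_var = nn.Linear(d_hidden, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.Tanh(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: sample z while keeping gradients
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.dec(z), mu, log_var

def elbo_loss(x, x_hat, mu, log_var):
    # Gaussian reconstruction term plus analytic KL to a standard normal prior
    rec = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(dim=1).mean()
    return rec + kl

# Hypothetical training step on random data
x = torch.randn(128, 20)
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_hat, mu, log_var = model(x)
loss = elbo_loss(x, x_hat, mu, log_var)
loss.backward()
opt.step()
```

The low-dimensional `mu` produced by the encoder is the learned latent representation used for dimensionality reduction, which is where these neural hybrids connect back to classical latent variable modeling.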