Key research themes
1. How can feature selection methods optimally balance relevance and redundancy to improve classification performance?
This research area investigates algorithms and theoretical frameworks for selecting feature subsets that are maximally relevant to the target while remaining minimally redundant with one another, with the goal of improving model accuracy, interpretability, and computational efficiency in classification tasks. Such principled subset selection matters because it mitigates the curse of dimensionality and benefits both supervised and unsupervised learning.
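As one concrete illustration, here is a minimal sketch of a greedy selector following the classic minimal-redundancy-maximal-relevance (mRMR) criterion, built on scikit-learn's mutual-information estimators. The function name `mrmr_select` and the scoring details are illustrative choices, not taken from any specific surveyed work.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k, random_state=0):
    """Greedily pick k feature indices maximizing relevance minus mean redundancy."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=random_state)
    selected = [int(np.argmax(relevance))]            # seed with the most relevant feature
    candidates = set(range(n_features)) - set(selected)
    # redundancy[j] accumulates sum of MI(feature j, s) over already-selected features s.
    redundancy = np.zeros(n_features)
    while len(selected) < k and candidates:
        last = selected[-1]
        # MI of every feature with the most recently selected feature (treated as continuous).
        redundancy += mutual_info_regression(X, X[:, last], random_state=random_state)
        cand = np.array(sorted(candidates))
        scores = relevance[cand] - redundancy[cand] / len(selected)
        best = int(cand[np.argmax(scores)])
        selected.append(best)
        candidates.remove(best)
    return selected
```

A call like `mrmr_select(X_train, y_train, k=20)` returns column indices. Note that the greedy loop requires on the order of k passes of feature-feature MI estimation; managing exactly this cost-accuracy trade-off is a recurring concern in the methods this theme covers.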
2. How can automated and meta-learning approaches enhance feature engineering to improve classification accuracy efficiently?
This theme focuses on automating feature engineering by learning from prior experience and extracting meta-information. Rather than exhaustively enumerating and evaluating candidate transformations, these approaches predict which transformations are likely to help, reducing computational cost and enabling scalable, interpretable feature construction that generalizes across datasets and models for classification.
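To make the idea concrete, below is a hypothetical sketch of transformation prediction in the spirit of learning-based feature engineering: a meta-model, trained offline on features from previously seen datasets, maps cheap per-feature statistics to a label indicating whether a given transformation (here, `log1p`) improved a reference classifier. All names (`meta_features`, `fit_meta_model`, `recommend_log_transform`) and the choice of meta-features are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def meta_features(col):
    """Cheap per-feature statistics used as meta-features (an illustrative choice)."""
    return [np.mean(col), np.std(col), stats.skew(col), stats.kurtosis(col),
            np.min(col), np.max(col)]

def fit_meta_model(meta_X, meta_y):
    # meta_X: meta-feature vectors gathered from features of past datasets (offline phase).
    # meta_y: 1 if applying np.log1p to that feature improved a reference classifier, else 0.
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(meta_X, meta_y)

def recommend_log_transform(meta_model, X_new):
    """Predict, per column of a new dataset, whether log1p is likely to help."""
    stats_new = np.array([meta_features(X_new[:, j]) for j in range(X_new.shape[1])])
    return meta_model.predict(stats_new).astype(bool)   # True -> apply np.log1p
```

The design point is that the expensive evaluate-every-transformation loop runs once, offline, over past datasets; at deployment time only the meta-model's prediction is needed, which is what makes the approach scale.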
3. What methodologies improve the stability and robustness of feature selection under data perturbations and limited labeled data?
Research within this theme addresses how sensitive feature selection outcomes are to data perturbations, small sample sizes, and partially labeled datasets. It emphasizes designing selectors that remain stable as well as accurate, using validation techniques, semi-supervised frameworks, and formal stability measures, so that the selected subsets are reliable, reproducible, and generalize well enough to improve downstream classifier performance.
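As a minimal sketch of one common way to quantify such stability: rerun a selector on bootstrap resamples and average the pairwise Jaccard overlap of the selected subsets. The selector here (`SelectKBest` with an F-test) and the parameters are placeholders; the literature offers more refined indices that, for example, correct for chance overlap.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import SelectKBest, f_classif

def selection_stability(X, y, k=10, n_boot=30, seed=0):
    """Mean pairwise Jaccard similarity of feature subsets across bootstrap resamples."""
    rng = np.random.default_rng(seed)
    subsets = []
    for _ in range(n_boot):
        # Plain bootstrap resample; a stratified resample would be safer for rare classes.
        idx = rng.integers(0, len(y), size=len(y))
        sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
        subsets.append(set(np.flatnonzero(sel.get_support())))
    jaccards = [len(a & b) / len(a | b) for a, b in combinations(subsets, 2)]
    return float(np.mean(jaccards))                     # 1.0 = perfectly stable selection
```

Reporting such a score alongside classification accuracy makes the stability-accuracy trade-off explicit, which is precisely the kind of evaluation this theme advocates.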