Key research themes
1. How can Sequential Minimal Optimization (SMO) be adapted and optimized for efficient training of Least Squares SVM (LS-SVM) classifiers?
This research area investigates adapting SMO algorithms to LS-SVM classifiers, focusing on working-set selection strategies and kernel choices that improve training efficiency and convergence. The motivation is that conjugate gradient methods have traditionally dominated LS-SVM training, while SMO, the standard workhorse for ordinary SVMs, may offer computational advantages once adapted to the LS-SVM dual problem.
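To make the idea concrete, here is a minimal numpy sketch of an SMO-style coordinate solver for a simplified, bias-free LS-SVM dual: minimize 0.5 aᵀA a − 1ᵀa with A = diag(y) K diag(y) + I/γ. The greedy most-violating-coordinate rule stands in for the working-set selection strategies studied in this theme; the RBF kernel and the γ and σ values are illustrative assumptions, not any specific paper's settings.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian RBF kernel matrix between row sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_smo(X, y, gamma=10.0, sigma=1.0, n_steps=200, tol=1e-6):
    """Greedy coordinate (SMO-style) solver for the bias-free LS-SVM dual:
    minimize 0.5 a^T A a - 1^T a,  A = diag(y) K diag(y) + I/gamma."""
    n = len(y)
    A = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    a = np.zeros(n)
    grad = -np.ones(n)                 # gradient A a - 1 at a = 0
    for _ in range(n_steps):
        i = int(np.argmax(np.abs(grad)))   # most-violating coordinate
        if abs(grad[i]) < tol:
            break
        delta = -grad[i] / A[i, i]         # exact 1-D minimizer along e_i
        a[i] += delta
        grad += delta * A[:, i]            # incremental gradient update
    return a

# toy two-class problem: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
a = lssvm_smo(X, y)
pred = np.sign((a * y) @ rbf_kernel(X, X))   # f(x_j) = sum_i a_i y_i K(x_i, x_j)
print("training accuracy:", (pred == y).mean())
```

Because A is symmetric positive definite, each coordinate step strictly decreases the dual objective, so the loop converges without a line search; full LS-SVM SMO variants additionally handle the bias term via pairwise updates.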
2. What are effective sequential and batch sequential adaptive design strategies for optimizing complex black-box and high-dimensional functions, potentially improving over classical SMO methods?
This theme covers sequential adaptive designs for global optimization that allow efficient, parallelizable function evaluations in complex black-box settings. It extends optimization methodology beyond the classical SMO framework to batch and adaptive contexts, addressing the computational burden of expensive, high-dimensional objective functions. These studies contribute stochastic and surrogate-model-based alternatives for optimizing parameters in machine learning and applied settings.
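As one sketch of a surrogate-model-based sequential design, the toy loop below fits a Gaussian-process surrogate over a 1-D candidate grid and selects the next evaluation by an upper-confidence-bound (UCB) rule. The test function, kernel length-scale, and UCB weight are illustrative assumptions; a batch variant would pick the top-q acquisition points per round instead of one.

```python
import numpy as np

def gp_posterior(Xs, ys, Xq, ls=0.2, noise=1e-6):
    # GP posterior mean/std on query points Xq, RBF kernel, zero prior mean
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * ls ** 2))
    K = k(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = k(Xq, Xs)
    mu = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def sequential_design(f, n_init=4, n_iter=15, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    Xq = np.linspace(0.0, 1.0, 201)          # candidate grid
    Xs = rng.uniform(0.0, 1.0, n_init)       # initial design
    ys = f(Xs)
    for _ in range(n_iter):
        mu, sd = gp_posterior(Xs, ys, Xq)
        x_next = Xq[np.argmax(mu + beta * sd)]   # UCB acquisition
        Xs = np.append(Xs, x_next)
        ys = np.append(ys, f(x_next))
    return Xs[np.argmax(ys)], ys.max()

# multimodal 1-D test function on [0, 1], maximum just under 1
f = lambda x: np.sin(13 * x) * np.sin(27 * x) / 2 + 0.5
x_best, y_best = sequential_design(f)
print(x_best, y_best)
```

The UCB rule trades off exploiting high posterior mean against exploring high posterior uncertainty, which is the core mechanism these adaptive designs exploit when each evaluation of the black-box objective is expensive.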
3. How can machine learning techniques, including SMO, be integrated with landscape and network analyses to enhance algorithm selection and performance prediction in combinatorial optimization problems?
This theme captures research integrating SMO with feature extraction and problem-landscape analysis, such as Local Optima Networks (LONs), to improve automatic algorithm selection for hard combinatorial problems such as the Travelling Salesperson Problem (TSP). By linking problem-instance features to predicted algorithm performance via machine learning, these studies aim to optimize solver choice and parameter tuning, advancing automated, data-driven optimization.
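A hedged sketch of the feature-based selection pipeline: compute simple distance-statistic features for synthetic TSP-like instances (a stand-in for richer landscape/LON features) and predict the better of two hypothetical solvers for a new instance. A k-NN vote is used here purely to keep the sketch self-contained; in the literature the selector would typically be a trained model such as an SMO-fitted SVM, and the "solver 0 wins on clustered instances" labeling rule is assumed for illustration only.

```python
import numpy as np

def instance_features(coords):
    # Edge-length statistics of a Euclidean TSP instance
    # (a simple stand-in for richer landscape/LON features)
    d = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
    tri = d[np.triu_indices(len(coords), k=1)]
    return np.array([tri.mean(), tri.std(), tri.min(), tri.max()])

def knn_select(train_F, train_best, query_f, k=3):
    # Predict the best solver for a new instance by majority vote
    # among the k nearest training instances in feature space
    idx = np.argsort(((train_F - query_f) ** 2).sum(1))[:k]
    return int(np.bincount(train_best[idx]).argmax())

rng = np.random.default_rng(1)
train_F, train_best = [], []
for _ in range(60):
    clustered = rng.random() < 0.5
    pts = rng.normal(0, 0.1 if clustered else 1.0, (30, 2))
    train_F.append(instance_features(pts))
    # assumed labeling rule: solver 0 wins on clustered instances
    train_best.append(0 if clustered else 1)
train_F, train_best = np.array(train_F), np.array(train_best)

query = instance_features(rng.normal(0, 1.0, (30, 2)))  # spread-out instance
print("selected solver:", knn_select(train_F, train_best, query))
```

The same pattern, with real per-instance solver runtimes as labels and LON-derived features as inputs, is what lets these studies turn algorithm selection into a supervised learning problem.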