Key research themes
1. How can distance and divergence measures be unified and generalized to quantify dissimilarity between probability distributions?
This research theme focuses on the theoretical foundations and general frameworks for directed distances (often called divergences) used to measure dissimilarity between probability distributions in statistics and related fields such as machine learning and information theory. Unifying classical divergences such as Kullback-Leibler and Pearson's chi-square with newer cumulative divergences supports statistical inference, goodness-of-fit testing, and model estimation. This line of work clarifies divergence properties such as non-negativity, reflexivity, and asymmetry, and investigates continuous parameterizations that cover known special cases.
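The properties mentioned above can be seen directly on the two classical divergences named in this theme. The sketch below (a minimal illustration, not taken from any specific paper in this theme) computes the Kullback-Leibler and Pearson chi-square divergences for discrete distributions and shows that both are non-negative, vanish when the distributions coincide (reflexivity), and are generally asymmetric:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) for discrete distributions
    given as lists of probabilities; terms with p_i = 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pearson_chi_square(p, q):
    """Pearson chi-square divergence sum_i (p_i - q_i)^2 / q_i."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# Non-negativity and reflexivity: D(P, P) = 0, D(P, Q) >= 0.
print(kl_divergence(p, p), kl_divergence(p, q))
# Asymmetry: D(P || Q) generally differs from D(Q || P).
print(kl_divergence(p, q), kl_divergence(q, p))
print(pearson_chi_square(p, q), pearson_chi_square(q, p))
```

Both measures are special cases of the continuous parameterizations (e.g. power/phi-divergence families) that this line of work studies.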
2. What novel and application-specific dissimilarity and similarity measures improve clustering and classification performance for complex or uncertain data types?
This research area explores the design and evaluation of dissimilarity measures tailored to complex data types such as categorical data, fuzzy sets, neutrosophic sets, interval-valued data, and data with heterogeneous components. The goal is to improve cluster quality, classification accuracy, and decision-making under uncertainty by capturing data-specific relationships that standard numeric distances miss. Methodological advances include learning-based adaptive dissimilarities for categorical data and refined metrics grounded in algebraic, geometric, or statistical principles for fuzzy and neutrosophic sets, with applications in pattern recognition, decision-making, and medical diagnosis.
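To make the categorical-data case concrete, the sketch below contrasts the simple-matching baseline with a data-driven, Goodall-style alternative in which a match on a rare category counts as stronger similarity than a match on a common one. This is an illustrative toy implementation under that one assumption, not the adaptive measure of any particular paper in this theme:

```python
from collections import Counter

def simple_matching(x, y):
    """Baseline: fraction of attributes on which x and y disagree."""
    return sum(xi != yi for xi, yi in zip(x, y)) / len(x)

def frequency_weighted(x, y, data):
    """Goodall-style sketch: a match on value v contributes similarity
    1 - p(v)^2, where p(v) is v's relative frequency in that attribute,
    so matches on rare values count more. Note that, unlike simple
    matching, identical objects need not get dissimilarity exactly 0,
    because matches on common values are 'weaker'."""
    n = len(data)
    sim = 0.0
    for k, (xi, yi) in enumerate(zip(x, y)):
        if xi == yi:
            freq = Counter(row[k] for row in data)[xi] / n
            sim += 1.0 - freq ** 2
    return 1.0 - sim / len(x)

data = [("a", "x"), ("a", "y"), ("b", "x"), ("c", "z")]
# A match on the rare value "c" yields lower dissimilarity
# than a match on the common value "a".
print(frequency_weighted(("c", "x"), ("c", "q"), data))
print(frequency_weighted(("a", "x"), ("a", "q"), data))
```

The design choice illustrated here, replacing the uniform 0/1 match score with weights estimated from the data itself, is the core idea behind the learning-based adaptive dissimilarities for categorical data mentioned above.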
3. How can distance and similarity measures be adapted and applied in functional and fuzzy data analysis contexts, and what are their properties and practical utilities?
This theme targets the theoretical development and application of distance measures specialized for fuzzy numbers, functional data, intuitionistic fuzzy sets, and related constructs. It includes new fuzzy distance definitions, entropy measures derived from similarity concepts, and adaptations of standard distances to non-standard data types. The research emphasizes how these measures address challenges in uncertainty quantification, scale measurement, classification, testing the equality of variability, and image similarity assessment.
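As one standard example of a distance for intuitionistic fuzzy sets, the sketch below implements the normalized Hamming distance in the Szmidt-Kacprzyk style, which accounts for membership, non-membership, and the induced hesitation degree. It is a minimal illustration of the kind of measure this theme develops, not a reproduction of any specific proposal:

```python
def ifs_hamming(a, b):
    """Normalized Hamming distance between two intuitionistic fuzzy
    sets over the same universe, each given as a list of
    (membership mu, non-membership nu) pairs with mu + nu <= 1.
    The hesitation degree pi = 1 - mu - nu is included, following the
    Szmidt-Kacprzyk formulation; the result lies in [0, 1]."""
    n = len(a)
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(a, b):
        pi_a = 1.0 - mu_a - nu_a
        pi_b = 1.0 - mu_b - nu_b
        total += abs(mu_a - mu_b) + abs(nu_a - nu_b) + abs(pi_a - pi_b)
    return total / (2 * n)

# Two small intuitionistic fuzzy sets over a two-element universe.
A = [(0.6, 0.2), (0.8, 0.1)]
B = [(0.5, 0.3), (0.7, 0.2)]
print(ifs_hamming(A, B))
```

Entropy and similarity measures for such sets are often built on top of distances like this one, which is how the entropy-from-similarity constructions mentioned above arise.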