Key research themes
1. What are the theoretical foundations and metric properties of component-wise dissimilarity measures and their role in pattern recognition?
This theme examines the mathematical characterization of component-wise dissimilarity measures used in pattern recognition, especially for complex, heterogeneous data structures. It addresses how such measures, which are often non-Euclidean and non-metric, support representation and learning in unconventional data spaces by combining sub-dissimilarities tailored to the individual data types. Metric learning over these spaces enables more accurate similarity estimation, which is crucial for classification and clustering.
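The combination of type-specific sub-dissimilarities can be sketched as follows. This is a minimal illustration, not a method from the literature surveyed here: the field names, weights, and the numeric scaling factor are all assumptions, and the weighted sum deliberately makes no metric guarantees.

```python
# Illustrative component-wise dissimilarity over heterogeneous records:
# each field contributes a sub-dissimilarity suited to its type, and the
# contributions are combined by a weighted sum. Field names, weights, and
# the numeric scale are made up for this sketch.

def num_diss(a, b, scale=10.0):
    """Scaled absolute difference for numeric components."""
    return abs(a - b) / scale

def cat_diss(a, b):
    """Simple 0/1 overlap dissimilarity for categorical components."""
    return 0.0 if a == b else 1.0

def set_diss(a, b):
    """Jaccard distance for set-valued components (non-Euclidean in general)."""
    a, b = set(a), set(b)
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def combined_diss(x, y, subs, weights):
    """Weighted combination of per-component sub-dissimilarities.
    The result is symmetric but need not satisfy the triangle inequality."""
    total = sum(weights.values())
    return sum(weights[k] * subs[k](x[k], y[k]) for k in subs) / total

subs    = {"age": num_diss, "colour": cat_diss, "tags": set_diss}
weights = {"age": 0.2, "colour": 0.3, "tags": 0.5}

x = {"age": 30, "colour": "red",  "tags": {"a", "b"}}
y = {"age": 25, "colour": "blue", "tags": {"b", "c"}}

d = combined_diss(x, y, subs, weights)
```

In a metric-learning setting, the fixed `weights` would be the parameters being optimized from labeled pairs.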
2. How is semantic similarity quantified through ontology-based structural measures incorporating information content for digital resource comparison?
This theme explores ontology-based semantic similarity measures that leverage structural relationships and concept information content to compare digital resources semantically annotated with ontology concepts. The focus is on methods that integrate taxonomic reasoning with weighting schemes derived from the ontology structure or from annotated data to compute concept similarities. Experimental validation on large real-world datasets, including comparison against expert judgments, demonstrates the effectiveness of such parametric semantic similarity methods.
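To make the interplay of taxonomy structure and information content concrete, here is a toy sketch in the spirit of Lin's classic measure, sim(c1, c2) = 2·IC(lcs) / (IC(c1) + IC(c2)). The tiny taxonomy and the concept probabilities are invented for illustration; real methods in this theme derive weights from the ontology or annotation corpus.

```python
import math

# Toy taxonomy as child -> parent links; the root has parent None.
PARENT = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
          "sparrow": "bird", "bird": "animal", "animal": None}

# p(c): assumed probability of meeting concept c (or a descendant) in a corpus.
PROB = {"animal": 1.0, "mammal": 0.5, "bird": 0.3,
        "cat": 0.2, "dog": 0.2, "sparrow": 0.1}

def ic(c):
    """Information content: rarer concepts are more informative."""
    return -math.log(PROB[c])

def ancestors(c):
    """Concept c together with all its taxonomic ancestors."""
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def lin_similarity(c1, c2):
    """2 * IC(most informative common ancestor) / (IC(c1) + IC(c2))."""
    common = [a for a in ancestors(c1) if a in ancestors(c2)]
    mica = max(common, key=ic)
    denom = ic(c1) + ic(c2)
    return 2 * ic(mica) / denom if denom else 1.0
```

Two siblings ("cat", "dog") share the informative ancestor "mammal" and score well above two concepts whose only common ancestor is the uninformative root.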
3. What are the advances in image similarity measures incorporating fuzzy set theory and their robustness to noise and distortions compared to classical methods?
This theme investigates image similarity indices grounded in fuzzy relations and fuzzy set theory, aiming to improve robustness against noise and image distortions. By modeling images as fuzzy sets and measuring similarity through fuzzy relation equations solved via greatest and smallest fuzzy sets, these methods offer advantages in symmetry and noise resilience. They provide an alternative to pixel-wise approaches such as PSNR and SSIM, with particular attention to block-wise fuzzy partitioning for image comparison.
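The fuzzy-set viewpoint can be sketched with a standard fuzzy-set similarity, the ratio of the min-intersection to the max-union, applied block-wise. This is a simplified stand-in for the relation-equation methods the theme refers to, not those methods themselves; the block size and example intensities are assumptions.

```python
# Each image (or block) is treated as a fuzzy set: a sequence of
# normalized gray levels in [0, 1]. Similarity is |A ∩ B| / |A ∪ B|
# with min for intersection and max for union, so it is symmetric
# by construction.

def fuzzy_similarity(img_a, img_b):
    """sum(min(a, b)) / sum(max(a, b)) over corresponding pixels."""
    inter = sum(min(a, b) for a, b in zip(img_a, img_b))
    union = sum(max(a, b) for a, b in zip(img_a, img_b))
    return inter / union if union else 1.0

def blockwise_similarity(img_a, img_b, block=4):
    """Average the fuzzy similarity over consecutive pixel blocks,
    mirroring the block-wise fuzzy partitioning idea."""
    scores = [fuzzy_similarity(img_a[i:i + block], img_b[i:i + block])
              for i in range(0, len(img_a), block)]
    return sum(scores) / len(scores)

clean = [0.2, 0.4, 0.6, 0.8, 0.2, 0.4, 0.6, 0.8]
noisy = [0.25, 0.35, 0.6, 0.8, 0.2, 0.45, 0.55, 0.8]
```

A small perturbation of a few pixels only slightly lowers the score, while the measure stays 1.0 for identical images, which is the noise-resilience behavior the theme highlights.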
4. How can structural similarity computations between XML documents and DTDs be efficiently accomplished using tree edit distance methods?
This research area deals with computing structural similarity between XML documents and their corresponding Document Type Definitions (DTDs) by representing both as ordered labeled trees and employing tree edit distance algorithms. The challenge lies in handling the expressive constructs of DTDs (e.g., repeatability, alternativeness), which set the problem apart from traditional approximate pattern matching. Polynomial-time tree edit algorithms adapted for XML/DTD structures enable efficient and effective similarity measurement, supporting applications such as classification, querying, and data integration.
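The core tree-edit machinery can be sketched with the classic recurrence on ordered forests (unit-cost delete, insert, and relabel). This is a compact memoized version for illustration, not the optimized polynomial-time algorithm or the DTD-aware extensions the theme describes; handling constructs like `*`, `+`, and alternation requires additional machinery on top of it.

```python
from functools import lru_cache

# Trees are (label, children) tuples with children as a tuple, so they
# are hashable and memoizable. An XML element or DTD content model would
# be mapped to such a tree before comparison.

def tree_edit_distance(t1, t2):
    def size(forest):
        return sum(1 + size(children) for _, children in forest)

    @lru_cache(maxsize=None)
    def d(f1, f2):
        if not f1 and not f2:
            return 0
        if not f1:
            return size(f2)          # insert everything remaining in f2
        if not f2:
            return size(f1)          # delete everything remaining in f1
        # Decompose on the rightmost tree of each forest.
        (l1, c1), (l2, c2) = f1[-1], f2[-1]
        return min(
            d(f1[:-1] + c1, f2) + 1,              # delete rightmost root of f1
            d(f1, f2[:-1] + c2) + 1,              # insert rightmost root of f2
            d(f1[:-1], f2[:-1]) + d(c1, c2)
                + (0 if l1 == l2 else 1),         # match or relabel the roots
        )

    return d((t1,), (t2,))

doc   = ("a", (("b", ()), ("c", ())))
other = ("a", (("b", ()), ("d", ())))
```

Deleting a root splices its children into the surrounding forest, which is what preserves document order during editing; the memoization keeps small inputs fast but does not change the worst-case complexity.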