Key research themes
1. How can a-posteriori novelty and uncommonness metrics be refined to better assess design creativity across heterogeneous idea sets?
This theme investigates the measurement of novelty or interestingness in design ideas generated during creative processes, focusing on a-posteriori methods in which novelty is assessed relative to the current idea set. Key issues include the limitations of existing metrics, such as the Shah Novelty Metric (SNM), in handling heterogeneous attribute sets, and the integration of multiple uncommonness perspectives into comprehensive novelty assessments. Refinements address empirical applicability to real, heterogeneous design outcomes and aim to provide more reliable, nuanced metrics for quantifying creativity in engineering and design studies.
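As a concrete illustration of a-posteriori scoring, the sketch below computes a Shah-style uncommonness score for each idea relative to the current idea set: for each attribute, an idea's value is scored by (T - C) / T scaled by 10, where T is the number of ideas specifying that attribute and C is the number sharing the same value, and attribute scores are combined with weights. The function name, the data layout (ideas as attribute-value dictionaries), and the choice to normalize default weights over only the attributes an idea actually specifies are illustrative assumptions, not a method taken from the cited studies.

```python
from collections import Counter

def aposteriori_novelty(ideas, weights=None):
    """Score each idea's novelty relative to the current idea set.

    `ideas` is a list of dicts mapping attribute names to values; the
    attribute scheme may be heterogeneous (ideas can omit attributes).
    Per attribute j, uncommonness follows the Shah-style ratio
    S_j = (T_j - C_j) / T_j * 10, where T_j counts ideas specifying
    attribute j and C_j counts ideas sharing the same value.
    """
    value_counts = {}   # attribute -> Counter of observed values
    totals = Counter()  # attribute -> number of ideas specifying it
    for idea in ideas:
        for attr, value in idea.items():
            value_counts.setdefault(attr, Counter())[value] += 1
            totals[attr] += 1

    scores = []
    for idea in ideas:
        attrs = list(idea)
        # Assumed default: equal weights over the attributes this idea has.
        w = weights or {a: 1.0 / len(attrs) for a in attrs}
        score = 0.0
        for attr, value in idea.items():
            T = totals[attr]
            C = value_counts[attr][value]
            score += w.get(attr, 0.0) * (T - C) / T * 10.0
        scores.append(score)
    return scores


# Hypothetical heterogeneous idea set: the third idea adds an attribute
# the others lack, which the per-idea weight normalization tolerates.
ideas = [
    {"power source": "solar", "mechanism": "gear"},
    {"power source": "battery", "mechanism": "gear"},
    {"power source": "battery", "mechanism": "lever", "material": "bamboo"},
]
print(aposteriori_novelty(ideas))
```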
2. What objective interestingness measures improve pattern selection and summarization in dynamic and streaming data mining scenarios?
This theme centers on developing and applying objective interestingness metrics for pattern mining, especially in contexts where massive, dynamic, or streaming data generate a continuously growing set of patterns or association rules. The focus is on post-processing and online summarization techniques that can efficiently identify patterns that are valuable or surprising to users without heavy domain-dependent input. Approaches integrate information theory, data-compression principles, and classification models to manage scalability while enhancing pattern relevance, including in knowledge graphs and evolving networks.
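One common family of objective, domain-independent scores rates a pattern by how far its observed frequency departs from what an independence model would predict; measured in bits, this is the logarithm of lift. The minimal sketch below, assuming a stream of transactions and restricting attention to item pairs, maintains counts incrementally and reports the most surprising pairs seen so far. The class name `StreamingSurprise`, the pair-only restriction, and the log-lift score are illustrative choices, not the specific techniques of the studies summarized here.

```python
import heapq
import math
from collections import Counter
from itertools import combinations

class StreamingSurprise:
    """Track 2-itemset surprisingness over a transaction stream.

    Surprisingness is the log-ratio (in bits) of a pair's observed
    support to the support expected if its items were independent,
    i.e. log2( p(A,B) / (p(A) * p(B)) ), the logarithm of lift.
    """

    def __init__(self):
        self.n = 0
        self.item_counts = Counter()
        self.pair_counts = Counter()

    def add(self, transaction):
        """Consume one transaction (an iterable of item ids)."""
        items = sorted(set(transaction))
        self.n += 1
        self.item_counts.update(items)
        self.pair_counts.update(combinations(items, 2))

    def top_k(self, k=5, min_count=2):
        """Return the k most surprising pairs seen so far."""
        scored = []
        for (a, b), c in self.pair_counts.items():
            if c < min_count:
                continue  # ignore pairs with too little evidence
            p_ab = c / self.n
            p_a = self.item_counts[a] / self.n
            p_b = self.item_counts[b] / self.n
            scored.append((math.log2(p_ab / (p_a * p_b)), (a, b)))
        return heapq.nlargest(k, scored)


# Toy stream of transactions (illustrative data only).
stream = [["milk", "bread"], ["milk", "bread", "eggs"],
          ["eggs", "beer"], ["beer", "diapers"], ["beer", "diapers"]]
s = StreamingSurprise()
for t in stream:
    s.add(t)
print(s.top_k(3))
```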
3. How do alternative interestingness, correlation, and similarity measures enhance association rule mining and recommender system performance?
This theme explores the development and evaluation of novel interestingness and similarity measures to improve the quality and relevance of association rules in data mining and to enhance recommendation accuracy in recommender systems. It emphasizes quantifying meaningful positive or negative item correlations beyond the traditional support-confidence framework, addressing cold-start problems, and incorporating user preferences for classification and filtering. The studies rigorously test new metrics against established baselines on real-world data sets, highlighting improved interpretability, statistical significance, and predictive power.
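For context, the sketch below computes several standard measures that go beyond support and confidence and can flag both positive and negative correlation: lift, leverage, and conviction, all derived from co-occurrence counts for a rule A -> B. These are well-known baselines rather than the novel measures proposed in the studies themselves; the function name and example counts are illustrative.

```python
def rule_measures(n_ab, n_a, n_b, n):
    """Interestingness measures for a rule A -> B from co-occurrence counts.

    n_ab: transactions containing both A and B
    n_a, n_b: transactions containing A (respectively B)
    n: total transactions
    """
    p_a, p_b, p_ab = n_a / n, n_b / n, n_ab / n
    support = p_ab
    confidence = p_ab / p_a
    lift = p_ab / (p_a * p_b)        # > 1 positive, < 1 negative correlation
    leverage = p_ab - p_a * p_b      # difference from independence
    conviction = (1 - p_b) / (1 - confidence) if confidence < 1 else float("inf")
    return {"support": support, "confidence": confidence,
            "lift": lift, "leverage": leverage, "conviction": conviction}


# Example: B co-occurs with A slightly less often than independence would
# predict, so lift < 1 and leverage < 0 signal a negative correlation that
# support and confidence alone would not reveal.
print(rule_measures(n_ab=40, n_a=100, n_b=500, n=1000))
```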