Key research themes
1. How can abductive reasoning be operationalized and evaluated in natural language for commonsense inference?
This research area focuses on modeling abductive reasoning—inferring the most plausible explanation for incomplete observations—within natural language inference (NLI) frameworks. It addresses the challenges of conceptualizing and benchmarking abductive reasoning using language-based tasks and large narrative datasets, aiming to bridge the gap between formal logic-based abduction and natural language understanding. This is crucial for advancing AI systems that interpret narratives and reason about everyday events as humans do.
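As an illustrative sketch of how such a language-based abductive task can be framed, the toy example below scores candidate explanatory hypotheses against a pair of narrative observations. The data and the word-overlap scorer are hypothetical stand-ins, not a real benchmark or model; actual systems use learned plausibility estimators.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase, strip punctuation, split into a set of word tokens."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def plausibility(hypothesis: str, observations: list[str]) -> int:
    """Toy plausibility score: content-word overlap with the observations."""
    context = tokens(" ".join(observations))
    return len(tokens(hypothesis) & context)

# Abductive setup: two observations with a narrative gap between them.
obs = [
    "Jenny left her car unlocked at the mall.",
    "When she returned, her laptop was gone.",
]
hypotheses = [
    "Someone stole the laptop from the unlocked car.",  # explains the gap
    "Jenny enjoyed her shopping trip.",                 # irrelevant to the gap
]

# Choose the hypothesis the toy scorer finds most plausible.
best = max(hypotheses, key=lambda h: plausibility(h, obs))
```

In real abductive NLI benchmarks the choice is made by trained models rather than lexical overlap, but the task structure (observations plus competing hypotheses) is the same.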
2. What role does structured commonsense metaphysics and lexical semantics play in enabling natural language understanding with commonsense reasoning?
This theme investigates the formal axiomatization of fundamental commonsense concepts—such as granularity, time, space, material, causality, functionality, and force—and their operationalization in lexical semantics to support comprehensive commonsense reasoning in natural language understanding systems. It emphasizes the methodological principles for constructing minimal ontological structures to underpin word meanings and reasoning processes applied to text, especially in technical domains.
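As a minimal sketch of what such an axiomatization looks like (the predicates here are hypothetical illustrations, not drawn from any specific system in this literature), a core commonsense constraint linking causality and time might be stated as:

```latex
\forall e_1, e_2 \;\big(\mathit{cause}(e_1, e_2) \rightarrow \mathit{before}(e_1, e_2)\big)
```

Axioms of this form tie lexical items (e.g., verbs of causation) to a small set of underlying ontological predicates, which is what makes reasoning over word meanings in text tractable.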
3. How can large-scale explicit and implicit commonsense knowledge be represented, extracted, and integrated to improve machine commonsense reasoning?
This research stream explores constructing, consolidating, and leveraging commonsense knowledge bases and reasoning mechanisms to enhance the understanding and inference capabilities of AI applications. It includes methodologies for harvesting comparative knowledge from the Web, extracting commonsense from structured resources like Wikidata, and enhancing language model reasoning with formalized commonsense, with a focus on closing gaps in coverage and reasoning fidelity.
4. How can large language models (LLMs) be enhanced with human-like reasoning strategies for improved commonsense reasoning and explainability?
This area investigates methods for integrating cognitive theories of human reasoning, such as dual-process (heuristic-analytic) thinking, into LLM-based approaches to achieve more coherent, transparent, and faithful commonsense reasoning. It also explores automating chain-of-thought prompt engineering and grounding reasoning in multimodal tasks, aiming to overcome the limitations of purely data-driven, surface-level reasoning methods.
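Chain-of-thought prompting, one technique in this area, elicits intermediate reasoning steps before the final answer. The sketch below only assembles such a prompt; the exemplar text and template are hypothetical, and a real system would send the resulting string to an LLM.

```python
# A single worked exemplar demonstrating explicit intermediate reasoning
# (hypothetical text, for illustration only).
COT_EXEMPLAR = (
    "Q: If it is raining and Sam forgot his umbrella, what happens?\n"
    "Reasoning: Rain makes people wet unless they have cover. "
    "Sam has no umbrella, so he has no cover.\n"
    "A: Sam gets wet.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar and ask the model to reason step by step."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nReasoning:"

prompt = build_cot_prompt("Why might a frozen lake be dangerous to walk on?")
```

Automated variants of this idea search over or generate the exemplars themselves rather than hand-writing them, which is the prompt-engineering automation this theme refers to.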
5. What are the epistemic and normative foundations of commonsense and logical reasoning in human cognition?
This line of research examines the underlying epistemic norms, functionalist accounts, and philosophical reflections on commonsense reasoning, skepticism, and logical inference. It interrogates how knowledge, reasoning strategies, and commonsense beliefs are normatively grounded and integrated, with implications for understanding human critical thinking and the role of common knowledge in social coordination.