Academia.edu

Commonsense Reasoning

667 papers
181 followers
About this topic
Commonsense reasoning is the ability to make inferences and judgments based on everyday knowledge and experiences that are generally accepted as true. It involves understanding implicit information, contextual cues, and the relationships between concepts, enabling individuals to navigate and interpret the complexities of real-world situations.

Key research themes

1. How can abductive reasoning be operationalized and evaluated in natural language for commonsense inference?

This research area focuses on modeling abductive reasoning—inferring the most plausible explanation for incomplete observations—within natural language inference (NLI) frameworks. It addresses the challenges of conceptualizing and benchmarking abductive reasoning using language-based tasks and large narrative datasets, aiming to bridge the gap between formal logic-based abduction and natural language understanding. This is crucial for advancing AI systems that interpret narratives and reason about everyday events as humans do.

Key finding: Introduces the first large-scale challenge dataset, ART, containing 20K narratives and 200K explanation hypotheses, to evaluate abductive reasoning in narrative contexts through two novel tasks: Abductive Natural Language...
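The abductive NLI task format described above can be illustrated with a toy sketch: given two observations, pick the candidate hypothesis that best explains the gap between them. The word-overlap scorer and the example sentences below are purely illustrative stand-ins for a trained model and for ART data, not part of the dataset itself.

```python
# Toy sketch of the abductive NLI task format: given observations O1 and O2,
# choose the hypothesis that most plausibly explains what happened in between.
# The overlap heuristic below is a stand-in for a learned plausibility model.

def plausibility(o1: str, o2: str, hypothesis: str) -> float:
    """Score a hypothesis by word overlap with both observations (toy heuristic)."""
    obs_words = set(o1.lower().split()) | set(o2.lower().split())
    hyp_words = set(hypothesis.lower().split())
    return len(obs_words & hyp_words) / max(len(hyp_words), 1)

def choose_explanation(o1: str, o2: str, hypotheses: list) -> str:
    """Return the candidate hypothesis with the highest plausibility score."""
    return max(hypotheses, key=lambda h: plausibility(o1, o2, h))

# Illustrative observations and candidate explanations (invented, not from ART):
o1 = "Dotty was in a bad mood"
o2 = "Her mood improved after the concert"
h = choose_explanation(o1, o2, [
    "Dotty went to a concert and her mood improved",
    "Dotty lost her wallet",
])
```

A real system would replace `plausibility` with a model trained on the 200K ART hypotheses; the structure of the task (two observations, competing explanations, argmax over plausibility) stays the same.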

2. What role does structured commonsense metaphysics and lexical semantics play in enabling natural language understanding with commonsense reasoning?

This theme investigates the formal axiomatization of fundamental commonsense concepts—such as granularity, time, space, material, causality, functionality, and force—and their operationalization in lexical semantics to support comprehensive commonsense reasoning in natural language understanding systems. It emphasizes the methodological principles for constructing minimal ontological structures to underpin word meanings and reasoning processes applied to text, especially in technical domains.

by Todd Davies and 1 more
Key finding: Presents a methodological framework for axiomatizing core commonsense phenomena (e.g., time, space, causality) to mediate between natural language descriptions and causal models, enhancing text interpretation about mechanical...
Key finding: Argues for discovering (rather than inventing) a well-typed ontological structure isomorphic to how language describes the world, using natural language as a guide. Demonstrates that grounding semantics in a strongly typed...

3. How can large-scale explicit and implicit commonsense knowledge be represented, extracted, and integrated to improve machine commonsense reasoning?

This research stream explores constructing, consolidating, and leveraging commonsense knowledge bases and reasoning mechanisms to enhance AI applications' understanding and inference capabilities. It includes methodologies for harvesting comparative knowledge from the Web, extracting commonsense from structured resources like Wikidata, and enhancing language model reasoning with formalized commonsense, focusing on bridging gaps in coverage and reasoning fidelity.

Key finding: Develops an open information extraction approach to harvest large-scale comparative commonsense assertions (e.g., 'bears are more dangerous than dogs') from Web texts, applying integer linear programming for joint...
Key finding: Proposes methodology to extract a commonsense subgraph (Wikidata-CS) from Wikidata by defining guiding principles for commonsense knowledge (well-known concepts, general relations) and mapping Wikidata relations to...
Key finding: Proposes an architecture inspired by cognitive systems that combines incomplete commonsense domain knowledge (expressed in logical rules and defaults) with deep learning and incremental learning, applied to tasks such as...
Key finding: Introduces ConceptNet as a large semantic network of commonsense knowledge crowdsourced via Open Mind Common Sense project, and AnalogySpace, which applies dimensionality reduction (factor analysis) to infer new knowledge and...
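The AnalogySpace idea mentioned in the last finding, factoring a concept-by-feature assertion matrix so that unseen but analogy-consistent assertions receive nonzero scores, can be sketched with a truncated SVD. The tiny matrix, concept names, and feature labels below are invented for illustration; real AnalogySpace operates on the full ConceptNet assertion matrix.

```python
import numpy as np

# AnalogySpace-style sketch: rows are concepts, columns are (relation/concept)
# features; entries are 1.0 for asserted facts, 0.0 for unknown. A rank-k SVD
# smooths the matrix so that facts shared by similar concepts "bleed" into
# each other's rows, suggesting new assertions.
concepts = ["dog", "cat", "car"]
features = ["IsA/pet", "HasA/tail", "CapableOf/run", "HasA/wheel"]
A = np.array([
    [1.0, 1.0, 1.0, 0.0],   # dog: pet, tail, runs
    [1.0, 1.0, 0.0, 0.0],   # cat: pet, tail ("runs" never asserted)
    [0.0, 0.0, 0.0, 1.0],   # car: has a wheel
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # keep the top-k principal axes
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # smoothed, low-rank reconstruction

# Because dog and cat share most features, the smoothed cat row now gives
# "CapableOf/run" a positive score, while the unrelated "HasA/wheel" stays ~0.
score = dict(zip(features, A_k[concepts.index("cat")]))
```

The design choice here mirrors the paper's: inference falls out of low-rank structure rather than explicit rules, which is why AnalogySpace scales to crowdsourced, noisy knowledge.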

4. How can large language models (LLMs) be enhanced with human-like reasoning strategies for improved commonsense reasoning and explainability?

This area investigates methods for integrating cognitive theories of human reasoning, such as dual-process (heuristic-analytic) thinking, into LLM-based approaches to achieve more coherent, transparent, and faithful commonsense reasoning. It also explores methods to automate chain-of-thought prompt engineering and grounding in multimodal tasks, aiming to overcome limitations of purely data-driven or surface-based reasoning methods.

Key finding: Proposes a heuristic-analytic reasoning (HAR) framework inspired by dual-process cognitive theories that involves bootstrapping detailed analytic rationalizations from higher-level heuristic decisions within PLMs....
Key finding: Develops Automate-CoT, a method to automatically generate, prune, and select high-quality chain-of-thought rationales from small labeled datasets for better prompt design without human intervention. Utilizes variance-reduced...
Key finding: Proposes an LLM-agent framework for zero-shot open-vocabulary 3D visual grounding that decomposes complex queries into sub-tasks and leverages spatial and commonsense knowledge to ground objects in 3D scenes. Combining...
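The selection stage of an Automate-CoT-style pipeline can be sketched in miniature: from a pool of candidate chain-of-thought exemplars, pick the subset whose prompt scores best on a small validation set. The paper uses a variance-reduced policy gradient for this search; the exhaustive search and the `evaluate` stub below (which stands in for querying an LLM) are simplifications for illustration, and all names and data are invented.

```python
import itertools

# Simplified sketch of chain-of-thought exemplar selection: search for the
# size-k subset of candidate rationales that maximizes validation accuracy.
# `evaluate` is a stub for "accuracy of an LLM prompted with these exemplars".

def evaluate(exemplars: tuple, validation: list) -> float:
    """Stubbed scorer: pretend each exemplar answers the items it covers."""
    covered = set().union(*(e["solves"] for e in exemplars)) if exemplars else set()
    return len(covered & set(validation)) / len(validation)

def select_exemplars(pool: list, validation: list, k: int) -> tuple:
    """Exhaustively search all size-k subsets for the best-scoring prompt."""
    return max(itertools.combinations(pool, k),
               key=lambda subset: evaluate(subset, validation))

# Invented candidate rationales, each annotated with the items it gets right:
pool = [
    {"cot": "rationale A", "solves": {1, 2}},
    {"cot": "rationale B", "solves": {3}},
    {"cot": "rationale C", "solves": {2}},
]
best = select_exemplars(pool, validation=[1, 2, 3], k=2)
```

Exhaustive search is exponential in pool size, which is exactly why the paper replaces it with a gradient-based estimator; the objective being optimized is the same.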

5. What are the epistemic and normative foundations of commonsense and logical reasoning in human cognition?

This line of research examines the underlying epistemic norms, functionalist accounts, and philosophical reflections on commonsense reasoning, skepticism, and logical inference. It interrogates how knowledge, reasoning strategies, and commonsense beliefs are normatively grounded and integrated, with implications for understanding human critical thinking and the role of common knowledge in social coordination.

Key finding: Develops a normative functionalist framework positing epistemic norms as arising from epistemic functions governing reasoning practices. Argues for the epistemic normativity of practical reasoning as generating knowledge of...
Key finding: Analyzes argumentation from ignorance (argumentum ad ignorantiam) as a form of plausible reasoning based on lack of contrary evidence, formalizing it within epistemic logic frameworks and demonstrating its rational,...
Key finding: Critically examines philosophical uses of commonsense as a response to radical skepticism, arguing that appeals to commonsense often lack interrogation in the context of skepticism. Contrasts non-inferential, non-concessive...
Key finding: Explores the historical development and conceptualization of common sense in philosophy, tracing its evolution from ancient to modern notions. Argues that common sense encompasses both a cognitive power and body of knowledge,...
Key finding: Reinterprets David Lewis’s notion of common knowledge centered on 'having reason to believe' instead of mental states like knowledge or belief, proposing a formal reconstruction that overcomes gaps in Lewis’s informal...

All papers in Commonsense Reasoning

Commonsense knowledge often omits the temporal incidence of facts, and even the ordering between occurrences is only available for some of their instances. Reasoning about the temporal extent of facts and their sequencing becomes complex... more
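The problem this abstract raises, reasoning about sequencing when ordering is only partially known, can be illustrated with a minimal sketch: from a few explicit "before" facts, a transitive closure recovers orderings that were never stated, while genuinely unknown pairs stay undetermined. The event names are invented for illustration.

```python
# Sketch of reasoning with partial temporal ordering: chain explicit
# "a before b" facts to derive implied orderings; pairs with no derivable
# chain remain unknown rather than being forced into a total order.

def transitive_closure(before: set) -> set:
    """Compute all (a, b) pairs derivable by chaining 'a before b' facts."""
    closure = set(before)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Only two orderings are stated; the third follows by transitivity.
facts = {("wake_up", "breakfast"), ("breakfast", "commute")}
derived = transitive_closure(facts)
```

Note that ("commute", "wake_up") is never derived: the closure only adds what the facts entail, which is the honest behavior when temporal knowledge is incomplete.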
We first present a short introduction illustrating how argumentation could be viewed as a universal mechanism humans use in their practical reasoning, where by practical reasoning we mean both commonsense reasoning and reasoning by... more
This paper presents the STAR system, a system for automated narrative comprehension, developed on top of an argumentation-theoretic formulation of defeasible reasoning, and strongly following guidelines from the psychology of... more
Recent advancements in large language models have enabled them to perform well on complex tasks that require step-by-step reasoning with few-shot learning. However, it is unclear whether these models are applying reasoning skills they... more
In this paper, we conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study... more
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero-and few-shot learning. Given their computational cost, these models are difficult to replicate without... more
We report on the design and development of the CASPR system, a socialbot designed to compete in the Amazon Alexa Socialbot Challenge 4. CASPR’s distinguishing characteristic is that it will use automated commonsense reasoning to truly... more
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behavior with precision and performance that is difficult to emulate in logical formalisms. Unfortunately, such simulations lack the... more
Every day we use intuitive reasoning to make and update predictions about the world. For instance, when playing soccer we must predict the trajectories of the ball but also update those predictions in light of new information. But these... more
Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically requires experimentation with... more
The proof of the Second Incompleteness Theorem consists essentially of proving the uniqueness and explicit definability of the sentence asserting its own unprovability. This turns out to be a rather general phenomenon: Every instance of... more
We are interested in systems which do not prescribe one single kind of preference, but in which varying kinds of preferences can be used simultaneously. In such systems it is essential to know the interaction among the kinds of... more
The structure-preference (SP) order is a way of defining argument preference relations in structured argumentation theory that takes into account how arguments are constructed. The SP order was first introduced in the context of endowing... more
We endow prioritised default logic (PDL) with argumentation semantics using the ASPIC + framework for structured argumentation, and prove that the conclusions of the justified arguments are exactly the prioritised default extensions.... more
A Logic of Arbitrary and Indefinite Objects, LA, has been developed as the logic for knowledge representation and reasoning systems designed to support natural language understanding and generation, and commonsense reasoning. The... more
We discuss the value of argumentation in reaching agreements, based on its capability for dealing with conflicts and uncertainty. Logic-based models of argumentation have recently emerged as a key topic within Artificial Intelligence. Key... more
This paper explores how active logic empowers robots to reason about goals, actions, expectations, and deadlines during collaborative search tasks. These time-aware agents operate with future-oriented expectations and adapt their behavior... more
Art is imaginative human creation meant to be appreciated, make people think, and evoke an emotional response. Here for the first time, we create a dataset of more than 4,000 pieces of art (mostly paintings) that has annotations for... more
We present DEGARI (Dynamic Emotion Generator And ReclassIfier), an explainable system for emotion attribution and recommendation. This system relies on a recently introduced commonsense reasoning framework, the TCL logic, which is based... more
The concept of strength or weight of an argument appears frequently in argumentation theory. Its function is to explain how reasons or arguments interact to support their outcomes. Most approaches, however, use it as little more than a... more
This article examines the problem of forming basic concepts in robots within the framework of the development of Artificial General Intelligence (AGI). The theories of concept formation in infants were reviewed. There is the consensus... more
The logical reasoning capabilities of pretrained language models have recently received much attention. As one of the vital reasoning paradigms, non-monotonic reasoning refers to the fact that conclusions may be invalidated with new... more
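The defining property of non-monotonic reasoning named in this abstract, that conclusions may be invalidated by new facts, is classically illustrated with the "Tweety" example: the default "birds fly" licenses a conclusion that is withdrawn once "Tweety is a penguin" is learned. The naive rule check below is only a sketch of that behavior, not a full default logic.

```python
# Classic non-monotonic "Tweety" sketch: a default conclusion is drawn from
# partial information and retracted when a defeating fact is added. This is
# a minimal illustration, not a complete default-logic implementation.

def flies(facts: set) -> bool:
    """Apply the default 'birds normally fly' unless an exception blocks it."""
    if "penguin" in facts:      # exception: penguins defeat the default
        return False
    return "bird" in facts      # default rule: bird -> fly

conclusion_before = flies({"bird"})             # default applies: Tweety flies
conclusion_after = flies({"bird", "penguin"})   # new fact retracts the conclusion
```

Classical (monotonic) logic can never exhibit this retraction: adding premises can only add conclusions, which is exactly why non-monotonic formalisms are a distinct reasoning paradigm.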
We report on an ongoing research program to develop a formal framework for automated narrative text comprehension, bringing together know-how from research in Artificial Intelligence and the Psychology of Reading and Comprehension. It... more
This paper reports on how work in psychology can inform research into automated narrative text comprehension. Specifically, we describe psychology's view of how human readers combine information from text with their own commonsense... more
Pre-trained language models (PLMs) have shown impressive performance in various language tasks. However, they are prone to spurious correlations, and often generate illusory information. In real-world applications, PLMs should justify... more
Reasoning agents are often faced with the need to robustly deal with erroneous information. When a robot given the task of returning with the red cup from the kitchen table arrives in the kitchen to find no red cup but instead notices a... more
…zero-shot grounding accuracy. Our findings indicate that LLMs significantly improve the grounding capability, especially for complex language queries, making LLM-Grounder an effective approach for 3D vision-language tasks in robotics.
Visual Dialog requires an agent to engage in a conversation with humans grounded in an image. Many studies on Visual Dialog focus on the understanding of the dialog history or the content of an image, while a considerable amount of... more
Previously we have proposed a logic, called priority logic [18, 20], where a theory consists of a collection of logic programming-like inference rules (without default negation) and a priority constraint among them. We showed that... more
Inspired by the cognitive science theory, we explicitly model an agent with both semantic and episodic memory systems, and show that it is better than having just one of the two memory systems. In order to show this, we have designed and... more
This paper addresses identification of implicit requirements (IMRs) in software requirements specifications (SRS). IMRs, as opposed to explicit requirements, are not specified by users but are more subtle. It has been noticed that IMRs... more
Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs) and achieves superior performance in complex reasoning tasks. However, most CoT studies rely on carefully designed human-annotated rational chains to... more
There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images. These tasks have focused on literal descriptions of the... more
In formal systems for reasoning about actions, the ramification problem denotes the problem of handling indirect effects. These effects are not explicitly represented in action specifications but follow from general laws describing... more
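The ramification problem described here can be sketched concretely: an action's direct effect (moving a briefcase) triggers an indirect effect derived from a general state constraint (objects inside a container share its location), so the book moves too even though no action mentions it. The state representation and object names below are invented for illustration.

```python
# Ramification sketch: direct effects are applied by the action, then a
# general constraint ("contents share their container's location") derives
# the indirect effects that the action specification never mentions.

def apply_constraints(state: dict) -> dict:
    """Propagate indirect effects: contents inherit their container's location."""
    loc = dict(state["loc"])
    for item, container in state["inside"].items():
        loc[item] = loc[container]
    return {"loc": loc, "inside": state["inside"]}

def move(state: dict, obj: str, dest: str) -> dict:
    """Direct effect of moving obj to dest, followed by constraint propagation."""
    direct = {"loc": {**state["loc"], obj: dest}, "inside": state["inside"]}
    return apply_constraints(direct)

# The move action only mentions the briefcase; the book moves as a ramification.
s0 = {"loc": {"briefcase": "home", "book": "home"},
      "inside": {"book": "briefcase"}}
s1 = move(s0, "briefcase", "office")
```

The formal difficulty the literature addresses is exactly what this toy hides: deciding which constraints may fire, in what order, and how they interact with other effect laws.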
Commonsense reasoning has proven exceedingly difficult both to model and to implement in artificial reasoning systems. This paper discusses some of the features of human reasoning that may account for this difficulty, surveys a number of... more
Non-monotonic logics are examined and found to be inadequate as descriptions of reason maintenance systems (sometimes called truth maintenance systems). A logic is proposed that directly addresses the problem of characterizing the mental... more
This paper shows how a nonmonotonic ILP system XHAIL can perform general-purpose learning and revision of temporal theories in a full-fledged Discrete Event Calculus (DEC) framework with several features (now introduced into ILP for the... more
Inductive Logic Programming (ILP) is concerned with the task of generalising sets of positive and negative examples with respect to background knowledge expressed as logic programs. Negation as Failure (NAF) is a key feature of logic... more
This paper introduces a novel method for generating artistic images that express particular affective states. Leveraging state-of-the-art deep learning methods for visual generation (through generative adversarial networks), semantic... more
Sentences that ascribe action are logically related, but it is not always obvious why. According to event semantics, implications and non-implications result from referential relations between unpronounced constituents. Taking as starting... more
We argue that the question selection processes used in the existing AI in Medicine programs are inadequate. We trace these inadequacies to their use of purely surface level models of disease and to the lack of planning in sequencing... more
Commonsense reasoning refers to the ability to evaluate a social situation and act accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform... more
Contextualized or discourse aware commonsense inference [1] is the task of generating commonsense assertions (i.e., facts) from a given story, and a sentence from that story. (Here, we think of a story as a sequence of causally-related... more
Story understanding systems need to be able to perform commonsense reasoning, specifically regarding characters' goals and their associated actions. Some efforts have been made to form large-scale commonsense knowledge bases, but... more
Recent efforts in natural language processing (NLP) commonsense reasoning research have yielded a considerable number of new datasets and benchmarks. However, most of these datasets formulate commonsense reasoning challenges in artificial... more