Papers by Stefania Costantini
Digital Society
Drawing from practical philosophy, we argue that AI-based systems could develop ethical decision-making and judgment capabilities by learning from experience. This has inspired our work, which combines answer set programming and inductive logic programming to learn domain ethical principles from interactions with users in the context of a dialogue system.

Cornell University - arXiv, Feb 21, 2014
Stable Logic Programming (SLP) is an emergent, alternative style of logic programming: each solution to a problem is represented by a stable model of a deductive database/function-free logic program encoding the problem itself. Several implementations now exist for stable logic programming, and their performance is rapidly improving. To make SLP generally applicable, it should be possible to check for consistency (i.e., existence of stable models) of the input program before attempting to answer queries. In the literature, only rather strong sufficient conditions have been proposed for consistency, e.g., stratification. This paper extends these results in several directions. First, the syntactic features of programs, viz. cyclic negative dependencies, affecting the existence of stable models are characterized, and their relevance is discussed. Next, a new graph representation of logic programs, the Extended Dependency Graph (EDG), is introduced, which conveys enough information for reasoning about stable models (while the traditional Dependency Graph does not). Finally, we show that the problem of the existence of stable models can be reformulated in terms of coloring of the EDG.
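The consistency check discussed in the abstract can be made concrete with a brute-force sketch: a set of atoms M is a stable model iff it equals the least model of the Gelfond-Lifschitz reduct of the program w.r.t. M. This toy Python enumeration is our own illustration, not the paper's EDG-coloring method:

```python
from itertools import combinations

def stable_models(atoms, rules):
    """Brute-force stable model enumeration for a normal logic program.
    Each rule is (head, positive_body, negative_body). A candidate set M
    is stable iff it equals the least model of the Gelfond-Lifschitz
    reduct of the program w.r.t. M."""
    models = []
    atom_list = sorted(atoms)
    for r in range(len(atom_list) + 1):
        for cand in combinations(atom_list, r):
            m = set(cand)
            # Reduct: delete rules whose negative body intersects M,
            # then drop the remaining negative literals.
            reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & m)]
            # Least model of the resulting positive program (fixpoint).
            lm, changed = set(), True
            while changed:
                changed = False
                for h, pos in reduct:
                    if set(pos) <= lm and h not in lm:
                        lm.add(h)
                        changed = True
            if lm == m:
                models.append(m)
    return models
```

On `p :- not q. q :- not p.` (an even negative cycle) two stable models exist, while `p :- not p.` (an odd cycle) has none — exactly the kind of cyclic negative dependency the paper characterizes.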
Adaptive Agents and Multi-Agents Systems, May 9, 2016
In this paper we consider the software-engineering problem of how to empower modular agent architectures with the capability to perform quantitative reasoning in a uniform and principled way.
Adaptive Agents and Multi-Agents Systems, May 5, 2014

IEEE Transactions on Network Science and Engineering, 2022
Centrality metrics have been widely applied to identify the nodes in a graph whose removal is effective in decomposing the graph into smaller sub-components. The node-removal process is generally used to test network robustness against failures. Most of the available studies assume that the node removal task is always successful. Yet, we argue that this assumption is unrealistic. Indeed, the removal process should also take into account the strength of the targeted node itself, to simulate failure scenarios in a more effective and realistic fashion. Unlike previous literature, herein a probabilistic node failure model is proposed, in which nodes may fail with a particular probability, considering two variants: Uniform (in which the nodes' survival probability is fixed) and Best Connected (BC) (where the nodes' survival probability is proportional to their degree). To evaluate our method, we consider five popular centrality metrics, carrying out an experimental, comparative analysis to evaluate them in terms of effectiveness and coverage on four real-world graphs. By effectiveness and coverage we mean the ability to select nodes whose removal decreases graph connectivity the most. Specifically, the graph spectral radius reduction works as a proxy indicator of effectiveness, and the reduction of the largest connected component (LCC) size is the parameter used to assess coverage. The metric that caused the biggest drop has then been compared with the benchmark analysis (i.e., the non-probabilistic degree centrality node removal process). The main finding is that significant differences emerged from this comparison, with a deviation range varying from 2% up to 80% regardless of the dataset used, which highlights the gap between the common practice and a more realistic approach.
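The probabilistic removal process can be sketched as follows; the adjacency-dict graph representation, the degree-ranked choice of targets, and the default survival probability are our illustrative assumptions, not details fixed by the paper:

```python
import random

def lcc_size(adj, alive):
    """Size of the largest connected component among the 'alive' nodes."""
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def probabilistic_removal(adj, k, mode="uniform", p_surv=0.3, rng=None):
    """Target the k highest-degree nodes; each target survives the attack
    with a probability that is fixed ('uniform') or proportional to its
    degree ('bc', Best Connected)."""
    rng = rng or random.Random(0)
    alive = set(adj)
    max_deg = max(len(v) for v in adj.values()) or 1
    targets = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
    for u in targets:
        surv = p_surv if mode == "uniform" else len(adj[u]) / max_deg
        if rng.random() >= surv:   # the removal attempt succeeds
            alive.discard(u)
    return alive
```

Running this before and after an attack, and comparing LCC sizes, mirrors the coverage measurement described above.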

Chatbots are tools aimed at simplifying the interaction between humans and computers, typically used in dialogue systems for various practical purposes. These systems should be built on ethical foundations because their behavior may heavily influence a user (think especially about children). The primary objective of this paper is to present the architecture and prototype implementation of a Multi-Agent System (MAS) designed for ethical monitoring and evaluation of a dialogue system. A prototype application for monitoring and evaluating the ethical behavior of chatting agents (human/artificial) in an online customer service chat point, w.r.t. their institution/company's codes of ethics and conduct, is developed and presented. We focus on the implementation specifics of the proposed system and the presented prototype application. Future work and open issues with this research are discussed.
International Journal of Interactive Multimedia and Artificial Intelligence, 2021
In this paper we introduce an approach to the possible adoption of Answer Set Programming (ASP) for the definition of microservices, which are a successful abstraction for designing distributed applications as suites of independently deployable interacting components. Such ASP-based components might be employed in distributed architectures related to Cloud Computing or to the Internet of Things (IoT), where the ASP microservices might be usefully coordinated with intelligent logic-based agents. We develop a case study where we consider ASP microservices in synergy with agents defined in DALI, a well-known logic-based agent-oriented programming language developed by our research group.

12th ACM Conference on Web Science, 2020
We consider information diffusion on Web-like networks and how random walks can simulate it. A well-studied problem in this domain is Partial Cover Time, i.e., the calculation of the expected number of steps a random walker needs to visit a given fraction of the nodes of the network. We notice that some of the fastest solutions in fact require that nodes have perfect knowledge of the degree distribution of their neighbors, which in many practical cases is not obtainable, e.g., for privacy reasons. We thus introduce a version of the Cover problem that considers such limitations: Partial Cover Time with Budget. The budget is a limit on the number of neighbors that can be inspected for their degree; we have adapted optimal random walk strategies from the literature to operate under such a budget. Our solution is called Min-degree (MD) and, essentially, it biases random walkers towards visiting peripheral areas of the network first. Extensive benchmarking on six real datasets proves that the (perhaps counter-intuitive) MD strategy is in fact highly competitive w.r.t. state-of-the-art algorithms for cover.
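A minimal sketch of a degree-budgeted, Min-degree-biased walk (our reconstruction for illustration; the paper's exact walk strategies and budget accounting may differ):

```python
import random

def md_partial_cover(adj, start, frac, budget, rng=None, max_steps=100000):
    """Random walk that, at each step, inspects the degree of at most
    'budget' randomly chosen neighbors and moves to the one with the
    smallest degree (Min-degree bias toward peripheral nodes).
    Returns the number of steps taken to visit frac*|V| distinct nodes."""
    rng = rng or random.Random(0)
    target = max(1, int(frac * len(adj)))
    visited = {start}
    u, steps = start, 0
    while len(visited) < target and steps < max_steps:
        nbrs = list(adj[u])
        sample = rng.sample(nbrs, min(budget, len(nbrs)))
        u = min(sample, key=lambda v: len(adj[v]))  # cheapest-degree move
        visited.add(u)
        steps += 1
    return steps
```

Note that only the sampled neighbors' degrees are ever inspected, which is exactly the budget constraint the abstract introduces.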
Inductive Logic Programming, 2020
Machine Ethics is a newly emerging interdisciplinary field which is concerned with adding an ethical dimension to Artificial Intelligence (AI) agents. In this paper we address the problem of representing and acquiring rules of codes of ethics in the online customer service domain. The proposed solution approach relies on the non-monotonic features of Answer Set Programming (ASP) and applies Inductive Logic Programming (ILP). The approach is illustrated by means of examples taken from the preliminary tests conducted with a couple of state-of-the-art ILP algorithms for learning ASP rules.
PRIMA 2019: Principles and Practice of Multi-Agent Systems, 2019
In this paper we consider complex application scenarios, typically concerning smart Cyber-Physical Systems, where several components and subsystems interact among themselves, with human users and with the physical environment, and employ forms of intelligent reasoning for meeting the system's requirements and reaching its overall objectives. We propose a new multi-component, multi-level architecture called K-ACE, which provides a high degree of flexibility in the system's definition, though within a formal semantics.
In AI, Multi-Agent Systems are able to model many kinds of collective behavior and therefore have a wide range of applications. In this paper, we propose a logical framework (Logic of "Inferable") which enables reasoning about whether a group of agents can perform an action, highlighting the concepts of the cost of actions and of the budget that agents have available to perform actions. The focus is on modeling the group dynamics of cooperative agents.
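The cost/budget notions can be made concrete with a small sketch: a check that a coalition's pooled budget covers an action's cost, plus a greedy (not necessarily minimal) way to pick a sufficient coalition. All names and the simple budget-pooling assumption are ours, not the formal Logic of "Inferable":

```python
def group_can_perform(action_cost, budgets, coalition):
    """A group of cooperative agents can perform an action iff the
    budgets its members pool together cover the action's cost."""
    return sum(budgets[a] for a in coalition) >= action_cost

def minimal_coalition(action_cost, budgets):
    """Greedy sketch: recruit agents in decreasing budget order until the
    pooled budget covers the cost; return None if even the grand
    coalition cannot afford the action."""
    pooled, coalition = 0, []
    for agent, b in sorted(budgets.items(), key=lambda kv: -kv[1]):
        if pooled >= action_cost:
            break
        coalition.append(agent)
        pooled += b
    return coalition if pooled >= action_cost else None
```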
Methods for implementing Automated Reasoning in a fashion that is at least reminiscent of human cognition and behavior must refer (also) to Intelligent Agents. In fact, such agents implement many important autonomous applications upon which, nowadays, the life and welfare of living beings may depend. In such contexts, 'humanized' agents should do what is expected of them, but perhaps more importantly they should not behave in improper/unethical ways given the present context. We propose techniques for introspective self-monitoring and checking.
A chatbot is an artificially intelligent software system which can simulate a conversation with a user in natural language via auditory or textual methods. Businesses are rapidly moving towards the need for chatbots. However, chatbots raise many ethical concerns. To ensure that they behave ethically, their behavior should be guided by their company's codes of ethics and conduct.

Digital Forensics is a branch of criminalistics which deals with the identification, acquisition, preservation, analysis and presentation of the information content of digital devices. In this paper, we briefly describe DigForASP, a COST Action that aims to create a cooperation network for exploring the potential of applying techniques from the field of Artificial Intelligence, in particular from the area of Knowledge Representation and Reasoning, in the Digital Forensics field, and to foster synergies between these fields. More precisely, in DigForASP the challenge is to address the so-called Evidence Analysis phase, where evidence about possible crimes and crimes' perpetrators must be exploited so as to reconstruct possible events, event sequences and scenarios related to a crime. Results from this phase are then made available to the involved stakeholders (law enforcement, investigators, public prosecutors, lawyers and judges). Reliability, explainability and verifiability...

Reflection through Constraint Satisfaction
The need for expressing and using meta-level knowledge has been widely recognized in the AI literature. Meta-knowledge and meta-level reasoning are suitable, for example, for devising proof strategies in automated deduction systems, for controlling the inference in problem solving, and for increasing the expressive power of knowledge representation languages. In order to carry out meta-level reasoning there must be a stated relationship between expressions at different levels, i.e., a reflection principle and a naming relation. We present a new mechanism [1] that allows us to model reflection principles in meta-level architectures. Such a mechanism is based on the integration of constraint satisfaction techniques into the inference process of such systems. We employ an abstract language and introduce the concept of a name theory for such a language. The semantics of a name theory is a name interpretation, which generalizes the name relation of other reflective formalisms. We present a reflective inference system that is parameterized with a name theory and whose semantics is expressed in terms of a name interpretation. This mechanism is completely general and can be easily concretized for a family of metalogic languages. Relevant applications of the proposed formalization have been investigated in legal reasoning [2], in the context of communication-based reasoning, where the interaction among agents is based on communication acts, and in the context of analogical reasoning [3].

The results of the evidence analysis phase in Digital Forensics (DF) provide objective data which, however, require further elaboration by the investigators, who have to contextualize analysis results within an investigative environment so as to provide possible hypotheses that can be proposed as proofs in court, to be evaluated by lawyers and judges. The aim of our research has been to explore the applicability of Answer Set Programming (ASP) to the automation of evidence analysis. This offers many advantages, among which that of making different possible investigative hypotheses explicit, whereas otherwise different human experts often devise and select different solutions in an implicit way. Moreover, ASP provides a potential for verifiability which is crucial in such an application field. Very complex investigations for which human experts can hardly find solutions turn out in fact to be reducible to optimization problems in classes P or NP, or not far beyond, that can be thus...

Methods for implementing Automated Reasoning in a fashion that is at least reminiscent of human cognition and behavior must refer (also) to Intelligent Agents. In fact, agent-based systems nowadays implement many important autonomous applications in critical contexts. Sometimes, the life and welfare of living beings may depend upon these applications. In order to interact in a proper way with human beings and human environments, agents operating in critical contexts should be to some extent 'humanized': i.e., they should do what is expected of them, but perhaps more importantly they should not behave in improper/unethical ways. Ensuring ethical reliability can also help to improve the 'relationship' between humans and robots: in fact, despite the promise of immensely improving the quality of life, humans take an ambivalent stance with regard to autonomous systems, because we fear that autonomous systems may abuse their power to take decisions not aligned with human values. To this aim,...
Agenti ed Ontologie: verso la Web Intelligence (Agents and Ontologies: towards Web Intelligence)
The issue of noise pollution is becoming more and more relevant in today's way of life. Studies have shown that some noise waves are especially damaging, inflicting continuous harm on the nervous system, with the resulting loss of hearing capacity in some instances. Thanks to the latest technological findings, noise can be sampled and analyzed even on very tiny devices that can be carried anywhere. By sampling the noise via a condenser microphone and applying the Fast Fourier Transform to the samples, we can identify the presence of frequencies that are considered detrimental to the auditory system, warning a person in real time about the prospective risk (s)he is facing.
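The detection pipeline outlined above can be sketched as follows; a naive DFT stands in for the FFT (same magnitudes, worse complexity), and the "harmful" band and threshold are illustrative placeholders, not values from the paper:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive O(n^2) discrete Fourier transform; an FFT would be used in
    practice, but it yields the same magnitudes."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def harmful_frequencies(samples, rate, band=(4000.0, 6000.0), threshold=10.0):
    """Return the frequencies (Hz) inside the 'harmful' band whose DFT
    magnitude exceeds the threshold."""
    n = len(samples)
    mags = dft_magnitudes(samples)
    hits = []
    for k in range(n // 2):          # keep only the non-aliased half
        f = k * rate / n
        if band[0] <= f <= band[1] and mags[k] > threshold:
            hits.append(f)
    return hits
```

Feeding the function 64 microphone samples taken at 16 kHz, a pure 5 kHz tone lands exactly on bin 20 and is flagged, while silence produces no hits.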

In this paper we consider how to enhance flexibility and generality in Multi-Context Systems (MCS) by considering that contexts can evolve over time, and that bridge-rule application can be proactive (according to a context's specific choice) and not instantaneous, but requiring an execution mechanism. We introduce bridge-rule patterns to make bridge rules parametric w.r.t. the involved contexts. Multi-Context Systems (MCSs) have been proposed in Artificial Intelligence and Knowledge Representation to model information exchange among heterogeneous sources. MCSs are defined so as to drop the assumption of making such sources in some sense homogeneous: rather, the approach deals explicitly with their different representation languages and semantics. Heterogeneous "contexts" (also called "sources" or "modules") interact through special inter-context bridge rules. The reason why MCSs are particularly interesting is that they aim at modeling in a formal way real applications requiring access to sources distributed, for instance, on the web. In view of such practical applications it is important to notice that, being logic-based, contexts may encompass logical agents, to which MCSs have in fact already been extended (cf. [2, 3]). We refer the reader to [1, 5], and the references therein, for the formal definition of basic notions and properties concerning MCSs and managed MCSs (for short, mMCSs), such as context, bridge rule, belief state, management function, equilibrium, etc.
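A toy sketch of bridge-rule application over a belief state — contexts as dictionaries of belief sets, and a management function that simply adds the head belief — can make the interaction concrete. These are our simplifying assumptions, not the full mMCS semantics:

```python
def apply_bridge_rules(belief_state, bridge_rules):
    """One round of bridge-rule application over an MCS belief state.
    belief_state: {context_name: set_of_beliefs}. Each bridge rule is
    (target_context, head_belief, [(ctx, belief), ...]): if every premise
    holds in the current state, the head is added to the target context."""
    new_state = {c: set(bs) for c, bs in belief_state.items()}
    for target, head, premises in bridge_rules:
        if all(b in belief_state[c] for c, b in premises):
            new_state[target].add(head)
    return new_state

def fixpoint(belief_state, bridge_rules):
    """Iterate bridge-rule application until the belief state is stable,
    a crude stand-in for reaching an equilibrium."""
    while True:
        nxt = apply_bridge_rules(belief_state, bridge_rules)
        if nxt == belief_state:
            return belief_state
        belief_state = nxt
```

Since rules only ever add beliefs and the belief universe here is finite, the iteration terminates.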