Explanations in the Information Extraction System iDocument
2008, Künstliche Intelligenz - KI
Abstract
The information extraction system iDocument interactively extracts information such as instances and relations from texts with respect to existing background knowledge. An extraction process creates weighted hypotheses describing indications of relevant information. During execution, each process step records its output in an instantiated process model. We reuse these pieces of information to generate conceptual, functional, and causal explanations. To visualise explanations, our component utilises different mechanisms for textual, explorative, and pictorial rendering styles.
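As a rough, hypothetical sketch of the mechanism this abstract describes, the code below records weighted hypotheses step by step into an instantiated process model and later traces a result back through that record as a causal explanation. All names (Hypothesis, ProcessModel, explain) are illustrative, not iDocument's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    text: str      # span of text the hypothesis refers to
    concept: str   # background-knowledge concept it is mapped to
    weight: float  # confidence that the mapping is correct

@dataclass
class ProcessModel:
    # one entry per executed process step: (step name, its hypotheses)
    steps: list = field(default_factory=list)

    def record(self, step_name: str, hypotheses: list):
        self.steps.append((step_name, hypotheses))

    def explain(self, text: str) -> list:
        """Trace a result back through the recorded steps: a causal
        explanation of which step produced it, and with what confidence."""
        trace = []
        for step_name, hyps in self.steps:
            for h in hyps:
                if h.text == text:
                    trace.append(f"{step_name}: mapped '{h.text}' to "
                                 f"{h.concept} (weight {h.weight:.2f})")
        return trace

# invented example run
model = ProcessModel()
model.record("entity recognition", [Hypothesis("DFKI", "Organisation", 0.9)])
model.record("instance matching",  [Hypothesis("DFKI", "dfki:Institute", 0.8)])
print("\n".join(model.explain("DFKI")))
```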
Related papers
In a certain sense, explanations in computer science are answers to questions, and an explanatory dialog is often necessary to support users of a software tool. In this paper, we introduce the concept of intuitive explanations, the first explanations in an explanatory dialog. Based on an abstract approach to explanation generation, we present the generic explanation component Koios++, which applies Semantic Technologies to derive intuitive explanations. We illustrate our generation approach by means of the information extraction system smartFIX and put special emphasis on visualizing explanations as semantic networks using a special layout algorithm. smartFIX itself is a product portfolio for knowledge-based extraction of data from any document format. The system automatically determines the document type and extracts all relevant data for the respective business process. In this context, Koios++ is used to justify extraction results.
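A toy sketch of the visualization idea only: an explanation rendered as a semantic network of invented triples, with a trivial circular layout standing in for Koios++'s special layout algorithm.

```python
import math

# invented explanation triples (subject, predicate, object)
edges = [
    ("Invoice", "hasField", "TotalAmount"),
    ("TotalAmount", "extractedBy", "TableAnalysis"),
    ("TableAnalysis", "justifies", "ExtractionResult"),
]
nodes = sorted({n for s, _, o in edges for n in (s, o)})

# place nodes on a circle: a stand-in layout, not Koios++'s algorithm
positions = {
    n: (math.cos(2 * math.pi * i / len(nodes)),
        math.sin(2 * math.pi * i / len(nodes)))
    for i, n in enumerate(nodes)
}
for s, p, o in edges:
    x1, y1 = positions[s]
    x2, y2 = positions[o]
    print(f"{s} --{p}--> {o}  ({x1:.2f},{y1:.2f}) -> ({x2:.2f},{y2:.2f})")
```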
1993
Our main research aim is to improve the provision of explanation facilities in information systems generally, and to identify what is meant by "explanation". This paper reports research which identifies both the strengths and weaknesses of current research and shows how to overcome those weaknesses. We are also concerned with both present and future uses of explanation in information systems and the role of explanation in a broad range of interactive applications.
2002
Abstract According to the multimedia design principle of spatial contiguity, presenting text explanations for visualizations within the image space improves users' ability to make referential links between the text and its corresponding objects. In this paper we introduce the concept of Dual-Use of Image Space (DUIS) and show how it presents text explanations for visualizations within the image space without obstructing the image.
Knowledge Acquisition, 1990
The paper describes a framework, RATIONALE, for building knowledge-based diagnostic systems that explain by reasoning explicitly. Unlike most existing explanation facilities, which are grafted onto an independently designed inference engine, RATIONALE behaves as though it has to deliberate over, and explain to itself, each refinement step. By treating explanation as primary, RATIONALE forces the system designer to represent explicitly knowledge that might otherwise be left implicit. This includes knowledge about why a particular hypothesis is preferred, an exception is ignored, or a global inference strategy is chosen. RATIONALE integrates explanations with reasoning by allowing a causal and/or functional description of the domain to be represented explicitly. Reasoning proceeds by constructing a hypothesis-based classification tree whose root hypothesis contains the most general diagnosis of the system. Guided by a focusing algorithm, the classification tree branches into more specific hypotheses that explain the more detailed symptoms provided by the user. As the system is used, the classification tree also forms the basis for a dynamically generated explanation tree which holds both the successful and failed branches of the reasoning knowledge. RATIONALE is implemented in Quintus Prolog with a hypertext- and graphics-oriented interface under NeWS. It provides an environment for tying together the processes of knowledge acquisition, system implementation, and explanation of system reasoning.
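A compact sketch of the tree-building idea described above, with invented hypotheses and a toy test predicate: refinement records both confirmed and rejected branches, and the same tree is then walked as an explanation tree.

```python
from dataclasses import dataclass, field

@dataclass
class HypothesisNode:
    name: str
    test: callable            # does this hypothesis explain the symptoms?
    children: list = field(default_factory=list)
    outcome: str = "untried"  # "confirmed" | "rejected" | "untried"

def diagnose(node, symptoms):
    """Refine from the root hypothesis toward more specific ones,
    recording each branch's outcome for later explanation."""
    node.outcome = "confirmed" if node.test(symptoms) else "rejected"
    if node.outcome == "confirmed":
        for child in node.children:
            diagnose(child, symptoms)

def explain(node, depth=0):
    """Walk the explanation tree, showing successful and failed branches."""
    print("  " * depth + f"{node.name}: {node.outcome}")
    for child in node.children:
        explain(child, depth + 1)

# invented toy domain: diagnosing why a car will not start
root = HypothesisNode("engine fault", lambda s: "no_start" in s, [
    HypothesisNode("electrical fault", lambda s: "no_lights" in s),
    HypothesisNode("fuel fault", lambda s: "empty_gauge" in s),
])
diagnose(root, {"no_start", "empty_gauge"})
explain(root)
```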
2020
We have defined an interdisciplinary program for training a new generation of researchers who will be ready to leverage the use of Artificial Intelligence (AI)-based models and techniques even by non-expert users. The final goal is to make AI self-explaining and thus contribute to translating knowledge into products and services for economic and social benefit, with the support of Explainable AI systems. Moreover, our focus is on the automatic generation of interactive explanations in natural language, the preferred modality among humans, with visualization as a complementary modality. (Supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.)
Abstract To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited.
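As a bare-bones illustration of the three stages this abstract names (content selection, discourse organization, realization), the following sketch uses an invented fact list and templates; it is not the paper's system.

```python
# invented domain knowledge: (subject, relation, object) facts
knowledge = [
    ("photosynthesis", "requires", "light"),
    ("photosynthesis", "produces", "oxygen"),
    ("respiration", "consumes", "oxygen"),
]

def select(topic):
    """Content selection: pick the facts relevant to the topic."""
    return [f for f in knowledge if f[0] == topic]

def plan(facts):
    """Discourse organization: a trivial plan, preconditions before effects."""
    order = {"requires": 0, "produces": 1}
    return sorted(facts, key=lambda f: order.get(f[1], 2))

def realize(discourse_plan):
    """Realization: render the ordered facts as text via templates."""
    templates = {"requires": "{0} requires {2}.",
                 "produces": "{0} produces {2}."}
    return " ".join(templates[rel].format(s, rel, o).capitalize()
                    for s, rel, o in discourse_plan)

print(realize(plan(select("photosynthesis"))))
```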
Engineering Applications of Artificial Intelligence, 2007
Explaining how engineering devices work is important to students, engineers, and operators. In general, machine-generated explanations have been produced from a particular perspective. This paper introduces a system called automatic generation of explanations (AGE), capable of generating causal, behavioral, and functional explanations of physical devices in natural language. AGE explanations can involve different user-selected state variables at different abstraction levels. AGE uses a library of engineering components as building blocks. Each component is associated with a qualitative model, information about the meaning of state variables and their possible values, information about substances, and information about the different functions each component can perform. AGE uses: (i) a compositional modeling approach to construct large qualitative models, (ii) causal analysis to build a causal dependency graph, (iii) a novel qualitative simulation approach to efficiently obtain the system's behavior on large systems, and (iv) decomposition analysis to automatically divide large devices into smaller subsystems. AGE's effectiveness is demonstrated with devices that range from a simple water tank to an industrial chemical plant.
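The sketch below illustrates only the causal-analysis idea: a hand-written dependency graph for a hypothetical tank device, searched for a path that reads as a causal explanation. It is not AGE's model library or simulation machinery.

```python
# invented causal dependency graph: variable -> variables it influences
causes = {
    "valve_open": ["inflow"],
    "inflow": ["tank_level"],
    "tank_level": ["outflow_pressure"],
}

def causal_chain(start, goal, path=None):
    """Depth-first search for a causal path, read as an explanation:
    'start influences ... influences goal'."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in causes.get(start, []):
        found = causal_chain(nxt, goal, path)
        if found:
            return found
    return None

chain = causal_chain("valve_open", "outflow_pressure")
print(" influences ".join(chain))
```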
Proceedings of International Conference on Expert Systems for Development
The aim of this paper is to fill the gap between theory and practice in the production of explanations by a system. One reason for this gap is that a problem is often solved through cooperation between the user and the system, and both participants in the cooperation need explanations. Explanations essentially depend on the context in which the user and the system interact. Such contextualized explanations are the result of a process and constitute a medium of communication between the user and the system during problem solving. We focus on the need to make the notion of context explicit in the explanation process. We analyze explanation and context in terms of chunks of knowledge, and then point out what context contributes to explanation. An example drawn from a real application introduces the problem.
1997
Abstract Recent years have witnessed rapid progress in explanation generation. Despite these advances, the quality of prose produced by explanation generators warrants significant improvement. Revision-based explanation generation offers a promising means for improving explanations at runtime. In contrast to single-draft explanation generation architectures, a revision-based generator can dynamically create, evaluate, and refine multiple drafts of explanations.
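A minimal sketch of such a revision loop, with a placeholder quality test and revision rule rather than the paper's actual evaluation criteria: draft, evaluate, and refine until the draft passes or a budget runs out.

```python
def draft_explanation(facts):
    return " ".join(facts)              # first, naive draft

def evaluate(text, max_len=60):
    return len(text) <= max_len         # toy quality criterion (brevity)

def revise(text):
    # placeholder revision: drop the trailing word
    return text.rsplit(" ", 1)[0]

def generate(facts, budget=10):
    """Create, evaluate, and refine multiple drafts at runtime."""
    text = draft_explanation(facts)
    for _ in range(budget):
        if evaluate(text):
            return text
        text = revise(text)
    return text

print(generate(["The valve opened,", "so inflow rose,",
                "so the tank level increased steadily over time."]))
```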
1995
Graphical presentations can be used to communicate information in relational data sets succinctly and effectively. However, novel graphical presentations about numerous attributes and their relationships are often difficult to understand completely until explained. Automatically generated graphical presentations must therefore either be limited to simple, conventional ones, or risk incomprehensibility. One way of alleviating this problem is to design graphical presentation systems that can work in conjunction with a natural language generator to produce "explanatory captions." This paper presents three strategies for generating explanatory captions to accompany information graphics, based on: (1) a representation of the structure of the graphical presentation, (2) a framework for identifying the perceptual complexity of graphical elements, and (3) the structure of the data expressed in the graphic. We describe an implemented system and illustrate how it is used to generate explanatory captions for a range of graphics from a data set about real estate transactions in Pittsburgh.
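As a toy illustration of strategy (1), the following sketch derives a caption from an invented structural description of a scatter plot; the description format is hypothetical, not the paper's representation.

```python
# invented structural description of a graphic
chart = {
    "type": "scatter",
    "x": {"attribute": "house size", "unit": "sq ft"},
    "y": {"attribute": "sale price", "unit": "USD"},
    "mark": {"color": "neighborhood"},  # visual channel -> data attribute
}

def caption(c):
    """Generate an explanatory caption from the graphic's structure."""
    text = (f"Each point shows {c['x']['attribute']} ({c['x']['unit']}) "
            f"against {c['y']['attribute']} ({c['y']['unit']}).")
    for channel, attr in c["mark"].items():
        text += f" Point {channel} encodes {attr}."
    return text

print(caption(chart))
```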
