A robust logic for rule-based reasoning under uncertainty
1992
Abstract
A symbolically quantified logic is presented for reasoning under uncertainty that is based upon the concept of rough sets. This mathematical model provides a simple yet sound basis for a robust reasoning system. A rule of inference analogous to modus ponens is described, and it is shown how it might be used by a reasoning system to determine the most likely outcome under conditions of uncertain knowledge. An analysis of the robustness of the logic in rule-based reasoning is also presented.
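For readers unfamiliar with the rough-set machinery the abstract refers to, the sketch below illustrates lower and upper approximations and a coarse symbolic qualification of facts. It is an illustration only, not the paper's formal logic: the qualifier names (certain/possible/impossible), the toy universe, and the function names are assumptions made for this example.

```python
# Illustrative sketch only: rough-set approximations and a coarse symbolic
# qualifier, loosely in the spirit of the machinery the abstract describes.
# The qualifier names and the toy data are assumptions, not the paper's system.

def partition(universe, indiscernible):
    """Group objects into equivalence classes under an indiscernibility relation."""
    classes = []
    for x in universe:
        for cls in classes:
            if indiscernible(x, cls[0]):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

def approximations(universe, concept, indiscernible):
    """Return (lower, upper) rough approximations of `concept`."""
    lower, upper = set(), set()
    for cls in partition(universe, indiscernible):
        cls_set = set(cls)
        if cls_set <= concept:
            lower |= cls_set          # wholly contained: certainly in the concept
        if cls_set & concept:
            upper |= cls_set          # overlaps: possibly in the concept
    return lower, upper

def qualify(x, lower, upper):
    """Map an object to a symbolic qualifier derived from the approximations."""
    if x in lower:
        return "certain"
    if x in upper:
        return "possible"
    return "impossible"

# Toy example: objects described by (colour, size); the concept is {o1, o2}.
universe = {"o1": ("red", "big"), "o2": ("red", "big"), "o3": ("red", "small")}
indisc = lambda a, b: universe[a] == universe[b]
concept = {"o1", "o2"}
low, up = approximations(universe, concept, indisc)
print({x: qualify(x, low, up) for x in universe})
# {'o1': 'certain', 'o2': 'certain', 'o3': 'impossible'}
```

A modus-ponens-like rule in a logic of this kind might, for instance, pass the weaker of the qualifiers attached to a rule and to its antecedent on to the conclusion; the paper itself develops its inference rule formally and analyses its robustness.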
Related papers
Transactions on Rough Sets IV, 2005
The rough sets framework has two appealing aspects. First, it is a mathematical approach to dealing with vague concepts. Second, rough set techniques can be used in data analysis to find patterns hidden in the data. The number of applications of rough sets to practical problems in different fields demonstrates the increasing interest in this framework and its applicability. Most current rough set techniques, and the software systems based on them, only consider rough sets defined explicitly by concrete examples given in tabular form. Previous research mostly disregards two problems: how to define rough sets in terms of other rough sets, and how to incorporate domain or expert knowledge. This thesis proposes a language that caters for implicit definitions of rough sets obtained by combining different regions of other rough sets. In this way, concept approximations can be derived by taking domain knowledge into account. A declarative semantics for the language is also discussed. It is then shown that programs in the proposed language can be compiled to extended logic programs under the paraconsistent stable model semantics. The equivalence between the declarative semantics of the language and the declarative semantics of the compiled programs is proved. This transformation provides the computational basis for implementing our ideas. A query language for retrieving information about the concepts represented by the defined rough sets is also defined. Several motivating applications are described. Finally, an extension of the proposed language with numerical measures is discussed; this extension is motivated by the fact that numerical measures are an important aspect of data mining applications. I would like to express my sincere gratitude towards my supervisor, Jan Małuszyński, for his invaluable guidance and constant support. At the Department of Computer Science at the New University of Lisbon, my thanks go to Carlos Viegas Damásio for his suggestions and the time taken to discuss many aspects of this work with me.
2008
Since the present Part is somewhat complex, it is worth introducing in some detail the intuitive motivations behind the overall picture and their connections with the mathematical machinery that will be used.
We show how definite extended logic programs can be used for defining and reasoning with rough sets. Moreover, a rough-set-specific query language is presented and an answering algorithm is outlined. Thus, we not only show a possible application of a paraconsistent logic to the field of rough sets but also establish a link between rough set theory and logic programming, making possible the transfer of expertise between the two fields.
International Journal of Intelligent Systems, 2000
This paper concerns the modeling of imprecision, vagueness, and uncertainty in databases through an extension of the relational model of data: the fuzzy rough relational database, an approach which uses both fuzzy set and rough set theories for knowledge representation of imprecise data in a relational database model. The fuzzy rough relational database is formally defined, along with a fuzzy rough relational algebra for querying. Comparisons of theoretical properties of operators in this model with those in the standard relational model are discussed. An example application is used to illustrate other aspects of this model, including a fuzzy entity-relationship type diagram for database design, a fuzzy rough data definition language, and an SQL-like query language supportive of the fuzzy rough relational database model. This example also illustrates the ease of use of the fuzzy rough relational database, which often produces better results than conventional databases because it models the uncertainty of real-world enterprises more accurately through the use of indiscernibility and fuzzy membership values.
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2014
Rough approximations, which consist of lower and upper approximations, are described for objects characterized by possibilistic information expressed by a normal possibility distribution. Concepts of not only possibility but also certainty are used to construct an indiscernibility relation. First, rough approximations are shown for a set of discernible objects by using the indiscernibility relation. Next, a set of objects characterized by possibilistic information is approximated. Consequently, rough approximations consist of objects with a degree expressed by an interval value, whose lower and upper bounds delimit the actual degree. This leads to the complementarity property linking lower and upper approximations in the case of a set of discernible objects, as holds under complete information. Furthermore, a criterion is introduced to judge whether or not an object is regarded as supporting rules. Using this criterion, we can select only those objects that are regarded as inducing rules.
Studies in Fuzziness and Soft Computing, 2000
Basic ideas of rough set theory were proposed by Zdzisław Pawlak [85, 86] in the early 1980s. In the ensuing years, we have witnessed a systematic, worldwide growth of interest in rough sets and their applications. The main goal of rough set analysis is the induction of approximations of concepts. This goal is motivated by the basic fact, which is also the central problem of KDD, that the languages we may choose for knowledge description are incomplete; consequently, we have to describe concepts of interest (features, properties, relations, etc.) not completely but by means of their reflections (i.e. approximations) in the chosen language. The most important issues in this induction process are: construction of relevant primitive concepts from which approximations of more complex concepts are assembled; measures of inclusion and similarity (closeness) on concepts; and construction of operations producing complex concepts from the primitive ones. The basic tools of the rough set approach are related to concept approximations, which are defined by approximation spaces. For many applications, in particular for KDD problems, it is necessary to search for relevant approximation spaces in the large space of parameterized approximation spaces, and strategies for tuning the parameters of approximation spaces are crucial for inducing concept approximations of high quality. The methods proposed in the rough set approach are akin to general methods used to solve Knowledge Discovery and Data Mining (KDD) problems such as feature selection, feature extraction (e.g. discretization or grouping of symbolic values), data reduction, decision rule generation, pattern extraction (templates, association rules), and decomposition of large data tables. In this Chapter we examine rough set contributions to Knowledge Discovery from the perspective of KDD as a whole. We show how several aspects of the above problems are solved by the classical rough set approach and how they are approached by some recent extensions to the classical theory of rough sets, and we point out the role of Boolean reasoning in solving the discussed problems. The rough set approach induces, via its methods, a specific logic, which we call rough logic. In the second part of this Chapter, we discuss rough logic and related incomplete logics from the wider perspective of a logical approach to KDD.
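As a concrete flavour of one classical rough-set tool mentioned above, data reduction, here is a brute-force sketch that searches for reducts: minimal attribute subsets inducing the same indiscernibility partition as the full attribute set. The table, attribute names, and functions are invented for the example and are not taken from the chapter.

```python
# Illustrative sketch: brute-force search for reducts of an information table,
# i.e. minimal attribute subsets that induce the same indiscernibility
# partition as the full attribute set. Table and names are invented here.
from itertools import combinations

TABLE = {                      # object -> attribute values
    "o1": {"colour": "red",  "size": "big",   "weight": "heavy"},
    "o2": {"colour": "red",  "size": "big",   "weight": "heavy"},
    "o3": {"colour": "blue", "size": "big",   "weight": "light"},
    "o4": {"colour": "blue", "size": "small", "weight": "light"},
}
ATTRS = ["colour", "size", "weight"]

def partition(attrs):
    """Partition objects by equality of their values on `attrs`."""
    blocks = {}
    for obj, row in TABLE.items():
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(obj)
    return sorted(map(frozenset, blocks.values()), key=sorted)

def reducts():
    """Minimal attribute subsets whose partition equals that of all attributes."""
    full = partition(ATTRS)
    found = []
    for k in range(1, len(ATTRS) + 1):
        for subset in combinations(ATTRS, k):
            if partition(list(subset)) == full and \
               not any(set(r) <= set(subset) for r in found):
                found.append(subset)
    return found

print(reducts())   # [('colour', 'size'), ('size', 'weight')] for this toy table
```

In practice, rough-set tools avoid such exponential search by encoding reduct computation with discernibility matrices and Boolean reasoning, which is the role of Boolean reasoning the chapter points to.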
International Journal of Intelligent Systems and Applications, 2016
Modeling the uncertain aspects of the world in ontologies is attracting a lot of interest from ontology builders, especially in the World Wide Web community. This paper defines a way of handling uncertainty in description logic ontologies without remodeling existing ontologies or altering the syntax of existing ontology modeling languages. We show that the sources of vagueness in an ontology are vague attributes and vague roles. Therefore, to have a clear separation between crisp concepts and vague concepts, the set of roles R is split into two distinct subsets representing the crisp roles and the vague roles, respectively. Similarly, the set of attributes A is split into two distinct subsets representing the crisp attributes and the vague attributes, respectively. Concepts are then classified as crisp or vague depending on whether vague attributes or vague roles are used in their conceptualization. The concept of rough sets introduced by Pawlak is used to measure the degree of satisfiability of vague concepts as well as vague roles. In this approach, the cost of reengineering existing ontologies in order to cope with reasoning over the uncertain aspects of the world is minimal.
2004
We have proposed a fuzzy rough set approach, without using any fuzzy logical connectives, to extract gradual decision rules from decision tables. In this paper, we discuss the use of these gradual decision rules within the modus ponens and modus tollens inference patterns. We show that these patterns are very similar and, moreover, we generalize them to formalize approximate reasoning based on the extracted gradual decision rules. We demonstrate that approximate reasoning can be performed by manipulating the modifier functions associated with the gradual decision rules.
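As a rough illustration of the general idea only, not the authors' formalism, a gradual rule of the form "the more the condition holds, the more the decision holds" can be applied in a modus-ponens-like way by passing the condition's membership degree through a modifier function. The membership function and modifier below are invented for the example.

```python
# Highly simplified sketch of a gradual decision rule used modus-ponens style:
# the decision degree is obtained by pushing the condition's membership degree
# through a modifier function. Memberships and the modifier are invented here.

def condition_membership(price):
    """Degree to which a price counts as 'low' (toy membership function)."""
    return max(0.0, min(1.0, (300 - price) / 200))   # 1 below 100, 0 above 300

def modifier(t):
    """Modifier attached to the rule 'the lower the price, the more attractive'."""
    return t ** 0.5                                   # concave: boosts mid degrees

def apply_gradual_rule(price):
    """Modus-ponens-style use: decision degree is at least modifier(condition degree)."""
    return modifier(condition_membership(price))

for price in (100, 200, 280):
    print(price, round(apply_gradual_rule(price), 2))
# prints: 100 1.0 / 200 0.71 / 280 0.32
```

A modus-tollens-style use would run in the opposite direction, bounding the condition degree from the decision degree; the paper formalizes both patterns for its extracted rules.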
Computational Intelligence, 1995
We present the syntax and proof theory of a logic of argumentation, LA. We also outline the development of a category theoretic semantics for LA. LA is the core of a proof theoretic model for reasoning under uncertainty. In this logic, propositions are labelled with a representation of the arguments which support their validity. Arguments may then be aggregated to collect more information about the potential validity of the propositions of interest. We make the notion of aggregation primitive to the logic, and then define strength mappings from sets of arguments to one of a number of possible dictionaries. This provides a uniform framework which incorporates a number of numerical and symbolic techniques for assigning subjective confidences to propositions on the basis of their supporting arguments. These aggregation techniques are also described, with examples.
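The sketch below conveys only the general idea of labelling propositions with their supporting and rebutting arguments and then aggregating; it is not LA's proof theory, its aggregation operators, or any of its dictionaries. The class names and the symbolic labels used here are invented for the example.

```python
# Generic sketch (not LA's actual calculus): propositions carry the arguments
# built for and against them, and an aggregation step flattens the argument
# set into a symbolic confidence label. The labels used here are invented.
from dataclasses import dataclass, field

@dataclass
class Argument:
    grounds: str          # informal record of the supporting derivation
    supports: bool        # True if the argument is for the proposition

@dataclass
class Proposition:
    statement: str
    arguments: list = field(default_factory=list)

    def aggregate(self):
        """Flatten the argument set into one of a few symbolic confidence labels."""
        pro = sum(a.supports for a in self.arguments)
        con = sum(not a.supports for a in self.arguments)
        if pro and not con:
            return "supported"
        if con and not pro:
            return "rebutted"
        if pro and con:
            return "disputed"
        return "open"

p = Proposition("the sensor reading is reliable")
p.arguments.append(Argument("recent calibration record", supports=True))
p.arguments.append(Argument("operating outside rated temperature", supports=False))
print(p.aggregate())   # disputed
```

LA itself makes aggregation a primitive of the logic and maps argument sets onto one of a number of possible dictionaries; the labels above merely stand in for such a dictionary.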
