Association Nets: an Alternative Formalization of Common Thinking
1997
https://doi.org/10.1007/3-540-63045-7_39…
15 pages
Abstract
The development of programming, as well as some studies in artificial intelligence, has produced its own conceptualizations used to represent the tasks being solved and the methods applied. This paper is an attempt at a logic-like representation of a programming system based on an open data model and object-oriented programming, developed earlier and used extensively by the author for various tasks, including automatic programming and natural language understanding. This representation is suggested as an alternative to traditional logic systems for describing common thinking.
Related papers
1998
Imperative programming has largely dominated both aspects of Web programming: adding sophisticated interactive behaviours to the Web and constructing programs which interact with the Web. Most mobile code languages, such as Java, are based on the imperative programming paradigm. Imperative languages are widely used for building Web browsers and information-gathering tools. The focus of much programming language research has been on raising the level of abstraction. Logic programming languages, which view computation as deduction from a set of axioms, are at a higher level of abstraction than imperative programming languages, enabling a problem or subject domain to be modelled without focusing on the computer's Von Neumann architecture. Logic programming with program structuring abstractions has shown its utility in a variety of applications, including expert systems, Artificial Intelligence problem solving, and deductive databases. Implementations of logic programming such as Prolog have feat...
Theory and Practice of Logic Programming, 2008
In everyday life it happens that a person has to reason about what other people think and how they behave, in order to achieve his goals. In other words, an individual may be required to adapt his behaviour by reasoning about the others' mental state. In this paper we focus on a knowledge representation language derived from logic programming which both supports the representation of mental states of individual communities and provides each with the capability of reasoning about others' mental states and acting accordingly. The proposed semantics is shown to be translatable into stable model semantics of logic programs with aggregates.
Journal of Applied Logic, 2007
Encyclopedia of Information Science and Technology, Fourth Edition, 2018
In what seem to be never-ending quests for automation, integration, seamlessness, new genres of applications, and “smart systems”, all of which are fueled in part by technological changes, intellectual maturity (or so one thinks), and out-of-the-box thinking that says “surely, there must be a better way”, one dreams of a future. This paper suggests that logic programs employing recent advances in semantics and in knowledge representation formalisms provide a more robust framework in which to develop very intelligent systems in any domain of knowledge or application. The author has performed work applying this paradigm and these reasoning formalisms in the areas of financial applications, security applications, and enterprise information systems.
Lecture Notes in Computer Science, 1994
Intelligent Systems Reference Library, 2011
For over thirty years, the complexity of knowledge acquisition has been the greatest obstacle to widespread use of semantic systems. The task of translating information from a textbook to a computable semantic form requires the combined skills of a linguist, logician, computer scientist, and subject-matter expert. Any system that requires its users to have all those skills will have few, if any, users. The challenge is to design automated tools that can combine the contributions from multiple experts with different kinds of skills. This article surveys systems with different levels of semantics: lightweight, middleweight, and heavyweight. Linked data systems with lightweight semantics are easy to develop, but they can't interpret the data they link. The heavyweight systems of traditional AI can perform deep reasoning, but they place too many demands on the knowledge engineers. No one can predict what innovations will be discovered in the future, but commercially successful systems must satisfy two criteria: first, they must solve problems for which a large number of people need solutions; second, they must have automated and semi-automated methods for acquiring, analyzing, and organizing the required knowledge. This is a slightly revised preprint of an article in Intelligence-based Software Engineering, edited by Andreas Tolk and Lakhmi C. Jain, Springer Verlag, Berlin, 2011, pp. 23-47.

Computers can process numbers, data structures, and even axioms in logic much faster than people can. But people take advantage of background knowledge that computers don't have. Hao Wang (1960), for example, wrote a program that proved all 378 theorems in propositional and first-order logic from the Principia Mathematica. On a slow vacuum-tube computer, Wang's program took an average of 1.1 seconds per theorem, far less time than Whitehead and Russell, the two brilliant logicians who wrote the book.
But the theorems in the Principia require a negligible amount of built-in knowledge: just five axioms and a few rules of inference. The computer Wang used had only 144K bytes of RAM, but that was sufficient to store the rules and axioms and manipulate them faster than professional logicians. During the 1970s and '80s, rule-based expert systems and programs for processing natural languages became quite sophisticated. But most applications required an enormous amount of background knowledge to produce useful results. Knowledge engineers and subject-matter experts (SMEs) had to encode that knowledge in formal logic or some informal rules, frames, or diagrams. The experts were usually highly paid professionals, such as physicians or geologists, and the knowledge engineers required long years of training in logic, ontology, conceptual analysis, systems design, and methods for interviewing the experts. For critical applications, the investment in knowledge acquisition produced significant results. For other applications, the cost of defining the knowledge might be justified, but the AI tools were not integrated with commercial software. Furthermore, most programmers did not know how to use AI languages and tools, and the cost of training people and adapting tools was too high for mainstream commercial applications. During the 1990s, vast amounts of data on the World Wide Web provided raw data for statistical methods. Machine learning, data mining, and knowledge discovery found patterns more cheaply and often more accurately than rules written by experts. The more challenging goal of language
2021
Modeling human behavior is a popular area of research. Special attention is focused on activities related to knowledge processing. It is knowledge that has a fundamental influence on an individual's decision-making and its dynamics. The subject of research is both the representation of knowledge and the procedures for processing it. This processing also comprises associative reasoning. Associations significantly influence the knowledge base used in processing stimuli and thus participate in creating a knowledge context that is further used for knowledge derivation and decision-making. This paper focuses on the area of associative knowledge processing. There are already classical approaches associated with developing probabilistic neural networks, which, with modifications, can also be used at a higher level of abstraction. This paper aims to show that associative processing of knowledge can be described and simulated with these approaches. The article presents a possible implementation of a model of knowledge storage and associative processing on an individual's knowledge base. The behavior of this model is demonstrated in experiments.
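The context-building role of associations described above can be sketched as a toy spreading-activation walk over a weighted concept graph. This is an illustrative sketch only: the node names, weights, decay constant, and the `spread` function are invented for the example and are not taken from the paper's probabilistic-neural-network model.

```python
# Toy spreading activation: a stimulus activates a concept node, and
# activation flows along weighted association links, decaying at each
# step. The most activated nodes form the current "knowledge context".

links = {
    "coffee":  {"cup": 0.8, "morning": 0.6},
    "cup":     {"tea": 0.5},
    "morning": {"breakfast": 0.7},
}

def spread(stimulus, decay=0.5, steps=2):
    """Return total activation per concept after spreading from `stimulus`."""
    activation = {stimulus: 1.0}
    frontier = {stimulus: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neighbour, weight in links.get(node, {}).items():
                gain = act * weight * decay
                nxt[neighbour] = nxt.get(neighbour, 0.0) + gain
                activation[neighbour] = activation.get(neighbour, 0.0) + gain
        frontier = nxt
    return activation

ctx = spread("coffee")
print(sorted(ctx, key=ctx.get, reverse=True))  # strongest associations first
```

Directly associated concepts end up more activated than those reached in two hops, which is the ranking effect a context model of this kind relies on.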
SIGART newsletter, 1991
Access-Limited Logic (ALL) is a theory of knowledge representation which formalizes the access limitations inherent in a network structured knowledge-base. Where a deductive method such as resolution would retrieve all assertions that satisfy a given pattern, an access-limited logic retrieves only those assertions reachable by following an available access path. The time complexity of inference in ALL is a polynomial function of the size of the accessible portion of the knowledge-base, rather than an exponential function of the size of the entire knowledge-base (as in much past work). Access-Limited Logic, though incomplete, still has a well defined semantics and a weakened form of completeness, Socratic Completeness, which guarantees that for any fact which is a logical consequence of the knowledge-base, there is a series of preliminary queries and assumptions after which a query of the fact will succeed. Algernon implements Access-Limited Logic. Algernon is important in testing the claims that common-sense knowledge can be encoded cleanly using access paths, and that in common-sense reasoning the preliminary queries and assumptions can generally be determined from domain knowledge. In this paper we overview the principles of ALL and discuss the application of Algernon to three domains: expert systems, qualitative model building, and logic puzzles.
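The access-path idea can be illustrated with a small sketch (the frame names and the `follow` helper are hypothetical, not Algernon's actual API): a query walks slot links outward from a known frame, so its cost grows with the reachable portion of the knowledge base rather than with the total number of assertions.

```python
# Toy access-limited retrieval. The knowledge base is a graph of frames
# with named slots; a query follows an access path of slot names from a
# starting frame instead of pattern-matching against every assertion.

KB = {
    "Fido":   {"isa": ["Dog"], "owner": ["Alice"]},
    "Dog":    {"isa": ["Animal"]},
    "Alice":  {"lives-in": ["Austin"]},
    "Austin": {"isa": ["City"]},
}

def follow(frame, path):
    """Return all frames reachable from `frame` along the slot path."""
    frontier = {frame}
    for slot in path:
        frontier = {v for f in frontier for v in KB.get(f, {}).get(slot, [])}
    return frontier

# "Where does Fido's owner live?" touches only the frames on the path,
# never the rest of the knowledge base.
print(follow("Fido", ["owner", "lives-in"]))  # {'Austin'}
```

A fact not reachable by any access path is simply not retrieved, which mirrors ALL's trade of completeness for bounded inference cost.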
2007
Abstract: In order to endow computers with common sense with respect to specific domains we need to have a representation of the world and make commitments about what knowledge is and how it is obtained. This paper is an attempt to introduce such a representation and underlying 'naive' logic on the basis of an analysis of the properties of cognitive activity.