Term rewriting: Some experimental results
1991, Journal of Symbolic Computation
Abstract
We discuss term rewriting in conjunction with sprfn, a Prolog-based theorem prover. Two techniques for theorem proving that utilize term rewriting are presented. We demonstrate their effectiveness by exhibiting the results of our experiments in proving some theorems of von Neumann-Bernays-Gödel set theory. Some outstanding problems associated with term rewriting are also addressed.
Related papers
1988
SATCHMO is a theorem prover consisting of just a few short and simple Prolog programs. Prolog may be used for representing problem clauses as well. SATCHMO is based on a model-generation paradigm. It is refutation-complete if used in a level-saturation manner. The paper provides a thorough report on experiences with SATCHMO: a considerable number of problems could be solved with surprising efficiency.
Logic Journal of IGPL, 1995
We provide an intensional semantics for certain elementary program transformations by describing a translation from these transformations to the derivations of a simple theory of operations and types and we show that this semantics is intensionally faithful. Our objective is to understand more precisely the intensional structure of a class of semi-formal program derivations.
2003
For term rewrite systems (TRSs), a huge number of automated termination analysis techniques have been developed during the last decades, and by automated transformations of Prolog programs to TRSs, these techniques can also be used to prove termination of Prolog programs. Very recently, techniques for automated termination analysis of TRSs have been adapted to prove asymptotic upper bounds for the runtime complexity of TRSs automatically. In this paper, we present an automated transformation from Prolog programs to TRSs such that the runtime of the resulting TRS is an asymptotic upper bound for the runtime of the original Prolog program (where the runtime of a Prolog program is measured by the number of unification attempts). Thus, techniques for complexity analysis of TRSs can now also be applied to prove upper complexity bounds for Prolog programs. Our experiments show that this transformational approach indeed yields more precise bounds than existing direct approaches for automat...
1997
For theorem proving in non-classical logics two similar approaches have been established: the matrix characterization of Wallen and Ohlbach's resolution calculus. In both cases the essential extension of the classical calculi is the need to unify the so-called prefixes of atomic formulae or so-called world-paths, respectively. We present a specialized string-unification algorithm which computes a minimal set of most...
1986
There are many ways of representing finite functions. Having chosen one, we must define the operations of composition of functions, the identity function and the operation of taking the inverse of a function. Then we must prove the facts (1)-(3) using that interpretation. We can always assume that a representation of a finite set of objects also gives an enumeration of it. Therefore we may represent finite functions as lists and use the axioms of LISP to prove facts about them. We give two different representations of permutations, one using association lists and the other using lists of numbers. In the first case the association list contains the graph of the function. Domain and range are represented by lists obtained in the obvious way from the given association list. In the second case the list contains the range in the order given by the domain. The domain is not represented by a list: rather it is a segment of the set of natural numbers. In this sense we have a more abstract representation, in which it is slightly easier to apply the pigeonhole principle as an abstract fact of arithmetic. This representation has traditionally been used in mathematics in order to talk about finite permutations. The application of the pigeonhole principle occurs at similar points of the proofs, but the second-order statement expressing it is instantiated by different functions. The improvement of efficiency obtained by higher-order logic is particularly obvious here. We also give two versions of the results for representations using lists of numbers. In the first version the operations of composition, identity and inverse are defined by predicates: we shall call it Permutation-Predicate, or PERMP. In the second these operations are defined by functions: we shall call this approach PERMF, for Permutation-Function.
The contrast between the representations through predicates and through functions is an aspect of the tension between extensional and intensional approaches to mathematics. This is relevant in general to the automatic verification of the correctness of programs. The way we dealt with this tension can be taken, in some sense, as the 'moral' of our experiment. We try to summarize our point in the following (idealized) history of the project. Suppose we have written a LISP program for permutations, using any representation, and we want to prove it correct 'by pencil and paper'. If we are willing to assume the pigeonhole principle as evident, and to justify the inferences by the label 'evident by elementary arithmetic', then the proof of correctness is fairly simple, no matter what representation one chooses. Only the forms of the inductions require some thought. On the other hand, if we try to check our proof mechanically, say using EKL, and have in our proof library only simple facts of arithmetic and of LISP, then the task may look discouraging. Too many facts of elementary arithmetic and LISP functions may be needed, especially if we stick to the original form of our recursive programs in a 'too constructive' fashion. This feeling of uneasiness is well known and perhaps unavoidable in the early stage of such enterprises as ours: since the first efforts of large-scale formalization of elementary mathematics (e.g. Russell and Whitehead's "Principia"), it became obvious that the amount of innocent presuppositions hidden in intuitive arguments grows to the size of a tropical forest in a full formalization. However, our experiments and many others show that some nontrivial results are indeed provable, when the basic proof libraries are reasonably furnished. It is also likely that simple improvements of EKL, such as more semantic attachments, will make our task easier.
Minor details in the choice of the representation and in the formulation of the results may have major consequences in terms of length of the proofs and feasibility of the project. For instance, the representation of permutations in terms of association lists makes most proofs easy applications of our induction principle, induction on association lists. However, more work is needed to show that these facts on association lists actually establish the desired facts about permutations: indeed the representation by association lists, unlike that by lists of numbers, is not unique. A permutation is represented by an equivalence class of association lists, not by a single association list. Hence one needs a canonical way to choose representatives, a normal form, that can be obtained e.g. by ordering the field of the permutations. It is reasonable to consider other representations having the uniqueness property. At first sight there seems to be no question that it is better to represent our operations by functions rather than by predicates. One can test this assumption by comparing our two versions PERMP and PERMF: to find a confirmation, one just looks at the treatment of composition of permutations and the proof that composition is associative. The operation on lists that represents composition of functions is better represented as a binary function, defined by recursion on the first list, rather than as a ternary predicate. Indeed in the first case we can use a straightforward proof by induction on the recursive definition of the functions, whereas in the second case predicates require some relatively complicated substitutions. Finding these substitutions would require a huge number of random attempts if they were done without human direction.
Interestingly enough, many other proofs employing list representation are easier when the notions in question are formulated using predicates rather than functions. This is true especially of proofs about the identity and the inverse of a permutation. In the version PERMP, such proofs are simply obtained by expanding the assumptions and the definitions. In the version PERMF, the recursive definitions may be quite complicated, and the inductive proofs become quite involved. This situation is in many ways analogous to problems in various areas of mathematics. In the representation through functions the intensional features of our programs are closely represented. On the contrary, in the representation through predicates only the extensional properties of our functions are relevant. It is well known that in most mathematical practice only extensional facts are considered. We may say that predicates allow slightly more abstract definitions of the operations than functions. In mathematics, often a small progress towards abstraction simplifies the presentation considerably. If we start our proof of correctness with the definitions contained in the version PERMF, we may find it convenient to look at the definitions of the operations in PERMP and to prove them first as lemmata. One can then use these facts in different contexts instead of going through longer direct proofs. Abstracting lemmata and breaking arguments into suitable parts is the basis for mathematical communication: it makes proofs 'easy to take and easy to remember'. This remark by Kreisel (a variation on a theme by Wittgenstein) is highly appropriate here. The readability of mechanical proofs depends on such devices even more than the readability of 'pencil and paper' proofs. An automatic proof of correctness of a previously written program may be too long and tedious for human consumption.
A better organization of the problem, based on more abstract consideration of the facts in question, may significantly increase the readability of such proofs. Some objections may be raised to our remarks. On one side, one may argue that it is unclear what counts as evidence in favour of our claims: isn't it after all just a question of mathematical taste? On the other side, even granting our claims, one may be a priori skeptical about the relevance of our investigation. Haven't we simply verified, through mechanical proof checking of mathematically trivial examples, the well-known fact that there are good and bad styles of mathematical presentation? Can we expect any interesting theoretical discovery to result from experiments of this kind? Such questions do not immediately provide all the relevant information. Proof checking is a practice of interaction between a user and a given technology, in which human capacities, technical possibilities, linguistic features and methods of interaction are all relevant. For instance, we know from the Normalization Theorem in Proof Theory that direct proofs are generally longer than those using lemmata. It is very well possible that different languages or different theorem provers may suggest different strategies of proof checking. In particular, we cannot rule out the possibility that a language may be created, a technology produced and an experiment exhibited in which, say, most of our lemmata have convenient direct proofs. Only experience can decide. But given a certain technology, practice does indeed show what directions are convenient and what projects feasible. Strategies and methods of proofs, not only the subjective qualities of the user, are decisive in determining the success of a project. On the other hand, no matter how plausible the reasons of the skeptic may look, the performance of automatic proof checkers has been remarkably improved since the first experiments. Instruments are available that allow a 'microscopic' analysis of mathematical proofs: a certain amount of experimentation has already been performed...
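The PERMP/PERMF contrast described in the excerpt above can be sketched in modern terms. The following Python fragment is our own illustration (with invented names, not the EKL formalization): it shows the two representations of a permutation, composition defined as a binary function by recursion on the first list (PERMF style), and composition recast as a ternary checking predicate (PERMP style).

```python
# Illustrative sketch only, assuming 0-based permutations on {0, ..., n-1}.
# Association-list style: the graph of the function as (argument, value)
# pairs.  Not unique -- any reordering of the pairs denotes the same
# permutation, so a canonical ordering is needed for normal forms.
assoc = [(0, 2), (1, 0), (2, 1)]

# List-of-numbers style: position i holds the image of i; the domain is
# implicitly the segment {0, ..., len(p)-1}.  This representation is unique.
nums = [2, 0, 1]

def compose(p, q):
    """PERMF style: composition as a binary *function*, defined by
    recursion on the first list; (p;q)(i) = q(p(i))."""
    if not p:
        return []
    return [q[p[0]]] + compose(p[1:], q)

def compose_rel(p, q, r):
    """PERMP style: composition as a ternary *predicate* that merely
    checks that r is the composition of p and q."""
    return len(r) == len(p) and all(r[i] == q[p[i]] for i in range(len(p)))
```

With the functional definition, associativity of composition follows by a direct induction on the recursive structure; with the predicate, one must first exhibit the intermediate composite lists, which mirrors the excerpt's point about complicated substitutions.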
2001
ǫ-terms, introduced by David Hilbert, have the form ǫx.φ, where x is a variable and φ is a formula. Their syntactical structure is thus similar to that of a quantified formula, but they are terms, denoting 'an element for which φ holds, if there is any'. The topic of this paper is an investigation into the possibilities and limits of using ǫ-terms for automated theorem proving. We discuss the relationship between ǫ-terms and Skolem terms (which both can be used alternatively for the purpose of ∃-quantifier elimination), in particular with respect to efficiency and intuition. We also discuss the consequences of allowing ǫ-terms in theorems (and cuts). This leads to a distinction between (essentially two) semantics and corresponding calculi, one enabling efficient automated proof search, and the other one requiring human guidance but enabling a very intuitive (i.e. semantic) treatment of ǫ-terms. We give a theoretical foundation of the usage of both variants in a single framework. Finally, we argue that these two approaches to ǫ are just the extremes of a range of ǫ-treatments, corresponding to a range of different possible Skolemization variants.
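The alternative the abstract describes can be made concrete with a standard textbook illustration (not taken from the paper itself): an existential quantifier may be eliminated either by substituting an ǫ-term or by Skolemizing with a fresh function symbol.

```latex
% Hilbert's critical formula: the epsilon-term denotes a witness if one exists,
% so the elimination is an equivalence:
\exists x.\,\varphi(x) \;\leftrightarrow\; \varphi(\epsilon x.\varphi(x))

% Eliminating an existential under a universal quantifier:
%   via an epsilon-term (still an equivalence; the dependence on y is
%   carried inside the term):
\forall y\,\exists x.\,\varphi(x,y)
  \;\leftrightarrow\; \forall y.\,\varphi(\epsilon x.\varphi(x,y),\,y)
%   via Skolemization (f a fresh function symbol; only equisatisfiable):
\forall y\,\exists x.\,\varphi(x,y)
  \;\rightsquigarrow\; \forall y.\,\varphi(f(y),\,y)
```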
Journal of Symbolic Computation, 2006
This paper presents some fundamental aspects of the design and the implementation of an automated prover for Zermelo-Fraenkel set theory within the Theorema system. The method applies the "Prove-Compute-Solve"-paradigm as its major strategy for generating proofs in a natural style for statements involving constructs from set theory.
1996
We present the implementation of a term rewriting procedure based on congruence closure. The procedure can be used with arbitrary equational theories. It uses context-free grammars to represent equivalence classes of terms. This representation is motivated by the need to handle equational theories where confluence cannot be achieved under traditional term rewriting. Context-free grammars provide a concise representation of arbitrary-sized equivalence classes of terms.
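The core idea, congruence closure over equivalence classes of terms, can be illustrated with a toy version. This Python sketch is our own illustration: it omits the paper's context-free-grammar representation of classes and simply propagates congruences over a fixed term universe to a fixpoint (quadratic, but enough to show the mechanism).

```python
class CongruenceClosure:
    """Naive congruence closure.  Terms are strings (constants) or tuples
    (f, arg1, ..., argn).  All terms must be supplied up front; merge()
    then asserts an equation and closes under congruence."""

    def __init__(self, terms):
        self.terms = set()
        for t in terms:
            self._add(t)                       # collect all subterms too
        self.parent = {t: t for t in self.terms}

    def _add(self, t):
        self.terms.add(t)
        if isinstance(t, tuple):
            for arg in t[1:]:
                self._add(arg)

    def find(self, t):
        while self.parent[t] != t:             # union-find representative
            t = self.parent[t]
        return t

    def _union(self, s, t):
        s, t = self.find(s), self.find(t)
        if s != t:
            self.parent[s] = t

    def merge(self, s, t):
        """Assert s = t, then repeatedly merge any two applications with
        the same head whose arguments are pairwise equal (congruence)."""
        self._union(s, t)
        changed = True
        while changed:
            changed = False
            for a in self.terms:
                for b in self.terms:
                    if (isinstance(a, tuple) and isinstance(b, tuple)
                            and a[0] == b[0] and len(a) == len(b)
                            and self.find(a) != self.find(b)
                            and all(self.find(x) == self.find(y)
                                    for x, y in zip(a[1:], b[1:]))):
                        self._union(a, b)
                        changed = True

    def equal(self, s, t):
        return self.find(s) == self.find(t)
```

For example, after asserting a = b, the closure derives f(a) = f(b) and hence g(f(a)) = g(f(b)). The grammar-based representation in the paper addresses what this toy version cannot: classes that are infinite, as arise when confluence fails.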
1990
We show that the familiar explanation-based generalization (EBG) procedure is applicable to a large family of programming languages, including three families of importance to AI: logic programming (such as Prolog); lambda calculus (such as LISP);
2004
We address the problem of an efficient rewriting strategy for general term rewriting systems. Several strategies have been proposed over the last two decades for rewriting, the most efficient of all being the natural rewriting strategy . All the strategies so far, including natural rewriting, assume that the given term rewriting system is a left-linear constructor system. Although these restrictions are reasonable for some functional programming languages, they limit the expressive power of equational languages, and they preclude certain applications of rewriting to equational theorem proving and to languages combining equational and logic programming. In this paper, we propose a conservative generalization of natural rewriting that does not require the rules to be left-linear and constructor-based. We also establish the soundness and completeness of this generalization.

References (9)
- The prover's term rewriting facility produced the following set of input clauses. Clauses derived from hom(ah1,as1,af1,as2,af2): eq(apply(ah1,apply(af1,ord_pair(o1,o2))), apply(af2,ord_pair(apply(ah1,o1),apply(ah1,o2)))) :- el(o1,as1), el(o2,as1). maps(ah1,as1,as2). closed(as2,af2). closed(as1,af1). Clauses derived from hom(ah2,as2,af2,as3,af3): eq(apply(ah2,apply(af2,ord_pair(o3,o4))), apply(af3,ord_pair(apply(ah2,o3),apply(ah2,o4)))) :- el(o3,as2), el(o4,as2). maps(ah2,as2,as3). closed(as3,af3). closed(as2,af2). Clauses derived from not(hom(compose(ah2,ah1),as1,af1,as3,af3)): el(g5,as1). el(g6,as1). false :- eq(apply(ah2,apply(ah1,apply(af1,ord_pair(g5,g6
- maps(compose(ah2,ah1),as1,as3), closed(as3,af3), closed(as1,af1). Note that our top-level goal has become: false :- eq(apply(ah2,apply(ah1,apply(af1,ord_pair(g5,g6
- In addition to these input clauses, we added three axioms. The first two of these are trivial, while the third, although non-trivial, can be derived by the prover in 24.63 cpu seconds after 15 inferences.
- Plaisted, D.A., 'A simplified problem reduction format', Artificial Intelligence 18 (1982) 227-261.
- Boyer, Robert, Lusk, Ewing, McCune, William, Overbeek, Ross, Stickel, Mark, and Wos, Lawrence, 'Set theory in first-order logic: clauses for Gödel's axioms', Journal of Automated Reasoning 2 (1986) 287-327.
- Plaisted, D.A., 'Another extension of Horn clause logic programming to non-Horn clauses', Lecture Notes, 1987.
- Plaisted, D.A., 'Non-Horn clause logic programming without contrapositives', unpublished manuscript, 1987.
- Loveland, D.W., Automated Theorem Proving: A Logical Basis, North-Holland Publishing Co., 1978, Chapter 6.
- Korf, R.E., 'Depth-first iterative-deepening: an optimal admissible tree search', Artificial Intelligence 27 (1985) 97-109.