On the complexity of beta-reduction
1996
https://doi.org/10.1145/237721.237742
9 pages
Abstract
We prove that the complexity of Lamping's optimal graph reduction technique for the λ-calculus can be exponential in the number of Lévy's family reductions. Starting from this consideration, we propose a new measure for what could be considered as "the intrinsic complexity" of λ-terms.
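To fix intuitions about the objects being measured, here is a minimal Haskell sketch (illustrative names only; this is plain normal-order β-reduction, not the paper's Lamping-style graph reduction) of a reducer with a step counter. Feeding it chains of Church numerals such as 2 2 2 … 2 shows how the number of β-steps and the size of intermediate terms can explode even for tiny inputs: each additional 2 exponentiates the size of the normal form, so the work grows non-elementarily in the length of the chain.

    -- Minimal sketch: normal-order beta-reduction with capture-avoiding
    -- substitution and a step counter. Not the paper's algorithm.
    import Data.List (union, delete)

    data Term = Var String | Lam String Term | App Term Term deriving Show

    free :: Term -> [String]
    free (Var x)   = [x]
    free (Lam x b) = delete x (free b)
    free (App f a) = free f `union` free a

    fresh :: [String] -> String -> String
    fresh used x = head [x' | x' <- x : [x ++ show i | i <- [1 :: Int ..]], x' `notElem` used]

    -- capture-avoiding substitution  t[x := s]
    subst :: String -> Term -> Term -> Term
    subst x s (Var y) | x == y    = s
                      | otherwise = Var y
    subst x s (App f a) = App (subst x s f) (subst x s a)
    subst x s (Lam y b)
      | y == x          = Lam y b
      | y `elem` free s = let y' = fresh (free s `union` free b) y
                          in Lam y' (subst x s (subst y (Var y') b))
      | otherwise       = Lam y (subst x s b)

    -- one leftmost-outermost (normal-order) beta step, if any
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = case step f of
                               Just f' -> Just (App f' a)
                               Nothing -> App f <$> step a
    step (Lam x b)         = Lam x <$> step b
    step (Var _)           = Nothing

    -- normalize, counting beta steps
    nf :: Term -> (Term, Int)
    nf t = go t 0
      where
        go u n = case step u of
                   Just u' -> go u' (n + 1)
                   Nothing -> (u, n)

    -- Church numeral n = \f.\x. f (f (... x))
    church :: Int -> Term
    church n = Lam "f" (Lam "x" (iterate (App (Var "f")) (Var "x") !! n))

    -- Example: count the beta steps needed to normalize ((2 2) 2) g y.
    main :: IO ()
    main = print (snd (nf (foldl App (church 2) [church 2, church 2, Var "g", Var "y"])))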
Related papers
Fundamenta Informaticae, 1995
The paper discusses, in a categorical perspective, some recent works on optimal graph reduction techniques for the λ-calculus. In particular, we relate the two "brackets" in [GAL92a] to the two operations associated with the comonad "!" of Linear Logic. The rewriting rules can then be understood as a "local implementation" of naturality laws, that is, as the broadcasting of some information from the output to the inputs of a term, following its connected structure.
1997
Dedicated to the memory of Professor Helena Rasiowa. Abstract: We define a new unification problem, which we call β-unification and which can be used to characterize the β-strong normalization of terms in the λ-calculus. We prove the undecidability of β-unification, its connection with the system of intersection types, and several of its basic properties.
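A standard worked example of the connection with intersection types mentioned above (not taken from the paper): in intersection type systems, typability characterizes strong normalization. For instance,

    λx. x x : (σ ∧ (σ → τ)) → τ

is typable because the two occurrences of x may be given the two different types σ and σ → τ, whereas Ω = (λx. x x)(λx. x x), which β-reduces to itself forever, receives no intersection type at all.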
Lecture Notes in Computer Science, 2002
In classical logic, existential and universal quantifiers express that there exists at least one individual satisfying a formula, or that all individuals satisfy a formula. In many logics, these quantifiers have been generalized to express that, for a non-negative integer n, at least n individuals or all but n individuals satisfy a formula. In modal logics, graded modalities generalize standard existential and universal modalities in that they express, e.g., that there exist at least n accessible worlds satisfying a certain formula. Graded modalities are useful expressive means in knowledge representation; they are present in a variety of other knowledge representation formalisms closely related to modal logic. A natural question that arises is how the generalization of the existential and universal modalities affects the decidability problem for the logic and its computational complexity, especially when the numbers in the graded modalities are coded in binary. In this paper we study the graded µ-calculus, which extends graded modal logic with fixed-point operators, or, equivalently, extends the classical µ-calculus with graded modalities. We prove that the decidability problem for the graded µ-calculus is EXPTIME-complete, not harder than the decidability problem for the µ-calculus, even when the numbers in the graded modalities are coded in binary. For example, predicate logic has been extended with so-called counting quantifiers [GOR97, GMV99, PST00]. In modal logics, graded modalities [Fin72, vdHD95, Tob01] generalize standard existential and universal modalities in that they express, e.g., that there exist at least n accessible worlds satisfying a certain formula. In description logics, number restrictions have always played a central role; e.g., they are present in almost all knowledge-representation systems based on description logic [PSMB+91, BFH+94, Hor98, HM01]. Indeed, in a typical such system, one can describe cars as those vehicles having at least four wheels, and bicycles as those vehicles having exactly two wheels. A natural question that arises is how the generalization of the existential and universal quantifiers affects the decidability problem for the logic and its computational complexity. The complexity of a variety of description logics with different forms of number restrictions has been investigated; see, e.g., [DLNdN91, HB91, DL94b, BBH96, BS99, Tob00]. It turned out that, in many cases, one can extend a logic with these forms of counting quantifiers without increasing its computational complexity. On the other hand, in some cases the extension makes the logic much more complex. A prominent example is the guarded fragment of first-order logic, which becomes undecidable when extended with a very weak form of counting quantifiers (global functionality conditions on binary relations) [Grä99]. When investigating the complexity of a logic with a form of counting quantifiers, one must decide how the numbers in these quantifiers contribute to the length of a formula, i.e., to the input of a decision procedure. Assuming that these numbers are coded in unary (i.e.,
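As a concrete illustration of what a graded modality asserts, here is a toy Haskell sketch (names and representation are my own; this is not the paper's decision procedure, and the fixed-point operators of the µ-calculus are not modelled): given an explicitly listed finite Kripke structure, check whether a world has at least n successors satisfying an atomic proposition.

    -- Toy check of the graded "diamond": does world w have at least n
    -- successors satisfying proposition p? Illustrative names only.
    import qualified Data.Map as M
    import Data.Maybe (fromMaybe)

    type World = Int

    data Kripke = Kripke
      { succs :: M.Map World [World]   -- accessibility relation
      , label :: M.Map World [String]  -- atomic propositions true at each world
      }

    holds :: Kripke -> String -> World -> Bool
    holds k p w = p `elem` fromMaybe [] (M.lookup w (label k))

    -- graded existential modality: at least n accessible worlds satisfy p
    diamondGE :: Kripke -> Int -> String -> World -> Bool
    diamondGE k n p w =
      length (filter (holds k p) (fromMaybe [] (M.lookup w (succs k)))) >= n

    example :: Kripke
    example = Kripke
      { succs = M.fromList [(0, [1, 2, 3]), (1, []), (2, []), (3, [])]
      , label = M.fromList [(0, []), (1, ["wheel"]), (2, ["wheel"]), (3, [])]
      }

    main :: IO ()
    main = print (diamondGE example 2 "wheel" 0)  -- True: at least 2 "wheel" successors

The paper's concern is deciding such statements over all models of a formula, with n written in binary; the sketch merely evaluates one fixed model.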
SIAM Journal on Discrete Mathematics, Vol. 8, 464-483, 1995
The Hajós calculus is a simple, nondeterministic procedure that generates the class of non-3-colorable graphs. Mansfield and Welsh posed the question of whether there exist graphs that require exponential-sized Hajós constructions. Unless NP = coNP, there must exist graphs that require exponential-sized constructions, but to date, little progress has been made on this question, despite considerable effort. In this paper, we prove that the Hajós calculus generates polynomial-sized constructions for all non-3-colorable graphs if and only if extended Frege systems are polynomially bounded. Extended Frege systems are a very powerful family of proof systems for proving tautologies, and proving superpolynomial lower bounds for these systems is a long-standing, important problem in logic and complexity theory. We also establish a relationship between a complete subsystem of the Hajós calculus and bounded-depth Frege systems; this enables us to prove exponential lower bounds on this subsystem of the Hajós calculus. 1. Introduction. The Hajós calculus (or Hajós construction) is a simple, nondeterministic procedure for generating the class of graphs that are not k-colorable [Haj]. Mansfield and Welsh [MW] posed the problem of determining the complexity of this procedure; in particular, it is an open problem whether or not there exists a polynomial-sized Hajós construction for every non-3-colorable graph. Because graph 3-colorability is NP-complete, if there were polynomial-sized Hajós constructions of all non-3-colorable graphs, then NP = coNP, so we expect that the Hajós calculus is not polynomially bounded. However, there has been very little progress toward a proof of this conjecture, despite considerable effort. The main result of this paper is a proof that the Hajós calculus is polynomially bounded if and only if extended Frege proof systems are polynomially bounded. This result links an open problem in graph theory to an important open problem in the complexity of propositional proof systems. It also shows that the complexity problem for the Hajós calculus is very difficult, since extended Frege systems are a very powerful class of proof systems for the propositional calculus and no techniques that appear adequate to prove superpolynomial lower bounds for them currently exist. In addition, we study a subsystem of the Hajós calculus, which is still powerful enough to generate all non-3-colorable graphs. Our results, together with recent lower bounds for bounded-depth Frege proofs [Ajtl], [PBI], [KPW], enable us to prove an exponential lower bound for this subsystem of the Hajós calculus.
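For readers unfamiliar with the calculus, its central inference step is the Hajós join: given a graph G containing an edge ab and a graph H containing an edge cd, delete both edges, identify a with c, and add the edge bd; the join preserves non-3-colorability, and together with the calculus's other rules (vertex identification, adding vertices and edges), starting from K4 it generates all non-3-colorable graphs. Here is a small Haskell sketch of that single step; the graph representation and names are mine, and the other rules are omitted.

    -- Sketch of the Hajós join on edge-set graphs. Illustrative names only.
    import qualified Data.Set as S

    type Vertex = Int
    type Graph  = S.Set (Vertex, Vertex)   -- undirected edges stored as ordered pairs

    edge :: Vertex -> Vertex -> (Vertex, Vertex)
    edge u v = (min u v, max u v)

    k4 :: Graph
    k4 = S.fromList [edge u v | u <- [0 .. 3], v <- [0 .. 3], u < v]

    -- Hajós join of (G, edge a b) and (H, edge c d); H's vertices must be
    -- disjoint from G's (the caller relabels them, e.g. with shiftBy).
    hajosJoin :: Graph -> (Vertex, Vertex) -> Graph -> (Vertex, Vertex) -> Graph
    hajosJoin g (a, b) h (c, d) =
      let g'       = S.delete (edge a b) g
          h'       = S.delete (edge c d) h
          rename v = if v == c then a else v          -- identify c with a
          h''      = S.map (\(u, v) -> edge (rename u) (rename v)) h'
      in S.insert (edge b d) (g' `S.union` h'')

    shiftBy :: Int -> Graph -> Graph
    shiftBy k = S.map (\(u, v) -> (u + k, v + k))

    -- Example: join two copies of K4 and print the resulting edge count.
    main :: IO ()
    main = print (S.size (hajosJoin k4 (0, 1) (shiftBy 4 k4) (4, 5)))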
Calculi of Generalized β-Reduction
The Journal of Functional and Logic Programming is a peer-reviewed, electronically published scholarly journal that covers a broad range of topics in functional and logic programming. In particular, it focuses on the integration of the functional and logic paradigms as well as their common foundations.
2010
We investigate the complexity of cut-reduction on proof notations, in particular identifying situations where cut-reduction operates feasibly, i.e., in sub-exponential time. We then apply the machinery to characterise the definable search problems in Bounded Arithmetic.
1993
In this dissertation Lambda Calculus reduction is studied as a means of improving the support for declarative computing. We consider systems having reduction semantics; i.e., systems in which computations consist of equivalence-preserving transformations between expressions. The approach becomes possible by reducing expressions beyond weak normal form, allowing expression-level output values, and avoiding compilation-centered transformations. In particular, we develop reduction algorithms which, although not optimal, are highly efficient. A minimal linear notation for lambda expressions and for certain runtime structures is introduced for explaining operational features. This notation is related to recent theories which formalize the notion of substitution. Our main reduction technique is Berkling's Head Order Reduction (HOR), a delayed substitution algorithm which emphasizes the extended left spine. HOR uses the de Bruijn representation for variables and a mechanism for artificially binding relatively free variables. HOR produces a lazy variant of the head normal form, the natural midway point of reduction. It is shown that beta reduction in the scope of relatively free variables is not hard. Full normalization suggests new applications by not relegating partial evaluation to a meta level. Variations of HOR are presented, including a conservative "breadth-first" one which takes advantage of the inherent parallelism of the head normal form. A reduction system must be capable of sharing intermediate results. Sharing under HOR has not received attention to date. In this dissertation variations of HOR which achieve sharing are described. Sharing is made possible via the special treatment of expressions referred to by head variables. The reduction strategy is based on normal order, achieves low reduction counts, but is shown to be incomplete. Head Order Reduction with and without sharing, as well as other competing algorithms, are evaluated on several test sets. Our results indicate that reduction rates in excess of one million reductions/second can be achieved on current processors in interpretive mode and with minimal pre- and post-processing. By extending the efficient algorithms for the pure calculus presented in this dissertation with primitives and data structures it is now possible to build useful reduction systems. We present some suggestions on how such systems can be designed.
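The abstract mentions the de Bruijn representation of variables; as a point of reference (a textbook sketch in Haskell, not Berkling's Head Order Reduction itself), here is β-reduction on de Bruijn-indexed terms, where substitution is implemented with an index-shifting operation instead of variable renaming.

    -- Textbook sketch of beta-reduction on de Bruijn-indexed terms:
    -- "shift" renumbers free indices, "subst" replaces a given index.
    data DB = DVar Int | DLam DB | DApp DB DB deriving Show

    -- add d to every variable index >= cutoff c (i.e., to the free variables)
    shift :: Int -> Int -> DB -> DB
    shift d c (DVar k)   = DVar (if k >= c then k + d else k)
    shift d c (DLam b)   = DLam (shift d (c + 1) b)
    shift d c (DApp f a) = DApp (shift d c f) (shift d c a)

    -- substitute s for index j
    subst :: Int -> DB -> DB -> DB
    subst j s (DVar k)   = if k == j then s else DVar k
    subst j s (DLam b)   = DLam (subst (j + 1) (shift 1 0 s) b)
    subst j s (DApp f a) = DApp (subst j s f) (subst j s a)

    -- contract a top-level redex (\.b) a
    betaStep :: DB -> Maybe DB
    betaStep (DApp (DLam b) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
    betaStep _                 = Nothing

    -- Example: (\x. \y. x) z  -->  \y. z, i.e. (\ \ 1) 0  -->  \ 1
    main :: IO ()
    main = print (betaStep (DApp (DLam (DLam (DVar 1))) (DVar 0)))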
Information Processing Letters, 1995
The classical notion of β-reduction in the λ-calculus has an arbitrary syntactically-imposed sequentiality. A new notion of reduction β′ is defined which is a generalization of β-reduction. This notion of reduction is shown to satisfy the Church-Rosser property as well as some other fundamental properties, leading to the conclusion that this generalized notion of β′-reduction can be used in place of β-reduction without sacrificing any of the fundamental properties.
Rewriting Techniques and …, 2002
Archive for Mathematical Logic, 2003
For a fixed q ∈ N and a given Σ1 definition φ(d, x), where d is a parameter, we construct a model M of IΔ0 + ¬exp and a nonstandard d ∈ M such that in M either φ has no witness smaller than d or φ is equivalent to a formula ϕ(d, x) having no more than q alternations of blocks of quantifiers.
