Lecture Notes on CS8602 - Compiler Design - Unit5(R2017)
Abstract
This document covers optimization techniques in compiler design, classifying optimizations as machine-independent or machine-dependent. It discusses the criteria for effective code-improving transformations, emphasizing that a transformation must preserve program semantics, should improve performance at one or more levels of abstraction, and may draw on data-flow analysis algorithms. It also explores directed acyclic graphs (DAGs) as a representation of basic blocks that exposes and eliminates redundant computations.
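The DAG-based elimination of redundant computations mentioned in the abstract can be sketched with a small value-numbering pass over three-address code. This is an illustrative sketch only: the statement format and names are assumptions, and each variable is assumed to be assigned at most once (an SSA-like restriction; real value numbering must also handle reassignment).

```python
# Minimal sketch of DAG construction for a basic block, assuming
# three-address statements of the form (dest, op, arg1, arg2).
# Identical (op, arg1, arg2) triples map to one DAG node, so a
# redundant recomputation such as "a + b" below is detected.

def build_dag(block):
    nodes = {}      # (op, left_id, right_id) -> DAG node id
    defs = {}       # variable name -> node id holding its value
    next_id = 0

    def leaf(name):
        # a variable not yet defined in this block becomes a leaf node
        nonlocal next_id
        if name not in defs:
            defs[name] = next_id
            nodes[("leaf", name, None)] = next_id
            next_id += 1
        return defs[name]

    reused = []
    for dest, op, a, b in block:
        key = (op, leaf(a), leaf(b))
        if key in nodes:            # common subexpression: reuse node
            reused.append(dest)
        else:
            nodes[key] = next_id
            next_id += 1
        defs[dest] = nodes[key]
    return defs, reused

block = [
    ("t1", "+", "a", "b"),
    ("t2", "*", "t1", "c"),
    ("t3", "+", "a", "b"),   # redundant: same DAG node as t1
]
defs, reused = build_dag(block)
print(reused)                     # t3 recomputes a + b
print(defs["t1"] == defs["t3"])   # both names point at one node
```

After the pass, every use of t3 can be rewritten to use t1, and the second addition is never emitted.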
Related papers
International Journal of Engineering and Advanced Technology, 2020
Ever switched programming languages? If so, you know how difficult it is to learn a new syntax and become familiar with a new language. But what if we could write code in our preferred language and have it run as any other language's code? Whatever we write is ultimately converted to 0s and 1s; the only difference is how those 0s and 1s are presented to the machine. We may need different languages, but what if code written in the syntax of one language could run reasonably well as if it had been written in the syntax of another? This is where a compiler comes in [1]. The aim of this paper is to develop a compiler that can create new code for another language based on the machine code produced by other languages. This compiler addresses two problems: the syntax issue and the universal compiler.
Like its ancestor, it is intended as a text for a first course in compiler design. The emphasis is on solving problems universally encountered in designing a language translator, regardless of the source or target machine.
Proceedings of the 1982 SIGPLAN symposium on Compiler construction - SIGPLAN '82, 1982
We are developing an optimizing compiler for a dialect of the LISP language. The current target architecture is the S-1, a multiprocessing supercomputer designed at Lawrence Livermore National Laboratory. While LISP is usually thought of as a language primarily for symbolic processing and list manipulation, this compiler is also intended to compete with the S-1 PASCAL and FORTRAN compilers for quality of compiled numerical code. The S-1 is designed for extremely high-speed signal processing as well as for symbolic computation; it provides primitive operations on vectors of floating-point and complex numbers. The LISP compiler is designed to exploit the architecture heavily. The compiler is structurally and conceptually similar to the BLISS-11 compiler and the compilers produced by PQCC. In particular, the TNBIND technique has been borrowed and extended. Particularly interesting properties of the compiler are:
• Extensive use of source-to-source transformations.
• Use of an intermediate form that is expression-oriented rather than statement-oriented.
• Exploitation of tail-recursive function calls to represent complex control structures.
• Efficient compilation of code that can manipulate procedural objects requiring heap-allocated environments.
• Smooth run-time interfacing between the "numerical world" and the "LISP pointer world", including automatic stack allocation of objects that would ordinarily have to be heap-allocated.
Each of these techniques has been used before, but we believe their synthesis to be original and unique. The compiler is table-driven to a great extent, more so than BLISS-11 but less so than a PQCC compiler. We expect to be able to redirect the compiler to other target architectures such as the VAX or PDP-10 with relatively little effort.
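The tail-call point above can be illustrated by hand (a hedged Python sketch, not the paper's LISP compiler): a tail-recursive function, where the recursive call is the very last action, can be compiled into a loop that updates the parameters and jumps, so it consumes no stack.

```python
# Two semantically equivalent forms of Euclid's algorithm: the
# tail-recursive source and the loop a compiler can turn it into.

def gcd_rec(a, b):
    if b == 0:
        return a
    return gcd_rec(b, a % b)   # tail call: nothing happens after it

def gcd_loop(a, b):
    # compiled form: the tail call becomes parameter update + jump
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_rec(252, 105), gcd_loop(252, 105))  # both yield 21
```

The rewrite matters because it lets tail calls express loops and state machines with no per-call overhead, which is exactly how the abstract's "complex control structures" are made cheap.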
2009
As we move through the multi-core era into the many-core era it becomes obvious that thread-based programming is here to stay. This trend in the development of general purpose hardware is augmented by the fact that while writing sequential programs is considered a non-trivial task, writing parallel applications to take advantage of the advances in the number of cores in a processor severely complicates the process. Writing parallel applications requires programs and functions to be reentrant. Therefore, we cannot use globals and statics. However, globals and statics are useful in certain contexts. Globals allow an easy programming mechanism to share data between several functions. Statics provide the only mechanism of data hiding in C for variables that are global in scope. Writing parallel programs restricts users from using globals and statics in their programs, as doing so would make the program non-reentrant. Moreover, there is a large existing legacy code base of sequential programs that are non-reentrant, since they rely on statics and globals. Several of these sequential programs display significant amounts of data parallelism by operating on independent chunks of input data, and therefore can be easily converted into parallel versions to exploit multi-core processors. Indeed, several such programs have been manually converted into parallel versions. However, manually eliminating all globals and statics to make the program reentrant is tedious, time-consuming, and error-prone. In this paper we describe a system to provide a semi-automated mechanism for users to still be able to use statics and globals in their programs, and to let the compiler automatically convert them into their semantically-equivalent reentrant versions enabling their parallelization later.
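The globals-to-reentrant rewrite described above can be sketched as follows. This is an illustrative Python analogue, not the paper's actual system, and all names are hypothetical: the hidden global state is moved into an explicit context object passed by the caller, so independent threads can each use their own context.

```python
# Non-reentrant original: a hidden shared global makes concurrent
# callers interfere with each other.
counter = 0
def next_id_global():
    global counter
    counter += 1
    return counter

# Reentrant rewrite: the state is threaded through a context
# parameter instead of living in a global.
class Ctx:
    def __init__(self):
        self.counter = 0

def next_id(ctx):
    ctx.counter += 1
    return ctx.counter

a, b = Ctx(), Ctx()
print(next_id(a), next_id(a), next_id(b))  # 1 2 1: contexts are independent
```

The mechanical nature of this rewrite (collect globals/statics into a struct, add a parameter, rewrite every access) is what makes the paper's semi-automated approach feasible.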
2015
Many software evolution and maintenance problems can be addressed through techniques of program transformation. To facilitate development of language tools assisting software evolution and maintenance, we created a Domain-Specific Language (DSL), named SPOT (Specifying PrOgram Transformation), which can be used to raise the abstraction level of code modification. The design goal is to automate source-to-source program transformations through techniques of code generation, so that developers only need to specify desired transformations using constructs provided by the DSL while remaining oblivious to the details of how the transformations are performed. The paper provides a general motivation for using program transformation techniques and explains the design details of SPOT. In addition, we present a case study to illustrate how SPOT can be used to build a code coverage tool for applications implemented in different programming languages.
Lecture Notes in Computer Science, 2006
The world of program optimization and transformation takes on a new fascination when viewed through the lens of program calculation. Unlike the traditional fold/unfold approach to program transformation on arbitrary programs, the calculational approach imposes restrictions on program structures, resulting in some suitable calculational forms such as homomorphisms and mutumorphisms that enjoy a collection of generic algebraic laws for program manipulation. In this tutorial, we will explain the basic idea of program calculation, demonstrate that many program optimizations and transformations, such as the optimization technique known as loop fusion and the parallelization transformation, can be concisely reformalized in calculational form, and show that program transformation in calculational forms is of higher modularity and more suitable for efficient implementation.
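The loop fusion law mentioned above can be demonstrated directly. This is a minimal Python sketch of the calculational identity map f ∘ map g = map (f ∘ g): two traversals of a list are fused into one, with the algebraic law guaranteeing the result is unchanged.

```python
# Loop fusion as a calculational law: composing the mapped functions
# replaces two passes over the data with one.

def compose(f, g):
    return lambda x: f(g(x))

xs = [1, 2, 3, 4]
double = lambda x: 2 * x
inc = lambda x: x + 1

unfused = list(map(double, map(inc, xs)))      # two traversals
fused = list(map(compose(double, inc), xs))    # one traversal
print(unfused == fused)   # the fusion law preserves the result
```

This is the appeal of the calculational approach: because the program is in a restricted form (a map), the optimization is a generic equation that can be applied mechanically rather than an ad hoc fold/unfold derivation.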
IRJET, 2021
Research in compiler construction has been one of the core research areas in computing. Researchers in this domain try to understand how a computer system and computer languages associate. A compiler translates code written in human-readable form (source code) to target code (machine code) that is efficient and optimized in terms of time and space without altering the meaning of the source program. This paper aims to explain what a compiler is and give an overview of the stages involved in translating computer programming languages.
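The translation stages described above can be illustrated end to end on a single expression. This is a toy sketch under assumed names (the grammar, node shapes, and instruction set are all inventions for illustration): lexical analysis splits the source into tokens, parsing builds a tree that encodes precedence, and code generation emits a stack-machine program whose meaning matches the source.

```python
import re

def lex(src):
    # lexical analysis: source text -> token stream
    return re.findall(r"\d+|[+*]", src)

def parse(tokens):
    # syntax analysis (recursive descent):
    #   expr -> term ('+' term)*   term -> num ('*' num)*
    pos = [0]
    def next_tok():
        t = tokens[pos[0]]; pos[0] += 1
        return t
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None
    def term():
        node = ("num", int(next_tok()))
        while peek() == "*":
            next_tok()
            node = ("*", node, ("num", int(next_tok())))
        return node
    def expr():
        node = term()
        while peek() == "+":
            next_tok()
            node = ("+", node, term())
        return node
    return expr()

def gen(node, out):
    # code generation: tree -> stack-machine instructions
    if node[0] == "num":
        out.append(("PUSH", node[1]))
    else:
        gen(node[1], out); gen(node[2], out)
        out.append(("ADD",) if node[0] == "+" else ("MUL",))
    return out

def run(code):
    # a tiny target machine, to check the translation preserved meaning
    stack = []
    for ins in code:
        if ins[0] == "PUSH": stack.append(ins[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if ins[0] == "ADD" else a * b)
    return stack[0]

code = gen(parse(lex("2+3*4")), [])
print(run(code))  # 14: '*' binds tighter than '+' through every stage
```

Real compilers add semantic analysis, intermediate representations, and optimization between these steps, but the invariant is the same throughout: each stage must preserve the meaning of the source program.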
