Towards A Virtual Parallel Inference Engine
1988
Abstract
Parallel processing systems offer a major improvement in capabilities to AI programmers. However, at the moment, all such systems require the programmer to manage the control of parallelism explicitly, leading to an unfortunate intermixing of knowledge-level and control-level information. Furthermore, parallel processing systems differ radically, making a control regime that is effective in one environment less so in another. We present a means for overcoming these problems within a unifying framework in which 1) knowledge-level information can be expressed effectively, 2) information regarding the control of parallelism can be factored out, and 3) different regimes of parallelism can be efficiently supported without modification of the knowledge-level information. The Protocol of Inference introduced in [Rowley et al., 1987] forms the basis for our approach.
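The separation the abstract argues for — knowledge-level information kept free of any mention of parallelism, with the control regime factored out and swappable — can be illustrated with a minimal sketch. This is purely hypothetical Python, not the paper's Protocol of Inference; the rule names and the thread-pool regime are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

# Knowledge level: rules are plain data and say nothing about parallelism.
RULES = [
    ("double", lambda x: x * 2),
    ("square", lambda x: x * x),
    ("negate", lambda x: -x),
]

# Control level: how rule firings are scheduled is factored out entirely.
def run_sequential(rules, x):
    return {name: fn(x) for name, fn in rules}

def run_parallel(rules, x):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, x) for name, fn in rules}
        return {name: f.result() for name, f in futures.items()}

# The same knowledge base runs unchanged under either control regime.
assert run_sequential(RULES, 3) == run_parallel(RULES, 3)
```

Because the rules never reference the scheduler, moving to a different parallel environment means replacing only the control-level function.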
Related papers
2012
We introduce Dimple, a fully open-source API for probabilistic modeling. Dimple allows the user to specify probabilistic models in the form of graphical models, Bayesian networks, or factor graphs, and performs inference (by automatically deriving an inference engine from a variety of algorithms) on the model. Dimple also serves as a compiler for GP5, a hardware accelerator for inference.
2007
We present new algorithms which perform automatic parallelization via source-to-source transformations. The objective is to exploit goal-level, unrestricted independent and-parallelism. The proposed algorithms use as targets new parallel execution primitives which are simpler and more flexible than the well-known &/2 parallel operator, which makes it possible to generate better parallel expressions by exposing more potential parallelism among the literals of a clause than is possible with &/2. The main differences between the algorithms stem from whether the order of the solutions obtained is preserved or not, and on the use of determinacy information. We briefly describe the environment where the algorithms have been implemented and the runtime platform in which the parallelized programs are executed. We also report on an evaluation of an implementation of our approach. We compare the performance obtained to that of previous annotation algorithms and show that relevant improvements can be obtained.
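Independent and-parallelism — running the goals of a clause body concurrently when they share no unbound variables — can be sketched outside of Prolog. The following Python fragment is an illustrative analogue only (the goal functions are invented), not the paper's primitives or the &/2 operator itself.

```python
from concurrent.futures import ThreadPoolExecutor

def goal_a(x):
    # First "goal": depends only on x.
    return sum(range(x))

def goal_b(y):
    # Second "goal": depends only on y, so the two goals are
    # independent -- like p(X) & q(Y) with disjoint variables.
    return y * y

# Sequential conjunction would run goal_a, then goal_b.
# The parallel conjunction (the &/2 analogue) runs both at once
# and joins before any later goal can use the results.
with ThreadPoolExecutor() as pool:
    fa = pool.submit(goal_a, 10)
    fb = pool.submit(goal_b, 7)
    result = (fa.result(), fb.result())
```

The join is what preserves correctness: no later computation observes a "binding" before both independent goals have completed.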
WIT Transactions on Information and Communication Technologies, 1970
Most of the available restructuring compilers use program transformations techniques to improve and enhance parallelism in scientific programs. Different sequences of program transformations lead to programs with different performance characteristics. One of the major tasks of a parallelizing compiler is to choose an appropriate sequence of program transformations so as to effectively map a program onto a target machine. In this paper, essential requirements for intelligent parallelization and ways of meeting these requirements are discussed. A new knowledge-based parallelization model and a framework for realizing this model is also presented. This model is machine independent and can dynamically determine the sequence of program transformations depending on the program being parallelized and the target machine. The implementation of an experimental system (called InParS) based on this model and results of this experiment are also discussed.
Proceedings of 4th Euromicro Workshop on Parallel and Distributed Processing, 1996
Current compilation systems for distributed-memory computers have to integrate new techniques to support the highly complex task of producing efficient programs for parallel systems. Two techniques, program comprehension and expert systems, although developed outside the scope of the parallelization domain, are extremely useful to improve the quality of the parallel code generated and to make the parallelization process more convenient and automatic. In this paper we describe a parallelization environment consisting of three main components: the Vienna Fortran Compilation System (VFCS), a tool for recognition of Parallelizable Algorithmic Patterns (PAP Recognizer), and a knowledge-based parallelization support tool (Expert Adviser). After these main components are introduced, the paper focuses on integration issues of the PAP Recognizer and Expert Adviser within the framework of VFCS. We outline the salient features of a new parallelization environment. The design of the XPA knowledge base for recognized program concepts (patterns) is presented, and the methodology of knowledge acquisition for program patterns is outlined.
2014
Performing large, intensive or non-trivial computing on array-like data structures is one of the most common tasks in scientific computing, video game development and other fields. This matter of fact is backed up by the large number of tools, languages and libraries to perform such tasks. If we restrict ourselves to C++ based solutions, more than a dozen such libraries exist, from BLAS/LAPACK C++ bindings to template meta-programming based Blitz++ or Eigen. While all of these libraries provide good performance or good abstraction, none of them seems to fit the needs of so many different user types. Moreover, as parallel system complexity grows, the need to maintain all those components quickly becomes unwieldy. This thesis explores various software design techniques - like Generative Programming, Meta-Programming and Generic Programming - and their application to the implementation of various parallel computing libraries in such a way that abstraction and expressiveness are maximized while efficiency ...
Expert Systems, 1995
IFAC Proceedings Volumes, 1988
COALA (Actor-Oriented Computer for Logic and its Applications) is a multiprocessor architecture project. The aim of the project is to exploit parallelism inherent in logic programs automatically, i.e. without any programmer intervention. First, the paper summarizes the work presented earlier: the definition of a parallel interpreting model. Extensions are then presented which allow the model to take into account the execution of independent subgoals. The last part of the paper comments on dynamic measurements obtained from the different versions of the model.
Proceedings of the 1992 International Conference on Computer Languages
Control abstraction is the process by which programmers define new control constructs, specifying a statement ordering separately from an implementation of that ordering. We argue that control abstraction can and should play a central role in parallel programming. Control abstraction can be used to build new control constructs for the expression of parallelism. A control construct can have several implementations, representing the varying degrees of parallelism to be exploited on different architectures. Control abstraction also reduces the need for explicit synchronization, since it admits a precise specification of control flow. Using several examples, we illustrate these benefits of control abstraction. We also show that we can efficiently implement a parallel programming language based on control abstraction. We conclude that the enormous benefits and reasonable costs of control abstraction argue for its inclusion in explicitly parallel programming languages.
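The core idea of that abstract — one control construct, several implementations with different ordering guarantees — can be sketched in a few lines. This is a hypothetical Python illustration (the `forall` construct and its two implementations are invented here), not code from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

# One control construct, "apply f to every item", with the statement
# ordering left to an interchangeable implementation.

def forall_serial(items, f):
    # This implementation promises left-to-right order.
    for item in items:
        f(item)

def forall_parallel(items, f):
    # This implementation promises no ordering at all, so the
    # construct may exploit whatever parallelism the machine offers.
    with ThreadPoolExecutor() as pool:
        list(pool.map(f, items))

# Client code is written against the construct, not an implementation.
results = []
forall_serial(range(4), lambda i: results.append(i * i))

unordered = []
forall_parallel(range(4), lambda i: unordered.append(i * i))
# Same work is performed; only the ordering guarantee differs.
assert sorted(unordered) == results
```

Because the client names the construct rather than a loop, swapping `forall_serial` for `forall_parallel` needs no change to the client's logic, which is the separation the paper advocates.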
New Generation Computing, 1996
The Agents Kernel Language (AKL) is a general purpose concurrent constraint language. It combines the programming paradigms of search-oriented languages such as Prolog and process-oriented languages such as GHC.
