The report describes an environment for performing experiments in distributed processing. It replaces an earlier (1985) report, reflecting changes in the system and terminology. Our system offers researchers an easy way to design, implement, and test parallel algorithms. It provides software tools that support a variety of connection structures between processes. These process structures are said to form a "Network Multiprocessor" (implemented on a local area network of VAX 11/780's, Sun workstations, dedicated MC68000 processor boards, and a MIPS M/1000). We show how these tools have been used both to aid parallel algorithm development and to explore different computer interconnection methods.
Heuristic search effectiveness depends directly upon the quality of heuristic evaluations of states in a search space. Despite the large amount of research effort devoted to computer chess over the past half-century, insufficient attention has been paid to determining whether a proposed change to an evaluation function is beneficial. We argue that the mapping of an evaluation function from chess positions to heuristic values is of ordinal, but not interval, scale. We identify a robust metric suitable for assessing the quality of an evaluation function, and present a novel method for computing this metric efficiently. Finally, we apply an empirical gradient-ascent procedure, also of our design, over this metric to optimize feature weights for the evaluation function of a computer chess program. Our experiments demonstrate that evaluation-function weights tuned in this manner give performance equivalent to hand-tuned weights.
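The abstract does not name the metric, but one standard choice with the required ordinal (rank-only) character is Kendall's tau. The sketch below is a hypothetical illustration rather than the paper's method: it scores an evaluation function purely by how often it orders pairs of positions the same way as a trusted reference, and the data and names are invented for the example.

# Illustrative only: scoring an evaluation function by ordinal agreement.
# Kendall's tau counts concordant vs. discordant pairs, so it compares the
# ordering of heuristic values, never their magnitudes (ordinal scale).

def kendall_tau(eval_scores, reference_scores):
    """Return tau in [-1, 1] for two equal-length score lists."""
    n = len(eval_scores)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = eval_scores[i] - eval_scores[j]
            b = reference_scores[i] - reference_scores[j]
            if a * b > 0:
                concordant += 1
            elif a * b < 0:
                discordant += 1
            # ties in either list contribute to neither count
    return (concordant - discordant) / (n * (n - 1) / 2)

# Invented data: an engine's scores vs. a trusted reference ordering.
engine = [0.35, -1.20, 0.05, 2.10]
reference = [0.50, -0.90, 0.60, 1.80]
print(kendall_tau(engine, reference))  # ~0.667: one discordant pair of six

Because only orderings are compared, any monotone rescaling of the engine's scores leaves the metric unchanged, which is exactly the property an ordinal-scale argument calls for.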
At first sight, the 1st Computer Olympiad (London, August 1989) appeared to be a difficult act to follow, and yet the associated conference at the 2nd Olympiad was unquestionably better. Not only was the audience of 25-40 people more actively involved and more enthusiastic about attending the evening sessions, but the speakers also benefited from last year's experience and produced more polished, professional and substantive contributions. That such strides could be made in only one year, despite the significantly reduced budget for advertising and the somewhat off-centre location of the conference facility, was impressive. Gone were the comfortable, central and expensive surroundings of the Park Lane Hotel, replaced by the magnificence of the Octagon Room Library at Queen Mary and Westfield College (QMW), and the nearby lecture hall in one of the college's classical buildings.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
Iterative-deepening searches mimic a breadth-first node expansion with a series of depth-first searches that operate with successively extended search horizons. They have been proposed as a simple way to reduce the space complexity of best-first searches like A* from exponential to linear in the search depth. But there is more to iterative-deepening than just a reduction of storage space. As we show, the search efficiency can be greatly improved by exploiting previously gained node information. The information management techniques considered here owe much to their counterparts from the domain of two-player games, namely the use of fast-execution memory functions to guide the search. Our methods not only save node expansions, but are also faster and easier to implement than previous proposals.
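As a concrete, hedged illustration of the idea (not the paper's exact algorithms): the sketch below runs an IDA*-style series of depth-first passes and keeps a small memory function, a table of the cheapest cost at which each state has already been reached, so that revisits which cannot improve on an earlier line are cut off immediately. The problem interface (is_goal, successors, h) is a placeholder.

# Hypothetical sketch: iterative deepening with a memory function.
# successors(state) yields (cost, next_state) pairs; h is the heuristic.

def ida_star(start, is_goal, successors, h):
    def dfs(state, g, threshold, path, seen):
        # memory function: skip states already reached at least as cheaply
        if g >= seen.get(state, float("inf")):
            return float("inf")
        seen[state] = g
        f = g + h(state)
        if f > threshold:
            return f  # report the overshoot to size the next iteration
        if is_goal(state):
            return path
        minimum = float("inf")
        for cost, nxt in successors(state):
            result = dfs(nxt, g + cost, threshold, path + [nxt], seen)
            if isinstance(result, list):
                return result  # solution path found
            minimum = min(minimum, result)
        return minimum

    threshold = h(start)
    while True:
        result = dfs(start, 0, threshold, [start], {})  # fresh table per pass
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None  # exhausted without a solution
        threshold = result  # deepen to the smallest bound that overshot

Clearing the table between passes keeps storage linear in the states actually touched per pass, while still recouping much of the duplicated work that plain iterative deepening would redo.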
In our companion report on overheads in loosely coupled parallel systems, the need for a better sequential vertex-covering algorithm, and for more complex graphs, was demonstrated. Here we have explored schemes designed for the parallel search of the skewed trees that arise from the use of an improved sequential algorithm. Three different biased binary multiprocessor tree configurations are compared on the basis of time speedup, node count, and overheads in covering a representative set of computationally large graphs.
The vertex cover problem is identified as a task less complicated than chess that exhibits similar overheads when solved using loosely coupled parallel systems. For both problems, pruning can be applied to the search trees, causing the trees to be skewed. Skewed trees lead to overheads in parallel search because scheduling work to keep all processors productive is difficult. The combined overheads comprise solution time overrun, the amount by which a solution time is slower than linear speedup. Here, results from another study of parallel vertex cover solutions are replicated, and additional experiments are done to gain insight specifically into communication and synchronization losses. An improvement to the simple vertex cover algorithm used in the original study is presented, and discussed with respect to its parallel adaptation.
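For orientation, the sketch below shows a simple branch-and-bound vertex cover of the kind such studies start from; it is an illustration, not the study's code. Branching on an uncovered edge (either endpoint must be in any cover) and pruning branches that cannot beat the incumbent is precisely what skews the search tree and complicates parallel work scheduling.

# Illustrative branch-and-bound vertex cover. Pruned subtrees make the
# two branches at each node very unequal in size, i.e. the tree is skewed.

def min_vertex_cover(edges, cover=frozenset(), best=None):
    uncovered = [(u, v) for u, v in edges if u not in cover and v not in cover]
    if not uncovered:
        return cover if best is None or len(cover) < len(best) else best
    if best is not None and len(cover) + 1 >= len(best):
        return best  # prune: even one more vertex cannot improve on best
    u, v = uncovered[0]
    best = min_vertex_cover(edges, cover | {u}, best)  # take endpoint u
    best = min_vertex_cover(edges, cover | {v}, best)  # or endpoint v
    return best

# Example: a 4-cycle with a chord; {1, 3} touches every edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(sorted(min_vertex_cover(edges)))  # [1, 3]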
A new sequential tree-searching algorithm (PS*) is presented. Based on SSS*, Phased Search (PS*) divides each MAX node into k partitions, which are then searched in sequence. By this means two major disadvantages of SSS*, storage demand and maintenance overhead, are significantly reduced, while the corresponding increase in nodes visited is barely apparent, even in the random tree case. The performance of PS* is compared, theoretically as well as experimentally, to the well-known α-β and SSS* algorithms, on the basis of storage needs and the number of bottom positions visited. Acknowledgement: financial support was provided by Canadian Natural Sciences and Engineering Research Council Grant A7902. Technical Report TR 86-2. N. Srimani is now at: Computer Science Department, Southern Illinois University, Carbondale, IL 62901.
INFOR: Information Systems and Operational Research, 1978
Through the design and construction of an emulator for DEC PDP-11 computers, the extent to which the Nanodata QM-1 can serve as a universal host is being explored. The principal results show the extent to which emulation is possible without excessive sacrifices in speed. In addition to insights into the construction of a complete emulator, the paper identifies important problems associated with concurrent emulation of different target hardware, and describes solutions to some of them.
INFOR: Information Systems and Operational Research, 1973
The purpose of this paper is to discuss ideas used in current chess-playing programs. A short history of events leading to the present state of the art is given and a survey made of present-day programs. The Newell, Shaw, and Simon program of 1958 is included since it embodies useful ideas that other programs appear not to employ. The possible performance limits of current techniques are considered, including the reasons for these beliefs. A summary of the major ideas contained in these programs is then presented and suggestions made for the improvement and development of future chess-playing programs.
Although most of today's chess-playing programs still adopt a brute-force approach in their search, much has been done on search extensions to make the effort spent more worthwhile. In this paper, we discuss some successful search-extension heuristics in the domain of Chinese Chess, a game that bears much resemblance to chess. We restrict our experiments to the following: knowledge search extensions, singular extensions, null-move search (both in the brute-force and the quiescence-search phase) and futility cutoffs. These heuristics have been implemented in Abyss, a Chinese Chess program participating in the 3rd Computer Olympiad. From the algorithmic point of view, since Chinese Chess differs most from chess in its repetition rules, some discussion is also devoted to that matter.
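Of the heuristics listed, the futility cutoff is the easiest to show compactly. The sketch below is a generic illustration, not Abyss's code: the margin value and the position interface (evaluate, legal_moves, make, is_capture, gives_check) are invented for the example.

# Illustrative futility cutoff at a frontier node: if the static score plus
# an optimistic margin still cannot reach alpha, a quiet move at depth 1 is
# skipped without being searched at all.

FUTILITY_MARGIN = 300  # hypothetical margin, in centipawns

def search(pos, depth, alpha, beta):
    if depth == 0:
        return pos.evaluate()
    static_eval = pos.evaluate()
    for move in pos.legal_moves():
        # futility cutoff: prune quiet frontier moves that cannot raise alpha
        if (depth == 1 and not move.is_capture and not move.gives_check
                and static_eval + FUTILITY_MARGIN <= alpha):
            continue
        score = -search(pos.make(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return score  # beta cutoff
        alpha = max(alpha, score)
    return alpha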
In game-tree search, a point value is customarily used to measure position evaluation. If the uncertainty about the value is to be reflected in the evaluation, by describing it with a probability distribution, the search process must back up distributions from leaf nodes to the root. It is shown that even though the merit value of a node is described by probabilities, α-β bounded windows can still be used to cut off some subtrees from the search when a space-efficient depth-first traversal is applied to the game tree. Several variations of probability-based α-β game-tree pruning are presented. We also show that probability-based α-β pruning can be viewed as a generalization of the standard α-β game-tree search algorithm and that it inherits some good properties from its point-value version.
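One piece of the machinery is easy to state concretely: at a MAX node whose children have independent merit distributions, the backed-up value max(X1, ..., Xk) has a CDF equal to the product of the children's CDFs. The sketch below illustrates just that back-up step; it is my illustration, not the paper's formulation, and the window-based pruning machinery proper is omitted.

# Back-up step at a MAX node with independent child distributions:
# P(max(X1..Xk) <= v) = P(X1 <= v) * ... * P(Xk <= v).

def max_node_cdf(child_cdfs):
    """Return the CDF of the max of independent child values."""
    def cdf(v):
        p = 1.0
        for F in child_cdfs:
            p *= F(v)  # multiply the children's CDFs pointwise
        return p
    return cdf

# Example with two uniform-[0, 1] children: P(max <= 0.5) = 0.25.
uniform = lambda v: min(max(v, 0.0), 1.0)
print(max_node_cdf([uniform, uniform])(0.5))  # 0.25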
International Journal of Man-Machine Studies, 1988
Capture search, an expensive part of any chess program, is conducted at every leaf node of the approximating game tree. Often an exhaustive capture search is not feasible, and yet limiting the search depth compromises the result. Our experiments confirm that for chess a deeper search results in less error, and show that a shallow search does not provide significant savings. It is therefore better to do an arbitrary-depth capture search. If a limit is used for search termination, an odd depth is preferable.
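A hedged sketch of what an arbitrary-depth capture search looks like follows; the position interface (evaluate, captures, make) is hypothetical, and the stand-pat logic is the standard quiescence-search pattern rather than a transcription of the paper's program. The recursion ends naturally when no captures remain, rather than at an imposed depth cap.

# Illustrative arbitrary-depth capture (quiescence) search, negamax form.

def capture_search(pos, alpha, beta):
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return stand_pat  # the quiet assessment already refutes this line
    alpha = max(alpha, stand_pat)  # the side to move may decline all captures
    for move in pos.captures():
        score = -capture_search(pos.make(move), -beta, -alpha)
        if score >= beta:
            return score
        alpha = max(alpha, score)
    return alpha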
This chapter provides a brief historical overview of how variable-depth-search methods have evolved over the last half-century of computer chess work. We focus mainly on techniques that have not only withstood the test of time but also embody ideas that are still relevant in contemporary game-playing programs. Pseudo code is provided for a special formulation of the PVS/ZWS alpha-beta search algorithm, as well as for an implementation of the method of singular extensions. We provide some data from recent experiments with Abyss’99, an updated Chinese Chess program. We also highlight current research in forward pruning, since this is where the greatest performance improvements are possible. The work closes with a short summary of the impact of computer chess work on Chinese Chess, Shogi and Go.
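The chapter's pseudo code is not reproduced in this abstract, but the core PVS/ZWS idea can be sketched as follows (generic formulation, not the chapter's special one; the position interface is hypothetical): only the first successor gets the full window, while the rest are probed with a zero-width window and re-searched only on the rare fail-high.

# Illustrative Principal Variation Search with Zero-Width-Window probes.

def pvs(pos, depth, alpha, beta):
    if depth == 0:
        return pos.evaluate()
    first = True
    for move in pos.legal_moves():
        child = pos.make(move)
        if first:
            score = -pvs(child, depth - 1, -beta, -alpha)  # full window
            first = False
        else:
            score = -pvs(child, depth - 1, -alpha - 1, -alpha)  # ZWS probe
            if alpha < score < beta:
                score = -pvs(child, depth - 1, -beta, -alpha)  # re-search
        if score >= beta:
            return score  # cutoff
        alpha = max(alpha, score)
    return alpha

With good move ordering the first successor is usually best, so nearly all siblings are dismissed by the cheap zero-width probes and re-searches stay rare.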
In the half century since minimax was first suggested as a strategy for adversary game search, various search algorithms have been developed. The standard approach has been to use improvements to the Alpha-Beta (α-β) algorithm. Some of the more powerful improvements examine continuations beyond the nominal search depth if they are of special interest, while others terminate the search early. The latter case is referred to as forward pruning. In this paper we discuss some important aspects of forward pruning, especially regarding risk management, and propose ways of assessing risk. Finally, we introduce two new pruning methods based on some of the principles discussed here, and present experimental results from applying the methods in an established chess program.
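The paper's two new methods are not described in this abstract and are not reproduced here; as a hedged sketch of one classic forward-pruning device in this family, the null move below lets the side to move pass, and prunes the node if even that reaches beta at reduced depth. The reduction R and the position interface are illustrative, and the in-check guard marks exactly the kind of risk (e.g. zugzwang) that risk management must address.

# Illustrative null-move forward pruning inside a negamax alpha-beta search.

R = 2  # hypothetical depth reduction for the null-move search

def search(pos, depth, alpha, beta):
    if depth == 0:
        return pos.evaluate()
    if depth > R and not pos.in_check():
        # Risk: in zugzwang, passing is better than every real move, so the
        # "a real move would do even better" assumption fails there.
        score = -search(pos.make_null_move(), depth - 1 - R, -beta, -beta + 1)
        if score >= beta:
            return score  # forward prune: the pass already refutes this node
    for move in pos.legal_moves():
        score = -search(pos.make(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return score
        alpha = max(alpha, score)
    return alpha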
The thinking process for playing chess by computer is significantly different from that used by humans. Also, computer hardware and software have evolved considerably in the half century since minimax was first proposed as a method for computers to play chess. In this paper we look at the technology behind today's chess programs, how it has developed, and its current status, and explore some directions for the future.
This short paper defines the terminology used to support computer chess work, and introduces the basic concepts behind chess programs. It is intended to be of general interest, providing background information on new ideas.
Article prepared for the 2nd edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (... more Article prepared for the 2nd edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), to be published by John Wiley, 1992. This report is for information and review only.