'<=', strings, character constants, and so on. Some of the characters read from the input can be thrown away by the lexical analyser. For example, white space (spaces, tabs, newlines, etc.) may, in most but not all circumstances, be ignored. In general, comments can also be ignored.
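As a concrete illustration of characters being thrown away before tokens reach the parser, here is a minimal lexer sketch. It is not code from the book: the token names, regular expressions and the tokenize function are all invented for illustration, assuming a C-like surface syntax with '//' line comments.

```python
# A minimal sketch of a lexical analyser that discards white space and line
# comments before emitting tokens. All names (Token, tokenize, the token
# categories) are illustrative and not taken from the original text.
import re
from typing import Iterator, NamedTuple

class Token(NamedTuple):
    kind: str
    text: str

TOKEN_SPEC = [
    ("WHITESPACE", r"[ \t\r\n]+"),      # spaces, tabs, newlines - discarded
    ("COMMENT",    r"//[^\n]*"),        # line comments - discarded
    ("LE",         r"<="),              # the '<=' operator mentioned above
    ("NUMBER",     r"\d+"),
    ("IDENT",      r"[A-Za-z_]\w*"),
    ("STRING",     r'"[^"\n]*"'),
    ("CHAR",       r"'[^'\n]'"),
    ("OP",         r"[-+*/=<>(){};]"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str) -> Iterator[Token]:
    for match in MASTER_RE.finditer(source):
        kind = match.lastgroup
        if kind in ("WHITESPACE", "COMMENT"):
            continue                    # thrown away, never reaching later phases
        yield Token(kind, match.group())

if __name__ == "__main__":
    print(list(tokenize('if (count <= 10) // check bound\n  total = total + 1;')))
```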
The generation of high-quality code is a key objective in the development of compilers. It is of course true that a usable compiler can be written with little or no provision for optimisation, but the performance of the generated code may be disappointing. Today's production compilers can generate code of outstanding quality, normally much better than even handwritten target assembly code produced by an expert. However, producing this high-quality code using a compiler is not at all easy. The optimisation algorithms used are often complex and costly to run; they interact with each other in unpredictable ways, and the choice of which techniques to use for best results will be influenced by the precise structure of the code being optimised. There is no easy solution and an approach to optimisation has to be adopted which works well with the "average" program.

Is optimisation important? How hard should a compiler work to produce highly optimised code? The answer depends on the nature of the program being compiled and on the needs of the programmer. For many programs, optimisation is an irrelevance but performing it does no real harm. For other programs resource constraints may mean that optimisation is essential. An argument often heard against the need for optimisation, both by the programmer and by the compiler, is that by waiting for processors to get fast enough the problem will disappear. As the growth in processor speeds seems to be slowing, the wait for sufficiently fast processors may be longer than expected. This makes it increasingly important for both the programmer and the compiler writer to be aware of the importance of optimisation.

Before examining practical techniques, it is important to remember that the term optimisation, when used in the context of compiler design, does not refer to the search for the "best" code. Instead it is concerned with the generation of "better" code. We will return to this issue later in Chap. 8 when covering the topic of superoptimisation. Another related issue is the choice of criteria for code improvement. Are we looking for fast code, small code (in terms of total memory footprint, code size, data usage or some other criterion), code that generates the least I/O traffic, code that minimises power consumption, or what? Some optimisations trade speed improvements against code size and so it is essential to know the real optimisation goal. In most of the techniques described in this chapter, the optimisation goal is to reduce the
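To make the idea of generating "better" rather than "best" code concrete, the sketch below shows one of the simplest improving transformations, constant folding over a toy expression representation. It is an illustrative assumption of how such a pass might look, not code from the chapter; the tuple encoding and the fold function are invented here.

```python
# A minimal, hypothetical sketch of constant folding, one of the simplest
# "better code" transformations. The tuple-based expression representation
# ('op', left, right) is illustrative only.
def fold(expr):
    """Recursively fold constant sub-expressions, e.g. (2 * 3) + x -> 6 + x."""
    if isinstance(expr, tuple):
        op, left, right = expr
        left, right = fold(left), fold(right)
        if isinstance(left, int) and isinstance(right, int):
            return {"+": left + right,
                    "-": left - right,
                    "*": left * right}[op]
        return (op, left, right)
    return expr                      # a variable name or literal is left alone

if __name__ == "__main__":
    # (2 * 3) + x  becomes  6 + x  at compile time, saving a run-time multiply.
    print(fold(("+", ("*", 2, 3), "x")))   # ('+', 6, 'x')
```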
The heart of the analysis phase of the compiler is the syntax analyser. It takes a stream of lexical tokens from the lexical analyser and groups them together according to the rules of the language, thus determining the syntactic structure of the compiler's input. The syntax analyser creates data structures reflecting this syntactic structure and then it is up to later phases of compilation to traverse these structures and finally to generate target code. Section 2.3.3 introduced the idea of parsing where the syntax rules of the language guide the grouping of lexical tokens into larger syntactic structures. Parsing requires the repeated matching of the input with the right-hand sides of the production rules, replacing the matched tokens with the corresponding left-hand side of the production. But as we have seen, the order in which this matching is done and also the choice of which productions to use are fundamentally important. We need to develop standard algorithms for this task, and as a first step, examining the reverse process of derivation may help with this.
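To see how production rules can drive this grouping of tokens, the following hand-coded sketch parses a tiny invented grammar (E -> T '+' E | T, T -> NUMBER). The grammar, function names and tree representation are assumptions made purely for illustration; the systematic parsing algorithms referred to above are developed separately.

```python
# A hypothetical recursive-descent sketch for the tiny grammar
#   E -> T '+' E | T        T -> NUMBER
# showing how each production rule becomes a function that consumes tokens
# and builds a node of the syntax tree. Grammar and names are illustrative.
def parse_expression(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    if pos < len(tokens) and tokens[pos] == "+":
        right, pos = parse_expression(tokens, pos + 1)
        return ("+", node, right), pos
    return node, pos

def parse_term(tokens, pos):
    token = tokens[pos]
    if not token.isdigit():
        raise SyntaxError(f"number expected, found {token!r}")
    return int(token), pos + 1

if __name__ == "__main__":
    tree, _ = parse_expression(["1", "+", "2", "+", "3"])
    print(tree)    # ('+', 1, ('+', 2, 3))
```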
Implementation Issues This book has concentrated on a traditional and intuitive view of a compiler as a program to translate from a high-level source language to a low-level target machine language, with a potentially visible intermediate representation between a front-end and a back-end. This form of compiler is in essence specified by the source and target languages and also by the language in which the compiler should be coded. But this book has also stressed that the view of a compiler as a single, monolithic piece of code is not helpful. Instead, regarding it as a collection of phases, at least by separating a front-end from a back-end, is very helpful. These issues become particularly important when considering a strategy for a programming language implementation project.
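As a very small sketch of the front-end/back-end separation described here, the code below passes an invented textual IR between two independent functions. The IR format, the source fragment handled and both function names are assumptions used only to illustrate the architecture, not anything prescribed by the book.

```python
# A minimal, hypothetical sketch of the front-end / back-end split: the
# front-end produces an intermediate representation (IR) that a separately
# written back-end consumes. The toy three-address IR is invented here.
def front_end(source: str) -> list[str]:
    """Translate 'a = 1 + 2' style source into a toy three-address IR."""
    target, expr = source.split("=")
    left, op, right = expr.split()
    return [f"LOAD t1, {left}", f"LOAD t2, {right}",
            f"{'ADD' if op == '+' else 'SUB'} t3, t1, t2",
            f"STORE {target.strip()}, t3"]

def back_end(ir: list[str]) -> str:
    """A back-end for some imagined target rewrites each IR line."""
    return "\n".join(f"    {line}" for line in ir)   # placeholder code generation

if __name__ == "__main__":
    print(back_end(front_end("a = 1 + 2")))
```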
In this chapter we adopt a practical approach to syntax analysis and we look in detail at the two most popular techniques used for the construction of syntax analysers for programming language compilers and similar tools.
The semantic analysis phase of a compiler is the last phase directly concerned with the analysis of the source program. The syntax analyser has produced a syntax tree or some equivalent data structure and the next step is to deal with all those remaining analysis tasks that are difficult or impossible to do in a conventional syntax analyser. These tasks are principally concerned with context-sensitive analysis.
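One example of a context-sensitive task of this kind is checking that identifiers are declared before use. The sketch below is an illustrative assumption of how such a check might walk a syntax tree against a symbol table; the tree encoding, symbol-table format and names are invented, not taken from the text.

```python
# A hypothetical sketch of one context-sensitive check that a conventional
# syntax analyser cannot easily perform: verifying that every identifier
# used in an expression tree has been declared.
def check_declared(node, symbol_table):
    """Walk a tree of ('op', left, right) tuples, literals and identifier names."""
    if isinstance(node, tuple):                    # an operator node
        _, left, right = node
        check_declared(left, symbol_table)
        check_declared(right, symbol_table)
    elif isinstance(node, str):                    # an identifier use
        if node not in symbol_table:
            raise NameError(f"identifier '{node}' used before declaration")

if __name__ == "__main__":
    symbols = {"count": "int"}                     # declarations seen so far
    check_declared(("+", "count", 1), symbols)     # passes silently
    check_declared(("+", "total", 1), symbols)     # raises NameError
```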
Before looking at the details of programming language implementation, we need to examine some of the characteristics of programming languages to find out how they are structured and defined.
Introduction: How to handle blood pressure in very elderly patients (> 80 years) is still debatable. Many are frail, dependent, and susceptible to drug interactions, and have not been included in blood pressure trials. Thus, how blood pressure levels in these patients predict future events remains unclear. Methods: We studied a cohort of 339 elderly patients with a mean age of 83 years who visited the emergency department and were subsequently admitted to the hospital. We divided the cohort into two groups: 144 patients with blood pressure ≥ 140/90 mm Hg (HBP-group) and 195 patients with blood pressure < 140/90 mm Hg (NBP-group). Mean blood pressure was 158/83 mm Hg in the HBP-group and 122/70 mm Hg in the NBP-group. Furthermore, we performed a subgroup analysis of 178 patients with heart failure: 69 with high blood pressure (mean 155/85 mm Hg; HBP HF-group) and 109 without high blood pressure (mean 119/71 mm Hg; NBP HF-group). Results: After 6 months, 20 patients in the HBP-group had died compared with 54 patients in the NBP-group (p < 0.01). In the subgroup analysis, 6 patients in the HBP HF-group and 26 patients in the NBP HF-group had died after 6 months (p = 0.01). Conclusions: Very elderly patients in general, and those with heart failure in particular, who presented with high blood pressure on admission to hospital had significantly lower 6-month mortality than very elderly patients with normal blood pressure.
2008 8th IEEE International Conference on Bioinformatics and BioEngineering, 2008
Five different texture methods are used to investigate their susceptibility to subtle noise occurring in lung tumor Computed Tomography (CT) images caused by acquisition and reconstruction deficiencies. Noise of Gaussian and Rayleigh distributions with varying mean and variance was encountered in the analyzed CT images. Fisher and Bhattacharyya distance measures were used to differentiate between an original extracted lung tumor region of interest (ROI) and its filtered and noisy reconstructed versions. By examining the texture characteristics of the lung tumor areas with the five texture measures, it was determined that the autocovariance measure was least affected and the gray level co-occurrence matrix was the most affected by noise. It was also concluded that, depending on the selected ROI size, increasing the number of features extracted from each texture measure increases susceptibility to noise.
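For reference, the two separability measures named in this abstract can be computed for a single texture feature as sketched below, assuming univariate Gaussian statistics; the feature values and function names are invented examples, not data from the study.

```python
# A hypothetical sketch of the Fisher and Bhattacharyya separability measures
# for one texture feature measured on two sets of regions of interest.
import math
import statistics as st

def fisher_distance(a, b):
    """Fisher discriminant ratio: (difference of means)^2 / (sum of variances)."""
    return (st.mean(a) - st.mean(b)) ** 2 / (st.variance(a) + st.variance(b))

def bhattacharyya_distance(a, b):
    """Bhattacharyya distance between two univariate Gaussian feature sets."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)
    return (0.25 * (ma - mb) ** 2 / (va + vb)
            + 0.5 * math.log((va + vb) / (2 * math.sqrt(va * vb))))

if __name__ == "__main__":
    original = [0.82, 0.79, 0.85, 0.81, 0.80]   # feature values from clean ROIs
    noisy    = [0.70, 0.66, 0.73, 0.69, 0.71]   # the same feature after added noise
    print(fisher_distance(original, noisy), bhattacharyya_distance(original, noisy))
```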
Recent studies have shown that MRS can substantially improve the non-invasive categorization of human brain tumours. However, in order for MRS to be used routinely by clinicians, it will be necessary to develop reliable automated classification methods that can be fully validated. This paper is in two parts: the first part reviews the progress that has been made towards this goal, together with the problems that are involved in the design of automated methods to process and classify the spectra. The second part describes the development of a simple prototype system for classifying 1H single-voxel spectra, obtained at an echo time (TE) of 135 ms, of the four most common types of brain tumour (meningioma (MM), astrocytic (AST), oligodendroglioma (OD) and metastasis (ME)) and cysts. This system was developed in two stages: firstly, an initial database of spectra was used to develop a prototype classifier, based on a linear discriminant analysis (LDA) of selected data points. Secondly, this classifier was tested on an independent test set of 15 newly acquired spectra, and the system was refined on the basis of these results. The system correctly classified all the non-astrocytic tumours. However, the results for the astrocytic group were poorer (between 55 and 100%, depending on the binary comparison). Approximately 50% of high-grade astrocytoma (glioblastoma) spectra in our database showed very little lipid signal, which may account for the poorer results for this class. Consequently, for the refined system, the astrocytomas were subdivided into two subgroups for comparison against other tumour classes: those with high lipid content and those without.
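A minimal sketch of the kind of LDA classifier described, trained on intensities at a few selected data points per spectrum, might look as follows. The synthetic class means, the choice of scikit-learn and all names here are assumptions for illustration and do not reproduce the study's classifier or data.

```python
# A hypothetical sketch of an LDA classifier over a few selected spectral
# intensities. The two invented tumour classes and their feature values are
# synthetic stand-ins for the study's database of 1H single-voxel spectra.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Each class is described by intensities at 3 selected data points.
meningioma = rng.normal(loc=[1.0, 0.2, 0.5], scale=0.1, size=(20, 3))
astrocytic = rng.normal(loc=[0.4, 0.9, 0.3], scale=0.1, size=(20, 3))
X = np.vstack([meningioma, astrocytic])
y = ["MM"] * 20 + ["AST"] * 20

classifier = LinearDiscriminantAnalysis().fit(X, y)
print(classifier.predict([[0.95, 0.25, 0.45]]))    # expected to be labelled 'MM'
```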
A Knowledge Base for Classification of Normal Breast ³¹P MRS
Abstract: This paper describes the knowledge acquisition (KA) process by which relevant knowledge was gathered in the form of hierarchically structured If...Then inference rules. Rules about dependencies contained in the knowledge base were used to define the topology of a knowledge-based artificial neural network (KBANN).
Java is a general-purpose, popular, concurrent, object-oriented programming language. One of its key interesting features is that it is platform independent, and this is particularly important when applied in the area of embedded systems. Unfortunately, this characteristic of platform independence introduces a trade-off in the execution performance, which is especially highlighted in a resource-constrained environment. This paper introduces an alternative way to provide a dynamic compilation service for a resource-constrained environment without requiring a massive resource overhead. The design of a remote compilation scheme is presented to demonstrate the usefulness of dynamic compilation to assist Java execution in a resource-constrained environment. The paper also proposes several experiments to examine how this service, implemented over networks, affects the overall performance of Java execution in such an environment.
Implementing Policies in Programs using Labelled Transition Systems
This paper describes our current work on programming language support for policy specification and implementation. The aim of this work is to design language mechanisms that enable program behaviour to be controlled by policies, and to develop tools that implement these features as extensions of a general-purpose programming language.
We are designing an innovative decision support tool to assist radiologists in the evaluation of brain tumours. The system combines Magnetic Resonance Spectroscopy (MRS) and pattern recognition techniques to provide radiologists with additional information about brain tumours. Initial user studies involved workplace interviews, software prototyping, and multidisciplinary design discussions. We have gained several insights relevant to system design, and many important issues for future studies were raised. This paper describes these issues and how they inform ongoing user studies. The issues include: 1) the validity of findings, 2) their translation into requirements, and 3) their communication within a multidisciplinary development team.
AIM: The fractal dimension (FD) of a structure provides a measure of its complexity. This pilot study aims to determine FD values for lung cancers visualised on Computed Tomography (CT) and to assess the potential for tumour FD measurements to provide an index of tumour aggression. METHOD: Pre- and post-contrast CT images of the thorax acquired from 15 patients with lung cancers larger than 10 mm were transformed to fractal dimension images using a box-counting algorithm at various scales. A region of interest (ROI) covering the tumour locations was determined; these locations were more apparent on FD images than on the unprocessed images. The average tumour FD (FDavg) was computed and compared with the average intensity before FD processing. FD values were correlated with two markers of tumour aggression: tumour stage and tumour uptake of fluorodeoxyglucose (FDG) as determined by Positron Emission Tomography. RESULTS: For pre-contrast images, the tumour FDavg correlated with tumour stage (...
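A box-counting estimate of fractal dimension of the kind mentioned in the method can be sketched as below; the box sizes, test images and function names are illustrative assumptions rather than the study's implementation, which operated on CT images.

```python
# A hypothetical sketch of the box-counting estimate of fractal dimension
# for a binary region extracted from an image. The FD is the slope of
# log(box count) against log(1 / box size).
import numpy as np

def box_count(image: np.ndarray, box_size: int) -> int:
    """Count boxes of the given size containing at least one non-zero pixel."""
    h, w = image.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if image[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def fractal_dimension(image: np.ndarray, box_sizes=(1, 2, 4, 8, 16)) -> float:
    """Least-squares slope of log(count) against log(1/size)."""
    counts = [box_count(image, s) for s in box_sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    # A filled square has dimension close to 2; a thin line is close to 1.
    square = np.ones((64, 64))
    line = np.zeros((64, 64)); line[32, :] = 1
    print(fractal_dimension(square), fractal_dimension(line))
```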