A method for generating an index of data available from a server, including processing data on the server to access data items for a central index, the data items including network addresses and terms, compiling an index file including the data items, and transmitting the index file to the central index. The processing may include locating database query statements in the data, and the data items then include input tuples for the statements. The index is accessible from servers, and includes page entries including a program address for a program for generating a dynamic page and input tuples for submission to the program to generate the page, and search entries identifying the dynamic pages and identifying the tuples corresponding to search terms. A search engine operable on the index is able to access the search entries to identify dynamic pages corresponding to search terms of a search query, and access the page entries to generate addresses for the dynamic pages identified, the addresses being generated on the basis of the program address and the tuples.
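To make the structure concrete, here is a minimal sketch of such an index and of address generation from a program address and input tuples. All names, URLs, and the query-string encoding are illustrative assumptions, not details from the patent.

```python
from urllib.parse import urlencode

# Illustrative index structures (names and URLs are not from the patent).
# Page entries: page id -> (program address, input tuples for the program).
page_entries = {
    "p1": ("https://example.com/cgi-bin/catalog", {"category": "books", "id": "42"}),
    "p2": ("https://example.com/cgi-bin/catalog", {"category": "music", "id": "7"}),
}

# Search entries: search term -> ids of dynamic pages matching that term.
search_entries = {
    "books": {"p1"},
    "music": {"p2"},
    "catalog": {"p1", "p2"},
}

def search(query_terms):
    """Resolve search terms to addresses for the matching dynamic pages."""
    hits = set.intersection(*(search_entries.get(t, set()) for t in query_terms))
    addresses = []
    for page_id in hits:
        program_address, tuples = page_entries[page_id]
        # The address is generated from the program address and the input
        # tuples, here encoded as a conventional query string.
        addresses.append(program_address + "?" + urlencode(tuples))
    return addresses

print(search(["books"]))  # ['https://example.com/cgi-bin/catalog?category=books&id=42']
```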
Our subject is the acquisition of Natural Language (NL) by computers. NL is not, in our view, a surface expression, or epiphenomenon, of a deeper, underlying cognitive process in the human brain. It is rather fundamental to, and pervasive of, cognition itself. For this reason we think that language is not the sole preserve of linguistics, but is pivotal in all our interactions with the world, in our science, and in our thought.
International Journal of Fuzzy Systems, Aug 3, 2017
Software requirement selection aims to find an optimal set of requirements that gives the highest value for a software release while keeping the cost within the budget. However, value-related dependencies among software requirements may impact the value of an optimal set. Moreover, value-related dependencies can be of varying strengths. Hence, it is important to consider both the existence and the strengths of value-related dependencies during requirement selection. The existing selection models, however, either assume that software requirements are independent or ignore the strengths of requirement dependencies. This paper presents a cost-value optimization model that considers the impacts of value-related requirement dependencies on the value of the selected requirements (optimal set). We have exploited the algebraic structure of fuzzy graphs for modeling value-related requirement dependencies and their strengths. The validity and practicality of the work are verified through several simulations and the study of a real-world software project.
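As an illustration of the fuzzy-graph idea, the sketch below models requirements as nodes with edge membership degrees in [0, 1] for dependency strengths, and derives overall strengths by a max-min transitive closure. The data and the max-min aggregation rule are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Requirements as nodes; fuzzy edges give the strength (0..1) of a
# value-related dependency of one requirement on another. Illustrative only.
deps = {
    ("r1", "r2"): 0.8,   # r1's value depends strongly on r2
    ("r2", "r3"): 0.5,
    ("r1", "r4"): 0.3,
}
nodes = {"r1", "r2", "r3", "r4"}

def influence(deps, nodes):
    """Max-min transitive closure: overall dependency strength between
    requirements, taking the strongest chain of dependencies."""
    strength = {(u, v): deps.get((u, v), 0.0) for u in nodes for v in nodes}
    for k in nodes:            # Floyd-Warshall-style closure
        for u in nodes:
            for v in nodes:
                via_k = min(strength[(u, k)], strength[(k, v)])
                strength[(u, v)] = max(strength[(u, v)], via_k)
    return strength

closure = influence(deps, nodes)
print(closure[("r1", "r3")])   # 0.5: r1 depends on r3 through r2
```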
Considering user preferences is a determining factor in optimizing the value of a software release. This is due to the fact that user preferences for software features specify the values of those features and consequently determine the value of the release. Certain features of a software product, however, may encourage or discourage users to prefer (select or use) other features. As such, the value of a software feature could be positively or negatively influenced by other features. Such influences are known as Value-related Feature (Requirement) Dependencies. Value-related dependencies need to be considered in software release planning as they influence the value of the optimal subset of the features selected by release planning models. Hence, we have proposed considering value-related feature dependencies in software release planning through mining user preferences for software features. We have demonstrated the validity and practicality of the proposed approach by studying a real-world software project.
Geometrical illusions are a subclass of optical illusions in which geometrical characteristics of patterns, such as orientations and angles, are distorted and misperceived as the result of low- to high-level retinal/cortical processing. Modelling the detection of tilt in these illusions, and their strengths as they are perceived, is a computationally challenging task and leads to the development of techniques that match human performance. In this study, we present a predictive and quantitative approach for modeling foveal and peripheral vision of the induced tilt in the Café Wall illusion, in which parallel mortar lines between shifted rows of black and white tiles appear to converge and diverge. A bio-derived filtering model for the responses of retinal/cortical simple cells to the stimulus, using Difference of Gaussians, is utilized with an analytic processing pipeline introduced in our previous studies to quantify the angle of tilt in the model. Here we consider the visual characteristics of foveal and peripheral vision in the perceived tilt of the pattern to predict different degrees of tilt in different areas of the fovea and periphery as the eye saccades to different parts of the image. The tilt analysis results from several sampling sizes and aspect ratios, modelling variant foveal views, are drawn from our previous investigations of local tilt, and in this work we specifically investigate different configurations of the whole pattern, modelling variant Gestalt views across multiple scales, in order to provide confidence intervals around the predicted tilts. The foveal sample sets are verified and quantified using two different sampling methods. We present a precise and quantified comparison contrasting local tilt detection in the foveal sets with a global average across all of the Café Wall configurations tested in this work.
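A minimal sketch of the Difference of Gaussians filtering at the heart of such a model is given below, assuming SciPy is available; the toy stimulus omits the mortar lines, and the scales and centre/surround ratio are illustrative parameters rather than those used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_map(image, sigma, ratio=2.0):
    """Difference of Gaussians response, a simple model of a retinal
    ON-centre receptive field: centre Gaussian minus broader surround."""
    centre = gaussian_filter(image.astype(float), sigma)
    surround = gaussian_filter(image.astype(float), sigma * ratio)
    return centre - surround

# Toy Café Wall-like stimulus: shifted rows of black and white tiles
# (mortar lines omitted for brevity).
tile = np.kron([[0, 1], [1, 0]], np.ones((8, 8)))            # checkerboard tiles
stimulus = np.tile(tile, (4, 4))
stimulus[16:32, :] = np.roll(stimulus[16:32, :], 4, axis=1)  # half-tile shift

# Edge maps at multiple scales reveal the tilt cues.
for sigma in (1.0, 2.0, 4.0):
    edges = dog_edge_map(stimulus, sigma)
    print(sigma, float(edges.max()))
```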
This article presents a compression-based adaptive algorithm for Chinese Pinyin input. There are many different input methods for Chinese character text, and the phonetic Pinyin input method is the one most commonly used. Prediction by Partial Matching (PPM) is an adaptive statistical modelling technique that is widely used in the field of text compression. Compression-based approaches are able to build models very efficiently and incrementally. Experiments show that the adaptive compression-based approach for Pinyin input outperforms the modified Kneser-Ney smoothing method implemented in the SRILM language tools.
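The following sketch illustrates the adaptive flavour of such a model with an order-1 character model that updates its counts as text is converted; it stands in for the full PPM back-off scheme, and the pinyin-to-candidate table is invented for illustration.

```python
from collections import defaultdict

# Invented toy candidate table: pinyin syllable -> possible characters.
candidates = {"shi": ["是", "时", "事"], "jian": ["间", "见", "件"]}

# Adaptive order-1 model: counts of (previous char, next char), updated
# incrementally as text is entered -- the key property of PPM-style models.
bigram = defaultdict(int)

def convert(pinyin_seq, prev=""):
    out = []
    for syllable in pinyin_seq:
        chars = candidates[syllable]
        # Pick the candidate seen most often after `prev` so far;
        # ties fall back to the first (most common a priori) candidate.
        best = max(chars, key=lambda c: bigram[(prev, c)])
        bigram[(prev, best)] += 1   # adapt (a real IME updates on user confirmation)
        out.append(best)
        prev = best
    return "".join(out)

print(convert(["shi", "jian"]))  # adapts as more text is converted
```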
Both empirical and mathematical demonstrations of the importance of chance-corrected measures are discussed, and a new model of learning is proposed based on empirical psychological results on association learning. Two forms of this model are developed: the Informatron, as a chance-corrected Perceptron, and AdaBook, as a chance-corrected AdaBoost procedure. The computational results presented show that chance correction facilitates learning.
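To make "chance-corrected" concrete, the sketch below computes Bookmaker Informedness for a binary confusion matrix, a chance-corrected alternative to recall and accuracy; it illustrates the general idea of chance correction rather than the Informatron update rule itself, which the abstract does not spell out.

```python
def informedness(tp, fn, fp, tn):
    """Bookmaker Informedness = sensitivity + specificity - 1.
    0 for chance-level performance, 1 for perfect, regardless of bias."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# A biased classifier that says "positive" 90% of the time on balanced data
# achieves 90% recall but is barely better than chance:
print(informedness(tp=45, fn=5, fp=40, tn=10))   # 0.1
```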
The Loebner Prize is the first, and only regular, competition based on the Turing Test, but in order to stage the competition various modifications to the original test have been made. In particular, the Grand Prize has a controversial and as yet undefined Audio-Visual condition attached to it. This paper discusses the value of the test with and without the A/V condition, and makes a proposal about what the general nature of the A/V test should be.
It has been widely recognized that uncertainty is an inevitable aspect of the diagnosis and treatment of medical disorders. Such uncertainties, hence, need to be considered in computerized medical models. The existing medical modeling techniques, however, have mainly focused on capturing the uncertainty associated with the diagnosis of medical disorders while ignoring the uncertainty of treatments. To tackle this issue, we have proposed a fuzzy-based modeling and description technique for capturing uncertainties in treatment plans. We have further contributed a formal framework which allows for goal-oriented modeling and analysis of medical treatments.
This paper explores the tilt illusion effect in the Café Wall pattern using a classical Gaussian Receptive Field model. In this illusion, the mortar lines are misperceived as diverging or converging rather than horizontal. We examine the capability of a simple bio-plausible filtering model to recognize different degrees of tilt in the Café Wall illusion based on different characteristics of the pattern. Our study employed a Difference of Gaussians model of retinal to cortical "ON" center and/or "OFF" center receptive fields. A wide range of stimulus parameters, for example mortar width, luminance, tile contrast, and the phase of the tile displacement, has been studied for their effects on the induced tilt in the Café Wall illusion. Our model constructs an edge map representation at multiple scales that reveals the tilt cues and clues involved in the illusory perception of the Café Wall pattern. We show here that our model can not only detect the tilt in this pattern, but also allows us to predict the strength of the illusion and quantify the degree of tilt. For the first time, quantitative predictions of a model are reported for this stimulus considering different characteristics of the pattern. The results of our simulations are consistent with previous psychophysical findings across the full range of Café Wall variations tested.
The Impact of PSO-based Dimension Reduction in EEG Study
Abstract. The high-dimensional nature of EEG data, caused by the use of a large number of electrodes and long periods of task time, is one of the drawbacks in EEG studies. Evolutionary approaches are alternative methodologies to conventional dimension reduction methods, with the advantage of not requiring the entire recording sessions for operation. Particle Swarm Optimization (PSO) is an evolutionary method that achieves performance through the evaluation of several generations of possible solutions. This study investigates ...
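A minimal sketch of binary PSO for channel (dimension) selection is shown below; the swarm parameters and the fitness function are placeholder assumptions (a real study would train and evaluate a classifier on the selected channels).

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_particles, n_iters = 32, 20, 50

def fitness(mask):
    """Placeholder objective: reward an accuracy proxy while penalising
    channel count. A real study would evaluate a classifier here."""
    if mask.sum() == 0:
        return -1.0
    return rng.random() - 0.01 * mask.sum()   # stand-in objective

# Binary PSO: positions are probabilities of selecting each channel.
pos = rng.random((n_particles, n_channels))
vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_fit = np.full(n_particles, -np.inf)
gbest = pos[0].copy(); gbest_fit = -np.inf

for _ in range(n_iters):
    masks = (rng.random(pos.shape) < pos).astype(int)   # sample channel subsets
    fits = np.array([fitness(m) for m in masks])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    if fits.max() > gbest_fit:
        gbest, gbest_fit = pos[fits.argmax()].copy(), fits.max()
    # Standard velocity/position update: inertia plus two attraction terms.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)

print("selected ~", int((gbest > 0.5).sum()), "of", n_channels, "channels")
```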
The history of robotics is older than the invention and exploitation of robots. The term 'robot' came from the Czech and was first used in a play a century ago. The term 'robotics' and the ethical considerations captured by 'The Three Laws of Robotics' come from a SciFi author born a century ago. SF leads the way! Similarly, the idea of Artificial Intelligence as a thinking machine goes back to the earliest days of computing, and in this paper we follow some of the key ideas through the work of the pioneers in the field. We've come a long way since then, but are we there yet? Could we now build a conscious sentient thinking computer? What would it be like? Will it take over the world?
Software requirements selection aims to find an optimal subset of the requirements with the highest value while respecting the project constraints. But the value of a requirement may depend on the presence or absence of other requirements in the optimal subset. Such Value Dependencies, however, are imprecise and hard to capture. In this paper, we propose a method based on integer programming and fuzzy graphs to account for value dependencies and their imprecision in software requirements selection. The proposed method, referred to as Dependency-Aware Software Requirements Selection (DARS), is comprised of three components: (i) an automated technique for the identification of value dependencies from user preferences, (ii) a modeling technique based on fuzzy graphs that allows for capturing the imprecision of value dependencies, and (iii) an Integer Linear Programming (ILP) model that takes into account user preferences and the value dependencies identified from those preferences to reduce the risk of value loss in software projects. Our work is verified by studying a real-world software project. The results show that our proposed method reduces value loss in software projects and is scalable to large requirement sets.
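The cost-value selection core can be sketched by exhaustive search as below, standing in for the ILP formulation, which would normally be handed to a solver; the values, costs, budget, and the penalty for a broken value dependency are invented for illustration.

```python
from itertools import combinations

# Invented example data: requirement -> (value, cost).
reqs = {"r1": (10, 4), "r2": (6, 3), "r3": (8, 5), "r4": (4, 2)}
budget = 6
# Toy value dependency: r1 loses 5 units of value if r2 is absent.
penalty = lambda s: 5 if "r1" in s and "r2" not in s else 0

def best_subset():
    best, best_value = set(), 0
    names = list(reqs)
    for k in range(len(names) + 1):
        for combo in combinations(names, k):
            s = set(combo)
            cost = sum(reqs[r][1] for r in s)
            if cost > budget:
                continue
            value = sum(reqs[r][0] for r in s) - penalty(s)
            if value > best_value:
                best, best_value = s, value
    return best, best_value

# Without the penalty the best affordable set would be {r1, r4} (value 14);
# accounting for the dependency shifts the optimum to {r2, r4} (value 10).
print(best_subset())
```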
Promoting the level of autonomy enables a vehicle to perform long-range operations with minimal supervision. The capability of Autonomous Underwater Vehicles (AUVs) to fulfill mission objectives is directly influenced by the performance of the route planning and task assignment system. This paper proposes an efficient task-assign route planning model in a semi-dynamic operation network, where the locations of some waypoints change over time within a bounded area. Two popular meta-heuristic algorithms, biogeography-based optimization (BBO) and particle swarm optimization (PSO), are adopted to provide real-time optimal solutions for task sequence selection and mission time management. To examine the performance of the method in the context of mission productivity, mission time management and vehicle safety, a series of Monte Carlo simulation trials are undertaken. The simulation results show that the proposed method is reliable and robust, particularly in dealing with uncertainties and changes in the operation network topology; as a result, it can significantly enhance the vehicle's level of autonomy by relying on its reactive nature and its capability of providing fast feasible solutions.
This paper introduces a parallel sorting algorithm based on QuickSort and having an n-input, n-processor time complexity of O(log n), exhibited using a CRCW PRAM model. Although existing algorithms of similar complexity are known, this approach leads to a family of algorithms with a considerably lower constant. It is also significant in its close relationship to a standard sequential algorithm.

1.1 Sorting

Knuth (1973, pp. 2-3) notes that sorting is estimated to take up 25% of the world's computer time. With the advent of the microcomputer this may well have changed, but it is nonetheless a task of both practical and theoretical interest. Sorting, in the sense of bringing together related things, has now been subsumed by the more specific task of ordering, and has spawned an enormous number of serial sorting algorithms. Whilst faster algorithms are known for specific cases, in general sorting requires O(n log n) comparisons, and hence time. The algorithms meeting this expected complexity are based on ideas of either partitioning or merging, usually with the aid of explicit or implicit list and/or tree data structures. These are prototypically represented by the algorithms QuickSort and MergeSort (Knuth, 1973). Logically these algorithms require two phases: placement into the tree, and extraction from the tree. In some cases, one or other of these phases can be left implicit. As the tree has logarithmic depth, and each element needs to be placed and/or extracted, the O(n log n) complexity follows immediately.
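For reference, here is a minimal sequential QuickSort showing the partitioning step the parallel algorithm builds on; in the CRCW PRAM setting, which the paper develops, each partition's comparisons run in parallel.

```python
def quicksort(xs):
    """Sequential QuickSort: partition around a pivot, recurse on the parts.
    In the PRAM setting each element's comparison with the pivot is
    independent, so a whole partition step can run in O(1) parallel time,
    giving O(log n) expected depth overall."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```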
There are several problems encountered in Chinese language processing, as Chinese is written without word delimiters. The difficulty of defining a word makes it even harder. This paper explores the possibility of automatically segmenting Chinese character sequences into words and classifying these words through distributional analysis, in contrast with the usual approaches that depend on dictionaries.
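A minimal sketch of the distributional idea follows: score the cohesion of adjacent characters by pointwise mutual information over an unsegmented corpus and split where cohesion is low. The corpus and threshold are invented for illustration and do not reproduce the paper's method.

```python
import math
from collections import Counter

# Toy corpus of unsegmented character text (invented for illustration).
corpus = "我喜欢学习中文我学习中文很开心学习使我开心"

unigrams = Counter(corpus)
bigrams = Counter(corpus[i:i+2] for i in range(len(corpus) - 1))
total_uni, total_bi = sum(unigrams.values()), sum(bigrams.values())

def cohesion(a, b):
    """Pointwise mutual information of adjacent characters: high values
    suggest the pair belongs to one word, low values suggest a boundary."""
    p_ab = bigrams[a + b] / total_bi
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / ((unigrams[a] / total_uni) * (unigrams[b] / total_uni)))

def segment(text, threshold=0.5):
    words, word = [], text[0]
    for a, b in zip(text, text[1:]):
        if cohesion(a, b) >= threshold:
            word += b                        # cohesive pair: same word
        else:
            words.append(word); word = b     # low cohesion: word boundary
    words.append(word)
    return words

print(segment("中文开心"))  # ['中文', '开心']
```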
Journal of Intelligent & Robotic Systems, 2018
This paper presents a hybrid route-path planning model for an Autonomous Underwater Vehicle's task assignment and management while the AUV operates in variable littoral waters. Several prioritized tasks distributed over a large-scale terrain are defined first; then, considering the limitations on mission time, the vehicle's battery, and the uncertainty and variability of the underlying operating field, appropriate mission timing and energy management is undertaken. The proposed objective is fulfilled by incorporating a route-planner that is in charge of prioritizing the list of available tasks according to the available battery, and a path-planner that acts on a smaller scale to provide the vehicle's safe deployment against sudden environmental changes. The synchronous process of task-assignment routing and path planning is simulated using a specific composition of Differential Evolution and Firefly Optimization (DEFO) algorithms. The simulation results indicate that the proposed hybrid model offers efficient performance in terms of completing the maximum number of assigned tasks while expending the minimum energy, aided by use of the favorable current flow, and controlling the associated mission time. A Monte Carlo test is also performed for further analysis. The corresponding results show the significant robustness of the model against uncertainties of the operating field and variations in mission conditions.
2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), 2015
This paper presents a solution to the joint problem of large-scale route planning and task assignment for Autonomous Underwater Vehicles (AUVs). Given a set of constraints (e.g., time) and a set of task priority values, the goal is to find the optimal route for an underwater mission that maximizes the sum of the priorities and minimizes the total risk percentage while meeting the given constraints. Making use of the heuristic nature of genetic and swarm intelligence algorithms in solving NP-hard graph problems, Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) are employed to find the optimum solution, where each individual in the population is a candidate solution (route). To evaluate the robustness of the proposed methods, the performance of both the PSO and GA algorithms is examined and compared over a number of Monte Carlo runs. Simulation results suggest that the routes generated by both algorithms are feasible and reliable enough to be applicable for underwater motion planning. However, the GA-based route planner produces superior results compared to the PSO-based route planner.
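As an illustration of encoding each individual as a candidate route, the sketch below runs a small GA over task orderings with a priority-sum fitness under a time budget; the instance data, operators, and parameters are illustrative assumptions, not those of the paper.

```python
import random

random.seed(1)

# Invented toy instance: per-task priorities, durations, and a time budget.
priorities = [5, 3, 8, 2, 7, 4]
durations  = [2, 1, 3, 1, 2, 2]
time_budget = 7

def fitness(route):
    """Sum of priorities of the tasks completed within the time budget,
    visiting tasks in the route's order."""
    t, value = 0, 0
    for task in route:
        t += durations[task]
        if t > time_budget:
            break
        value += priorities[task]
    return value

def crossover(a, b):
    """Simplified order crossover: keep a slice of parent a, then append
    the remaining tasks in parent b's order (result is a permutation)."""
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[i:j] + [t for t in b if t not in a[i:j]]

pop = [random.sample(range(6), 6) for _ in range(30)]   # individuals = routes
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for c in children:                     # mutation: swap two tasks
        if random.random() < 0.2:
            i, j = random.sample(range(6), 2)
            c[i], c[j] = c[j], c[i]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```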