Prof Ian Cloete led the analytics and business intelligence functions for strategic planning and decision-making at Stellenbosch University (SU), performing the typical functions of a Chief Data Officer. Please see my LinkedIn profile.
Uploads
Papers by I. Cloete
Modular neural networks subroutines for knowledge extraction
Current research in modular neural networks (MNNs) has essentially two aims: first, to develop systematic methods for constructing neural networks of high complexity, and second, to provide building blocks for hybrid symbolic and connectionist knowledge-based implementations. The principal benefit of MNNs is that they combine the desirable features of different neural network architectures while compensating for their individual weaknesses. This paper reviews several models of modular neural networks and describes a method for constructing modular neural network subroutines that facilitates easier knowledge extraction. We explore this feature and further consider the generalization abilities of network subroutines compared with conventional neural network architectures.
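The idea of composing interpretable subroutines can be sketched as follows. This is a hypothetical illustration, not the paper's construction: each "subroutine" is a fixed threshold unit standing in for a trained sub-network, and XOR is assembled from them. Because each module computes a recognizable Boolean function, rules can be read off module by module.

```python
# Hypothetical sketch: composing "subroutine" networks into a modular net.
# Weights and the XOR decomposition are illustrative, not from the paper.

def perceptron(weights, bias):
    """Return a 0/1 threshold unit with the given fixed weights."""
    def unit(inputs):
        s = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if s > 0 else 0
    return unit

# Each subroutine computes an interpretable Boolean function, which is
# what makes knowledge extraction from the modular net straightforward.
or_net   = perceptron([1.0, 1.0], -0.5)   # x1 OR x2
nand_net = perceptron([-1.0, -1.0], 1.5)  # NOT (x1 AND x2)
and_net  = perceptron([1.0, 1.0], -1.5)   # x1 AND x2

def xor_modular(x):
    # XOR built from subroutines: AND(OR(x), NAND(x))
    return and_net([or_net(x), nand_net(x)])

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_modular(x))  # prints 0, 1, 1, 0
```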
Reduction of symbolic rules from artificial neural networks using sensitivity analysis
This paper shows how sensitivity analysis identifies and eliminates redundant conditions from the rules extracted from trained neural networks by eliminating irrelevant inputs. This leads to a reduction in the number and size of the rules. The reduced rule set accurately and minimally reflects the classification problems presented. The elimination of redundant input units also significantly reduces the combinatorics of ...
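The core idea can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the "network" is a stand-in function, sensitivities are estimated by central differences, and inputs whose perturbation barely changes the output are marked irrelevant and would be dropped from extracted rules.

```python
# Minimal sketch (not the paper's algorithm) of using input sensitivity
# to flag redundant rule conditions.
import math

def net(x):
    # Stand-in for a trained network: the output ignores x[2] entirely,
    # so x[2] would be a redundant condition in any extracted rule.
    return math.tanh(2.0 * x[0] - 1.5 * x[1])

def input_sensitivities(f, x, eps=1e-4):
    """Central-difference estimate of |df/dx_i| at the point x."""
    sens = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        sens.append(abs(f(hi) - f(lo)) / (2 * eps))
    return sens

s = input_sensitivities(net, [0.3, 0.2, 0.9])
relevant = [i for i, v in enumerate(s) if v > 1e-3]
print(relevant)  # x[2] is dropped; only inputs 0 and 1 remain
```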
Variance analysis of sensitivity information for pruning multilayer feedforward neural networks
This paper presents an algorithm for pruning feedforward neural network architectures using sensitivity analysis. Sensitivity analysis is used to quantify the relevance of input and hidden units. A new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune. Results are presented to show that the pruning algorithm correctly prunes irrelevant input and hidden ...
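A variance-based pruning heuristic of this flavor can be sketched as follows. The sensitivity values and thresholds are illustrative assumptions, and the paper's actual statistical test may differ: per-pattern sensitivities are collected for each unit, and a unit whose sensitivities are both small and nearly constant across patterns is pruned.

```python
# Hedged sketch of a variance-based pruning heuristic; thresholds are
# illustrative, not the paper's.
import random, statistics

random.seed(0)

def sensitivity(x):
    # Per-pattern |d(output)/d(unit)| for three units of a stand-in
    # network: unit 0 matters a lot, unit 1 a little, unit 2 not at all.
    return [2.0 + 0.5 * x, 0.3 * x, 1e-6]

samples = [sensitivity(random.random()) for _ in range(100)]
pruned = []
for unit in range(3):
    vals = [s[unit] for s in samples]
    mean, var = statistics.mean(vals), statistics.variance(vals)
    if mean < 0.05 and var < 0.01:  # small and stable -> irrelevant
        pruned.append(unit)
    print(f"unit {unit}: mean={mean:.4f} variance={var:.6f}")
print("pruned units:", pruned)  # only the irrelevant unit 2 is pruned
```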
A new incremental learning algorithm for function approximation problems is presented, where the neural network learner dynamically selects the most informative patterns from a candidate training set during training. The incremental learning algorithm uses its current knowledge about the function to be approximated, in the form of output sensitivity information, to incrementally grow the training set with patterns that have ...
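The selection loop can be illustrated with a toy proxy. This is an assumption-laden sketch, not the paper's criterion: output sensitivity is approximated by the derivative magnitude of a toy model, and the candidate with the largest sensitivity is repeatedly moved into the training set.

```python
# Illustrative sketch of sensitivity-driven pattern selection. The
# sensitivity proxy below (|f'(x)| of a toy model) is an assumption
# for demonstration only.
import math

def output_sensitivity(x):
    # |d/dx tanh(3x)| -- largest near x = 0, where the target function
    # changes fastest and patterns are most informative.
    return 3.0 / math.cosh(3.0 * x) ** 2

candidates = [-2.0, -1.0, -0.1, 0.05, 1.5]
training = []
for _ in range(2):  # grow the set by the 2 most informative patterns
    best = max(candidates, key=output_sensitivity)
    candidates.remove(best)
    training.append(best)
print(training)  # the patterns nearest x = 0 are selected first
```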
A study of the difference between partial derivative and stochastic neural network sensitivity analysis for applications in supervised pattern classification problems
This work provides a brief roadmap of the development of neural network sensitivity analysis, from the 1960s to the present. The two main streams of sensitivity measures, partial derivative and stochastic, are compared. The partial derivative sensitivity measure (PD-SM) finds the rate of change of the network output with respect to parameter changes, while the stochastic sensitivity measure (ST-SM) finds the ...
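The contrast between the two families can be sketched on a toy function. These are generic textbook forms, not the specific measures defined in the paper: PD-SM as the local rate of change at a point, ST-SM as the RMS output deviation under random input perturbation.

```python
# Sketch contrasting the two families of sensitivity measures.
# Generic forms only; not the paper's exact definitions.
import math, random

random.seed(1)
f = lambda x: math.tanh(x)

def pd_sm(f, x, eps=1e-5):
    """Partial-derivative sensitivity: |df/dx| via central difference."""
    return abs(f(x + eps) - f(x - eps)) / (2 * eps)

def st_sm(f, x, sigma=0.1, n=10000):
    """Stochastic sensitivity: RMS output deviation under input noise."""
    devs = [(f(x + random.gauss(0, sigma)) - f(x)) ** 2 for _ in range(n)]
    return math.sqrt(sum(devs) / n)

x = 0.5
print("PD-SM =", round(pd_sm(f, x), 3))  # ~sech^2(0.5), about 0.79
print("ST-SM =", round(st_sm(f, x), 3))  # ~sigma * |f'(x)| for small sigma
```

For small noise, ST-SM reduces to roughly sigma times the PD-SM value, which is why the two streams agree locally but diverge for large perturbations.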
Input sample selection for RBF neural network classification problems using sensitivity measure
Large data sets containing irrelevant or redundant input samples reduce the performance of learning and increase storage and labeling costs. This work compares several sample selection and active learning techniques and proposes a novel sample selection method based on the stochastic radial basis function neural network sensitivity measure (SM). The experimental results for the UCI IRIS data set show that ...
Automatic scaling using gamma learning for feedforward neural networks
Standard error back-propagation requires output data that is scaled to lie within the active area of the activation function. We show that normalizing data to conform to this requirement is not only a time-consuming process, but can also introduce inaccuracies in modelling of the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates ...
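For context, the scaling step the abstract refers to can be sketched as follows; this shows the preprocessing that standard back-propagation with sigmoid outputs requires (mapping targets into an assumed active range of (0.1, 0.9) and back), which is the step the gamma learning rule is designed to remove. The gamma rule itself is not reproduced here.

```python
# The min-max scaling that standard backprop requires; the active range
# (0.1, 0.9) is a common convention, assumed here for illustration.
def scale(y, lo, hi, a=0.1, b=0.9):
    """Map a raw target y in [lo, hi] into the activation's active range."""
    return a + (b - a) * (y - lo) / (hi - lo)

def unscale(t, lo, hi, a=0.1, b=0.9):
    """Invert the mapping to recover a prediction on the original scale."""
    return lo + (hi - lo) * (t - a) / (b - a)

targets = [12.0, 50.0, 98.0]
lo, hi = min(targets), max(targets)
scaled = [scale(y, lo, hi) for y in targets]
print(scaled)                      # endpoints map to 0.1 and 0.9
print(unscale(scaled[1], lo, hi))  # 50.0 recovered
```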
A sensitivity analysis algorithm for pruning feedforward neural networks
A pruning algorithm based on sensitivity analysis is presented in this paper. We show that the sensitivity analysis technique efficiently prunes both input and hidden layers. Results of the application of the pruning algorithm to various N-bit parity problems agree with well-known published results.
Determining the significance of input parameters using sensitivity analysis
... the best known real-world problems are in the medical (cancer, diabetes, thyroid), agricultural (soybean) and ... We also compare the results from sensitivity analysis to those of C4.5. We conclude in ... In most real-world applications it is difficult to identify all input variables that have an ...
Heuristic functions for learning fuzzy conjunctive rules
Systems, Man and Cybernetics, 2004 …, 2004
When learning classification rules, many possible antecedents for a rule exist. These antecedents are usually in the form of a conjunction, and need to be evaluated for their classification performance on a training set of instances. We present an algorithm for induction of ...
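Scoring candidate conjunctive antecedents can be sketched as follows. This uses Laplace-corrected accuracy, one common heuristic in rule induction; the paper studies several heuristics, and in the fuzzy setting graded membership degrees would replace the crisp attribute tests used here.

```python
# Illustrative scoring of conjunctive antecedents on training instances.
# Laplace accuracy is a stand-in; the paper's heuristics may differ.
instances = [
    ({"color": "red", "size": "big"}, True),
    ({"color": "red", "size": "small"}, True),
    ({"color": "blue", "size": "big"}, False),
    ({"color": "red", "size": "big"}, False),
]

def laplace(antecedent):
    """Laplace-corrected accuracy of 'IF antecedent THEN positive'."""
    covered = [c for x, c in instances
               if all(x[a] == v for a, v in antecedent.items())]
    pos = sum(covered)
    return (pos + 1) / (len(covered) + 2)

for ant in [{"color": "red"}, {"color": "red", "size": "small"}]:
    print(ant, round(laplace(ant), 3))
```

The correction penalizes antecedents that cover very few instances, so a conjunction must earn its extra conditions with genuinely purer coverage.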
Prior knowledge for fuzzy knowledge-based artificial neural networks from fuzzy set covering
Neural Networks, 2004. Proceedings. …, 2004
Knowledge Based Neurocomputing (KBN) concerns the encoding, extraction, and refinement of knowledge in a neurocomputing paradigm [1]. Prior knowledge in a symbolic form, i.e. a domain theory, can serve to initialize an artificial neural network (ANN) so that this knowledge can ...
Evaluation function guided search for fuzzy set covering
Fuzzy Systems, 2004. Proceedings. 2004 …, 2004
Fuzzy set covering was introduced as an extended counterpart of crisp machine learning methods using a separate-and-conquer approach to concept learning. This approach follows a general-to-specific search through a space of partially ordered conjunctive descriptions. ...
A machine learning framework for fuzzy set covering algorithms
Systems, Man and Cybernetics, 2004 …, 2004
Many machine learning algorithms for concept learning have been developed using description languages based on propositional logic. In this paper we show how to extend the so-called set covering approach to learn classification rules based on fuzzy sets ...
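The crisp separate-and-conquer strategy that these fuzzy set covering papers extend can be sketched in its simplest form. This is a toy illustration under stated assumptions: rule search is an exhaustive scan over single attribute-value conditions, and only rules covering no negatives are allowed; the fuzzy extension would replace crisp coverage with graded membership.

```python
# Minimal crisp separate-and-conquer loop: learn a rule covering some
# positives, remove the covered positives, repeat. Exhaustive search over
# single conditions is for illustration only.
instances = [
    ({"shape": "round", "color": "red"}, True),
    ({"shape": "round", "color": "green"}, True),
    ({"shape": "square", "color": "red"}, True),
    ({"shape": "square", "color": "green"}, False),
]

def covers(cond, x):
    attr, val = cond
    return x[attr] == val

def best_rule(pos, neg):
    """Pick the condition covering the most positives and no negatives."""
    conds = sorted({(a, v) for x, _ in instances for a, v in x.items()})
    pure = [c for c in conds if not any(covers(c, x) for x in neg)]
    return max(pure, key=lambda c: sum(covers(c, x) for x in pos))

pos = [x for x, c in instances if c]
neg = [x for x, c in instances if not c]
rules = []
while pos:  # conquer a rule, then separate the positives it covers
    rule = best_rule(pos, neg)
    rules.append(rule)
    pos = [x for x in pos if not covers(rule, x)]
print(rules)
```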