Genetic and Evolutionary Computation Conference, Jul 10, 2000
This paper introduces a new collective learning genetic algorithm (CLGA) which employs individual learning to do intelligent recombination based on a cooperative exchange of knowledge between interacting chromosomes. Each individual in the population observes a unique set of features in the chromosomes with which it interacts in order to explicitly estimate the average fitnesses of schemata in the population, and to use that information to guide recombination. The stages of evolution are still controlled by a global algorithm, but much of the control in the CLGA is distributed among chromosomes that are individually responsible for recombination, mutation and selection. The effectiveness of the approach is demonstrated on random problems generated by an NK-Landscape problem generator. Preliminary results suggest that the CLGA may be especially effective for searching for solutions to highly epistatic, non-separable problems, a class of problems traditionally difficult for regular GAs.
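The NK-Landscape generator mentioned above is a standard tunable benchmark in which each locus's fitness contribution depends on itself and K other loci, so K controls epistasis. A minimal sketch follows; the adjacent (circular) neighborhood and parameter names are illustrative assumptions, not the generator used in the paper:

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Build a random NK landscape: each bit's fitness contribution
    depends on itself and k other (here: adjacent, wrapping) bits."""
    rng = random.Random(seed)
    # One contribution table per locus, indexed by the 2**(k+1)
    # possible settings of the locus and its k neighbours.
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(bits):
        total = 0.0
        for i in range(n):
            # Neighbourhood = locus i plus the next k loci (circular).
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | bits[(i + j) % n]
            total += tables[i][idx]
        return total / n  # average contribution, in [0, 1]

    return fitness

f = make_nk_landscape(n=8, k=2, seed=42)
print(f([1, 0, 1, 1, 0, 0, 1, 0]))
```

With k = 0 the problem is fully separable; raising k toward n - 1 makes loci increasingly interdependent, which is the "highly epistatic, non-separable" regime the abstract refers to.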
This paper presents a novel technique for improving face recognition performance by predicting system failure, and, if necessary, perturbing eye coordinate inputs and repredicting failure as a means of selecting the optimal perturbation for correct classification. This relies on a method that can accurately identify patterns that lead to more accurate classification, without modifying the classification algorithm itself. To this end, a neural network is used to learn 'good' and 'bad' wavelet transforms of similarity score distributions from an analysis of the gallery. In production, face images with a high likelihood of having been incorrectly matched are reprocessed using perturbed eye coordinate inputs, and the best results are used to "correct" the initial results. The overall approach suggests a more general strategy of using input perturbations to increase classifier performance. Results for both commercial and research face-based biometrics are presented using both simulated and real data. The statistically significant results show the strong potential for this approach to improve system performance, especially with uncooperative subjects.
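The perturb-and-repredict loop can be sketched as follows; the matcher, failure predictor, and perturbation grid here are placeholders, not the paper's actual components:

```python
def recognize_with_perturbation(image, eyes, match, failure_score,
                                deltas=(-2, 0, 2)):
    """Re-run matching under small eye-coordinate perturbations and keep
    the result the failure predictor trusts most.

    match(image, eyes)     -> a match result for the given eye coordinates
    failure_score(result)  -> predicted likelihood of misclassification
                              (lower is better)
    """
    (lx, ly), (rx, ry) = eyes
    best_result, best_score = None, float("inf")
    for dx in deltas:
        for dy in deltas:
            perturbed = ((lx + dx, ly + dy), (rx + dx, ry + dy))
            result = match(image, perturbed)
            score = failure_score(result)
            if score < best_score:  # keep the least failure-prone result
                best_result, best_score = result, score
    return best_result
```

In the paper the failure score comes from a neural network trained on wavelet transforms of similarity score distributions; here it is simply an injected callable, and only images first flagged as likely failures would enter this loop.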
This paper evaluates the impact of eye localization on face recognition accuracy. To investigate its importance, we present an eye perturbation sensitivity analysis, as well as empirical evidence that reinforces the notion that eye localization plays a key role in the accuracy of face recognition systems. In particular, correct measurement of eye separation is shown to be more important than correct eye location, highlighting the critical role of eye separation in the scaling and normalization of face images. Results suggest that significant gains in recognition accuracy may be achieved by focussing more effort on the eye localization stage of the face recognition process.
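Eye separation drives the scale factor in typical geometric normalization of a face image, which is one reason an error in separation is more damaging than an equal shift of both eyes (the latter only translates the crop). A hedged sketch of the usual computation; the target separation value is illustrative, not taken from the paper:

```python
import math

def normalization_params(left_eye, right_eye, target_separation=60.0):
    """Derive the scale and in-plane rotation used to normalize a face
    image from its two detected eye centres (x, y).

    An error in eye *separation* changes the scale of the entire crop;
    shifting both eyes by the same offset leaves scale and angle intact.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    separation = math.hypot(dx, dy)
    scale = target_separation / separation    # resampling factor
    angle = math.degrees(math.atan2(dy, dx))  # in-plane roll to remove
    return scale, angle

scale, angle = normalization_params((100, 120), (160, 120))
```

Because every downstream pixel comparison happens in the rescaled frame, a few pixels of separation error perturb the whole representation, consistent with the sensitivity result above.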
Recent effort has been expended toward the development of a methodology for semantic conformance testing of standardized biometric interchange records. Specifically, the primary motivation is to evaluate the degree to which a generated compact interchange record such as a minutia template is a faithful representation of the original digital representation of the biometric characteristic (i.e. image of a finger pattern). In and of itself, such an evaluation would seem to have intrinsic value, as it is almost obvious that this would be of paramount importance to fingerprint minutia extractors. However, in this paper, we provide empirical data that suggests some caution in coming to such a conclusion. We also propose several other approaches to evaluating minutia extractors that might augment their characterization for the purpose of comparison and evaluation.
A reading-based CAPTCHA, called 'ScatterType,' designed to resist character-segmentation attacks, is described. Its challenges are pseudorandomly synthesized images of text strings rendered in machine-print typefaces: within each image, characters are fragmented using horizontal and vertical cuts, and the fragments are scattered by vertical and horizontal displacements. This scattering is designed to defeat all methods known to us for automatic segmentation into characters. As in the BaffleText CAPTCHA, English-like but unspellable text-strings are used to defend against known-dictionary attacks. In contrast to the PessimalPrint and BaffleText CAPTCHAs (and others), no physics-based image degradations, occlusions, or extraneous patterns are employed. We report preliminary results from a human legibility trial with 57 volunteers that yielded 4275 CAPTCHA challenges and responses. ScatterType human legibility remains remarkably high even on extremely degraded cases. We speculate that this is due to Gestalt perception abilities assisted by style-specific (here, typeface-specific) consistency among primitive shape features of character fragments. Although recent efforts to automate style-consistent perceptual skills have reported progress, the best known methods do not yet pose a threat to ScatterType. The experimental data also show that subjective rating of difficulty is strongly (and usefully) correlated with illegibility. In addition, we present early insights emerging from these data as we explore the ScatterType design space (choice of typefaces, 'words', cut positioning, and displacements) with the goal of locating regimes in which ScatterType challenges remain comfortably legible to almost all people but strongly resist machine-vision methods for automatic segmentation into characters.
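The cut-and-scatter operation can be illustrated on a toy glyph bitmap. The single vertical cut and the displacement range below are arbitrary illustrative choices, not the published ScatterType parameters:

```python
import random

def scatter(glyph, cut_col, max_shift=2, seed=0):
    """Split a glyph bitmap (list of rows of 0/1) at one vertical cut and
    displace each fragment vertically by a random amount, so that no clean
    horizontal baseline or column profile survives for a segmenter to use."""
    rng = random.Random(seed)
    h, w = len(glyph), len(glyph[0])
    out_h = h + 2 * max_shift            # padded canvas absorbs any shift
    out = [[0] * w for _ in range(out_h)]
    for frag_cols in (range(0, cut_col), range(cut_col, w)):
        shift = rng.randint(-max_shift, max_shift)
        for r in range(h):
            for c in frag_cols:
                out[r + max_shift + shift][c] = glyph[r][c]
    return out
```

Because the fragments come from disjoint column ranges, every foreground pixel of the original glyph survives exactly once; only its vertical position changes, which is what preserves human legibility while breaking column-based segmentation heuristics.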
IEEE Engineering in Medicine and Biology Magazine, 1996
Fluorescence in situ hybridization (FISH) is a rapidly expanding imaging technique in medical research and clinical diagnosis. Both researchers and clinicians find it helpful to employ quantitative digital imaging techniques with FISH images. This technique is of particular interest for multi-probe mixtures and for the automated analysis of large numbers of specimens. In the preparation of FISH specimens, multiple probes, each tagged with a different fluorophore, are often used in combination. This permits simultaneous visualization of several different molecular components of the cell. Usually, the relative positions of these components within the specimen are of scientific or clinical interest. The authors discuss these techniques and their applications. FISH dot counting is increasingly used in research and clinical studies. Research procedures and clinical tests using FISH almost certainly have an increasingly significant role to play in the future of biology and medicine. In much the same way as cytogenetics has adopted digital imaging, the techniques described here, and similar ones, will become a routine part of research and clinical practice as the use of FISH techniques expands. As in radiology, one can expect digital image processing to become an indispensable part of the activity.
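At its simplest, FISH dot counting reduces to counting connected bright regions in a thresholded fluorophore channel. A minimal sketch; the global threshold and 4-connectivity are illustrative simplifications of real quantitative FISH pipelines:

```python
def count_dots(image, threshold):
    """Count 4-connected components of pixels above `threshold` in a 2-D
    grayscale image (list of rows) - a toy stand-in for the dot-counting
    step in quantitative FISH analysis."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    dots = 0
    for r in range(h):
        for c in range(w):
            if image[r][c] > threshold and not seen[r][c]:
                dots += 1
                stack = [(r, c)]          # flood-fill one bright dot
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and \
                       image[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return dots
```

Production systems add focus stacking, background correction, and rules for splitting touching dots; the point here is only that the per-channel count is a connected-components problem, which is what makes it automatable across large numbers of specimens.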
A new paradigm for genetic search referred to as the Collective Learning Genetic Algorithm (CLGA) has been demonstrated for combinatorial optimization problems which utilizes genotypic learning to do recombination based on a cooperative exchange of knowledge between interacting chromosomes. Recent evidence suggests that the success of the CLGA is not due to a capacity to do linkage learning, but due to the CLGA's high resistance to convergence and its ability to modify its recombinative behavior based on the consistency of the information in its environment, specifically, the observed fitness landscape. By analyzing the structure of the evolving individuals, a problem-difficulty metric is extracted a posteriori and then plotted for various types of example problems. This paper presents results that show that the CLGA chooses a search strategy appropriate to the fitness landscape induced by the CLGA itself. This is reflected in an empirical measurement of problem difficulty that is a natural byproduct of CLGA search.
* FIFO here means that the chromosome that hasn't been repeated in the longest time is replaced first.
2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010
Proceedings of the 2003 ACM SIGMM workshop on Biometrics methods and applications - WBMA '03, 2003
In this paper we will address several architectural decisions in defining a software control architecture for mobile robots. Our system is a collection of control primitives that enables the development of simulations or control algorithms for autonomous agents. Its computational capabilities are determined by an object-oriented constraint-based architecture. We discuss how high-level knowledge, skills, and goal-driven and reactive behavior are integrated within such an architecture. Our goal is to design a framework that enables the merging of classic and reactive implementation ideas. We will show that each such type of control can be implemented in our system. The issues of task decomposition and granularity are given special attention, as they lie at the basis of our architecture. We discuss two learning methods supported by our system. The first is based on environment exploration, while the second copes with skill acquisition. Our robot, cyclops, is a LEGO mini-robot based on th...
Abstract. Fitness landscape complexity in the context of evolutionary algorithms can be considered to be a relative term due to the complex interaction between search strategy, problem difficulty and problem representation. A new paradigm for genetic search referred to as the Collective Learning Genetic Algorithm (CLGA) has been demonstrated for combinatorial optimization problems which utilizes genotypic learning to do recombination based on a cooperative exchange of knowledge (instead of symbols) between interacting chromosomes. There is evidence to suggest that the CLGA is able to modify its recombinative behavior based on the consistency of the information in its environment, specifically, the observed fitness landscape. By analyzing the structure of the evolving individuals, a landscape-complexity metric is extracted a posteriori and then plotted for various types of example problems. This paper presents preliminary results that show that the CLGA appears to adapt its search st...
Intelligent Recombination Using Individual Learning in a Collective Learning Genetic Algorithm
Proc. of the 6th International Conference on Artificial Intelligence Applications,
Papers by Terry Riopka