Augmented Iterations
Abstract
The principle of Augmented Iterations is to create shapes of progressively higher complexity through fast neuronal selection among several possible evolving designs. Such a process is made possible by a brain signal known as the P300, which appears when a user perceives a rare and relevant stimulus and can be used in intricate pattern-recognition and human-computation systems. We aim to use this P300 signal to identify the (re)cognition of shapes or designs that a user finds almost instantaneously relevant and noticeable when exposed to a rapid visual flow of variations of such shapes or designs. Using evolutionary algorithms, the shapes identified as triggering a P300 in the user's EEG signals are selected and combined to give rise to geometrical aggregations of higher complexity. These new shapes replace the previous ones in the rapid flow of variations presented to the user, hence iterating the evolutionary design.
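The selection-and-recombination loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the P300 detector, the parameter encoding of shapes, and the crossover operator are all simplified stand-ins.

```python
import random

# Hypothetical sketch of the Augmented Iterations loop: a "shape" is
# reduced to a parameter vector, and detect_p300 is a stand-in for a
# real-time single-trial EEG classifier (here it simply fires when the
# presented shape is close to an assumed preferred design).

random.seed(0)

TARGET = [0.8, 0.2, 0.5]  # assumed design the user implicitly "wants"

def detect_p300(shape):
    # Stand-in for P300 detection during the rapid visual flow.
    return sum((a - b) ** 2 for a, b in zip(shape, TARGET)) < 0.1

def crossover(a, b):
    # Combine two selected shapes into a higher-complexity aggregate
    # (here: per-gene average plus a small mutation).
    return [(x + y) / 2 + random.uniform(-0.05, 0.05) for x, y in zip(a, b)]

population = [[random.random() for _ in range(3)] for _ in range(20)]
for generation in range(10):
    selected = [s for s in population if detect_p300(s)]  # RSVP flow
    if len(selected) >= 2:
        population = [crossover(random.choice(selected), random.choice(selected))
                      for _ in range(len(population))]  # replace previous shapes
```

In a real system the `detect_p300` stub would be replaced by a classifier operating on EEG epochs time-locked to each stimulus onset.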
Related papers
Artificial Intelligence XXXVI, 2019
2009
We report the design and performance of a brain-computer interface (BCI) system for real-time single-trial binary classification of viewed images based on participant-specific dynamic brain response signatures in high-density (128-channel) electroencephalographic (EEG) data acquired during a rapid serial visual presentation (RSVP) task. We propose a BCI system for evolving images in real time based on subject feedback produced by EEG. The goal of this system is to produce a picture best resembling a subject's 'imagined' image. The system evolves images using Compositional Pattern Producing Networks (CPPNs) via the NeuroEvolution of Augmenting Topologies (NEAT) genetic algorithm. Fitness values for NEAT-based evolution are derived from a real-time EEG classifier as images are presented using the RSVP paradigm.
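The CPPN-plus-EEG-fitness idea can be illustrated with a toy sketch. This is not the NEAT/CPPN implementation the abstract refers to: the "CPPN" below is a fixed two-function composition rather than an evolved topology, and the EEG classifier is a stub.

```python
import math
import random

# Toy illustration: a minimal CPPN-like function maps pixel coordinates
# to intensities, and a stub "EEG classifier" supplies the fitness that
# NEAT-style evolution would normally receive from the RSVP paradigm.

random.seed(1)
FUNCS = [math.sin, math.tanh, abs]

def make_cppn():
    f, g = random.choice(FUNCS), random.choice(FUNCS)
    w = random.uniform(-2, 2)
    return lambda x, y: g(w * f(x) + f(y))  # compositional pattern

def render(cppn, size=8):
    return [[cppn(i / size, j / size) for j in range(size)] for i in range(size)]

def eeg_fitness(image):
    # Stand-in for the real-time EEG classifier score; here just the
    # mean intensity of the rendered image.
    return sum(map(sum, image)) / (len(image) * len(image[0]))

cppns = [make_cppn() for _ in range(10)]
scores = [eeg_fitness(render(c)) for c in cppns]
best = cppns[max(range(10), key=scores.__getitem__)]
```

In the actual system, each rendered image would be flashed in the RSVP stream and its fitness derived from the classifier's response to the viewer's EEG.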
International Journal of Architectural Computing, 2019
This article focuses on abstracting and generalising a well-studied paradigm in visual, event-related-potential-based brain–computer interfaces, the spelling of characters forming words, into the visually encoded discrimination of shape features forming design aggregates. It identifies typical technologies in neuroscience and neuropsychology of high interest for integrating fast cognitive responses into generative design, and proposes the machine-learning model of an ensemble of linear classifiers to tackle the challenging features that electroencephalography data carry. It then presents experiments in encoding shape features for generative models through a mechanism of visual context updating and a computational implementation of vision as inverse graphics, suggesting that discriminative neural phenomena of event-related potentials such as the P300 may be used in a visual articulation strategy for modelling in generative design.
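The "ensemble of linear classifiers" idea can be sketched as follows. The weights here are invented for illustration; in practice each member would be fitted (for example by LDA) on a different partition of calibration data, and the averaged score would drive the target / non-target decision per EEG epoch.

```python
# Hedged sketch of an ensemble of linear classifiers for noisy EEG
# epochs: several linear scorers are averaged, and the averaged score
# decides whether an epoch is treated as a P300 (target) response.

def linear_score(weights, bias, epoch):
    return sum(w * x for w, x in zip(weights, epoch)) + bias

# Three hypothetical members; real weights would come from fits on
# separate folds of calibration data.
ensemble = [([0.9, -0.1, 0.4], 0.0),
            ([1.1,  0.0, 0.5], -0.1),
            ([0.8, -0.2, 0.6], 0.05)]

def classify(epoch, threshold=0.5):
    mean = sum(linear_score(w, b, epoch) for w, b in ensemble) / len(ensemble)
    return mean > threshold  # True -> treated as a target epoch

print(classify([1.0, 0.0, 0.2]))  # → True (target-like toy epoch)
```

Averaging several weak linear scorers is one common way to cope with the low signal-to-noise ratio and session-to-session variability of EEG features.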
How does the brain find objects in cluttered visual environments? For decades researchers have employed the classic visual search paradigm to answer this question using factorial designs. Although such approaches have yielded important information, they represent only a tiny fraction of the possible parametric space. Here we use a novel approach, by using a genetic algorithm (GA) to discover the way the brain solves visual search in complex environments, free from experimenter bias. Participants searched a series of complex displays, and those supporting fastest search were selected to reproduce (survival of the fittest). Their display properties (genes) were crossed and combined to create a new generation of “evolved” displays. Displays evolved quickly over generations towards a stable, efficiently searched array. Color properties evolved first, followed by orientation. The evolved displays also contained spatial patterns suggesting a coarse-to-fine search strategy. We argue that this behavioral performance-driven GA reveals the way the brain selects information during visual search in complex environments. We anticipate that our approach can be adapted to a variety of sensory and cognitive questions that have proven too intractable for factorial designs.
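The behavioural-performance-driven GA described above can be sketched in a few lines. This is an illustrative toy, not the study's code: the "search time" is a simple function of two invented display genes rather than a measured participant response.

```python
import random

# Toy sketch of the display-evolving GA: display "genes" are colour and
# orientation contrasts, fitness is a simulated search time, and the
# fastest-searched displays reproduce (survival of the fittest).

random.seed(3)

def search_time(display):
    # Assumed toy model: higher target-distractor contrast -> faster search.
    return 1.0 / (0.1 + display["color_contrast"] + 0.5 * display["orient_contrast"])

def crossover(a, b):
    # Genes are crossed by picking each property from either parent.
    return {k: random.choice((a[k], b[k])) for k in a}

pop = [{"color_contrast": random.random(), "orient_contrast": random.random()}
       for _ in range(30)]
for gen in range(20):
    pop.sort(key=search_time)            # fastest-searched displays first
    parents = pop[:10]                   # selection
    pop = [crossover(random.choice(parents), random.choice(parents))
           for _ in range(30)]           # next generation of displays
```

In the actual experiment the fitness would be the participants' measured search times on each display, so the population evolves toward efficiently searched arrays without experimenter bias.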
IEEE Transactions on Evolutionary Computation, 2000
Self-organization of connection patterns within brain areas of animals begins prenatally, and has been shown to depend on internally generated patterns of neural activity. The neural structures continue to develop postnatally through externally driven patterns, when the sensory systems are exposed to stimuli from the environment. The internally generated patterns have been proposed to give the neural system an appropriate bias so that it can learn reliably from complex environmental stimuli. This paper evaluates the hypothesis that complex artificial learning systems can benefit from a similar approach, consisting of initial training with patterns from an evolved pattern generator, followed by training with the actual training set. To test this hypothesis, competitive learning networks were trained for recognizing handwritten digits. The results demonstrate how the approach can improve learning performance by discovering the appropriate initial weight biases, thereby compensating for weaknesses of the learning algorithm. Because of the smaller evolutionary search space, this approach was also found to require much fewer generations than direct evolution of network weights. Since discovering the right biases efficiently is critical for solving large-scale problems with learning, these results suggest that internal training pattern generation is an effective method for constructing complex systems.
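The two-stage training scheme can be sketched with a minimal competitive-learning network. Assumptions to note: the "generator" patterns below are fixed by hand, whereas in the paper the pattern generator is itself evolved, and the real data stands in for handwritten digits.

```python
import random

# Sketch of the two-stage scheme: a winner-take-all competitive net is
# first trained on patterns from an internal generator (prenatal-style
# bias), then on the actual data (environmental training).

random.seed(4)

def train(prototypes, patterns, lr=0.2):
    for p in patterns:
        # Winner-take-all: move the closest prototype toward the input.
        win = min(prototypes, key=lambda w: sum((a - b) ** 2 for a, b in zip(w, p)))
        for i in range(len(win)):
            win[i] += lr * (p[i] - win[i])

prototypes = [[random.random() for _ in range(4)] for _ in range(3)]
generator_patterns = [[0.1, 0.9, 0.1, 0.9], [0.9, 0.1, 0.9, 0.1]]  # internal bias
real_patterns = [[0.0, 1.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]       # "digit" data

train(prototypes, generator_patterns * 10)  # pretraining on generated patterns
train(prototypes, real_patterns * 10)       # training on the actual data
```

The pretraining stage seeds the prototypes with an appropriate bias, which is the mechanism the paper credits for the improved learning performance.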
2002
This paper is an exploration of an interdisciplinary nature. Through studies in fine art, pattern formation in nature, on cellular, organism and ethological levels, and artificial life; a mechanism for a generic process of design is presented within the context of aesthetic pattern formation. Evolved random asynchronous updating schemes implemented in cellular automata and agent swarm systems with pheromonal signalling were compared favourably to deterministic and hand designed alternatives and the curious adaptive properties of the resulting evolved patterns were investigated. Aesthetic production should not be considered in isolation from aesthetic sense and thus reactions and opinions when the work was exhibited at the ICA London and Blip sci-art discussion group are included. Copious future extensions are outlined for this exciting new facet of artificial life.
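A randomly asynchronous cellular-automaton update, the kind of scheme the paper evolves, can be sketched as follows. The update rule and the uniformly shuffled visit order are illustrative assumptions; the paper evolves the updating scheme itself.

```python
import random

# Minimal sketch of asynchronous cellular-automaton updating: cells are
# visited in a random order each step, so later cells in the sweep see
# already-updated neighbours (unlike a synchronous CA).

random.seed(5)
N = 16
cells = [random.randint(0, 1) for _ in range(N)]

def step(cells):
    order = list(range(N))
    random.shuffle(order)             # asynchronous: random visit order
    for i in order:
        left, right = cells[(i - 1) % N], cells[(i + 1) % N]
        cells[i] = left ^ right       # toy rule (XOR of ring neighbours)

for _ in range(10):
    step(cells)
```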
Lecture Notes in Computer Science, 2005
This paper shows how Evolutionary Algorithm (EA) robustness helps to solve a difficult problem with minimal expert knowledge about it. The problem consists in designing a Brain-Computer Interface (BCI), which allows a person to communicate without using nerves and muscles. Input electroencephalographic (EEG) activity recorded from the scalp must be translated into outputs that control external devices. Our BCI is based on a Multilayer Perceptron (MLP) trained by an EA. This kind of training avoids the main problem of MLP training algorithms: overfitting. Experimental results produce MLPs with a classification ability better than those in the literature.
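Training an MLP by an EA instead of backpropagation can be sketched as follows. This is not the paper's setup: the genome layout, the mutation scheme, and the XOR task (standing in for EEG classification) are all simplifying assumptions.

```python
import math
import random

# Hedged sketch of evolutionary MLP training: genomes are flat weight
# vectors of a fixed 2-2-1 network, fitness is accuracy on a toy task,
# and evolution proceeds by elitism plus Gaussian mutation.

random.seed(6)
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def mlp(genome, x):
    # 2-2-1 network; the genome holds all 9 weights and biases.
    h = [math.tanh(genome[0] * x[0] + genome[1] * x[1] + genome[2]),
         math.tanh(genome[3] * x[0] + genome[4] * x[1] + genome[5])]
    return 1 if genome[6] * h[0] + genome[7] * h[1] + genome[8] > 0 else 0

def fitness(genome):
    return sum(mlp(genome, x) == y for x, y in DATA)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [[w + random.gauss(0, 0.3) for w in pop[i % 10]]
                      for i in range(40)]  # elitism + mutated offspring
best = max(pop, key=fitness)
```

Because the EA optimises classification accuracy directly and keeps only moderately sized genomes, it sidesteps the gradient-based overfitting behaviour the abstract mentions.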
2011
This is a theoretical, modeling, and algorithmic paper about the spatial aspect of brain-like information processing, modeled by the Developmental Network (DN) model. The new brain architecture allows the external environment (including teachers) to interact with the sensory ends S and the motor ends M of the skull-closed brain B through development. It does not allow the human programmer to hand-pick extra-body concepts or to handcraft the concept boundaries inside the brain B. Mathematically, the brain's spatial processing performs a real-time mapping from S(t) × B(t) × M(t) to S(t+1) × B(t+1) × M(t+1) through network updates, where the contents of S, B, and M all emerge from experience. Using its limited resources, the brain does increasingly better through experience. A new principle is that the effector ends in M serve as hubs for concept learning and abstraction. The effector ends in M also serve as input, and the sensory ends in S also serve as output. As DN embodiments, the Where-What Networks (WWNs) present three major functional novelties: new concept abstraction, concepts as emergent goals, and goal-directed perception. The WWN series appears to be the first general-purpose emergent system for detecting and recognizing multiple objects in complex backgrounds. Among others, the most significant new mechanism is general-purpose top-down attention.
Proceedings of the IEEE, 2012
Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent, and response to events are of increasing interest. Brain-computer interface (BCI) systems can make use of such knowledge to deliver relevant feedback to the user or to an observer, or within a human-machine system to increase safety and enhance overall performance. Building robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may in the future be increasingly ubiquitous. While performance of current BCI modeling methods is slowly increasing, current performance levels do not yet support widespread uses. Here we discuss the current neuroscientific questions and data processing challenges facing BCI designers and outline some promising current and future directions to address them.