As robots leave the controlled environments of factories to autonomously function in more complex, natural environments [1,2,3], they will have to respond to the inevitable fact that they will become damaged [4,5]. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes [6], and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots [4,5].
Creating gaits for physical robots is a longstanding and open challenge. Recently, the HyperNEAT generative encoding was shown to automatically discover a variety of gait regularities, producing fast, coordinated gaits, but only for simulated robots. A follow-up study found that HyperNEAT did not produce impressive gaits when they were evolved directly on a physical robot. A simpler encoding hand-tuned to produce regular gaits was tried on the same robot and outperformed HyperNEAT, but those gaits were first evolved in simulation before being transferred to the robot. In this paper, we test the hypothesis that the beneficial properties of HyperNEAT would enable it to outperform the simpler encoding if HyperNEAT gaits are likewise first evolved in simulation before being transferred to reality. That hypothesis was confirmed, resulting in the fastest gaits yet observed for this robot, including those produced by nine different algorithms from three previous papers describing gait-generating techniques for this robot. This result is important because it confirms that the early promise shown by generative encodings, specifically HyperNEAT, is not limited to simulation, but extends to challenging real-world engineering problems such as evolving gaits for physical robots.
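The evolve-in-simulation-then-transfer pipeline this paper relies on can be illustrated with a minimal sketch. Everything below is assumed for illustration only: `simulated_speed` and `physical_speed` stand in for a physics simulator and a timed trial on the physical quadruped, and the list-of-floats genome stands in for a HyperNEAT-evolved CPPN controller.

```python
import random

def simulated_speed(genome):
    """Stand-in for evaluating a gait in a physics simulator (illustrative)."""
    return -sum((g - 0.5) ** 2 for g in genome) + random.gauss(0, 0.01)

def physical_speed(genome):
    """Stand-in for timing the gait on the physical robot (the transfer step)."""
    return simulated_speed(genome) - 0.05  # assumed reality gap: transferred gaits run a bit slower

# Evolve in simulation only (cheap), then transfer the champion to the robot (expensive).
pop = [[random.random() for _ in range(8)] for _ in range(20)]
for generation in range(50):
    pop.sort(key=simulated_speed, reverse=True)
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                     for _ in range(10)]

champion = max(pop, key=simulated_speed)
print("speed in simulation:", round(simulated_speed(champion), 3))
print("speed on the robot :", round(physical_speed(champion), 3))
```

The point of the sketch is that all selection happens against the cheap simulated fitness; only the champion is ever evaluated on hardware.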
2013 IEEE Congress on Evolutionary Computation, 2013
Ongoing, rapid advances in three-dimensional (3D) printing technology are making it inexpensive for lay people to manufacture 3D objects. However, the lack of tools to help non-technical users design interesting, complex objects represents a significant barrier preventing the public from benefiting from 3D printers. Previous work has shown that an evolutionary algorithm with a generative encoding based on developmental biology, a compositional pattern-producing network (CPPN), can automate the design of interesting 3D shapes, but users collectively had to start each act of creation from a random object, making it difficult to evolve preconceived target shapes. In this paper, we describe how to modify that algorithm to allow the further evolution of any uploaded shape. The technical insight is to inject the distance to the surface of the object as an input to the CPPN. We show that this seeded-CPPN technique reproduces the original shape to an arbitrary resolution, yet enables morphing the shape in interesting, complex ways. This technology also raises the possibility of two new, important types of science: (1) It could work equally well for CPPN-encoded neural networks, meaning neural wiring diagrams from nature, such as the mouse or human connectome, could be injected into a neural network and further evolved via the CPPN encoding.
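A minimal sketch of that technical insight, under illustrative assumptions: the uploaded object is replaced by a sphere so its distance field has a closed form, the CPPN is reduced to a single sigmoid node, and `distance_to_surface` and `seeded_cppn` are hypothetical names.

```python
import numpy as np

def distance_to_surface(x, y, z, radius=0.3):
    """Hypothetical distance field for an uploaded shape. A sphere is used here
    for illustration; a real pipeline would compute the distance from each grid
    point to the uploaded mesh's surface."""
    return np.sqrt(x**2 + y**2 + z**2) - radius  # negative inside, positive outside

def seeded_cppn(x, y, z, d, w_d=-10.0, w_xyz=(0.0, 0.0, 0.0)):
    """Toy CPPN sketch: a single sigmoid node over the inputs. With w_xyz == 0
    the output depends only on the distance input d, so thresholding at 0.5
    reproduces the original shape; evolution would mutate the weights (and add
    nodes) to morph the shape away from the original."""
    w_x, w_y, w_z = w_xyz
    pre = w_d * d + w_x * x + w_y * y + w_z * z
    return 1.0 / (1.0 + np.exp(-pre))

# Query the CPPN over a voxel grid at an arbitrary resolution.
res = 32
axis = np.linspace(-0.5, 0.5, res)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
D = distance_to_surface(X, Y, Z)
voxels = seeded_cppn(X, Y, Z, D) > 0.5   # True = material present
print(voxels.sum(), "of", res**3, "voxels filled")
```

Because the seed CPPN initially depends only on the distance input, thresholding its output reproduces the uploaded shape at any grid resolution; mutations that mix in the coordinate inputs then morph the shape.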
of connectionist neural networks, while others use mathematical models of decision processes or view intelligence as symbol manipulation. Similarly, researchers focus on different processes for generating intelligence, such as learning through reinforcement, natural evolution, logical inference, and statistics. The result is a panoply of approaches and subfields.
The embodied cognition paradigm emphasizes that both bodies and brains combine to produce complex behaviors, in contrast to the traditional view that the only seat of intelligence is the brain. Despite recent excitement about embodied cognition, brains and bodies remain thought of, and implemented as, two separate entities that merely interface with one another to carry out their respective roles. Previous research co-evolving bodies and brains has simulated the physics of bodies that collect sensory information and pass that information on to disembodied neural networks, which then process that information and return motor commands. Biological animals, in contrast, produce behavior through physically embedded control structures and a complex and continuous interplay between neural and mechanical forces. In addition to the electrical pulses flowing through the physical wiring of the nervous system, the heart elegantly combines control with actuation, as the physical properties of the tissue itself (or defects therein) determine the actuation of the organ. Inspired by these phenomena from cardiac electrophysiology (the study of the electrical properties of heart tissue), we introduce electrophysiological robots, whose behavior is dictated by electrical signals flowing through the tissue cells of soft robots. Here we describe these robots and how they are evolved. Videos and images of these robots reveal lifelike behaviors despite the added challenge of having physically embedded control structures. We also provide an initial experimental investigation into the impact of different implementation decisions, such as alternatives for sensing, actuation, and locations of central pattern generators. Overall, this paper provides a first step towards removing the chasm between bodies and brains to encourage further research into physically realistic embodied cognition.
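To make "control embedded in the tissue" concrete, here is a minimal sketch (not the paper's model): a one-dimensional chain of cells through which an electrical pulse propagates, with a pacemaker cell acting as a central pattern generator and excited cells contracting. All parameter values and names are illustrative assumptions.

```python
import numpy as np

# Sketch: an electrical pulse travels cell-to-cell along a chain of tissue cells,
# and each excited cell contracts (actuates). A pacemaker cell at one end plays
# the role of a central pattern generator.
n_cells, steps = 20, 40
excited = np.zeros(n_cells, dtype=bool)
refractory = np.zeros(n_cells, dtype=int)   # steps before a cell can fire again
PERIOD, REFRACTORY = 15, 5                   # illustrative values

for t in range(steps):
    fire_next = np.zeros(n_cells, dtype=bool)
    if t % PERIOD == 0 and refractory[0] == 0:
        fire_next[0] = True                  # pacemaker (CPG) cell fires periodically

    # A cell fires if a neighbor was excited last step and it is not refractory.
    neighbor_excited = np.roll(excited, 1) | np.roll(excited, -1)
    neighbor_excited[0] = excited[1]
    neighbor_excited[-1] = excited[-2]       # chain, not a ring
    fire_next |= neighbor_excited & (refractory == 0)

    refractory = np.maximum(refractory - 1, 0)
    refractory[fire_next] = REFRACTORY
    excited = fire_next

    actuation = np.where(excited, 1.0, 0.0)  # excited cells contract
    print("".join("#" if a else "." for a in actuation))
```

Running it prints a wave of "#" symbols sweeping down the chain each cycle: the control signal and the actuation pattern are the same physical thing, which is the flavor of behavior the paper explores in soft voxel robots.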
Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
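A minimal sketch of the gradient-ascent route to such fooling images, assuming a recent torchvision ResNet-18 as a stand-in for the networks used in the paper; the evolutionary-algorithm route is omitted, the class index and hyperparameters are illustrative, and the usual ImageNet normalization is skipped to keep the sketch short.

```python
import torch
import torchvision

# Stand-in for a trained DNN; the paper used convnets trained on ImageNet/MNIST.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

target_class = 291   # assumed ImageNet index for "lion"; any class index works for the sketch
image = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from a blank image

optimizer = torch.optim.SGD([image], lr=0.5)
for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]   # gradient ascent on the target logit
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)       # keep pixels in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class]
print(f"model confidence in target class: {confidence.item():.4f}")
```

The resulting image is optimized only to excite one output unit, not to look like anything to a human, which is why such images end up unrecognizable yet classified with high confidence.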
Creating gaits for legged robots is an important task to enable robots to access rugged terrain, yet designing such gaits by hand is a challenging and time-consuming process. In this paper we investigate various algorithms for automating the creation of quadruped gaits. Because many robots do not have accurate simulators, we test gait-learning algorithms entirely on a physical robot. We compare the performance of two classes of gait-learning algorithms: locally searching parameterized motion models and evolving artificial neural networks with the HyperNEAT generative encoding. Specifically, we test six different parameterized learning strategies: uniform and Gaussian random hill climbing, policy gradient reinforcement learning, Nelder-Mead simplex, a random baseline, and a new method that builds a model of the fitness landscape with linear regression to guide further exploration. While all parameter search methods outperform a manually-designed gait, only the linear regression and Nelder-Mead simplex strategies outperform a random baseline strategy. Gaits evolved with HyperNEAT perform considerably better than all parameterized local search methods and produce gaits nearly 9 times faster than a hand-designed gait. The best HyperNEAT gaits exhibit complex motion patterns that contain multiple frequencies, yet are regular in that the leg movements are coordinated.
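The linear-regression-guided search can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: `evaluate_gait` is a synthetic stand-in for timing the physical robot, and the sample counts and step size are assumptions.

```python
import numpy as np

def evaluate_gait(params):
    """Stand-in for measuring robot speed with the given gait parameters.
    On the real robot this would be a physical trial."""
    optimum = np.array([0.6, 0.2, 0.8, 0.4])
    return -np.sum((params - optimum) ** 2) + 0.01 * np.random.randn()

dim, n_samples, step_size = 4, 12, 0.1
center = np.random.rand(dim)

for iteration in range(20):
    # Sample gait parameters around the current best guess and measure them.
    samples = center + 0.05 * np.random.randn(n_samples, dim)
    fitnesses = np.array([evaluate_gait(s) for s in samples])

    # Fit a linear model fitness ~= w . params + b by least squares.
    X = np.hstack([samples, np.ones((n_samples, 1))])
    coeffs, *_ = np.linalg.lstsq(X, fitnesses, rcond=None)
    gradient = coeffs[:dim]

    # Move the search center uphill along the modeled fitness gradient.
    center = np.clip(center + step_size * gradient / (np.linalg.norm(gradient) + 1e-9), 0, 1)

print("estimated best gait parameters:", center)
```

The linear fit smooths over trial-to-trial noise from the physical robot, which is why modeling the local fitness landscape can beat raw hill climbing on hardware.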
In 1994 Karl Sims showed that computational evolution can produce interesting morphologies that resemble natural organisms. Despite nearly two decades of work since, evolved morphologies are not obviously more complex or natural, and the field seems to have hit a complexity ceiling. One hypothesis for the lack of increased complexity is that most work, including Sims', evolves morphologies composed of rigid elements, such as solid cubes and cylinders, limiting the design space. A second hypothesis is that the encodings of previous work have been overly regular, not allowing complex regularities with variation. Here we test both hypotheses by evolving soft robots with multiple materials and a powerful generative encoding called a compositional pattern-producing network (CPPN). Robots are selected for locomotion speed. We find that CPPNs evolve faster robots than a direct encoding and that the CPPN morphologies appear more natural. We also find that locomotion performance increases as more materials are added, that diversity of form and behavior can be increased with different cost functions without stifling performance, and that organisms can be evolved at different levels of resolution. These findings suggest the ability of generative soft-voxel systems to scale towards evolving a large diversity of complex, natural, multi-material creatures. Our results suggest that future work that combines the evolution of CPPN-encoded soft, multi-material robots with modern diversity-encouraging techniques could finally enable the creation of creatures far more complex and interesting than those produced by Sims nearly twenty years ago.
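A sketch of how a CPPN can assign one of several materials to every voxel of a soft robot. The toy CPPN below has a fixed single hidden layer, whereas a real CPPN evolves its topology and per-node activation functions (NEAT-style); the material list and weights are assumptions for illustration.

```python
import numpy as np

MATERIALS = ["empty", "soft", "stiff", "muscle_phase_0", "muscle_phase_pi"]

def cppn(x, y, z, weights):
    """Toy CPPN sketch: a fixed hidden layer of sine nodes feeding one output
    per candidate material. The geometric inputs (coordinates plus radial
    distance) are what let the encoding express regular, symmetric bodies."""
    inputs = np.stack([x, y, z, np.sqrt(x**2 + y**2 + z**2)], axis=-1)
    hidden = np.sin(inputs @ weights["w1"])
    return hidden @ weights["w2"]            # shape (..., len(MATERIALS))

rng = np.random.default_rng(0)
weights = {"w1": rng.normal(size=(4, 8)), "w2": rng.normal(size=(8, len(MATERIALS)))}

res = 10
axis = np.linspace(-1, 1, res)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
outputs = cppn(X, Y, Z, weights)
robot = np.argmax(outputs, axis=-1)          # material index for every voxel

for i, name in enumerate(MATERIALS):
    print(f"{name:>16}: {(robot == i).sum()} voxels")
```

Evolution would mutate the weights and structure of the network rather than the voxels themselves, which is what makes the encoding indirect and its bodies regular.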
A meta-GA (GA within a GA) is used to investigate evolving the parameter settings of genetic operators for genetic and evolutionary algorithms (GEA) in the hope of creating a self-adaptive GEA. We report three findings. First, the meta-GA can adapt its genetic operators to different problems and thereby perform well on average across diverse problems. Second, the meta-GA can change its parameters during the course of a run (seemingly a good idea), but this behavior may actually decrease performance. Finally, the genetic operator configurations the meta-GA evolves are far from optimal. We conclude that, while meta-GAs show promise for automating some parameter configurations, they are not likely to replace manually configured genetic and evolutionary algorithms without innovative alteration.
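The meta-GA idea, a GA whose genomes are the operator settings of an inner GA, can be sketched in a few lines. The OneMax benchmark, population sizes, and mutation scheme below are illustrative assumptions, not the paper's setup.

```python
import random

def inner_ga(mutation_rate, crossover_rate, pop_size=30, generations=40, genome_len=30):
    """Inner GA on a simple OneMax benchmark; returns the best fitness found."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=sum, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            children.append([bit ^ (random.random() < mutation_rate) for bit in child])
        pop = children
    return max(sum(ind) for ind in pop)

def meta_ga(meta_pop_size=10, meta_generations=15):
    """Outer (meta) GA: each genome is a (mutation_rate, crossover_rate) pair,
    and its fitness is the performance of an inner GA run with those settings."""
    meta_pop = [(random.uniform(0, 0.2), random.uniform(0, 1)) for _ in range(meta_pop_size)]
    for _ in range(meta_generations):
        survivors = sorted(meta_pop, key=lambda g: inner_ga(*g), reverse=True)[: meta_pop_size // 2]
        meta_pop = survivors + [
            (max(0.0, mu + random.gauss(0, 0.02)), min(1.0, max(0.0, cx + random.gauss(0, 0.1))))
            for mu, cx in random.choices(survivors, k=meta_pop_size - len(survivors))
        ]
    return max(meta_pop, key=lambda g: inner_ga(*g))

print("evolved (mutation_rate, crossover_rate):", meta_ga())
```

Note the cost structure this implies: every outer evaluation requires a full inner run, which is one practical reason meta-GAs are expensive to use as a general replacement for hand-tuned settings.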
IEEE Transactions on Evolutionary Computation, 2011
This paper investigates how an evolutionary algorithm with an indirect encoding exploits the property of phenotypic regularity, an important design principle found in natural organisms and engineered designs. We present the first comprehensive study showing that such phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases. Such an ability to produce regular
In the natural world, individual organisms can adapt as their environment changes. In most in silico evolution, however, individual organisms tend to consist of rigid solutions, with all adaptation occurring at the population level. If we are to use artificial evolving systems as a tool in understanding biology or in engineering robust and intelligent systems, however, they should be able to generate solutions with fitness-enhancing phenotypic plasticity. Here we use Avida, an established digital evolution system, to investigate the selective pressures that produce phenotypic plasticity. We witness two different types of fitness-enhancing plasticity evolve: static-execution-flow plasticity, in which the same sequence of actions produces different results depending on the environment, and dynamic-execution-flow plasticity, where organisms choose their actions based on their environment. We demonstrate that the type of plasticity that evolves depends on the environmental challenge the population faces. Finally, we compare our results to similar ones found in vastly different systems, which suggests that this phenomenon is a general feature of evolution.
We use digital evolution to study the division of labor among heterogeneous organisms under multiple levels of selection. Although division of labor is practiced by many social organisms, the labor roles are typically associated with different individual fitness effects. This fitness variation raises the question of why an individual organism would select a less desirable role. For this study, we