Papers by Gianluca Di Muro
Are interconnected compartmental models more effective at predicting decompression sickness risk?
Informatics in Medicine Unlocked
A constrained-optimization approach to training neural networks for smooth function approximation and system identification
2008 IEEE International Joint Conference on Neural Networks, Jun 1, 2008
A constrained-backpropagation training technique is presented to suppress interference and preserve prior knowledge in sigmoidal neural networks, while new information is learned incrementally. The technique is based on constrained optimization, and minimizes an error function subject to a set of equality constraints derived via an algebraic training approach. As a result, sigmoidal neural networks with long-term procedural memory …
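The core mechanism, gradient descent on new data while equality constraints encode the prior knowledge, can be illustrated with a short sketch. The routine below is a hypothetical simplification, not the paper's adjoint-based formulation: the output weights are re-solved algebraically at every step so that a set of "memory" points stays fitted exactly, while the remaining weights learn the new samples.

    # Minimal CPROP-style sketch (illustrative names, not the paper's code).
    # The equality constraints are enforced by an algebraic solve for the
    # output weights v; as a simplification, v is treated as fixed within
    # each gradient step on the input-layer weights.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_cprop(x_new, y_new, x_mem, y_mem, n_hidden=10,
                    lr=0.05, epochs=2000, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.5, size=(n_hidden, x_new.shape[1]))  # input weights
        b = np.zeros(n_hidden)                                      # hidden biases
        for _ in range(epochs):
            # Equality constraints: solve sigmoid(x_mem W^T + b) v = y_mem
            # for the output weights v (exact when under-determined).
            H_mem = sigmoid(x_mem @ W.T + b)
            v = np.linalg.lstsq(H_mem, y_mem, rcond=None)[0]
            # Gradient step on the new-data error w.r.t. W and b only,
            # so the constraint-satisfying v is recomputed each iteration.
            H = sigmoid(x_new @ W.T + b)
            err = H @ v - y_new
            dH = err[:, None] * v[None, :] * H * (1.0 - H)
            W -= lr * (dH.T @ x_new) / len(x_new)
            b -= lr * dH.mean(axis=0)
        H_mem = sigmoid(x_mem @ W.T + b)
        v = np.linalg.lstsq(H_mem, y_mem, rcond=None)[0]
        return W, b, v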

Spiking neural networks have been shown capable of simulating sigmoidal artificial neural networks, providing promising evidence that they too are universal function approximators. Spiking neural networks offer several advantages over sigmoidal networks: they can approximate the dynamics of biological neuronal networks, and can potentially reproduce the computational speed observed in biological brains by enabling temporal coding. On the other hand, the effectiveness of spiking neural network training algorithms is still far removed from that exhibited by backpropagation-trained sigmoidal neural networks. This paper presents a novel algorithm based on reward-modulated spike-timing-dependent plasticity that is biologically plausible and capable of training a spiking neural network to learn the exclusive-or (XOR) computation through rate-based coding. The results show that a spiking neural network model with twenty-three nodes is able to learn the XOR gate accurately, and performs the computation on time scales of milliseconds. Moreover, the algorithm can potentially be verified in light-sensitive neuronal networks grown in vitro, by determining the spike patterns that, when induced by blue light, lead to the desired synaptic weights computed in silico.
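The reward-modulated rule can be sketched for a single synapse. The sketch below assumes standard exponential STDP windows and a decaying eligibility trace; the time constants, amplitudes, and function names are illustrative, not taken from the paper.

    # Reward-modulated STDP for one synapse: spike pairs build an
    # eligibility trace, and a scalar reward delivered at the end of the
    # episode gates whether that trace is written into the weight.
    import numpy as np

    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
    TAU_ELIG = 200.0                   # eligibility-trace decay (ms)
    A_PLUS, A_MINUS = 1.0, 1.0         # STDP amplitudes
    ETA = 0.01                         # learning rate

    def stdp_kernel(dt):
        # Pair-based STDP: potentiate if pre precedes post (dt > 0).
        if dt > 0:
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

    def rstdp_episode(pre_spikes, post_spikes, reward, w, elig=0.0, t_end=50.0):
        events = sorted([(t, 'pre') for t in pre_spikes] +
                        [(t, 'post') for t in post_spikes])
        last = {'pre': None, 'post': None}
        t_prev = 0.0
        for t, kind in events:
            elig *= np.exp(-(t - t_prev) / TAU_ELIG)   # trace decays between events
            other = 'post' if kind == 'pre' else 'pre'
            if last[other] is not None:
                dt = (t - last[other]) if kind == 'post' else (last[other] - t)
                elig += stdp_kernel(dt)
            last[kind], t_prev = t, t
        elig *= np.exp(-(t_end - t_prev) / TAU_ELIG)
        return w + ETA * reward * elig, elig           # reward gates plasticity

For an XOR task of the kind described, the reward would plausibly be +1 when the output neuron's firing rate in the decision window matches the target class and -1 otherwise, so that correlated spike pairs are reinforced only when they contributed to a correct answer.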

A constrained backpropagation approach to solving Partial Differential Equations in non-stationary environments
A constrained-backpropagation (CPROP) training technique is presented to solve partial differential equations (PDEs). The technique is based on constrained optimization and minimizes an error function subject to a set of equality constraints provided by the boundary conditions of the differential problem. As a result, sigmoidal neural networks can be trained to approximate the solution of PDEs while avoiding the discontinuity in the derivative of the solution, which may affect the stability of classical methods. Also, the memory provided to the network through the constrained approach may be used to solve PDEs online when the forcing term changes over time, learning different solutions of the differential problem through a continuous nonlinear mapping. The effectiveness of this method is demonstrated by solving a nonlinear PDE on a circular domain. When the underlying process changes subject to the same boundary conditions, the CPROP network is capable of adapting online and approximating the new solution, while memory of the boundary conditions is maintained virtually intact at all times.
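The constraint structure is easiest to see on a linear toy problem rather than the paper's nonlinear PDE on a circular domain. In the hypothetical sketch below, a 1-D Poisson equation u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 is solved with fixed random hidden weights, so both the PDE residual and the boundary conditions are linear in the output weights v; the residual is then minimized subject to the boundary conditions as exact equality constraints via a KKT system.

    # Equality-constrained least squares: minimize ||A v - f||^2 s.t. C v = g,
    # where A maps v to u'' at collocation points and C maps v to the
    # boundary values. All sizes and scales here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_colloc = 30, 50
    w = rng.normal(scale=3.0, size=n_hidden)      # hidden weights (fixed here)
    b = rng.normal(scale=3.0, size=n_hidden)      # hidden biases

    def s(z):
        return 1.0 / (1.0 + np.exp(-z))           # sigmoid

    def s2(z):
        sz = s(z)
        return sz * (1 - sz) * (1 - 2 * sz)       # second derivative of sigmoid

    x = np.linspace(0.01, 0.99, n_colloc)         # interior collocation points
    f = -np.pi**2 * np.sin(np.pi * x)             # forcing: exact u = sin(pi x)

    Z = np.outer(x, w) + b                        # (n_colloc, n_hidden)
    A = s2(Z) * w**2                              # maps v to u''(x_colloc)
    C = s(np.outer([0.0, 1.0], w) + b)            # maps v to [u(0), u(1)]
    g = np.zeros(2)                               # boundary values

    KKT = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((2, 2))]])
    rhs = np.concatenate([2 * A.T @ f, g])
    v = np.linalg.lstsq(KKT, rhs, rcond=None)[0][:n_hidden]

    u = s(Z) @ v                                  # network solution at x
    print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
    print("boundary residual:", np.abs(C @ v - g).max())

The point of the construction is the one the abstract makes: when the forcing term f changes, only the residual objective changes, while the boundary conditions remain hard constraints, so the boundary memory stays intact as the solution is re-learned.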
A constrained penalty-function method for exploratory adaptive-critic neural network (NN) control is presented. While constrained approximate dynamic programming has been effective at guaranteeing closed-loop system performance and stability objectives, in the presence of a change in the plant dynamics it may not have the necessary plasticity to explore and fully adapt to the new behaviors of the plant, if these violate the constraints. A generalized constrained approach is introduced to overcome these limitations. Through this methodology it is shown that NNs are not only capable of acquiring new plasticity when necessary, but can also adjust their parametric structure, reducing their hidden nodes and becoming more computationally efficient.
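One way to read the generalized approach is as a penalty weight that is relaxed when constraint violations persist (a sign the plant has changed) and tightened once they subside, together with pruning of hidden nodes that have become inactive. The sketch below is a loose interpretation under those assumptions; every name and threshold is illustrative, not from the paper.

    # Penalized critic update with an adaptive penalty weight and node
    # pruning. grad_loss(theta) returns the loss gradient, c(theta) the
    # constraint-violation vector, and grad_c(theta) its Jacobian.
    import numpy as np

    def penalized_step(theta, grad_loss, c, grad_c, mu, lr=1e-2):
        # One gradient step on L(theta) + mu * ||c(theta)||^2.
        return theta - lr * (grad_loss(theta) + 2 * mu * grad_c(theta).T @ c(theta))

    def adapt_penalty(mu, violation_ema, tol=1e-2, relax=0.5, tighten=1.1):
        # Relax the penalty while violations persist (plant likely changed);
        # tighten it again once the constraints are nearly satisfied.
        return mu * relax if violation_ema > tol else min(mu * tighten, 1e3)

    def prune_hidden(v, W, thresh=1e-3):
        # Drop hidden nodes whose output weights have become negligible,
        # shrinking the parametric structure as the abstract describes.
        keep = np.abs(v) > thresh
        return v[keep], W[keep]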