Academia.edu

Ridge Polynomial Neural Network

11 papers
0 followers
About this topic
A Ridge Polynomial Neural Network (RPNN) is a higher-order feedforward neural network built from pi-sigma units: each unit forms the product of one or more linear (ridge) combinations of the inputs, and the unit outputs are summed and passed through an activation function. With only a single layer of trainable weights, this structure can model complex nonlinear relationships in data while retaining fast, stable training, and it has been applied successfully to tasks such as time series forecasting.
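In the usual formulation (a sketch of the standard definition rather than a quotation from any of the papers below), an RPNN of order N computes

\[
y(\mathbf{x}) \;=\; \sigma\!\left( \sum_{i=1}^{N} \prod_{j=1}^{i} \left( \mathbf{w}_{ij}^{\top}\mathbf{x} + b_{ij} \right) \right),
\]

where each term \(\mathbf{w}_{ij}^{\top}\mathbf{x} + b_{ij}\) is a ridge (linear) function of the input and \(\sigma\) is a suitable activation such as the sigmoid.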

Key research themes

1. How do Ridge Polynomial Neural Networks (RPNNs) improve time series forecasting, and what forms of feedback enhance their performance?

This research area investigates the design and training of Ridge Polynomial Neural Networks adapted for time series prediction tasks. The focus is on how incorporating recurrent feedback mechanisms, such as network output feedback or error feedback, affects forecasting accuracy. It is significant because RPNNs provide a single-layer, higher-order network structure that balances powerful nonlinear mapping with efficient training, addressing limitations of multilayer perceptrons in forecasting complex real-world temporal datasets.

Key finding: The study proposes the RPNN with network error feedback (RPNN-EF), demonstrating that incorporating error signals as recurrent inputs yields significant prediction accuracy improvements (+23.34% RMSE improvement compared to...
Key finding: This work introduces a novel RPNN variant that combines both error and output feedback mechanisms (RPNN-EOFs) to enable multi-step ahead forecasting. Tested on the Mackey-Glass chaotic time series, it achieves lower error...
Key finding: Addressing training stability and complexity in dynamic RPNNs (DRPNNs), this study proposes a Lyapunov-function-based adaptive learning rate approach. The method provides sufficient conditions for network convergence and...
Key finding: Applying RPNNs to noisy financial time series, the research demonstrates that RPNNs outperform multilayer perceptrons (MLPs), functional link neural networks, and pi-sigma neural networks in forecasting accuracy and training...
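To make the idea behind these findings concrete, the following is a minimal NumPy sketch of a ridge polynomial network with error feedback: each pi-sigma block multiplies several ridge (linear) combinations of an input vector that contains the recent lags plus the previous one-step forecast error. The toy series, the network order, the plain gradient rule, and all names here are illustrative assumptions, not the training schemes used in the cited papers (one of which derives a Lyapunov-based adaptive learning rate instead).

import numpy as np

rng = np.random.default_rng(0)

def forward(W, b, z):
    # each pi-sigma block: product of its ridge terms; blocks are summed, then squashed
    h = [Wi @ z + bi for Wi, bi in zip(W, b)]
    s = sum(np.prod(hi) for hi in h)
    return np.tanh(s), h

lags, order, lr = 4, 3, 0.01
dim = lags + 1                                   # lagged inputs plus one error-feedback input
W = [0.1 * rng.standard_normal((i, dim)) for i in range(1, order + 1)]
b = [np.zeros(i) for i in range(1, order + 1)]

series = np.sin(0.2 * np.arange(300)) + 0.05 * rng.standard_normal(300)
prev_err = 0.0
for t in range(lags, len(series)):
    z = np.append(series[t - lags:t], prev_err)  # input = recent lags + previous forecast error
    y, h = forward(W, b, z)
    err = series[t] - y
    g = err * (1.0 - y ** 2)                     # dL/ds with the sign folded in, L = 0.5 * err**2
    for Wi, bi, hi in zip(W, b, h):
        for j in range(len(hi)):
            rest = np.prod(np.delete(hi, j))     # product of the block's other ridge terms
            Wi[j] += lr * g * rest * z           # plain gradient step on the squared error
            bi[j] += lr * g * rest
    prev_err = err

The only change needed to turn this error-feedback variant into an output-feedback one is to append the previous prediction y instead of (or alongside) prev_err to the input vector.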

2. What theoretical approximation properties do ridge function-based neural networks possess, and how do these underpin their expressiveness?

This research theme centers on the rigorous mathematical foundation of ridge polynomial neural networks via approximation theory. It examines how ridge functions and their linear combinations densely approximate continuous functions on compact domains. Such theoretical analysis clarifies the representational capabilities and limitations of networks with one hidden layer and establishes existence and density results pivotal for understanding the universal approximation properties of RPNNs and related structures.

Key finding: Although focusing on B-spline networks, this paper provides relevant insights on the structured approximation of nonlinear mappings by parameterized networks, emphasizing that appropriate architecture selection combined with...
Key finding: This work rigorously proves that feedforward neural networks with ridge basis functions can uniformly approximate any continuous multivariate function on compact domains. It shows the existence of an analytic, strictly...
Key finding: The paper establishes that the algebraic span of ridge functions with directions from a specific infinite set is dense in the space of continuous functions on compact sets in high dimensions. It extends classic approximation...
Key finding: This study provides statistical generalization error bounds for estimators formed by linear combinations of ridge functions with Lipschitz nonlinearities, such as single-hidden-layer neural networks. The risk bounds scale...
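The density results above concern approximants of the form sum_i c_i * g(a_i . x + b_i), i.e. linear combinations of ridge functions. The short NumPy sketch below illustrates that form on a compact set by drawing random directions and shifts and fitting only the outer coefficients by least squares; the target function, term count, and profile g are illustrative choices, not constructions from the cited papers.

import numpy as np

rng = np.random.default_rng(1)

def target(x):                                   # a continuous function on [-1, 1]^2
    return np.sin(np.pi * x[:, 0]) * np.exp(-x[:, 1] ** 2)

n_terms, n_samples = 60, 2000
X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
A = rng.standard_normal((n_terms, 2))            # ridge directions a_i
B = rng.uniform(-1.0, 1.0, n_terms)              # shifts b_i
g = np.tanh                                      # Lipschitz ridge profile

Phi = g(X @ A.T + B)                             # features g(a_i . x + b_i)
c, *_ = np.linalg.lstsq(Phi, target(X), rcond=None)

X_test = rng.uniform(-1.0, 1.0, size=(500, 2))
err = np.max(np.abs(g(X_test @ A.T + B) @ c - target(X_test)))
print(f"max abs error on test points: {err:.3f}")   # shrinks as n_terms grows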

3. How can polynomials and tensor-based representations approximate neural networks, and what benefits does this bring for interpretability and constraints?

This theme explores polynomial and tensor expansions of neural networks, transforming networks into explicit polynomial forms or tensor networks to gain analytical tractability. Such approaches illuminate connections between neural nets and classical polynomial approximation theory, facilitating incorporation of physical or dynamical constraints. This enhances interpretability, analysis of system properties like stability, and enables computational advantages in system identification and verification.

Key finding: The authors propose a method to approximate trained neural networks (fully connected, convolutional, and recurrent) by least-squares optimal Taylor polynomials of arbitrary order, enabling analytic representations that...
Key finding: The work identifies how tensor networks, through tensorization of univariate functions, define function classes with finite representation complexity corresponding to neural networks with sum-product sparse architectures. It...
Key finding: This paper presents a novel neural network-based solver for eigenvalue problems of general self-adjoint differential operators, representing eigenfunctions via neural nets trained end-to-end. The approach produces smooth...
Key finding: The study introduces Differential Polynomial Neural Networks (D-PNN) that construct and approximate unknown differential equations encoding variable dependencies through fractional partial differential polynomials. This...
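To illustrate the general flavour of the first finding in this theme, the sketch below replaces a tiny fixed network with a least-squares optimal polynomial over a sampling region, yielding an explicit surrogate that can be inspected term by term. The two-layer toy network, the degree-3 monomial basis, and the sampling region are assumptions for illustration only, not the method of the cited paper.

import numpy as np

rng = np.random.default_rng(2)

# a small fixed "trained" network f(x) = w2 . tanh(W1 x + b1)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
w2 = rng.standard_normal(8)
net = lambda X: np.tanh(X @ W1.T + b1) @ w2

# degree-3 monomial basis in two variables: x1^p * x2^q with p + q <= 3
exponents = [(p, q) for p in range(4) for q in range(4) if p + q <= 3]
def design(X):
    return np.column_stack([X[:, 0] ** p * X[:, 1] ** q for p, q in exponents])

X = rng.uniform(-1.0, 1.0, size=(3000, 2))       # sampling region for the fit
coef, *_ = np.linalg.lstsq(design(X), net(X), rcond=None)

X_test = rng.uniform(-1.0, 1.0, size=(500, 2))
print("max surrogate error:", np.max(np.abs(design(X_test) @ coef - net(X_test))))
for (p, q), c in zip(exponents, coef):           # the explicit polynomial, term by term
    if abs(c) > 0.05:
        print(f"{c:+.3f} * x1^{p} * x2^{q}")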

All papers in Ridge Polynomial Neural Network

Purpose: The purpose of this research was to provide a model for predicting time series of financial information based on the Lyapunov representation of information using chaos theory. Method: This research is applied in terms of its purpose,...
An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. This hybrid ANN+PSO algorithm was applied to the Mackey-Glass series in the short-term prediction x(t + 6) and the...
In this paper, a new self-organizing fuzzy neural network model is presented which is able to learn and reproduce different sequences accurately. Sequence learning is important in performing skillful tasks, such as writing and playing...
Time series forecasting has gained much attention due to its many practical applications. A higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It...
Breaking a particular problem down through problem decomposition has enabled complex problems to be solved efficiently. The two major problem decomposition methods used in cooperative coevolution are synapse level and neuron level. The...
Time series forecasting has received much attention due to its impact on many practical applications. A higher-order neural network with recurrent feedback is a powerful technique which has been used successfully for forecasting. It maintains fast learning...
Currency exchange is the trading of one currency against another. FOREX rates are influenced by many correlated economic, political and psychological factors, and hence predicting them is an uphill task. Some methods to predict the FOREX...
In this paper, we develop an indirect adaptive control structure based on recurrent neural networks. An adaptive emulator inspired by the Real-Time Recurrent Learning algorithm is presented. The neural network does not learn the plant...
In order to train artificial neural networks, we used a new stochastic optimization algorithm that simulates the plant growing process. It designs two operators, an artificial photosynthesis operator and a phototropism operator, to mimic photosynthesis and...
This study proposes a novel neural-network-based fuzzy group forecasting model for foreign exchange rate prediction. In the proposed model, several single neural network models are first used as predictors for foreign exchange rates...
The ability to model the behaviour of an arbitrary dynamic system is one of the most useful properties of recurrent networks. The dynamic ridge polynomial neural network (DRPNN) is a recurrent neural network used for time series forecasting...