
In an ordinary artificial neural network, y(l) is the output of layer l and z(l) is the input to layer l, where W(l) and b(l) are the weight and bias of layer l. With these quantities, the feedforward computation using the activation function f is given by Equations 4 and 5 [25]. In a network that applies the dropout technique, the vector r(l) stores, for each unit j, a value drawn from a Bernoulli distribution; the feedforward process is then carried out according to Equations 6, 7, and 8.
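The two feedforward variants above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function names, the retention probability `p`, and the ReLU activation are assumptions for demonstration, following the standard dropout formulation of masking the previous layer's outputs with a Bernoulli vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedforward(y_prev, W, b, f):
    """Standard layer (Equations 4 and 5): z = W y + b, then y = f(z)."""
    z = W @ y_prev + b
    return f(z)

def feedforward_dropout(y_prev, W, b, f, p=0.5):
    """Dropout layer (Equations 6-8):
    r ~ Bernoulli(p) masks the previous layer's outputs
    before the usual affine transform and activation."""
    r = rng.binomial(1, p, size=y_prev.shape)  # Bernoulli mask (Eq. 6)
    y_thinned = r * y_prev                     # thinned outputs (Eq. 7)
    z = W @ y_thinned + b                      # affine transform (Eq. 8)
    return f(z)

relu = lambda z: np.maximum(z, 0.0)
y0 = np.ones(4)
W = np.full((3, 4), 0.5)
b = np.zeros(3)

print(feedforward(y0, W, b, relu))          # deterministic: every input kept
print(feedforward_dropout(y0, W, b, relu))  # stochastic: some inputs zeroed
```

During training each forward pass samples a fresh mask, so each pass effectively trains a different "thinned" sub-network; at test time the mask is dropped and the weights (or activations) are rescaled by p.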

Figure 6. Feedforward computation in an ordinary artificial neural network (Equations 4 and 5) compared with a network that applies dropout (Equations 6, 7, and 8).