
The validation error is used to assess the validity of the model: it checks whether the model is too simple to learn the underlying patterns or too complex, memorizing everything in the training set. A learning algorithm that performs poorly on the training set, producing a large training error relative to a baseline or human-level performance, is said to underfit. This indicates high bias, which arises from a model that is too simple, with too few parameters and too few degrees of freedom. A learning algorithm that performs well on the training data (again relative to a baseline or human-level performance) but poorly on the validation set is said to overfit the training data. This indicates high variance, which arises from a model that is too complex, with too many parameters and too many degrees of freedom. Thus, the bias-variance tradeoff must be managed to achieve balanced learning.
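The underfitting/overfitting behaviour described above can be illustrated with a small sketch (the data, seed, and polynomial degrees below are illustrative assumptions, not from the original text): polynomials of increasing degree are fit to noisy samples of a cubic function, and training error is compared against validation error on a held-out split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying cubic pattern (illustrative data)
x = rng.uniform(-3, 3, 40)
y = x**3 - 2 * x + rng.normal(0, 3, 40)

# Simple holdout split: 30 training points, 10 validation points
x_train, y_train = x[:30], y[:30]
x_val, y_val = x[30:], y[30:]

def errors(degree):
    """Fit a polynomial of the given degree on the training set and
    return (training MSE, validation MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    return train_err, val_err

# Degree 1 underfits (high bias: large error on both sets);
# degree 15 tends to overfit (high variance: tiny training error,
# larger validation error); degree 3 sits near the tradeoff point.
for d in (1, 3, 15):
    train_err, val_err = errors(d)
    print(f"degree {d:2d}: train MSE {train_err:8.2f}, val MSE {val_err:8.2f}")
```

Comparing the two error columns is exactly the check the paragraph describes: a large training error signals underfitting, while a small training error paired with a much larger validation error signals overfitting.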

Figure 54: Underfitting (high bias) versus overfitting (high variance), and the bias-variance tradeoff.