FIGURE 10. ResNet50: Comparison of the training and validation accuracy of the proposed RMAF against two baseline activation functions (ReLU and Tanh) on CIFAR-100. (a) ReLU achieves 75.7% training and 61.2% validation accuracy; (b) Tanh achieves 64.1% and 54.2%, respectively; (c) the proposed RMAF achieves higher training and validation accuracy (79.8% and 66.3%) than both ReLU (a) and Tanh (b).