Figure 10. ResNet-50: Comparison of the training and validation accuracy of our proposed RMAF against two baseline activation functions (ReLU and Tanh) on CIFAR-100. (a) Training and validation accuracy of ReLU, reaching 75.7% and 61.2% respectively; (b) training and validation accuracy of Tanh, reaching 64.1% and 54.2% respectively; (c) our proposed function achieves higher training and validation accuracy (79.8% and 66.3%) than ReLU (a) and Tanh (b) on the CIFAR-100 dataset.
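As an illustrative sketch only (not the authors' code), the following PyTorch snippet shows one way such baselines could be built: take a torchvision ResNet-50 with a 100-way classification head and swap its ReLU modules for another activation. The `replace_activation` helper is a hypothetical utility introduced here for illustration; the proposed RMAF would be dropped in the same way once implemented as an `nn.Module`, and a real CIFAR-100 run would typically also adapt the network's input stem for 32x32 images.

```python
import torch.nn as nn
from torchvision.models import resnet50

def replace_activation(module: nn.Module, act_factory) -> None:
    """Recursively replace every nn.ReLU in `module` with a fresh
    activation built by `act_factory` (e.g. nn.Tanh)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, act_factory())
        else:
            replace_activation(child, act_factory)

# ReLU baseline: stock ResNet-50 with a 100-class head for CIFAR-100.
relu_model = resnet50(num_classes=100)

# Tanh baseline: same architecture, every ReLU swapped for Tanh.
tanh_model = resnet50(num_classes=100)
replace_activation(tanh_model, nn.Tanh)

# The proposed RMAF would plug in the same way, assuming an RMAF nn.Module
# implementing the function defined in the paper:
# rmaf_model = resnet50(num_classes=100)
# replace_activation(rmaf_model, RMAF)
```

The models built this way differ only in their activation function, which is the comparison the figure reports.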