Evaluation of Sigmoid and ReLU Activation Functions Using Asymptotic Method
Abstract
In algorithm analysis, asymptotic techniques compare two algorithms by focusing on how their running time grows as the input size increases. The sigmoid and ReLU activation functions are widely employed in artificial neural networks (ANNs) (Yingying, 2020), and each has advantages and disadvantages that should be weighed when designing an ANN for a given problem. This study compared the training performance of the sigmoid and ReLU activation functions using an asymptotic approach, taking training time complexity as the basis of comparison. The results show that the sigmoid activation function requires more computation time for the forward pass, loss computation, and backward propagation than the ReLU activation function. This computational cost becomes significant in deep neural networks with hundreds to thousands of neurons. Overall, the training time of a ReLU-based neural network is lower than that of a sigmoid-based one: sigmoid has a higher computational cost per operation than ReLU, although both exhibit a linear growth rate.
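The cost difference summarized above can be sketched in a few lines of Python (function names here are illustrative, not taken from the paper): sigmoid requires an exponential evaluation per element, while ReLU needs only a comparison, yet applying either activation elementwise remains linear in the number of inputs.

```python
import math

def sigmoid(x):
    # Sigmoid: 1 / (1 + e^-x). Each call evaluates exp(), a relatively
    # costly transcendental operation, hence the larger per-element constant.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(y):
    # Derivative expressed in terms of the sigmoid output y = sigmoid(x),
    # as used in backward propagation: y * (1 - y).
    return y * (1.0 - y)

def relu(x):
    # ReLU: max(0, x). A single comparison, no transcendental call.
    return x if x > 0.0 else 0.0

def relu_grad(x):
    # Subgradient of ReLU used in backward propagation: 1 if x > 0, else 0.
    return 1.0 if x > 0.0 else 0.0

def forward(xs, act):
    # Applying an activation elementwise is O(n) in the number of inputs
    # for either function; only the per-element cost differs.
    return [act(x) for x in xs]
```

Both `forward([...], sigmoid)` and `forward([...], relu)` grow linearly with the input size, consistent with the linear growth rate reported in the study; the gap between them is in the constant factor, which compounds across the many neurons of a deep network.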
Keywords: Back propagation, loss computation, sigmoid activation, ReLU activation, ANNs
DOI: 10.7176/NCS/13-05
Publication date: June 30th 2022
ISSN (Paper) 2224-610X; ISSN (Online) 2225-0603
This journal follows the ISO 9001 management standard and is licensed under a Creative Commons Attribution 3.0 License.
Copyright © www.iiste.org