This article was published as a part of the Data Science Blogathon.

Table of Contents
Gradient Descent
Importance of Non-Linearity/Activation Functions
Activation Functions (Sigmoid, Tanh, ReLU, Leaky ReLU, ELU, Softmax) and their Implementation
Problems Associated with Activation Functions (Vanishing Gradient and Exploding Gradient)
Endnotes

Gradient Descent

The work of the gradient descent algorithm is to update […]
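Since the excerpt is cut off here, the following is only a minimal sketch of the standard gradient descent parameter update, w ← w − η·∂L/∂w, written in plain Python/NumPy. The function name, learning rate, and the quadratic example loss are illustrative assumptions, not the article's own code.

```python
import numpy as np

def gradient_descent(grad_fn, w_init, lr=0.1, n_steps=100):
    """Repeatedly apply the update w <- w - lr * grad_L(w)."""
    w = np.asarray(w_init, dtype=float)
    for _ in range(n_steps):
        w = w - lr * grad_fn(w)  # step against the gradient of the loss
    return w

# Illustrative example: minimize L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w_opt = gradient_descent(lambda w: 2 * (w - 3.0), w_init=0.0)
print(w_opt)  # converges toward 3.0
```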