Satwik PM
4 min read · Apr 10, 2020

Stochastic Gradient Descent (SGD)

It is good to have an understanding of Gradient Descent (refer to my previous post) before proceeding to Stochastic Gradient Descent (SGD).

Gradient Descent: It is a very popular optimization technique in Machine Learning and Deep Learning, and it can be used with most, if not all, of the learning algorithms. A gradient is basically the slope of a function: the degree of change in one parameter with respect to the change in another.

Mathematically, the gradient can be described as the set of partial derivatives of a function with respect to its parameters. The larger the gradient, the steeper the slope. Gradient Descent works best on a convex cost function, which has a single global minimum.
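As a quick illustration (the notation here is my own, not from the original post), for a cost function J with parameters θ₁, …, θₙ, the gradient simply collects all of these partial derivatives:

```latex
\nabla_{\theta} J(\theta)
  = \left(
      \frac{\partial J}{\partial \theta_1},\;
      \frac{\partial J}{\partial \theta_2},\;
      \dots,\;
      \frac{\partial J}{\partial \theta_n}
    \right)
```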

Gradient Descent can be described as an iterative method used to find the values of a function's parameters that minimize the cost function as much as possible. The parameters are initialized to some values, and from there Gradient Descent runs iteratively, using calculus, to move the parameters toward the minimum possible value of the given cost function.
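As a minimal sketch of this procedure (my own example, not code from the original post), here is a plain NumPy implementation of batch Gradient Descent for linear regression with a squared-error loss; the learning rate and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.01, n_iters=1000):
    """Batch Gradient Descent for linear regression with a squared-error loss.

    Every iteration uses the ENTIRE dataset to compute the gradient.
    """
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)             # initialize the parameters

    for _ in range(n_iters):
        errors = X @ theta - y               # residuals for all samples
        # gradient of (1/2) * mean squared error over the whole batch
        gradient = (X.T @ errors) / n_samples
        theta -= lr * gradient               # step opposite to the gradient
    return theta
```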

Stochastic Gradient Descent (SGD):

The word ‘stochastic’ means a system or a process that is linked with a random probability. Hence, in Stochastic Gradient Descent, a few samples are selected randomly instead of the whole dataset for each iteration. In Gradient Descent, there is a term called “batch”, which denotes the total number of samples from the dataset used to calculate the gradient for each iteration.

SGD is a simple yet very efficient approach to discriminative learning of linear classifiers under convex loss functions, such as (linear) Support Vector Machines and Logistic Regression. Even though SGD has been around in the machine learning community for a long time, it has received a considerable amount of attention only recently in the context of large-scale learning.

Note:

· In typical Gradient Descent optimization, like Batch Gradient Descent, the batch is taken to be the whole dataset. Using the whole dataset is really useful for getting to the minima in a less noisy, less random manner, but a problem arises when the dataset gets really huge.

· In SGD, only a single sample, i.e., a batch size of one, is used to perform each iteration. The sample is picked at random (after shuffling the dataset) for performing the iteration.

· In SGD, we find the gradient of the cost function for a single example at each iteration instead of the sum of the gradients of the cost function over all the examples (see the sketch after this list).

· In SGD, since only one sample from the dataset is chosen at random for each iteration, the path taken by the algorithm to reach the minima is usually noisier than your typical Gradient Descent algorithm.

· Because SGD is generally noisier than typical Gradient Descent, it usually takes a higher number of iterations to reach the minima, owing to the randomness in its descent.

· Even though SGD requires a higher number of iterations to reach the minima than typical Gradient Descent, it is still computationally much less expensive than typical Gradient Descent.
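Continuing the illustrative linear-regression example above (again a sketch of my own, not code from the original post), the same update can be performed one randomly chosen sample at a time:

```python
import numpy as np

def stochastic_gradient_descent(X, y, lr=0.01, n_epochs=50, seed=0):
    """SGD for linear regression: batch size of one, samples shuffled every epoch."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)

    for _ in range(n_epochs):
        for i in rng.permutation(n_samples):   # visit samples in random order
            x_i, y_i = X[i], y[i]
            error = x_i @ theta - y_i
            gradient = error * x_i             # gradient from ONE sample only
            theta -= lr * gradient             # noisy but very cheap update
    return theta
```

Each individual update is noisy, but it touches only one sample, which is exactly why SGD stays cheap on huge datasets.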

Mathematical Formulation:
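The formulas do not appear in text form here, so the following is a standard statement of the two update rules as a stand-in; θ denotes the parameters, η the learning rate, n the number of training examples, and L(θ; xᵢ, yᵢ) the loss on a single example:

```latex
\theta \leftarrow \theta - \eta \,\frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta} L(\theta;\, x_i, y_i)
  \qquad \text{(batch Gradient Descent)}

\theta \leftarrow \theta - \eta \,\nabla_{\theta} L(\theta;\, x_i, y_i)
  \qquad \text{(SGD, one randomly chosen } i \text{ per update)}
```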

The advantages of Stochastic Gradient Descent are:

· Efficiency.

· Ease of implementation (lots of opportunities for code tuning).

The disadvantages of Stochastic Gradient Descent include:

· SGD requires a number of hyperparameters such as the regularization parameter and the number of iterations.

· SGD is sensitive to feature scaling (illustrated in the sketch below).
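Since the points above follow scikit-learn's description of SGD, a minimal usage sketch with scikit-learn's SGDClassifier may help; the synthetic data and parameter values are illustrative assumptions, not taken from the original post:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# illustrative synthetic data (not from the original post)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# scale features first, because SGD is sensitive to feature scaling;
# loss="hinge" trains a linear SVM, penalty="l2" adds the regularization term
clf = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4, max_iter=1000, tol=1e-3),
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```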

Conclusion: Gradient Descent can often have slow convergence because each iteration requires calculating the gradient over every single training example. If we instead update the parameters for each training example as we iterate through them, we can actually get excellent estimates despite having done much less work per update.

Courtesy: scikit-learn.org, Rahul Roy

GitHub Repository: https://github.com/SatwikPM/Gradient-Descent.git