Why does regularizing the bias lead to underfitting in neural networks?

This article explains why the bias parameter is conventionally left out of regularization, and the substantial role the bias plays in algorithms such as linear regression and neural networks. So read on!
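As a taste of the convention the article discusses, here is a minimal sketch of applying L2 weight decay to the weights only, leaving the bias unregularized. It assumes PyTorch; the toy model, learning rate, and decay strength are illustrative, not taken from the post:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # toy model: one weight matrix, one bias

# Split parameters: weights get weight decay, biases do not.
decay, no_decay = [], []
for name, param in model.named_parameters():
    (no_decay if name.endswith("bias") else decay).append(param)

optimizer = torch.optim.SGD(
    [
        {"params": decay, "weight_decay": 1e-4},   # L2 penalty on weights
        {"params": no_decay, "weight_decay": 0.0}, # bias left unpenalized
    ],
    lr=0.01,
)
```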

Why Random Shuffling improves Generalizability of Neural Nets

You have likely heard, and perhaps observed, that randomly shuffling your data improves a neural network's performance and generalization. But what is the reason behind this phenomenon? In this blog, we provide an intuitive explanation.
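In practice, shuffling usually means drawing a fresh permutation of the training set before every epoch. A minimal sketch, assuming NumPy and a toy dataset (both illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.standard_normal((100, 8))  # toy features
y = rng.standard_normal(100)       # toy targets

for epoch in range(5):
    # Fresh random permutation each epoch, so no two epochs
    # present the samples in the same order.
    perm = rng.permutation(len(X))
    X_shuffled, y_shuffled = X[perm], y[perm]
    # ... iterate over mini-batches of X_shuffled, y_shuffled ...
```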

Deep Learning or Machine Learning?

In this introductory blog of the Simply Deep series, we walk through three major areas where Machine Learning falls short and how Deep Learning overcomes them. This blog, and the series in general, are inspired by the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
