Why does regularizing the bias lead to underfitting in neural networks?
This article explains why the bias parameter is conventionally excluded from regularization, and examines the substantial role the bias plays in algorithms such as linear regression and neural networks. So read on!
Why Random Shuffling improves Generalizability of Neural Nets
You have probably heard, and perhaps observed yourself, that randomly shuffling your training data improves a neural network's performance and generalization. But what is the reason behind this phenomenon? In this blog, we provide an intuitive explanation.