Regularization
Definition
Techniques that prevent overfitting by adding constraints or penalties to the learning process, encouraging simpler models.
In-Depth Explanation
Regularization adds a penalty term to the loss function that discourages complex models. L1 regularization (Lasso) encourages sparsity by driving some weights to zero. L2 regularization (Ridge) shrinks weights toward zero without eliminating them. Dropout randomly disables neurons during training. These techniques improve generalization to new data.
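The penalty term described above can be sketched in a few lines of NumPy. This is a minimal illustration (the function name `regularized_loss` and the toy data are made up for this example): mean squared error plus an L1 term that sums absolute weights and an L2 term that sums squared weights.

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, l1=0.0, l2=0.0):
    """Mean squared error plus optional L1 (lasso) and L2 (ridge) penalties."""
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = l1 * np.sum(np.abs(weights)) + l2 * np.sum(weights ** 2)
    return mse + penalty

# Toy data, purely illustrative
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
w = np.array([0.5, -0.25])

base = regularized_loss(y_true, y_pred, w)           # data loss only
ridge = regularized_loss(y_true, y_pred, w, l2=0.1)  # data loss + L2 penalty
```

Because the penalty grows with weight magnitude, minimizing this loss trades a small amount of training fit for smaller (L2) or sparser (L1) weights.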
Real-World Example
Adding L2 regularization to a neural network to prevent weights from growing too large and memorizing training examples.
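One way to see the effect described in this example is in a single gradient-descent update: the L2 term contributes 2·λ·w to the gradient, so each step shrinks every weight in proportion to its size. A minimal sketch, assuming plain SGD (the helper `sgd_step_with_l2` is hypothetical):

```python
import numpy as np

def sgd_step_with_l2(weights, grad, lr=0.01, l2=0.1):
    """One SGD update; the L2 penalty adds 2*l2*w to the gradient (weight decay)."""
    return weights - lr * (grad + 2 * l2 * weights)

w = np.array([5.0, -3.0, 0.1])
g = np.zeros_like(w)  # zero data gradient isolates the decay effect
w_new = sgd_step_with_l2(w, g)
# every weight moves toward zero, large weights fastest
```

This is why L2-regularized training prevents weights from growing large enough to memorize individual training examples.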