Ridge regression

Supporting Technique

Ridge regression is a technique that adds a quadratic (L2) penalty on the model coefficients to the loss function to prevent overfitting.

Ridge regression is used when multicollinearity is present among the predictor variables; by shrinking the coefficients it improves the model's generalization. It is particularly useful in regression problems with a large number of predictors.

Ridge regression works by adding a penalty term, which is the sum of the squared coefficients, to the loss function. This penalty term discourages large coefficients, thus reducing the model’s complexity and preventing overfitting. The objective function for ridge regression is the sum of the squared residuals plus the penalty term, which is controlled by a regularization parameter.
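As a concrete sketch, the ridge estimate can be written in closed form as beta = (X^T X + lambda * I)^(-1) X^T y. The Python snippet below illustrates this on synthetic data; the function name ridge_fit and the data values are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: beta = (X'X + lam * I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Illustrative synthetic data (an assumption made for this sketch)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_beta = np.array([1.0, 0.5, 0.0, -2.0, 3.0])
y = X @ true_beta + rng.normal(scale=0.1, size=100)

beta_ridge = ridge_fit(X, y, lam=1.0)  # coefficients shrunk toward zero relative to ordinary least squares
```

Larger values of lam shrink the coefficients more aggressively; lam = 0 recovers ordinary least squares.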

For example, in a linear regression model with predictors X1 and X2, the ridge regression objective is to minimize the sum of squared residuals + lambda * (beta1^2 + beta2^2), where beta1 and beta2 are the coefficients and lambda is the regularization parameter.
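As a usage example, scikit-learn's Ridge estimator exposes the regularization strength as alpha, which plays the role of lambda in the objective above; the two-predictor synthetic data below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Two illustrative predictors, standing in for X1 and X2 above
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = Ridge(alpha=1.0)  # alpha is the regularization parameter (lambda in the objective)
model.fit(X, y)
print(model.coef_, model.intercept_)
```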

Note that, unlike LASSO, ridge regression does not perform variable selection and so cannot be used for dimensionality reduction: the penalty shrinks coefficients toward zero but never sets them exactly to zero.
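To illustrate this contrast, the following sketch fits scikit-learn's Ridge and Lasso estimators to the same synthetic data in which only two of six predictors matter; the alpha values and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data where only the first two of six predictors are relevant
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print("ridge:", np.round(ridge.coef_, 3))  # all coefficients small but non-zero
print("lasso:", np.round(lasso.coef_, 3))  # irrelevant coefficients driven exactly to zero
```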

Alias
Tikhonov regularization, weight decay, linear regularization, Tikhonov-Miller method
Related terms
LASSO, elastic net, regularization