- Regularization and variable selection method.
- Produces a sparse representation.
- Exhibits a grouping effect.
- Particularly useful when the number of predictors (p) is much larger than the number of observations (n).
- Proposes the LARS-EN algorithm to compute the elastic net regularization path.
- Link to paper.
- Lasso: a least squares method with an L1 penalty on the regression coefficients.
- Performs continuous shrinkage and automatic variable selection.
- If p >> n, lasso can select at most n variables (see the sketch below).
- Given a group of variables with high pairwise correlation, lasso tends to select only one of them and does not care which.
- If n > p and the predictors are highly correlated, ridge regression empirically outperforms lasso.
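To make the p >> n limitation concrete, here is a minimal sketch of my own (the synthetic data and penalty level are arbitrary assumptions) that counts how many variables scikit-learn's `Lasso` selects:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 20, 200                      # far more predictors than observations
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

lasso = Lasso(alpha=0.01, max_iter=50_000).fit(X, y)
n_selected = np.count_nonzero(lasso.coef_)
print(f"variables selected: {n_selected} (at most n = {n})")
```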
- Naive elastic net: a least squares method whose penalty on the regression coefficients is a convex combination of the lasso and ridge penalties.
- penalty = (1 − α)·|β|₁ + α·|β|₂², where |β|₁ = Σ|βⱼ| is the lasso penalty, |β|₂² = Σβⱼ² is the ridge penalty, and β is the vector of regression coefficients. Equivalently, the criterion can be written with separate weights, λ₁|β|₁ + λ₂|β|₂², where α = λ₂/(λ₁ + λ₂). (See the sketch below.)
- α = 0 => lasso penalty
- α = 1 => ridge penalty
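A small numeric sketch of this penalty (my own illustration, not from the paper). Note that scikit-learn's `ElasticNet` uses the opposite convention, where `l1_ratio = 1` means pure lasso, so the α here is not interchangeable with that parameter:

```python
import numpy as np

def naive_elastic_net_penalty(beta: np.ndarray, alpha: float) -> float:
    """(1 - alpha)*|beta|_1 + alpha*|beta|_2^2, the convex combination above."""
    l1 = np.sum(np.abs(beta))   # lasso penalty |beta|_1
    l2 = np.sum(beta ** 2)      # ridge penalty |beta|_2^2
    return (1 - alpha) * l1 + alpha * l2

beta = np.array([1.0, -2.0, 0.0, 0.5])
print(naive_elastic_net_penalty(beta, alpha=0.0))  # pure lasso penalty: 3.5
print(naive_elastic_net_penalty(beta, alpha=1.0))  # pure ridge penalty: 5.25
```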
- The naive elastic net can be solved by transforming it into a lasso problem on augmented data (see the sketch below).
- Can be viewed as ridge-type shrinkage followed by lasso-type thresholding.
- This two-stage procedure incurs a double amount of shrinkage, introducing extra bias without reducing variance.
- Generalization of lasso and ridge regression.
- With α = 1 (pure ridge penalty), it cannot produce sparse solutions.
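The augmented-data transformation above (Lemma 1 in the paper) can be sketched in code. The augmented design, y*, and the γ = λ₁/√(1 + λ₂) mapping follow the paper; the mapping onto scikit-learn's `Lasso` objective (which carries a 1/(2·n_samples) factor) is my own scaling assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

def naive_elastic_net(X, y, lam1, lam2):
    """Naive elastic net via lasso on augmented data (the paper's Lemma 1)."""
    n, p = X.shape
    c = 1.0 / np.sqrt(1.0 + lam2)
    X_aug = c * np.vstack([X, np.sqrt(lam2) * np.eye(p)])  # shape (n + p, p)
    y_aug = np.concatenate([y, np.zeros(p)])
    gamma = lam1 * c                    # lasso weight in the paper's formulation
    # scikit-learn minimizes (1/(2m))*||y - Xb||^2 + alpha*|b|_1 with m = n + p,
    # whereas the paper uses ||y - Xb||^2 + gamma*|b|_1, hence alpha = gamma/(2m).
    fit = Lasso(alpha=gamma / (2 * (n + p)), fit_intercept=False,
                max_iter=100_000).fit(X_aug, y_aug)
    return c * fit.coef_                # beta(naive) = beta* / sqrt(1 + lam2)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))
y = X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(30)
print(naive_elastic_net(X, y, lam1=0.5, lam2=1.0))
```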
- Elastic net: the naive elastic net coefficients rescaled to undo the double shrinkage, β(elastic net) = (1 + λ₂)·β(naive elastic net) (see the sketch below).
- Retains the good properties of the naive elastic net.
- With this rescaling, the elastic net becomes minimax optimal.
- The scaling reverses the extra shrinkage introduced by the ridge component.
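A one-step illustration of the rescaling (the coefficient values and λ₂ = 1 are hypothetical):

```python
import numpy as np

lam2 = 1.0                               # illustrative ridge weight
beta_naive = np.array([0.8, 0.0, -0.3])  # hypothetical naive elastic net fit
beta_en = (1.0 + lam2) * beta_naive      # beta(elastic net) = (1 + lam2) * beta(naive)
print(beta_en)                           # [ 1.6  0.  -0.6]
```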
- LARS-EN is based on LARS (the algorithm used to solve lasso).
- Since the elastic net can be transformed into a lasso problem on augmented data, pieces of the LARS algorithm can be reused (see the sketch below).
- Takes advantage of the sparse structure of the augmented data matrix to save computation.
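A sketch of the idea (not the paper's own implementation): for a fixed λ₂, the whole elastic net path in λ₁ can be traced by running LARS in lasso mode on the augmented data. This uses scikit-learn's `lars_path` on synthetic data of my own choosing:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p, lam2 = 50, 10, 1.0
X = rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

# Same augmentation as in the naive elastic net sketch above.
c = 1.0 / np.sqrt(1.0 + lam2)
X_aug = c * np.vstack([X, np.sqrt(lam2) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])

# One LARS run returns the entire piecewise-linear coefficient path.
alphas, active, coefs = lars_path(X_aug, y_aug, method="lasso")
print(coefs.shape)   # (p, number of knots along the path)
```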
- Empirically, the elastic net outperforms lasso in prediction accuracy.