DSPA Chapter 17 (Regularized Linear Modeling and Controlled Variable Selection)
Classical techniques for choosing important covariates to include in a model of complex multivariate data relied on various types of stepwise variable selection processes (see Chapter 16). These tend to improve prediction accuracy in certain situations, e.g., when a small number of features are strongly associated with the clinical outcome or biosocial trait. However, the prediction error may be large when the model relies purely on a data-fidelity term. Including a regularization term in the cost function being optimized can improve prediction accuracy. For example, below we show that by shrinking large regression coefficients, ridge regularization reduces overfitting and lowers the prediction error. Similarly, the least absolute shrinkage and selection operator (LASSO) employs regularization to perform simultaneous parameter estimation and variable selection; LASSO enhances prediction accuracy and yields a naturally interpretable model. Regularization refers to imposing certain characteristics on model-based scientific inference, e.g., discouraging complex models or extreme explanations even when they fit the data well, promoting model generalizability to prospective data, or restricting the model from overfitting incidental patterns in the sample.
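To make the shrinkage-versus-selection contrast concrete, here is a minimal sketch using the glmnet package (assuming it is installed); the simulated data, sample sizes, and variable names are illustrative, not part of the chapter's case studies.

```r
# A minimal sketch, assuming the glmnet package; simulated data for illustration.
library(glmnet)

set.seed(1234)
n <- 100; p <- 50                       # many candidate features relative to n
x <- matrix(rnorm(n * p), nrow = n)     # design matrix of p candidate covariates
beta <- c(rep(2, 5), rep(0, p - 5))     # only the first 5 features are truly predictive
y <- drop(x %*% beta + rnorm(n))        # continuous outcome with noise

# Ridge regression (alpha = 0): shrinks all coefficients toward zero
cv_ridge <- cv.glmnet(x, y, alpha = 0)

# LASSO (alpha = 1): shrinks coefficients and sets many exactly to zero,
# performing simultaneous parameter estimation and variable selection
cv_lasso <- cv.glmnet(x, y, alpha = 1)

# Coefficient estimates at the cross-validated penalty weight lambda.min
coef(cv_ridge, s = "lambda.min")        # all 50 coefficients nonzero but shrunk
coef(cv_lasso, s = "lambda.min")        # a sparse subset of nonzero coefficients
```

Comparing the two coefficient vectors previews the distinction developed throughout this chapter: ridge shrinks every coefficient but retains all covariates, whereas LASSO selects a sparse subset.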
In this chapter, we extend the mathematical foundation presented in Chapter 4 and (1) discuss computational protocols for handling complex high-dimensional data, (2) illustrate model estimation while controlling the false-positive rate of salient-feature selection, and (3) derive effective forecasting models.