# Local regression

## Algorithm

Local regression is the most popular type of nonparametric smoother. A separate least-squares regression is performed for each individual item within the training data; each regression takes into account the point itself and a certain number of its nearest neighbours, with nearer neighbours weighted more heavily. To score a new item, the resulting regression equations are combined in such a way that the influence of each equation decreases with the new item's distance from the training item that was used to fit that equation.
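The per-point procedure above can be sketched in a few lines of numpy. This is a minimal illustration, not a production implementation: it assumes one predictor variable, tricube weights (the usual LOESS choice), and exposes the two hyperparameters discussed below as `frac` and `degree`.

```python
import numpy as np

def loess(x, y, frac=0.3, degree=1):
    """Minimal LOESS sketch: for each training point, fit a weighted
    polynomial to its nearest neighbours and evaluate it at that point.
    `frac` is the smoothing parameter (share of points per local fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(degree + 1, int(np.ceil(frac * n)))  # neighbours per fit
    fitted = np.empty(n)
    for i in range(n):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]          # k nearest neighbours
        d_max = dist[idx].max()
        # tricube weights: nearer neighbours influence the fit more
        w = (1 - (dist[idx] / d_max) ** 3) ** 3 if d_max > 0 else np.ones(k)
        # np.polyfit weights multiply residuals, so pass sqrt(w)
        coeffs = np.polyfit(x[idx], y[idx], deg=degree, w=np.sqrt(w))
        fitted[i] = np.polyval(coeffs, x[i])
    return fitted
```

Note that the loop runs one regression per training point, which is why the procedure becomes expensive on large datasets.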

Local regression has the advantage that it can model any known or unknown relationship between the predictor variables and the dependent variable. When there is only one predictor variable (a simple x-y plot), it has a second use: producing a best-fit curve for relationships that cannot be expressed as a mathematical function (e.g. the unemployment rate over the last 30 years).

The available hyperparameters include:

• The smoothing parameter, which determines what proportion of the total training items is included in each nearest-neighbour search. It is expressed as a decimal between 0 and 1, with typical values lying between 0.25 and 0.5. Higher values yield a smoother curve, while lower values yield a curve that is more sensitive to localized relationships between the variables; lower values therefore also make the procedure more sensitive to outliers and more likely to overfit.

• The polynomial degree used in the individual regressions. Typical values are 1 (a local straight line, y = a + bx) or 2 (a local parabola, y = a + bx + cx²). The higher the polynomial degree, the more closely the curve will fit the training data.
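The effect of the polynomial degree can be seen in a single local fit. The snippet below (an illustrative sketch, with tricube weights assumed as in standard LOESS) fits one weighted neighbourhood around x = 0 with degree 1 and degree 2: on curved data, the local parabola captures the curvature while the local line cannot.

```python
import numpy as np

# One local neighbourhood around x0 = 0, with tricube weights.
x = np.linspace(-1, 1, 21)
y = x ** 2                      # curved relationship
w = (1 - np.abs(x) ** 3) ** 3   # weight 1 at x0, falling to 0 at the edge

# np.polyfit weights multiply residuals, so pass sqrt(w)
line = np.polyfit(x, y, deg=1, w=np.sqrt(w))    # degree 1: local line
parab = np.polyfit(x, y, deg=2, w=np.sqrt(w))   # degree 2: local parabola

# The parabola reproduces y = x^2 exactly (its value at 0 is ~0),
# while the line is biased upwards at x0 by the curvature.
```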

The danger that local regression will lead to overfitting means that it is only suitable for prediction when used with a large amount of training data. At the same time, the procedure is very computationally intensive and has only become feasible for large training datasets over the past few years. It is also important to recognise that the training output is a huge number of regression equations that cannot easily be expressed, understood, or transferred from one statistical tool to another.
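One common workaround for the export problem is to store only the training x values and their fitted values and interpolate between them for new items, rather than shipping the local regression equations themselves. A minimal sketch (the fitted values are faked with a smooth function here so the snippet is self-contained):

```python
import numpy as np

# Assumed setup: x_train sorted, `fitted` holds the LOESS output at each
# training point (stand-in values used here for illustration).
x_train = np.linspace(0.0, 10.0, 101)
fitted = np.sin(x_train)  # placeholder for the LOESS fitted values

def predict(x_new):
    """Predict by linear interpolation between stored fitted values --
    a simple way to 'export' a LOESS curve without the local fits."""
    return np.interp(x_new, x_train, fitted)
```

This trades a little accuracy between grid points for a representation that any tool can consume.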

alias
Locally Weighted Scatterplot Smoothing, LOESS, LOWESS, nonparametric smoother
subtype
has functional building block
FBB_Value prediction
has input data type
IDT_Vector of quantitative variables
has internal model
INM_Function
has output data type
ODT_Quantitative variable
has learning style
LST_Supervised
has parametricity
PRM_Nonparametric with hyperparameter(s)
has relevance
REL_Relevant
uses
ALG_Least Squares Regression, ALG_Nearest Neighbour
sometimes supports
mathematically similar to