- In probability and statistics, density estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function.
- The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population.
- A kernel is a non-negative real-valued integrable function K satisfying the following two requirements: it integrates to one, $\int_{-\infty}^{+\infty} K(u)\,du = 1$, and it is symmetric, $K(-u) = K(u)$ for all values of u.
- A common example is the kernel density estimate with a Gaussian kernel. That is, a Gaussian density function is centered at each data point, and the sum of these density functions is computed over the range of the data (a short code sketch follows this list).
- Kernel methods (KMs) approach the problem by mapping the data into a high-dimensional feature space, where each coordinate corresponds to one feature of the data items, transforming the data into a set of points in a Euclidean space. In that space, a variety of methods can be used to find relations in the data. Since the mapping can be quite general (not necessarily linear, for example), the relations found in this way are accordingly very general. This approach is called the kernel trick.
KMs owe their name to the use of kernel functions, which enable them to operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space.
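As a concrete illustration of the Gaussian kernel density estimate described above, here is a minimal Python sketch; the function name, bandwidth, and sample values are illustrative choices rather than anything prescribed by the text.

```python
import numpy as np

def gaussian_kde(grid, data, bandwidth):
    """Place a Gaussian bump at each data point and average the bumps."""
    u = (grid[:, None] - data[None, :]) / bandwidth    # scaled distances to every observation
    bumps = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # one standard-normal bump per data point
    return bumps.mean(axis=1) / bandwidth              # average, rescaled so the estimate integrates to 1

# Illustrative sample and evaluation grid.
data = np.array([1.0, 1.3, 2.1, 2.9, 3.0, 3.2])
grid = np.linspace(0.0, 5.0, 200)
density = gaussian_kde(grid, data, bandwidth=0.4)
```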
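The kernel trick itself can be made concrete with a small sketch. The example below uses the degree-2 polynomial kernel $k(x, z) = (x \cdot z)^2$ purely for illustration and checks that it equals the inner product of explicitly mapped features.

```python
import numpy as np

def feature_map(x):
    """Explicit map phi(x) for the degree-2 polynomial kernel in two dimensions."""
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2.0) * x1 * x2])

def poly2_kernel(x, z):
    """The same quantity computed without ever forming phi: k(x, z) = (x . z)^2."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# <phi(x), phi(z)> and k(x, z) agree, so an algorithm that only needs inner
# products can work in the feature space without computing coordinates there.
print(np.dot(feature_map(x), feature_map(z)), poly2_kernel(x, z))
```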
A kernel smoother is a statistical technique for estimating a real-valued function as the weighted average of neighboring observed data, where the weights are defined by a kernel so that closer points are given higher weights.
Commonly used kernel functions K(u) include the uniform, triangular, Epanechnikov, quartic (biweight), triweight, tricube, Gaussian, and cosine kernels.
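For reference, a few of these kernels can be written out directly. The sketch below gives the standard formulas for three of them; the vectorized NumPy style and function names are my own choices.

```python
import numpy as np

def uniform_kernel(u):
    # K(u) = 1/2 on |u| <= 1, zero elsewhere.
    return np.where(np.abs(u) <= 1, 0.5, 0.0)

def epanechnikov_kernel(u):
    # K(u) = 3/4 (1 - u^2) on |u| <= 1, zero elsewhere.
    return np.where(np.abs(u) <= 1, 0.75 * (1.0 - u**2), 0.0)

def gaussian_kernel(u):
    # Standard normal density; supported on the whole real line.
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
```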
Nearest neighbor smoother
The idea of the nearest neighbor smoother is the following. For each point X0, take the m nearest neighbors and estimate the value of Y(X0) by averaging the values of these neighbors. Formally, $\hat{Y}(X_0) = \frac{1}{m}\sum_{X_i \in N_m(X_0)} Y(X_i)$, where $N_m(X_0)$ is the set of the m observations closest to X0.
In this example, X is one-dimensional, and for each X0 the estimate $\hat{Y}(X_0)$ is the average of the m observations whose X values are nearest to X0. The resulting estimate is piecewise constant and therefore not smooth.
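A minimal sketch of this smoother, assuming one-dimensional X and a plain unweighted average over the m nearest observations; the data and the value of m are illustrative.

```python
import numpy as np

def nearest_neighbor_smoother(x0, x, y, m):
    """Average the y-values of the m observations whose x is closest to x0."""
    idx = np.argsort(np.abs(x - x0))[:m]   # indices of the m nearest neighbors
    return y[idx].mean()

# Illustrative noisy data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 100))
y = np.sin(x) + rng.normal(0.0, 0.3, size=100)
y_hat = np.array([nearest_neighbor_smoother(x0, x, y, m=10) for x0 in x])
```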
Kernel average smoother
The idea of the kernel average smoother is the following. For each data point X0, choose a constant distance size λ (kernel radius, or window width for p = 1 dimension), and compute a weighted average of all data points that are closer than λ to X0 (points closer to X0 get higher weights). Formally, $h_\lambda(X_0) = \lambda = \text{constant}$, and

$$\hat{Y}(X_0) = \frac{\sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)\, Y(X_i)}{\sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)}, \qquad K_{h_\lambda}(X_0, X_i) = D\!\left(\frac{\|X_i - X_0\|}{h_\lambda(X_0)}\right),$$

where D(t) is one of the popular kernels.
Example:
For each X0 the window width is constant, and every point inside the window contributes with its kernel weight. It can be seen that the estimate is smooth, but the boundary points are biased. The reason is the unequal number of points to the left and to the right of X0 within the window when X0 is close enough to the boundary.
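The same idea in code: a sketch of the kernel average smoother with a fixed window width λ, using the Epanechnikov kernel as D(t) (both the kernel choice and the single-point interface are illustrative).

```python
import numpy as np

def epanechnikov(t):
    return np.where(np.abs(t) <= 1, 0.75 * (1.0 - t**2), 0.0)

def kernel_average_smoother(x0, x, y, lam):
    """Weighted average of all observations within distance lam of x0."""
    w = epanechnikov(np.abs(x - x0) / lam)   # weights decay with distance, zero beyond lam
    # Assumes at least one observation falls inside the window.
    return np.sum(w * y) / np.sum(w)
```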
Local linear regression
Main article: Local regression
In the two previous sections we assumed that the underlying Y(X) function is locally constant, therefore we were able to use the weighted average for the estimation. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions), rather than a constant (horizontal line). After fitting the line, the estimate $\hat{Y}(X_0)$ is given by the value of this line at the point X0. For one dimension (p = 1), this amounts to solving

$$\min_{\alpha(X_0),\,\beta(X_0)} \; \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)\left(Y(X_i) - \alpha(X_0) - \beta(X_0)\,X_i\right)^2 .$$

The closed form solution is given by

$$\hat{Y}(X_0) = \left(1,\; X_0\right)\left(B^{T} W(X_0) B\right)^{-1} B^{T} W(X_0)\, y,$$

where $y = \left(Y(X_1), \dots, Y(X_N)\right)^{T}$, $W(X_0)$ is the $N \times N$ diagonal matrix with entries $K_{h_\lambda}(X_0, X_i)$, and $B^{T} = \begin{pmatrix} 1 & 1 & \dots & 1 \\ X_1 & X_2 & \dots & X_N \end{pmatrix}$.
The resulting function is smooth, and the problem with the biased boundary points is solved.
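A sketch of local linear regression at a single point, solving the weighted least-squares problem above directly; the kernel choice and interface are again illustrative.

```python
import numpy as np

def epanechnikov(t):
    return np.where(np.abs(t) <= 1, 0.75 * (1.0 - t**2), 0.0)

def local_linear(x0, x, y, lam):
    """Fit a kernel-weighted straight line around x0 and evaluate it at x0."""
    w = epanechnikov(np.abs(x - x0) / lam)        # K_h(x0, x_i)
    B = np.column_stack([np.ones_like(x), x])     # rows (1, X_i)
    W = np.diag(w)
    # Weighted least squares: beta = (B^T W B)^{-1} B^T W y.
    # Assumes at least two observations carry non-zero weight.
    beta = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    return np.array([1.0, x0]) @ beta             # value of the fitted line at x0
```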
Local polynomial regression
Instead of fitting locally linear functions, one can fit polynomial functions. For p = 1, one should minimize

$$\min_{\alpha(X_0),\,\beta_j(X_0),\, j = 1, \dots, d} \; \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)\left(Y(X_i) - \alpha(X_0) - \sum_{j=1}^{d}\beta_j(X_0)\,X_i^{\,j}\right)^2,$$

with

$$\hat{Y}(X_0) = \alpha(X_0) + \sum_{j=1}^{d}\beta_j(X_0)\,X_0^{\,j}.$$

In the general case (p > 1), one should minimize

$$\hat{\beta}(X_0) = \underset{\beta(X_0)}{\arg\min} \; \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)\left(Y(X_i) - b(X_i)^{T}\beta(X_0)\right)^2,$$

where $b(X)$ is a vector of polynomial terms in the components of X, and the estimate is $\hat{Y}(X_0) = b(X_0)^{T}\hat{\beta}(X_0)$.
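The local linear sketch above extends directly to local polynomials of degree d by widening the design matrix; a sketch for p = 1, with the same illustrative kernel:

```python
import numpy as np

def epanechnikov(t):
    return np.where(np.abs(t) <= 1, 0.75 * (1.0 - t**2), 0.0)

def local_polynomial(x0, x, y, lam, d):
    """Fit a kernel-weighted degree-d polynomial around x0 and evaluate it at x0."""
    w = epanechnikov(np.abs(x - x0) / lam)
    B = np.vander(x, N=d + 1, increasing=True)    # rows (1, X_i, ..., X_i^d)
    W = np.diag(w)
    beta = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    basis_at_x0 = np.vander(np.array([x0]), N=d + 1, increasing=True)[0]
    return basis_at_x0 @ beta                     # value of the fitted polynomial at x0
```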