Kernel methods
Labels:
Informatics
Kernel method (Wikipedia)
- In probability and statistics, density estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function.
- The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population.
- A kernel is a non-negative real-valued integrable function K satisfying the following two requirements:
  1. ∫₋∞⁺∞ K(u) du = 1 (it integrates to one, i.e. it is a probability density), and
  2. K(−u) = K(u) for all values of u (it is symmetric).
- The density estimates are kernel density estimates using a Gaussian kernel. That is, a Gaussian density function is placed at each data point, and the sum of the density functions is computed over the range of the data.
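The construction described above can be sketched in a few lines of NumPy; the sample data, evaluation grid, and bandwidth below are hypothetical choices for illustration only:

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Place a Gaussian density at each data point and average them
    over the grid -- the kernel density estimate described above."""
    data = np.asarray(data, dtype=float)
    u = (grid[:, None] - data[None, :]) / bandwidth     # scaled distances
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)        # Gaussian kernel K(u)
    return k.sum(axis=1) / (data.size * bandwidth)      # average and rescale

data = [-1.0, 0.0, 0.5, 2.0]                            # toy sample
grid = np.linspace(-5.0, 6.0, 2201)
density = gaussian_kde(data, grid, bandwidth=0.8)       # smooth estimate
```

Because each Gaussian integrates to one and the sum is divided by the number of points, the resulting estimate is itself a probability density.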
- Kernel methods (KMs) approach the problem by mapping the data into a high-dimensional feature space, where each coordinate corresponds to one feature of the data items, transforming the data into a set of points in a Euclidean space. In that space, a variety of methods can be used to find relations in the data. Since the mapping can be quite general (not necessarily linear, for example), the relations found in this way are accordingly very general. This approach is called the kernel trick.
KMs owe their name to the use of kernel functions, which enable them to operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space.
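A concrete instance of this: the homogeneous polynomial kernel K(x, y) = (x·y)² has a known explicit feature map in two dimensions, so we can verify that the kernel equals the feature-space inner product without ever forming the features. The feature map `phi` and the sample vectors below are illustrative:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial
    kernel on 2-D input: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """K(x, y) = (x . y)^2 -- the feature-space inner product,
    computed without ever constructing phi(x) or phi(y)."""
    return float(np.dot(x, y)) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
# (1*3 + 2*(-1))^2 = 1, and phi(x).phi(y) gives the same value
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
```

The kernel evaluation costs one dot product in the input space, while the explicit map grows rapidly with dimension and degree; this gap is the practical point of the kernel trick.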
In the list below, 1{…} is the indicator function; all kernels except the Gaussian vanish for |u| > 1.

Kernel functions K(u):
- Uniform: K(u) = (1/2) · 1{|u| ≤ 1}
- Triangular: K(u) = (1 − |u|) · 1{|u| ≤ 1}
- Epanechnikov: K(u) = (3/4)(1 − u²) · 1{|u| ≤ 1}
- Quartic (biweight): K(u) = (15/16)(1 − u²)² · 1{|u| ≤ 1}
- Triweight: K(u) = (35/32)(1 − u²)³ · 1{|u| ≤ 1}
- Tricube: K(u) = (70/81)(1 − |u|³)³ · 1{|u| ≤ 1}
- Gaussian: K(u) = (1/√(2π)) e^(−u²/2)
- Cosine: K(u) = (π/4) cos(πu/2) · 1{|u| ≤ 1}

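Each of these kernels must integrate to one, per the definition above. A minimal numerical check in NumPy (the grid and step size are arbitrary illustrative choices):

```python
import numpy as np

# Standard forms of the named kernels; all but the Gaussian
# vanish outside |u| <= 1.
kernels = {
    "uniform":      lambda u: 0.5 * (np.abs(u) <= 1),
    "triangular":   lambda u: (1 - np.abs(u)) * (np.abs(u) <= 1),
    "epanechnikov": lambda u: 0.75 * (1 - u**2) * (np.abs(u) <= 1),
    "quartic":      lambda u: (15 / 16) * (1 - u**2) ** 2 * (np.abs(u) <= 1),
    "triweight":    lambda u: (35 / 32) * (1 - u**2) ** 3 * (np.abs(u) <= 1),
    "gaussian":     lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi),
    "cosine":       lambda u: (np.pi / 4) * np.cos(np.pi * u / 2) * (np.abs(u) <= 1),
}

u, du = np.linspace(-6.0, 6.0, 24001, retstep=True)
for name, K in kernels.items():
    area = K(u).sum() * du        # Riemann-sum approximation of the integral
    assert abs(area - 1) < 1e-3, (name, area)
```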
A kernel smoother is a statistical technique for estimating a real-valued function from its noisy observations when no parametric model for the function is known. The estimated function is smooth, and the level of smoothness is set by a single parameter.
Nearest neighbor smoother
The idea of the nearest neighbor smoother is the following: for each point X0, take the m nearest neighbors and estimate the value of Y(X0) by averaging the values of these neighbors.
Formally,

  Ŷ(X0) = (1/m) Σᵢ₌₁ᵐ Y(X[i]),

where X[m] denotes the mth closest neighbor to X0.

Example:

In this example, X is one-dimensional. For each X0, Ŷ(X0) is the average of the 16 points closest to X0 (denoted in red). The result is not smooth enough.
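The rule above can be sketched in a few lines of NumPy; the noisy sine data below and the choice m = 16 are illustrative (mirroring the example, not reproducing its exact data):

```python
import numpy as np

def nn_smoother(x, y, x0, m):
    """Estimate Y(x0) as the plain average of the m nearest
    neighbors -- the 1-D nearest neighbor smoother above."""
    idx = np.argsort(np.abs(x - x0))[:m]   # indices of the m closest points
    return y[idx].mean()

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 100)   # noisy observations
grid = np.linspace(0.0, 1.0, 50)
y_hat = np.array([nn_smoother(x, y, x0, m=16) for x0 in grid])
# y_hat is piecewise constant in x0 -- hence "not smooth enough".
```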
Kernel average smoother
The idea of the kernel average smoother is the following: for each data point X0, choose a constant distance λ (the kernel radius, or window width for p = 1 dimension) and compute a weighted average over all data points closer than λ to X0, with points nearer to X0 receiving higher weights.
Formally, with hλ(X0) = λ = constant,

  Ŷ(X0) = Σᵢ₌₁ᴺ Kλ(X0, Xᵢ) Y(Xᵢ) / Σᵢ₌₁ᴺ Kλ(X0, Xᵢ),

where Kλ(X0, Xᵢ) = D(|Xᵢ − X0| / λ) and D(t) is one of the popular kernels.
Example:

For each X0 the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimation is smooth, but the boundary points are biased. The reason for that is the non-equal number of points (from the right and from the left to the X0) in the window, when the X0 is close enough to the boundary.
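A minimal sketch of this weighted average in NumPy; the Epanechnikov choice for D(t), the data, and λ = 0.2 are illustrative assumptions:

```python
import numpy as np

def kernel_smoother(x, y, x0, lam):
    """Kernel average estimate at x0: a weighted average of the points
    within distance lam, using Epanechnikov weights
    D(t) = 0.75 * (1 - t^2) on t = |x - x0| / lam."""
    t = np.abs(x - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)   # zero outside the window
    if w.sum() == 0:                               # empty window: no estimate
        return np.nan
    return np.sum(w * y) / w.sum()

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 200)   # noisy observations
grid = np.linspace(0.0, 1.0, 50)
y_hat = np.array([kernel_smoother(x, y, x0, lam=0.2) for x0 in grid])
```

Near the boundaries the window is truncated on one side, so the weights become one-sided there; this is exactly the boundary bias discussed above.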
Local linear regression
Main article: Local regression
In the two previous sections we assumed that the underlying function Y(X) is locally constant, and therefore we were able to use a weighted average for the estimation. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions) rather than a constant (horizontal line). After fitting the line, the estimate Ŷ(X0) is given by the value of this line at the point X0. Repeating this procedure for each X0 yields the estimated function Ŷ(X). As in the previous section, the window width is constant: hλ(X0) = λ. Formally, local linear regression is computed by solving a weighted least squares problem.
For one dimension (p = 1):

  min over α(X0), β(X0):  Σᵢ₌₁ᴺ Kλ(X0, Xᵢ) (Y(Xᵢ) − α(X0) − β(X0) Xᵢ)²

The closed-form solution is given by:

  Ŷ(X0) = b(X0)ᵀ (Bᵀ W(X0) B)⁻¹ Bᵀ W(X0) y

where:

  b(X0)ᵀ = (1, X0),
  W(X0) is the N × N diagonal matrix with entries Wᵢᵢ = Kλ(X0, Xᵢ),
  B is the N × 2 matrix whose ith row is (1, Xᵢ), and
  y = (Y(X₁), …, Y(X_N))ᵀ.

Example:

The resulting function is smooth, and the problem with the biased boundary points is solved.
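The closed-form solution can be sketched directly in NumPy; the Epanechnikov weights, the noisy-straight-line data, and λ = 0.2 below are illustrative assumptions:

```python
import numpy as np

def local_linear(x, y, x0, lam):
    """Local linear estimate at x0: solve the weighted least squares
    problem and evaluate the fitted line at x0 (this mirrors the
    closed form b(x0)^T (B^T W B)^{-1} B^T W y)."""
    t = np.abs(x - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)   # Epanechnikov weights
    B = np.column_stack([np.ones_like(x), x])      # i-th row is (1, x_i)
    W = np.diag(w)
    alpha, beta = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    return alpha + beta * x0                       # value of the line at x0

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 200)     # noisy straight line
estimate = local_linear(x, y, 0.5, lam=0.2)       # true value is 2*0.5 + 1 = 2
```

When the underlying function is itself a straight line, the local linear fit recovers it closely even at the boundary points, which is the bias reduction claimed above.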
Local polynomial regression
Instead of fitting locally linear functions, one can fit polynomial functions.
For p = 1, one should minimize:

  min over β₀, …, β_d:  Σᵢ₌₁ᴺ Kλ(X0, Xᵢ) (Y(Xᵢ) − Σⱼ₌₀ᵈ βⱼ Xᵢʲ)²

with

  Ŷ(X0) = Σⱼ₌₀ᵈ βⱼ X0ʲ.

In the general case (p > 1), one should minimize:

  min over β:  Σᵢ₌₁ᴺ Kλ(X0, Xᵢ) (Y(Xᵢ) − b(Xᵢ)ᵀ β)²,

where b(X) is a polynomial basis in the components of X and Ŷ(X0) = b(X0)ᵀ β.
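A sketch of the one-dimensional recipe in NumPy; the degree, bandwidth, Epanechnikov weight choice, and noisy-parabola data are all illustrative assumptions:

```python
import numpy as np

def local_poly(x, y, x0, d, lam):
    """Local polynomial estimate of degree d at x0: weighted least
    squares on the basis (1, x, ..., x^d) with Epanechnikov weights."""
    t = np.abs(x - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)
    B = np.vander(x, d + 1, increasing=True)     # i-th row: (1, x_i, ..., x_i^d)
    WB = B * w[:, None]                          # rows scaled by kernel weights
    beta = np.linalg.solve(B.T @ WB, WB.T @ y)   # normal equations
    return float(np.vander([x0], d + 1, increasing=True) @ beta)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1.0, 1.0, 300))
y = x**2 + rng.normal(0.0, 0.1, 300)             # noisy parabola
estimate = local_poly(x, y, 0.5, d=2, lam=0.3)   # true value is 0.5^2 = 0.25
```

With d = 1 this reduces to the local linear regression of the previous section, and with d = 0 to the kernel average smoother.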
Quotations of external material follow the "fair use doctrine" as defined in Article 28 of the Copyright Act of the Republic of Korea and the U.S. Copyright Act (17 U.S.C. §107). For all texts and translations marked with copyright (© 최광민), the following are prohibited: (1) reproduction and redistribution, (2) arbitrary modification or selective excerpting of the text, and (3) screen capture for unauthorized distribution; (4) only the URL may be used when citing.
