
MKLasso model algorithm and model choice

Journal of Chang'an University (Natural Science Edition) [ISSN:1006-6977/CN:61-1281/TN]

Issue:
2012, No. 4
Page:
105-110

Info

Title:
MKLasso model algorithm and model choice
Author(s):
LIANG Bao-juan JU Yong-feng ZHANG Jing
School of Electronic and Control Engineering, Chang'an University, Xi'an 710064, Shaanxi, China
Keywords:
Lasso; visual principle; L1 norm regularization; kernel method
CLC Number:
TP312
DOI:
-
Abstract:
A nonlinear multi-kernel Lasso (MKLasso) model, a generalization of the classical KLasso, was developed by introducing multiple kernel functions and their parameters. To solve MKLasso, an algorithm was designed from the gradient boosting perspective. The algorithm selects kernel parameters directly, which reduces complexity and running time. Moreover, to improve performance, a new model selection strategy was proposed based on a principle of the human visual system: an observer standing near the data space sees the local structure of the data clearly, while an observer standing far from the data space sees only the global structure clearly. The efficiency of the algorithm was verified by 6 simulations designed on 3 real data sets. The simulations indicate that, measured by prediction mean square error, MKLasso outperforms KLasso by a factor of at least 10. The algorithm is not only efficient and robust but also offers a flexible model selection strategy that selects kernel parameters directly, reducing testing and running time. 5 tabs, 2 figs, 12 refs.
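
Illustrative sketch (not from the paper): the abstract gives no formulas, but an MKLasso objective consistent with the cited generalized-Lasso and multiple-kernel literature [9-11] would be

\min_{\beta}\ \frac{1}{2}\Big\| y - \sum_{j=1}^{m} K_j \beta_j \Big\|_2^2 + \lambda \sum_{j=1}^{m} \|\beta_j\|_1

where K_j is the Gram matrix of the j-th candidate kernel and the L1 penalty drives most coefficients to zero. The Python sketch below shows one plausible gradient-boosting-style solver under these assumptions: basis columns drawn from several Gaussian kernel widths compete at each step, so the widths that enter the model are selected directly, as the abstract claims. All names (mklasso_boost, rbf_kernel, gammas, step) and the RBF kernel family are hypothetical choices, not taken from the paper.

import numpy as np

def rbf_kernel(X, centers, gamma):
    # Gaussian (RBF) kernel matrix: exp(-gamma * ||x - c||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mklasso_boost(X, y, gammas, n_iter=500, step=0.05):
    # Forward-stagewise (boosting-style) fit: each candidate basis is one
    # kernel column K_j(., x_i) for one width in `gammas`; the column most
    # correlated with the current residual gets a small coefficient step,
    # which approximates an L1-regularized path (cf. refs [2] and [4]).
    K = np.hstack([rbf_kernel(X, X, g) for g in gammas])  # (n, n*len(gammas))
    beta = np.zeros(K.shape[1])
    resid = y.astype(float).copy()
    for _ in range(n_iter):
        corr = K.T @ resid                 # correlation of each basis with residual
        j = int(np.argmax(np.abs(corr)))   # best (kernel width, center) pair
        delta = step * np.sign(corr[j])
        beta[j] += delta
        resid -= delta * K[:, j]
    return beta, K

# Toy usage: noisy sine curve, three candidate kernel widths
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
beta, K = mklasso_boost(X, y, gammas=[0.1, 1.0, 10.0])
print("nonzero coefficients:", np.count_nonzero(beta))
print("training MSE:", float(np.mean((y - K @ beta) ** 2)))

Because each update touches a single (width, center) column, the fitted coefficient vector is sparse and the active kernel widths can be read off directly, which is one way to realize the "select kernel parameters directly" behavior described above.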

References:

[1] Vapnik V N. The nature of statistical learning theory[M]. New York: Springer-Verlag, 2000.
[2] Bühlmann P, Yu B. Boosting with the L2 loss: regression and classification[J]. Journal of the American Statistical Association, 2003, 98(462): 324-339.
[3] Tibshirani R. Regression shrinkage and selection via the Lasso[J]. Journal of the Royal Statistical Society: Series B (Methodological), 1996, 58(1): 267-288.
[4] Efron B, Hastie T, Johnstone I, et al. Least angle regression[J]. The Annals of Statistics, 2004, 32(2): 407-499.
[5] Rosset S, Zhu J. Piecewise linear regularized solution paths[J]. The Annals of Statistics, 2007, 35(3): 1012-1030.
[6] Schölkopf B, Smola A J. Learning with kernels: support vector machines, regularization, optimization, and beyond[M]. Cambridge: The MIT Press, 2002.
[7] Müller K R, Mika S, Rätsch G, et al. An introduction to kernel-based learning algorithms[J]. IEEE Transactions on Neural Networks, 2001, 12(2): 181-201.
[8] 汪洪桥, 孙富春, 蔡艳宁, 等. 多核学习方法[J]. 自动化学报, 2010, 36(8): 1037-1047. WANG Hong-qiao, SUN Fu-chun, CAI Yan-ning, et al. On multiple kernel learning methods[J]. Acta Automatica Sinica, 2010, 36(8): 1037-1047. (in Chinese)
[9] Roth V. The generalized Lasso[J]. IEEE Transactions on Neural Networks, 2004, 15(1): 16-28.
[10] Gao J, Kwan P W, Shi D. Sparse kernel learning with Lasso and Bayesian inference algorithm[J]. Neural Networks, 2010, 23(2): 257-264.
[11] Bach F R. Consistency of the group Lasso and multiple kernel learning[J]. The Journal of Machine Learning Research, 2008, 9: 1179-1225.
[12] Xu Z, Dai M, Meng D. Fast and efficient strategies for model selection of Gaussian support vector machine[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2009, 39(5): 1292-1307.

Last Update: 2012-08-30