
Construction Algorithm of Ranking Model Based on Non-Convex Upper Bound
Abstract: Existing ranking algorithms all construct ranking models by minimizing a convex upper bound of the original objective function, so the resulting models are not sufficiently accurate. To address this problem, a ranking algorithm based on a non-convex upper bound is proposed. The algorithm first builds a framework based on the multi-class support vector machine (SVM) and defines an objective function that directly optimizes NDCG (Normalized Discounted Cumulative Gain). On this basis, a non-convex upper bound that is tighter than the existing convex upper bounds is designed to approximate the original objective function. Since the upper-bound function is non-convex and non-smooth, the concave-convex procedure is applied to obtain a convex approximation, and the cutting plane algorithm is used to solve the resulting problem. Finally, the proposed algorithm is verified on benchmark datasets and compared with existing algorithms. The results show that, compared with the existing ranking algorithms based on convex upper bounds, the proposed algorithm produces models that are both more accurate and more stable.
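
For reference, the NDCG measure that the objective function targets is commonly defined as follows (this is the standard formulation of the metric, not reproduced from the paper itself):

    NDCG@k = \frac{1}{\mathrm{IDCG}@k} \sum_{i=1}^{k} \frac{2^{r_i} - 1}{\log_2(i + 1)}

where r_i is the graded relevance of the document at rank position i and IDCG@k is the value of the numerator for the ideal (descending-relevance) ordering, so that NDCG@k lies in [0, 1].

The concave-convex procedure mentioned in the abstract handles a non-convex objective of the form u(w) - v(w), with u and v convex, by repeatedly linearizing the concave part and minimizing the resulting convex surrogate. The sketch below shows only this generic outer loop under that assumption; the names cccp, grad_v and solve_convex are illustrative placeholders, and the inner convex solver (a cutting-plane method in the paper's setting) is left abstract.

    import numpy as np

    def cccp(grad_v, solve_convex, w0, max_iter=50, tol=1e-6):
        """Generic CCCP outer loop for minimizing u(w) - v(w), u and v convex.

        grad_v(w)         : returns a (sub)gradient of v at w
        solve_convex(g, w): returns argmin over w' of u(w') - g . w'
                            (the convex subproblem; a cutting-plane solver
                            in the paper's setting, abstract here)
        """
        w = np.asarray(w0, dtype=float)
        for _ in range(max_iter):
            g = grad_v(w)                  # linearize v at the current iterate
            w_new = solve_convex(g, w)     # minimize the convex surrogate
            if np.linalg.norm(w_new - w) < tol:
                break                      # iterates have converged
            w = w_new
        return w

Each iteration can only decrease the surrogate objective, so the loop converges to a local optimum of the original non-convex upper bound.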
Source: Journal of South China University of Technology (Natural Science Edition), 2012, No. 4, pp. 57-63 (7 pages). Indexed by EI, CAS, CSCD, and the Peking University Core Journal list.
Funding: Supported by the National Natural Science Foundation of China (60875027), the Natural Science Foundation of Anhui Province (090412054, 1104060M141, 1208085QF120), the Major Science and Technology Special Project of the Anhui Provincial Science and Technology Research Program (08010201002), the Program for Excellent Young Talents in Universities of Anhui Province (2012SQRL016), the Open Fund of the Key Laboratory of Intelligent Computing and Signal Processing of the Ministry of Education at Anhui University, and the Youth Science Foundation of Anhui University (KJQN1119).
Keywords: ranking algorithm; non-convex upper bound; NDCG (normalized discounted cumulative gain); concave-convex procedure; cutting plane algorithm; multi-class support vector machine
