Abstract: By analyzing campus behavior data generated by students in the course of study and daily life, such as campus-card transactions, library borrowing records, and academic grades, we propose a multi-task learning-to-rank algorithm based on major and semester (a multi-task learning RankNet based on major and semester, MSRN), built on the RankNet algorithm. On top of a big-data analysis model, the algorithm explores the correlations among students' study diligence, the regularity of their daily life, and their academic performance. Simulation results show that the proposed algorithm predicts students' academic performance during their time at school better than competing algorithms.
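The abstract does not give MSRN's full multi-task formulation, but it builds on RankNet, whose core is a pairwise cross-entropy loss over the sigmoid of a score difference. A minimal sketch of that base loss (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def ranknet_pairwise_loss(s_i, s_j, label):
    """RankNet cross-entropy over the pairwise preference probability.

    s_i, s_j: model scores for items i and j (e.g., two students).
    label: 1 if item i should rank above item j, 0 otherwise.
    """
    # Probability that i ranks above j: sigmoid of the score difference.
    p_ij = 1.0 / (1.0 + np.exp(-(s_i - s_j)))
    return -(label * np.log(p_ij) + (1 - label) * np.log(1 - p_ij))

# When the scores agree with the label, the loss is small;
# flipping the label on the same scores makes it large.
agree = ranknet_pairwise_loss(2.0, 0.5, 1)
disagree = ranknet_pairwise_loss(2.0, 0.5, 0)
```

In a multi-task variant such as MSRN, one would presumably share score-model parameters across per-major or per-semester tasks and sum this loss over each task's pairs, but the exact sharing scheme is not specified in the abstract.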
Funding: supported by the National Natural Science Foundation of China (61471049, 61372169, 61532018) and the Postgraduate Innovation Fund of SICE, BUPT, 2015.
Abstract: We propose a novel progressive framework to optimize deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex and abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The optimization objective is a weighted sum of the linear loss of the added layer and the non-linear loss of the final output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the model structure of deep canonical correlation analysis (DCCA), i.e., adding a third semantic view to regularize text and image pairs, and embed this structure into our framework. The experimental results show the modified model outperforms comparable state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, which indicates the progressive framework can provide a better and faster solution for deep neural networks.
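The central mechanism is the combined objective: an auxiliary linear loss tapped directly off the input, added to the deep model's output loss with a weighting. A toy sketch under stated assumptions (a regression target, one hidden tanh layer, and an arbitrary weight alpha; none of these specifics come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 4 features, scalar regression target.
X = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 1))

# Linear branch: the inserted linear loss layer taps the input directly.
W_lin = rng.normal(size=(4, 1)) * 0.1
linear_loss = np.mean((X @ W_lin - y) ** 2)

# Non-linear branch: the deep model's usual path through a hidden layer
# to the output loss (one tanh layer stands in for the deep stack).
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1
nonlinear_loss = np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

# Progressive objective: weighted sum of the two losses.
alpha = 0.5  # hypothetical weight on the linear loss
total_loss = alpha * linear_loss + (1.0 - alpha) * nonlinear_loss
```

Because alpha lies in (0, 1), the combined objective always sits between the two individual losses; the abstract's claim is that optimizing this sum inherits the stability of the linear term while still training the deep, non-linear path.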