Journal Articles
2 articles found
An Ensemble Methods for Medical Insurance Costs Prediction Task (Cited by: 2)
1
Authors: Nataliya Shakhovska, Nataliia Melnykova, Valentyna Chopiyak, Michal Gregus ml. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 3969-3984 (16 pages)
The paper reports three new ensembles of supervised learning predictors for managing medical insurance costs. An open dataset is used to develop the data analysis methods. Using artificial intelligence in the management of financial risks facilitates saving time and money and protects patients' health. Machine learning is associated with many expectations, but its quality is determined by choosing a good algorithm and the proper steps to plan, develop, and implement the model. The paper aims to develop three new ensembles for individual insurance cost prediction that provide high prediction accuracy. The Pearson coefficient and the Boruta algorithm are used for feature selection. Boosting, stacking, and bagging ensembles are built, and a comparison with existing machine learning algorithms is given. Boosting models based on regression trees and stochastic gradient descent are built; Bagged CART and Random Forest algorithms are proposed. The boosting and stacking ensembles show better accuracy than bagging, and tuning the boosting parameters does not allow the RMSE to decrease either; thus, bagging shows its weakness in generalizing the prediction. The stacking ensemble is developed using K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Regression Tree, Linear Regression, and Stochastic Gradient Boosting, with the random forest (RF) algorithm used to combine the predictions; one hundred trees are built for the RF. The developed ensemble reaches a Root Mean Square Error (RMSE) of 3173.213 in comparison with the other predictors. On the RMSE metric, the quality of the developed ensemble is 1.47 better than that of the best weak predictor (SVR).
Keywords: healthcare; medical insurance; prediction task; machine learning; ensemble; data analysis
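The stacking design described in the abstract (five weak regressors whose predictions are combined by a 100-tree random forest, scored with RMSE) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the use of scikit-learn, the synthetic data, the hyperparameters, and the train/test split are all assumptions.

```python
# Sketch of a stacking ensemble: KNN, SVR, regression tree, linear regression
# and gradient boosting as base learners, a 100-tree random forest as combiner.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the open medical-insurance dataset (assumption).
X, y = make_regression(n_samples=1300, n_features=6, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("knn", KNeighborsRegressor()),
        ("svr", SVR()),
        ("tree", DecisionTreeRegressor(random_state=0)),
        ("lin", LinearRegression()),
        ("sgb", GradientBoostingRegressor(random_state=0)),
    ],
    # Random forest with one hundred trees combines the base predictions.
    final_estimator=RandomForestRegressor(n_estimators=100, random_state=0),
)
stack.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, stack.predict(X_test)))
print(f"stacking RMSE: {rmse:.3f}")
```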
DenseCL: A simple framework for self-supervised dense visual pre-training (Cited by: 1)
2
Authors: Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong. Visual Informatics (EI), 2023, No. 1, pp. 30-40 (11 pages)
Self-supervised learning aims to learn a universal feature representation without labels. To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning framework that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. Specifically, we present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the supervised ImageNet pre-training and other self-supervised learning methods, our self-supervised DenseCL pre-training demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation. Specifically, our approach significantly outperforms the strong MoCo-v2 by 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation. The improvements are up to 3.5% AP and 8.8% mIoU over MoCo-v2, and 6.1% AP and 6.1% mIoU over the supervised counterpart with the frozen-backbone evaluation protocol.
Keywords: self-supervised learning; visual pre-training; dense prediction tasks
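A pixel-level pairwise contrastive loss of the kind the abstract describes can be sketched in PyTorch as follows. This is an illustrative approximation only, not the official DenseCL implementation: it matches each local feature in one view to its most similar location in the other view as the positive pair, and uses pooled features of the other images in the batch as negatives instead of DenseCL's momentum-encoder memory queue.

```python
# Sketch of a dense (pixel-level) contrastive loss between two augmented views.
# Assumes batch size > 1 so that cross-image negatives exist.
import torch
import torch.nn.functional as F

def dense_contrastive_loss(f1, f2, temperature=0.2):
    """f1, f2: (B, C, H, W) dense feature maps from two views of the same images."""
    B, C, H, W = f1.shape
    q = F.normalize(f1.flatten(2), dim=1)          # (B, C, HW) query features
    k = F.normalize(f2.flatten(2), dim=1)          # (B, C, HW) key features

    # Match each query location to its most similar key location (positive pair).
    sim = torch.einsum("bci,bcj->bij", q, k)       # (B, HW, HW) cross-view similarity
    pos_idx = sim.argmax(dim=2)                    # (B, HW) index of matched key
    k_pos = torch.gather(k, 2, pos_idx.unsqueeze(1).expand(-1, C, -1))  # (B, C, HW)

    # Positive logits: similarity of each query to its matched key.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)   # (B, 1, HW)

    # Negative logits: queries vs. pooled key features of the *other* images.
    k_glob = F.normalize(k.mean(dim=2), dim=1)     # (B, C) one global key per image
    l_neg = torch.einsum("bci,nc->bni", q, k_glob) # (B, B, HW)
    mask = ~torch.eye(B, dtype=torch.bool, device=f1.device)
    l_neg = l_neg[mask].view(B, B - 1, -1)         # drop same-image entries

    # InfoNCE: class 0 (the matched key) is the correct "class" at every pixel.
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature  # (B, B, HW)
    labels = torch.zeros(B, H * W, dtype=torch.long, device=f1.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random feature maps (hypothetical shapes).
loss = dense_contrastive_loss(torch.randn(4, 128, 7, 7), torch.randn(4, 128, 7, 7))
print(loss.item())
```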