Journal Articles
22 articles found
1. Groundwater level prediction of landslide based on classification and regression tree (Cited: 2)
Authors: Yannan Zhao, Yuan Li, Lifen Zhang, Qiuliang Wang. Geodesy and Geodynamics, 2016, No. 5, pp. 348-355 (8 pages)
According to groundwater level monitoring data of the Shuping landslide in the Three Gorges Reservoir area, and based on the response relationship between influential factors such as rainfall and reservoir level and the change of groundwater level, the influential factors of groundwater level were selected. A classification and regression tree (CART) model was then constructed from this subset and used to predict the groundwater level. In verification, the predictions for the test sample were consistent with the actually measured values, with a mean absolute error of 0.28 m and a relative error of 1.15%. For comparison, a support vector machine (SVM) model constructed with the same set of factors yielded a mean absolute error of 1.53 m and a relative error of 6.11%. This indicates that the CART model not only has better fitting and generalization ability, but also clear advantages in analyzing landslide groundwater dynamics and screening important variables. It is an effective method for predicting groundwater levels in landslides.
Keywords: landslide; groundwater level; prediction; classification and regression tree; Three Gorges Reservoir area
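To make the comparison described in this abstract concrete, here is a minimal sketch of a CART-versus-SVM regression benchmark in Python with scikit-learn. The file name and the column names (rainfall, reservoir_level, groundwater_level) are hypothetical placeholders, not the Shuping landslide monitoring data.

# Sketch: CART vs. SVM regression of groundwater level, scored by MAE and mean relative error.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("groundwater_monitoring.csv")            # hypothetical monitoring file
X = df[["rainfall", "reservoir_level"]]                    # selected influential factors
y = df["groundwater_level"]                                # groundwater level in metres
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

cart = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)
svm = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

for name, model in [("CART", cart), ("SVM", svm)]:
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rel = (abs(pred - y_test) / y_test).mean() * 100       # mean relative error in percent
    print(f"{name}: MAE = {mae:.2f} m, relative error = {rel:.2f}%")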
2. A New Approach to Predict Financial Failure: Classification and Regression Trees (CART) (Cited: 1)
Authors: Ayse Guel Yllgoer, UEmit Dogrul, Guelhan Orekici Temel. Journal of Modern Accounting and Auditing, 2011, No. 4, pp. 329-339 (11 pages)
The increase of competition, economic recession and financial crises has increased business failures, and in response researchers have attempted to develop new approaches that can yield more correct and more reliable results. The classification and regression tree (CART) is one of the new modeling techniques developed for this purpose. In this study, the classification and regression tree method is explained and its power to predict financial failure is tested. CART is applied to data on industrial companies traded on the Istanbul Stock Exchange (ISE) between 1997 and 2007. As a result of this study, it has been observed that CART has high power for predicting financial failure one, two and three years prior to failure, with profitability ratios being the most important ratios in the prediction of failure.
Keywords: business failure; financial distress; prediction; classification and regression trees (CART)
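As an illustration only (not the authors' dataset or code), the sketch below fits a CART classifier to a handful of financial ratios and reads off the variable importances, which is how profitability ratios would surface as the dominant predictors of failure; the ratio names and data are invented.

# Sketch: CART for financial failure prediction from financial ratios (synthetic stand-in data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ratios = ["return_on_assets", "net_profit_margin", "current_ratio", "debt_to_equity"]
X = rng.normal(size=(300, len(ratios)))                    # stand-in for firm-year ratio values
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) < -0.5).astype(int)  # 1 = failed

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0).fit(X, y)
for name, imp in sorted(zip(ratios, cart.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: importance = {imp:.2f}")               # profitability ratios dominate here by design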
3. Machine Learning-Driven Classification for Enhanced Rule Proposal Framework
Authors: B. Gomathi, R. Manimegalai, Srivatsan Santhanam, Atreya Biswas. Computer Systems Science & Engineering, 2024, No. 6, pp. 1749-1765 (17 pages)
In enterprise operations, maintaining manual rules for enterprise processes can be expensive, time-consuming, and dependent on specialized domain knowledge in that enterprise domain. Recently, rule generation has been automated in enterprises, particularly through machine learning, to streamline routine tasks. Typically, these machine learning models are black boxes where the reasons for the decisions are not always transparent, and the end users need to verify the model proposals as part of user acceptance testing in order to trust them. In such scenarios, rules excel over machine learning models because end users can verify the rules and have more trust. In many scenarios the truth label changes frequently; thus it becomes difficult for a machine learning model to learn until a considerable amount of data has been accumulated, but with rules the truth can be adapted. This paper presents a novel framework for generating human-understandable rules using the classification and regression tree (CART) decision tree method, which ensures both optimization and user trust in automated decision-making processes. The framework generates comprehensible rules in if-then form that predict the class even in domains where noise is present. The proposed system transforms enterprise operations by automating the production of human-readable rules from structured data, resulting in increased efficiency and transparency. Removing the need for human rule construction saves time and money while guaranteeing that users can readily check and trust the automatic judgments of the system. The remarkable performance metrics of the framework, which achieve 99.85% accuracy and 96.30% precision, further support its efficiency in translating complex data into comprehensible rules, eventually empowering users and enhancing organizational decision-making processes.
Keywords: classification and regression tree; process automation; rules engine; model interpretability; explainability; model trust
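A hedged sketch of the core idea follows: a fitted CART model is turned into human-readable if-then rules. It uses scikit-learn's export_text helper rather than the authors' framework, and the process attributes are invented.

# Sketch: generating human-readable if-then rules from a fitted CART model.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
features = ["amount", "vendor_score", "days_open", "priority"]   # hypothetical process attributes

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(cart, feature_names=features)          # nested "if feature <= threshold" rules
print(rules)                                                # end users can inspect and verify these rules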
4. Integrating CART Algorithm and Multi-source Remote Sensing Data to Estimate Sub-pixel Impervious Surface Coverage: A Case Study from Beijing Municipality, China (Cited: 6)
Authors: HU Deyong, CHEN Shanshan, QIAO Kun, CAO Shisong. Chinese Geographical Science (SCIE, CSCD), 2017, No. 4, pp. 614-625 (12 pages)
The sub-pixel impervious surface percentage (SPIS) is the fraction of impervious surface area in one pixel, and it is an important indicator of urbanization. Using remote sensing data, the spatial distribution of SPIS values over large areas can be extracted, and these data are significant for studies of urban climate, environment and hydrology. To develop a stable, multi-temporal SPIS estimation method suitable for typical temperate semi-arid climate zones with distinct seasons, an optimal model for estimating SPIS values within Beijing Municipality was built based on the classification and regression tree (CART) algorithm. First, models with different input variables for SPIS estimation were built by integrating multi-source remote sensing data with other auxiliary data. The optimal model was selected through analysis and comparison of the assessed accuracy of these models. Subsequently, multi-temporal SPIS mapping was carried out based on the optimal model. The results are as follows: 1) Multi-seasonal images and nighttime light (NTL) data are the optimal input variables for SPIS estimation within Beijing Municipality, where the intra-annual variability in vegetation is distinct. The different spectral characteristics of cultivated land caused by different farming practices and vegetation phenology can be detected effectively by the multi-seasonal images, and NTL data can effectively reduce the misestimation caused by the spectral similarity between bare land and impervious surfaces. After testing, the SPIS modeling correlation coefficient (r) is approximately 0.86, the average error (AE) is approximately 12.8%, and the relative error (RE) is approximately 0.39. 2) The SPIS results have been divided into areas with high-density impervious cover (70%–100%), medium-density impervious cover (40%–70%), low-density impervious cover (10%–40%) and natural cover (0%–10%). The SPIS model performed better in estimating values for high-density urban areas than for the other categories. 3) Multi-temporal SPIS mapping (1991–2016) was conducted based on the optimized SPIS results for 2005. After testing, AE ranges from 12.7% to 15.2%, RE ranges from 0.39 to 0.46, and r ranges from 0.81 to 0.86. It is demonstrated that the proposed approach for estimating sub-pixel impervious surface by integrating the CART algorithm and multi-source remote sensing data is feasible and suitable for multi-temporal SPIS mapping of areas with distinct intra-annual variability in vegetation.
Keywords: impervious surface; impervious surface percentage; classification and regression tree (CART); sub-pixel; sub-pixel impervious surface percentage (SPIS); time series
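A simplified sketch of the regression step alone (not the full multi-source workflow): a CART regressor maps per-pixel features such as multi-seasonal reflectances and nighttime light to an impervious-surface percentage and is scored with the r, AE and RE statistics quoted above; the arrays are synthetic stand-ins.

# Sketch: CART regression of sub-pixel impervious surface percentage (SPIS) from pixel features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(size=(n, 5))                               # e.g. seasonal band reflectances + nighttime light
spis = np.clip(100 * (0.6 * X[:, 0] + 0.4 * X[:, 4]) + rng.normal(0, 5, n), 0, 100)

model = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X[:800], spis[:800])
pred = np.clip(model.predict(X[800:]), 0, 100)             # keep percentages inside [0, 100]
obs = spis[800:]

r = np.corrcoef(obs, pred)[0, 1]                           # correlation coefficient
ae = np.mean(np.abs(obs - pred))                           # average error, in percentage points
re = np.mean(np.abs(obs - pred) / np.maximum(obs, 1.0))    # relative error, guarded against zero cover
print(f"r = {r:.2f}, AE = {ae:.1f}%, RE = {re:.2f}")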
5. Factors associated with success of telaprevir- and boceprevir-based triple therapy for hepatitis C virus infection
Authors: Kian Bichoupan, Neeta Tandon, Valerie Martel-Laferriere, Neal M Patel, David Sachs, Michel Ng, Emily A Schonfeld, Alexis Pappas, James Crismale, Alicia Stivala, Viktoriya Khaitova, Donald Gardenier, Michael Linderman, William Olson, Ponni V Perumalswami, Thomas D Schiano, Joseph A Odin, Lawrence U Liu, Douglas T Dieterich, Andrea D Branch. World Journal of Hepatology (CAS), 2017, No. 11, pp. 551-561 (11 pages)
AIM: To evaluate new therapies for hepatitis C virus (HCV), data about real-world outcomes are needed. METHODS: Outcomes of 223 patients with genotype 1 HCV who started telaprevir- or boceprevir-based triple therapy (May 2011-March 2012) at the Mount Sinai Medical Center were analyzed. Human immunodeficiency virus-positive patients and patients who received a liver transplant were excluded. Factors associated with sustained virological response (SVR24) and relapse were analyzed by univariable and multivariable logistic regression as well as classification and regression trees. Fast virological response (FVR) was defined as undetectable HCV RNA at week 4 (telaprevir) or week 8 (boceprevir). RESULTS: The median age was 57 years, 18% were black, and 44% had advanced fibrosis/cirrhosis (FIB-4 ≥ 3.25). Only 42% (94/223) of patients achieved SVR24 on an intention-to-treat basis. In a model that included platelets, SVR24 was associated with white race [odds ratio (OR) = 5.92, 95% confidence interval (CI): 2.34-14.96], HCV sub-genotype 1b (OR = 2.81, 95%CI: 1.45-5.44), platelet count (OR = 1.10 per 10^4 cells/μL, 95%CI: 1.05-1.16), and IL28B CC genotype (OR = 3.54, 95%CI: 1.19-10.53). A platelet count > 135 × 10^3/μL was the strongest predictor of SVR by classification and regression tree. Relapse occurred in 25% (27/104) of patients with an end-of-treatment response and was associated with non-FVR (OR = 4.77, 95%CI: 1.68-13.56), HCV sub-genotype 1a (OR = 5.20, 95%CI: 1.40-18.97), and FIB-4 ≥ 3.25 (OR = 2.77, 95%CI: 1.07-7.22). CONCLUSION: The SVR rate was 42% with telaprevir- or boceprevir-based triple therapy in real-world practice. Low platelets and advanced fibrosis were associated with treatment failure and relapse.
Keywords: sustained virologic response; hepatitis C virus; relapse; telaprevir; boceprevir; triple therapy; classification and regression; adverse event; real-world
6. A retinal blood vessel extraction algorithm based on CART decision tree and improved AdaBoost
Authors: DIWU Peng-peng, HU Ya-qi. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2019, No. 1, pp. 61-68 (8 pages)
This paper presents a supervised learning algorithm for retinal vascular segmentation based on the classification and regression tree (CART) algorithm and improved adaptive boosting (AdaBoost). Local binary pattern (LBP) texture features and local features are extracted by extracting, reversing, dilating and enhancing the green components of retinal images to construct a 17-dimensional feature vector. A dataset is constructed from the feature vectors and data manually marked by experts. The features are used to generate CART binary trees, which serve as the AdaBoost weak classifiers, and AdaBoost is improved by adding re-judgment functions to form a strong classifier. The proposed algorithm is evaluated on the Digital Retinal Images for Vessel Extraction (DRIVE) database. The experimental results show that the proposed algorithm has higher segmentation accuracy for blood vessels, and the result basically contains complete blood vessel details. Moreover, the segmented blood vessel tree has good connectivity, which basically reflects the distribution trend of blood vessels. Compared with the traditional AdaBoost classification algorithm and the support vector machine (SVM) based classification algorithm, the proposed algorithm has higher average accuracy and reliability index, similar to the segmentation results of state-of-the-art segmentation algorithms.
Keywords: classification and regression tree (CART); improved adaptive boosting (AdaBoost); retinal blood vessel; local binary pattern (LBP) texture
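The pairing of CART weak learners with AdaBoost can be sketched with scikit-learn as below; the paper's re-judgment refinement and the 17-dimensional LBP feature extraction are not reproduced, the feature matrix is synthetic, and the estimator keyword assumes scikit-learn 1.2 or newer.

# Sketch: AdaBoost with shallow CART weak classifiers for vessel vs. background pixel labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for 17-dimensional LBP/local feature vectors with expert vessel labels.
X, y = make_classification(n_samples=2000, n_features=17, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

weak = DecisionTreeClassifier(max_depth=2)                  # small CART tree as the weak learner
strong = AdaBoostClassifier(estimator=weak, n_estimators=100, learning_rate=0.5,
                            random_state=0).fit(X_tr, y_tr)
print("pixel accuracy:", accuracy_score(y_te, strong.predict(X_te)))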
7. Mechanical Eye Trauma Epidemiology, Prognostic Factors, and Management Controversies—An Update
Authors: Sharah Rahman, Ava Hossain, Sarwar Alam, Anisur Rahman, Chandana Sultana, Saiful Islam, Yusuf Jamal Khan, Md. Amiruzzaman. Open Journal of Ophthalmology, 2021, No. 4, pp. 348-363 (16 pages)
Purpose of Review: The management of eye injuries is both difficult and controversial. This study attempts to highlight the management of ocular trauma using currently available information in the literature and the authors' experience. This review presents a workable framework covering first presentation, epidemiology, classification, investigations, management principles, complications, prognostic factors, final visual outcome and management debates. Review Findings: Mechanical ocular trauma is a leading cause of monocular blindness and possible handicap worldwide. Among several classification systems, the most widely accepted is the Birmingham Eye Trauma Terminology (BETT). Mechanical ocular trauma is a topic of unresolved controversy. Patching for corneal abrasion, paracentesis for hyphema, and the timing of cataract surgery and intraocular lens implantation are all issues in anterior segment injuries. Regarding posterior segment controversies, the timing of vitrectomy, the use of prophylactic cryotherapy, the necessity of intravitreal antibiotics in the absence of infection, and the use of vitrectomy versus vitreous tap in traumatic endophthalmitis are the issues. The pediatric age group needs to be approached with a different protocol due to the risk of amblyopia, intraocular inflammation, and significant vitreoretinal adhesions. Various prognostic factors have a role in the final visual outcome. B-scan ultrasonography is used to exclude retinal detachment, intraocular foreign body (IOFB), and vitreous haemorrhage in hazy media. Individual surgical strategies are used for every patient according to the classification and extent of the injuries. Conclusion: This article examines relevant evidence on the management challenges and controversies of mechanical trauma of the eye and offers treatment recommendations based on published research and the authors' own experience.
Keywords: mechanical eye trauma; Birmingham Eye Trauma Terminology; prognostic factors for mechanical trauma; epidemiology of mechanical eye injury; open globe injuries (OGI); ocular trauma scoring (OTS); classification and regression tree (CART) model; update of mechanical eye trauma; classification of ocular trauma; controversies of ocular trauma; challenges in ocular trauma management
8. Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy (Cited: 1)
Authors: Surjeet Dalal, Edeh Michael Onyema, Amit Malik. World Journal of Gastroenterology (SCIE, CAS), 2022, No. 46, pp. 6551-6563 (13 pages)
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from functioning normally. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threat from liver disease, early detection is critical to help reduce the risks and improve treatment outcomes. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection, diagnosis, and reduction of risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patient-records. The population was divided into two sets depending on the disease state of the patient. This binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression tree models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
Keywords: liver infection; machine learning; chi-square automated interaction detection; classification and regression trees; decision tree; XGBoost; hyperparameter tuning
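A minimal sketch of the tuning step is given below, assuming the xgboost Python package, a locally saved copy of the Kaggle file, and a binary is_patient label as described in the abstract; the local file name, the label coding and the parameter grid are assumptions.

# Sketch: XGBoost with grid-search hyperparameter tuning for liver disease prediction.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("indian_liver_patient.csv")                # hypothetical local file name
y = (df["is_patient"] == 1).astype(int)                     # assumes 1 codes liver-disease patients
X = pd.get_dummies(df.drop(columns=["is_patient"]))         # one-hot encode categorical columns
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

grid = GridSearchCV(
    XGBClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, 7],
                "learning_rate": [0.05, 0.1, 0.3],
                "n_estimators": [100, 300]},
    scoring="accuracy", cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))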
9. A Comparative Study of Three Machine Learning Methods for Software Fault Prediction (Cited: 1)
Authors: 王琪, 朱杰, 于波. Journal of Shanghai Jiaotong University (Science) (EI), 2005, No. 2, pp. 117-121 (5 pages)
The contribution of this paper is a comparison of three popular machine learning methods for software fault prediction: classification tree, neural network and case-based reasoning. First, three different classifiers are built based on these three approaches. Second, the three classifiers use the same product metrics as predictor variables to identify fault-prone components. Third, the prediction results are compared on two aspects: how good the prediction capabilities of the models are, and how well the models support understanding of the process represented by the data.
Keywords: software quality prediction; classification and regression tree; artificial neural network; case-based reasoning
10. The Derivation of Nutrient Criteria for the Adjacent Waters of Yellow River Estuary in China
Authors: LOU Qi, ZHANG Xueqing, ZHAO Bei, CAO Jing, LI Zhengyan. Journal of Ocean University of China (SCIE, CAS, CSCD), 2022, No. 5, pp. 1227-1236 (10 pages)
Ecological protection and high-quality development of the Yellow River basin have become part of the national strategy in recent years. The Yellow River Estuary has been seriously affected by human activities. In particular, it has been severely polluted by nitrogen and phosphorus from land sources, which have caused serious eutrophication and harmful algal blooms. Nutrient criteria, however, had not been developed for the Yellow River Estuary, which hindered nutrient management measures and eutrophication risk assessment in this key ecological function zone of China. Based on field data from 2004-2019, we adopted the frequency distribution method, correlation analysis, the Linear Regression Model (LRM), Classification and Regression Tree (CART) and Nonparametric Changepoint Analysis (nCPA) methods to establish nutrient criteria for the adjacent waters of the Yellow River Estuary. The water quality criteria for dissolved inorganic nitrogen (DIN) and soluble reactive phosphorus (SRP) are recommended as 244.0 μg L^(-1) and 22.4 μg L^(-1), respectively. It is hoped that the results will provide a scientific basis for the formulation of nutrient standards in this important estuary of China.
Keywords: water quality criteria; nutrient; Yellow River Estuary; frequency distribution; classification and regression tree; eutrophication
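One way to picture the CART step in criteria derivation is a depth-one regression tree whose single split acts as a candidate nutrient threshold, analogous in spirit to a changepoint; the synthetic DIN and response values below are purely illustrative and are not the Yellow River Estuary data.

# Sketch: a depth-1 CART locates a candidate nutrient threshold (changepoint-like split).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
din = rng.uniform(50, 500, size=(200, 1))                   # dissolved inorganic nitrogen, ug/L
response = np.where(din[:, 0] > 250, 8.0, 2.0) + rng.normal(0, 1.0, 200)  # eutrophication indicator

stump = DecisionTreeRegressor(max_depth=1).fit(din, response)
threshold = stump.tree_.threshold[0]                        # split value of the root node
print(f"candidate DIN criterion: {threshold:.1f} ug/L")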
11. The predicted effects of climate change on local species distributions around Beijing, China
Authors: Lichun Mo, Jiakai Liu, Hui Zhang, Yi Xie. Journal of Forestry Research (SCIE, CAS, CSCD), 2020, No. 5, pp. 1539-1550 (12 pages)
To assist conservationists and policymakers in managing and protecting forests in Beijing from the effects of climate change, this study predicts changes for 2012-2112 in the habitable areas of three tree species—Betula platyphylla, Quercus palustris, Platycladus orientalis, plus other mixed broadleaf species—in Beijing, using a classification and regression tree niche model under the Intergovernmental Panel on Climate Change's A2 and B2 emissions scenarios (SRES). The results show that climate change will increase annual average temperatures in the Beijing area by 2.0-4.7 ℃, and annual precipitation by 4.7-8.5 mm, depending on the emissions scenario used. These changes result in shifts in the range of each of the species. New suitable areas for the distributions of B. platyphylla and Q. palustris will decrease in the future. The model points to significant shifts in the distributions of these species, withdrawing from their current ranges and pushing southward towards central Beijing. Most of the ranges decline during the initial 2012-2040 period before shifting southward and ending up larger overall at the end of the 88-year period. The mixed broadleaf forests expand their ranges significantly. The P. orientalis forests, on the other hand, expand their range marginally. The results indicate that climate change and its effects will accelerate significantly in Beijing over the next 88 years. Water stress is likely to be a major limiting factor on the distribution of forests and the most important factor affecting migration of species into and out of existing nature reserves. There is a potential for the extinction of some species. Therefore, long-term vegetation monitoring and warning systems will be needed to protect local species from habitat loss and genetic swamping of native species by hybrids.
Keywords: climate change; classification and regression tree; plant distribution; scenario A2 and B2; simulation analysis
12. Application of intelligent algorithms in Down syndrome screening during second trimester pregnancy
Authors: Hong-Guo Zhang, Yu-Ting Jiang, Si-Da Dai, Ling Li, Xiao-Nan Hu, Rui-Zhi Liu. World Journal of Clinical Cases (SCIE), 2021, No. 18, pp. 4573-4584 (12 pages)
BACKGROUND: Down syndrome (DS) is one of the most common chromosomal aneuploidy diseases. Prenatal screening and diagnostic tests can aid early diagnosis and appropriate management of these fetuses, and give parents an informed choice about whether or not to terminate a pregnancy. In recent years, investigations have been conducted to achieve a high detection rate (DR) and reduce the false positive rate (FPR). Hospitals have accumulated large numbers of screened cases. However, artificial intelligence methods are rarely used in the risk assessment of prenatal screening for DS. AIM: To use a support vector machine algorithm, a classification and regression tree algorithm, and an AdaBoost algorithm in machine learning for modeling and analysis of prenatal DS screening. METHODS: The dataset was from the Center for Prenatal Diagnosis at the First Hospital of Jilin University. We designed and developed intelligent algorithms based on the synthetic minority over-sampling technique (SMOTE)-Tomek and adaptive synthetic sampling over-sampling techniques to preprocess the dataset of prenatal screening information. The machine learning models were then established. Finally, the feasibility of artificial intelligence algorithms in DS screening evaluation is discussed. RESULTS: The database contained 31 diagnosed DS cases, accounting for 0.03% of all patients. The dataset showed a large difference between the numbers of DS affected and non-affected cases. A combination of over-sampling and under-sampling techniques can greatly increase the performance of the algorithm at processing non-balanced datasets. As the number of iterations increases, the combination of the classification and regression tree algorithm and the SMOTE-Tomek over-sampling technique can obtain a high DR while keeping the FPR to a minimum. CONCLUSION: The support vector machine algorithm and the classification and regression tree algorithm achieved good results on the DS screening dataset. When the T21 risk cutoff value was set to 270, the machine learning methods had a higher DR and a lower FPR than statistical methods.
Keywords: Down syndrome; prenatal screening; algorithms; classification and regression tree; support vector machine; risk cutoff value
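The resampling-plus-CART pipeline can be sketched with the imbalanced-learn package as follows; detection rate (DR) and false positive rate (FPR) are read from the confusion matrix. The data are synthetic and highly imbalanced to mimic the reported 0.03% prevalence, not the Jilin screening records.

# Sketch: SMOTE-Tomek resampling followed by a CART classifier on an imbalanced screening set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=20000, n_features=6, weights=[0.997, 0.003],
                           random_state=0)                  # rare positives, like DS screening
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)   # rebalance the training set only
cart = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_res, y_res)

tn, fp, fn, tp = confusion_matrix(y_te, cart.predict(X_te)).ravel()
print(f"DR  = {tp / (tp + fn):.2%}")                        # detection rate (sensitivity)
print(f"FPR = {fp / (fp + tn):.2%}")                        # false positive rate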
13. Predicting Electric Energy Consumption for a Jerky Enterprise
Authors: Elena Kapustina, Eugene Shutov, Anna Barskaya, Agata Kalganova. Energy and Power Engineering, 2020, No. 6, pp. 396-406 (11 pages)
Wholesale and retail markets for electricity and power require consumers to forecast electricity consumption at different time intervals. The study aims to increase the economic efficiency of the enterprise through the introduction of an algorithm for forecasting electric energy consumption with the technological process unchanged. A high-quality forecast allows the costs of electrical energy to be reduced substantially, because power cannot be stockpiled. When excess electrical power is purchased, costs increase either from selling it on the balancing energy market or from maintaining reserve capacity; if the purchased power is insufficient, costs increase because additional capacity must be bought. This paper illustrates three methods of forecasting electric energy consumption: the autoregressive integrated moving average method, artificial neural networks, and classification and regression trees. Actual electrical energy consumption data were used to make day-, week- and month-ahead predictions. The predictive performance of each model was verified in the Statistica simulation environment. Analysis of the economic efficiency of the prediction methods demonstrated that using the artificial neural network method for short-term forecasts reduced the cost of electricity most efficiently, whereas for mid-range predictions the classification and regression tree was the most efficient method for the jerky enterprise. The results indicate that reducing calculation error decreases expenses for the purchase of electric energy.
Keywords: autoregressive integrated moving average method; artificial neural networks; classification and regression trees; electricity consumption; energy forecasting
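To make the tree-based forecasting idea concrete, a rough sketch using lagged consumption values as predictors is shown below; the paper's models were built in Statistica, so this independent Python illustration on synthetic data is a stand-in rather than the authors' setup.

# Sketch: day-ahead electricity consumption forecast with a regression tree on lagged loads.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(3)
days = 730
load = 100 + 20 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(0, 5, days)  # weekly pattern

lags = 7                                                    # previous week as predictors
X = np.column_stack([load[i:days - lags + i] for i in range(lags)])
y = load[lags:]
split = -60                                                 # hold out the last 60 days

tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5).fit(X[:split], y[:split])
mape = mean_absolute_percentage_error(y[split:], tree.predict(X[split:]))
print(f"day-ahead MAPE on hold-out: {mape:.1%}")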
14. Examining the distribution and dynamics of impervious surface in different function zones in Beijing (Cited: 4)
Authors: 乔琨, 朱文泉, 胡德勇, 郝明, 陈姗姗, 曹诗颂. Journal of Geographical Sciences (SCIE, CSCD), 2018, No. 5, pp. 669-684 (16 pages)
Impervious surface (IS) is often recognized as an indicator of urban environmental change. Numerous research efforts have been devoted to studying its spatio-temporal dynamics and ecological effects, especially for the IS in the Beijing metropolitan region. However, most previous studies treated the Beijing metropolitan region as a whole without considering the differences and heterogeneity among its function zones. In this study, sub-pixel impervious surface results for Beijing in a time series (1991, 2001, 2005, 2011 and 2015) were extracted by means of the classification and regression tree (CART) model combined with change detection models. Then, based on the standard deviation ellipse, Lorenz curve, contribution index (CI) and landscape metrics, the spatio-temporal dynamics and variations of IS (1991, 2001, 2011 and 2015) in different function zones and districts were analyzed. It is found that the total area of impervious surface in Beijing increased dramatically during the study period, by about 144.18%. The deflection angle of the major axis of the standard deviation ellipse decreased from 47.15° to 38.82°, indicating that the major development axis in Beijing gradually moved from northeast-southwest to north-south. Moreover, the heterogeneity of the impervious surface distribution among the 16 districts weakened gradually, but the CI values and landscape metrics in the four function zones differed greatly. The urban function extended zone (UFEZ), the main source of the growth of IS in Beijing, had the highest CI values; its lowest CI value was 1.79, which is still much higher than the highest CI value in the other function zones. The core function zone (CFZ), the traditional aggregation zone of impervious surface, had the highest contagion index (CONTAG) values, but it contributed less than the UFEZ due to its small area. The CI value of the new urban developed zone (NUDZ) increased rapidly, turning from negative to positive and multiplying, so that it became an important contributor to the rise of urban impervious surface. However, the ecological conservation zone (ECZ) had a constant negative contribution throughout, and its CI value decreased gradually. Moreover, the landscape metrics and centroids of impervious surface in different density classes differed greatly. The high-density impervious surface had a more compact configuration and a greater impact on the eco-environment.
Keywords: impervious surface; landscape metrics; classification and regression tree (CART); function zones; Lorenz curve; contribution index
15. Impacts of predictor variables and species models on simulating Tamarix ramosissima distribution in Tarim Basin, northwestern China (Cited: 4)
Authors: Qiang Zhang, Xinshi Zhang. Journal of Plant Ecology (SCIE), 2012, No. 3, pp. 337-345 (9 pages)
Aims: Preserving and restoring Tamarix ramosissima is urgently required in the Tarim Basin, Northwest China. Species distribution models are regularly used in conservation and other management activities to predict the biogeographical distribution of species. However, uncertainty in the data and models inevitably reduces their prediction power. The major purpose of this study is to assess the impacts of predictor variables and species distribution models on simulating T. ramosissima distribution, to explore the relationships between predictor variables and species distribution models, and to model the potential distribution of T. ramosissima in this basin. Methods: Three models—the generalized linear model (GLM), classification and regression tree (CART) and Random Forests—were selected and run on the BIOMOD platform. Presence/absence data for T. ramosissima in the Tarim Basin, calculated from vegetation maps, were used as response variables. Climate, soil and digital elevation model (DEM) variables were divided into four datasets and then used as predictors. The four datasets were (i) climate variables, (ii) soil, climate and DEM variables, (iii) principal component analysis (PCA)-based climate variables and (iv) PCA-based soil, climate and DEM variables. Important Findings: The results indicate that predictor variables for species distribution models should be chosen carefully, because too many predictors can reduce prediction power. The effectiveness of using PCA to reduce the correlation among predictors and enhance modelling power depends on the chosen predictor variables and models. Our results imply that it is better to reduce correlated predictors before model processing. The Random Forests model was more precise than the GLM and CART models. The best model for T. ramosissima was the Random Forests model with climate predictors alone. The soil variables considered in this study could not significantly improve the model's prediction accuracy for T. ramosissima. The potential distribution area of T. ramosissima in the Tarim Basin is approximately 3.573 × 10^(4) km^(2), which offers the potential to mitigate global warming and produce bioenergy through restoration of T. ramosissima in the Tarim Basin.
Keywords: species distribution model; Tamarix ramosissima; generalized linear models; classification and regression trees; random forests
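A condensed sketch of the model comparison (GLM, CART, Random Forests) on presence/absence data, with PCA optionally applied to decorrelate the predictors, is given below; it stands in for the BIOMOD workflow rather than reproducing it, and the predictors and response are synthetic.

# Sketch: GLM vs. CART vs. Random Forests on presence/absence data, with and without PCA.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for climate/soil/DEM predictors and presence (1) / absence (0) records.
X, y = make_classification(n_samples=800, n_features=12, n_informative=5, random_state=0)

models = {"GLM (logistic)": LogisticRegression(max_iter=1000),
          "CART": DecisionTreeClassifier(max_depth=5, random_state=0),
          "Random Forests": RandomForestClassifier(n_estimators=300, random_state=0)}

for name, model in models.items():
    raw = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    pca = cross_val_score(make_pipeline(PCA(n_components=5), model), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC with raw predictors = {raw:.3f}, with PCA predictors = {pca:.3f}")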
16. Automatic Prosodic Break Detection and Feature Analysis (Cited: 1)
Authors: 倪崇嘉, 张爱英, 刘文举, 徐波. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2012, No. 6, pp. 1184-1196 (13 pages)
Automatic prosodic break detection and annotation are important for both speech understanding and natural speech synthesis. In this paper, we discuss automatic prosodic break detection and feature analysis. The contributions of the paper are twofold. First, we use a classifier combination method to detect Mandarin and English prosodic breaks using acoustic, lexical and syntactic evidence. Our proposed method achieves better performance on both the Mandarin prosodic annotation corpus (Annotated Speech Corpus of Chinese Discourse) and the English prosodic annotation corpus (Boston University Radio News Corpus) when compared with the baseline system and other researchers' experimental results. Second, we analyze features for prosodic break detection. The functions of different features, such as duration, pitch, energy, and intensity, are analyzed and compared for Mandarin and English prosodic break detection. Based on the feature analysis, we also verify some linguistic conclusions.
Keywords: prosodic break; intonational phrase boundary; classifier combination; boosting; classification and regression tree; conditional random field
17. Analysis of NIR spectroscopic data using decision trees and their ensembles
Authors: Sergey Kucheryavskiy. Journal of Analysis and Testing (EI), 2018, No. 3, pp. 274-289 (16 pages)
Decision trees and their ensembles became quite popular for data analysis during the past decade. One of the main reasons for that is the current boom in big data, where traditional statistical methods (such as, e.g., multiple linear regression) are not very efficient. However, in chemometrics these methods are still not very widespread, first of all because of several limitations related to the ratio between the number of variables and observations. This paper presents several examples of how decision trees and their ensembles can be used in the analysis of NIR spectroscopic data, both for regression and classification. We try to consider all important aspects, including optimization and validation of models, evaluation of results, treating missing data and selection of the most important variables. The performance and outcome of the decision tree-based methods are compared with a more traditional approach based on partial least squares.
Keywords: NIR spectroscopy; decision trees; classification and regression trees; random forests
18. Novel Prognostic Models for Predicting the 180-day Outcome for Patients with Hepatitis-B Virus-related Acute-on-chronic Liver Failure (Cited: 9)
Authors: Ran Xue, Jun Yang, Jing Wu, Zhongying Wang, Qinghua Meng. Journal of Clinical and Translational Hepatology (SCIE), 2021, No. 4, pp. 514-520 (7 pages)
Background and Aims: It remains difficult to forecast the 180-day prognosis of patients with hepatitis B virus-related acute-on-chronic liver failure (HBV-ACLF) using existing prognostic models. The present study aimed to derive novel, innovative models to enhance the predictive effectiveness of the 180-day mortality in HBV-ACLF. Methods: The present cohort study examined 171 HBV-ACLF patients (non-survivors, n=62; survivors, n=109). The 27 retrospectively collected parameters included basic demographic characteristics, clinical comorbidities, and laboratory values. Backward stepwise logistic regression (LR) and classification and regression tree (CART) analysis were used to derive two predictive models. Meanwhile, a nomogram was created based on the LR analysis. The accuracy of the LR and CART models was assessed through the area under the receiver operating characteristic curve (AUROC) and compared with that of the model for end-stage liver disease (MELD) score. Results: Among the 171 HBV-ACLF patients, the mean age was 45.17 years, and 11.7% of the patients were female. The LR model was constructed with six independent factors: age, total bilirubin, prothrombin activity, lymphocytes, monocytes and hepatic encephalopathy. The following seven variables were the prognostic factors for HBV-ACLF in the CART model: age, total bilirubin, prothrombin time, lymphocytes, neutrophils, monocytes, and blood urea nitrogen. The AUROC for the CART model (0.878) was similar to that for the LR model (0.878, p=0.898), and both exceeded that for the MELD score (0.728, p<0.0001). Conclusions: Both the LR and CART models are superior to the MELD score in predicting the 180-day mortality of patients with HBV-ACLF, and both can be used as medical decision-making tools by clinicians.
Keywords: classification and regression tree; acute-on-chronic hepatitis B liver failure; MELD scores; logistic regression model
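The head-to-head AUROC evaluation of a logistic regression model and a CART model can be sketched as follows; the six predictor names come from the abstract, but the generated values and outcome are synthetic placeholders rather than the study cohort.

# Sketch: comparing a logistic regression model and a CART model by AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
cols = ["age", "total_bilirubin", "prothrombin_activity",
        "lymphocytes", "monocytes", "hepatic_encephalopathy"]
X = rng.normal(size=(400, len(cols)))                       # standardized stand-in predictors
y = (0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(0, 1, 400) > 0.5).astype(int)  # 1 = death by day 180

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
cart = DecisionTreeClassifier(max_depth=4, min_samples_leaf=15, random_state=0).fit(X_tr, y_tr)

print("LR   AUROC:", round(roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]), 3))
print("CART AUROC:", round(roc_auc_score(y_te, cart.predict_proba(X_te)[:, 1]), 3))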
19. A Machine Learning Approach for Collusion Detection in Electricity Markets Based on Nash Equilibrium Theory (Cited: 3)
Authors: Peyman Razmi, Majid Oloomi Buygi, Mohammad Esmalifalak. Journal of Modern Power Systems and Clean Energy (SCIE, EI, CSCD), 2021, No. 1, pp. 170-180 (11 pages)
We aim to provide a tool for independent system operators to detect collusion and identify the colluding firms using day-ahead data. In this paper, an approach based on supervised machine learning is presented for collusion detection in electricity markets. The possible scenarios of collusion among generation firms are first identified. Then, for each scenario and possible load demand, the market equilibrium is computed. Market equilibrium points under different collusions and their peripheral points are used to train the collusion detection machine using supervised learning approaches such as the classification and regression tree (CART) and support vector machine (SVM) algorithms. By applying the proposed approach to a four-firm, ten-generator test system, the accuracy of the proposed approach is evaluated and the efficiency of the SVM and CART algorithms in collusion detection is compared with other supervised learning and statistical techniques.
Keywords: market power; collusion detection; machine learning; support vector machine (SVM); classification and regression tree (CART); statistical method
20. Comparison of Spatial Interpolation Methods for Gridded Bias Removal in Surface Temperature Forecasts (Cited: 2)
Authors: Seyedeh Atefeh MOHAMMADI, Majid AZADI, Morteza RAHMANI. Journal of Meteorological Research (SCIE, CSCD), 2017, No. 4, pp. 791-799 (9 pages)
All numerical weather prediction (NWP) models inherently have substantial biases, especially in the forecast of near-surface weather variables. Statistical methods can be used to remove the systematic error based on historical bias data at observation stations. However, many end users of weather forecasts need bias-corrected forecasts at locations that scarcely have any historical bias data. To circumvent this limitation, the bias of surface temperature forecasts on a regular grid covering Iran is removed by using the information available at observation stations in the vicinity of any given grid point. To this end, the running mean error method is first used to correct the forecasts at observation stations; then four interpolation methods, including inverse distance squared weighting with constant lapse rate (IDSW-CLR), kriging with constant lapse rate (Kriging-CLR), gradient inverse distance squared with linear lapse rate (GIDS-LR), and gradient inverse distance squared with lapse rate determined by classification and regression tree (GIDS-CART), are employed to interpolate the bias-corrected forecasts at neighboring observation stations to any given location. The results show that all four interpolation methods reduce the model error significantly, but Kriging-CLR performs better than the other methods. For Kriging-CLR, root mean square error (RMSE) and mean absolute error (MAE) were decreased by 26% and 29%, respectively, compared to the raw forecasts. It is also found that, after applying any of the proposed methods, the bias-corrected forecasts, unlike the raw forecasts, do not show spatial or temporal dependency.
Keywords: spatial interpolation; bias correction; lapse rate; kriging; classification and regression tree
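To illustrate the flavor of the interpolation step, a small sketch of inverse-distance-squared weighting with a constant lapse rate (IDSW-CLR-style, simplified) is given below; the station coordinates, temperatures and lapse-rate value are invented, and the kriging and GIDS variants from the paper are not reproduced.

# Sketch: interpolating bias-corrected station temperatures to a grid point using inverse
# distance squared weighting plus a constant lapse-rate elevation adjustment (IDSW-CLR-style).
import numpy as np

LAPSE_RATE = -0.0065        # K per metre of elevation gain, assumed constant

def idsw_clr(stations, grid_point):
    """stations: rows of (x, y, elevation_m, corrected_temp_K); grid_point: (x, y, elevation_m)."""
    xy, elev, temp = stations[:, :2], stations[:, 2], stations[:, 3]
    gx, gy, gelev = grid_point
    d2 = np.maximum((xy[:, 0] - gx) ** 2 + (xy[:, 1] - gy) ** 2, 1e-6)   # squared distances
    adjusted = temp + LAPSE_RATE * (gelev - elev)           # shift each value to grid-point elevation
    w = 1.0 / d2                                            # inverse distance squared weights
    return float(np.sum(w * adjusted) / np.sum(w))

stations = np.array([[10.0, 20.0, 1200.0, 285.6],           # x, y, elevation (m), corrected T (K)
                     [14.0, 18.0,  900.0, 287.9],
                     [ 8.0, 25.0, 1500.0, 283.8]])
print("grid-point estimate:", round(idsw_clr(stations, (11.0, 21.0, 1300.0)), 2), "K")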