Journal Articles
2,307 articles found
1. Nonparametric Statistical Feature Scaling Based Quadratic Regressive Convolution Deep Neural Network for Software Fault Prediction
Authors: Sureka Sivavelu, Venkatesh Palanisamy — Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3469-3487 (19 pages)
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, predicting software bugs in a timely and accurate manner remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric or feature selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity and specificity higher by 3%, 3%, 2% and 3%, and minimum time and space lower by 13% and 15%, when compared with two state-of-the-art methods.
Keywords: Software defect prediction; feature selection; nonparametric statistical Torgerson–Gower scaling technique; quadratic censored regressive convolution deep neural network; softstep activation function; Nelder–Mead method
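The SQADEN pipeline above measures the similarity of software metrics with the Dice coefficient during feature selection. As a minimal illustration (the Torgerson–Gower scaling step is not reproduced, and the metric names below are hypothetical), the coefficient for two feature sets can be computed as:

```python
def dice_coefficient(a, b):
    """Dice similarity between two feature sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical software-metric sets for two modules.
m1 = {"loc", "cyclomatic", "halstead", "fan_in"}
m2 = {"loc", "cyclomatic", "fan_out"}
print(dice_coefficient(m1, m2))  # 2*2 / (4+3) ≈ 0.571
```

A coefficient near 1 marks two metric sets as redundant, so one can be dropped to reduce the time complexity of the downstream classifier.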
2. Partial Time-Varying Coefficient Regression and Autoregressive Mixed Model
Authors: Hui Li, Zhiqiang Cao — Open Journal of Statistics, 2023, No. 4, pp. 514-533 (20 pages)
Regression and autoregressive mixed models are classical models used to analyze the relationship between a time series response variable and other covariates. The coefficients in traditional regression and autoregressive mixed models are constants. However, for complicated data, the coefficients of covariates may change with time. In this article, we propose a kind of partial time-varying coefficient regression and autoregressive mixed model and obtain the local weighted least-squares estimators of the coefficient functions by the local polynomial technique. The asymptotic normality properties of the estimators are derived under regularity conditions, and simulation studies are conducted to empirically examine the finite-sample performances of the proposed estimators. Finally, we use real data about Lake Shasta inflow to illustrate the application of the proposed model.
Keywords: Regression and autoregressive; time series; partial time-varying coefficient; local polynomial
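The local polynomial technique named above estimates a coefficient function at each time point by kernel-weighted least squares. A minimal sketch of a local linear (degree-1) estimator, using a Gaussian kernel and synthetic data as assumptions (the paper's mixed model and asymptotics are not reproduced):

```python
import math

def local_linear(t, y, t0, h):
    """Local linear estimate of y at t0: weighted least squares on the
    basis (1, t - t0) with Gaussian kernel weights of bandwidth h."""
    w = [math.exp(-0.5 * ((ti - t0) / h) ** 2) for ti in t]
    x = [ti - t0 for ti in t]
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, x))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, x))
    r0 = sum(wi * yi for wi, yi in zip(w, y))
    r1 = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = s0 * s2 - s1 * s1
    return (s2 * r0 - s1 * r1) / det  # intercept = fitted value at t0

t = [i / 10 for i in range(50)]
y = [2.0 + 0.5 * ti for ti in t]      # a linear "coefficient function"
print(local_linear(t, y, 2.5, 0.5))   # ≈ 3.25 (exact for linear data)
```

Sliding `t0` across the observation window traces out the estimated time-varying coefficient curve.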
4. Histopathology and the predominantly progressive, indeterminate and predominately regressive score in hepatitis C virus patients after direct-acting antivirals therapy (cited by 1)
Authors: Rui Huang, Hui-Ying Rao, Ming Yang, Ying-Hui Gao, Jian Wang, Qian Jin, Dan-Li Ma, Lai Wei — World Journal of Gastroenterology (SCIE, CAS), 2021, No. 5, pp. 404-415 (12 pages)
BACKGROUND: Histological changes after direct-acting antivirals (DAAs) therapy in hepatitis C virus (HCV) patients have not been elucidated. Whether the predominantly progressive, indeterminate and predominately regressive (P-I-R) score, which evaluates fibrosis activity in hepatitis B virus patients, has predictive value in HCV patients has not been investigated. AIM: To identify histological changes after DAAs therapy and to evaluate the predictive value of the P-I-R score in HCV patients. METHODS: Chronic HCV patients with paired liver biopsy specimens before and after DAAs treatment were included. Sustained virologic response (SVR) was defined as an undetectable serum HCV RNA level at 24 wk after treatment cessation. The Ishak system and P-I-R score were assessed. Inflammation improvement and fibrosis regression were defined as a ≥2-point decrease in the histology activity index (HAI) score and a ≥1-point decrease in the Ishak fibrosis score, respectively. Fibrosis progression was defined as a ≥1-point increase in the Ishak fibrosis score. Histologic improvement was defined as a ≥2-point decrease in the HAI score without worsening of the Ishak fibrosis score after DAAs therapy. "Absolutely reversing or advancing" was defined as the same directionality implied by both the change in the Ishak score and the posttreatment P-I-R score; "probably reversing or advancing" was defined as only one parameter showing directionality. RESULTS: Thirty-eight chronic HCV patients with paired liver biopsy specimens before and after DAAs treatment were included. The mean age of these patients was 40.9 ± 14.6 years and 53% (20/38) were male. Thirty-four percent (13/38) of patients were cirrhotic. Eighty-two percent (31/38) of patients achieved inflammation improvement. The median HAI score decreased significantly after SVR (pretreatment 7.0 vs posttreatment 2.0, Z = -5.146, P < 0.001). Thirty-seven percent (14/38) of patients achieved fibrosis improvement. The median Ishak score decreased significantly after SVR (pretreatment 4.0 vs posttreatment 3.0, Z = -2.354, P = 0.019). Eighty-two percent (31/38) of patients showed histological improvement. The P-I-R score was evaluated in 61% (23/38) of patients. The progressive group showed lower platelets (P = 0.024) and higher HAI scores (P = 0.070) before treatment. Among patients with a stable Ishak stage after treatment, progressive injury was seen in 22% (4/18), 33% (6/18) were classified as indeterminate, and regressive changes were seen in 44% (8/18), who were judged as probably reversing by the Ishak and P-I-R systems. CONCLUSION: Significant improvement of necroinflammation and partial remission of fibrosis in HCV patients occurred shortly after DAAs therapy. The P-I-R score has potential in predicting fibrosis in HCV patients.
Keywords: Hepatitis C virus; direct-acting antiviral agents; necroinflammation; fibrosis; predominantly progressive, indeterminate and predominately regressive score; histopathology
5. Pattern Analysis and Regressive Linear Measure for Botnet Detection
Authors: B. Padmavathi, B. Muthukumar — Computer Systems Science & Engineering (SCIE, EI), 2022, No. 10, pp. 119-139 (21 pages)
Capturing the distributed platform with remotely controlled compromised machines using a botnet has been extensively analyzed by various researchers. However, certain limitations need to be addressed efficiently. Provisioning the detection mechanism with learning approaches provides a better solution, more broadly by satisfying multi-objective constraints. The bots' patterns or features over the network have to be analyzed in both a linear and a non-linear manner. The linear and non-linear features are composed of high-level and low-level features. The collected features are maintained in a Bag of Features (BoF), where the most influential features are collected and provided to the classifier model. Here, the linearity and non-linearity of the threat are evaluated with a Support Vector Machine (SVM). Next, with the collected BoF, the redundant features are eliminated, as they introduce overhead in the predictor model. Finally, a novel Incoming data Redundancy Elimination-based learning model (RedE-L) is built to classify the network features and provide robustness in botnet detection. The simulation is carried out in a MATLAB environment, and the evaluation of the proposed RedE-L model is performed with various online accessible network traffic datasets (benchmark datasets). The proposed model intends to show a better tradeoff compared to existing approaches like conventional SVM, C4.5, RepTree and so on. Various metrics like accuracy, detection rate, Matthews Correlation Coefficient (MCC), and some other statistical analyses are used to show the proposed RedE-L model's reliability. The F1-measure is 99.98%, precision is 99.93%, accuracy is 99.84%, TPR is 99.92%, TNR is 99.94%, FNR is 0.06 and FPR is 0.06, respectively.
Keywords: Botnet; threat; intrusion features; linearity and non-linearity; redundancy; regressive linear measure; classification; redundancy elimination-based learning model
6. Forecasting risk using auto regressive integrated moving average approach: an evidence from S&P BSE Sensex
Authors: Madhavi Latha Challa, Venkataramanaiah Malepati, Siva Nageswara Rao Kolusu — Financial Innovation, 2018, No. 1, pp. 344-360 (17 pages)
The primary objective of the paper is to forecast the beta values of companies listed on the Sensex, Bombay Stock Exchange (BSE). The BSE Sensex comprises the top 30 listed companies, popularly known as blue-chip companies. To reach the predefined objectives of the research, the Auto Regressive Integrated Moving Average method is used to forecast future risk and returns using 10 years of historical data from April 2007 to March 2017. Validation was accomplished by comparing forecasted and actual beta values over a hold-back period of 2 years. Root-Mean-Square-Error and Mean-Absolute-Error are both used for accuracy measurement. The results revealed that out of the 30 listed companies in the BSE Sensex, 10 companies exhibit high beta values, 12 have moderate, and 8 have low beta values. Further, it is to be noted that Housing Development Finance Corporation (HDFC) exhibits more inconsistency in terms of beta values, though its average beta value is the lowest among the companies under study. A mixed trend is found in the forecasted beta values of the BSE Sensex. In this analysis, all the p-values are less than the F-stat values except in the cases of Tata Steel and Wipro. Therefore, the null hypotheses were rejected except for Tata Steel and Wipro. The actual and forecasted values show almost the same results with a low error percentage. Therefore, it is concluded from the study that the ARIMA estimation could be acceptable, and the forecasted beta values are accurate. So far, there are many studies on the ARIMA model that forecast stock returns based on historical data, but hardly any that attempt to forecast returns on the basis of beta values. Certainly, the attempt made here is a novel approach which links risk directly with return. On the basis of the present study, the authors try to throw light on investment decisions by linking them with the beta values of the respective stocks. Further, the outcomes of the present study are undoubtedly useful to academicians, researchers, and policy makers in their respective areas of study.
Keywords: Akaike Information Criterion (AIC); Bombay Stock Exchange (BSE); Auto Regressive Integrated Moving Average (ARIMA); beta; time series
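As a rough sketch of the ARIMA idea used above (not the authors' fitted models), a minimal ARIMA(1,1,0) one-step forecast can be built by differencing the series once, fitting a zero-mean AR(1) to the differences by least squares, and undoing the differencing. The beta series below is hypothetical:

```python
def arima_110_forecast(series):
    """One-step forecast from a minimal ARIMA(1,1,0): difference once,
    fit AR(1) on the differences by least squares, then integrate back."""
    d = [b - a for a, b in zip(series, series[1:])]  # first differences
    x, y = d[:-1], d[1:]                             # lagged pairs
    phi = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return series[-1] + phi * d[-1]

# Hypothetical monthly beta values for one stock.
betas = [1.00, 1.04, 1.10, 1.13, 1.18, 1.20]
print(round(arima_110_forecast(betas), 3))
```

A production forecast would instead use a library ARIMA implementation with a fitted (p, d, q) order and a drift term; this toy version only shows the differencing-plus-autoregression mechanics.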
7. UV Index Modeling by Autoregressive Distributed Lag (ADL) Model
Authors: Alexandre Boleira Lopo, Maria Helena Constantino Spyrides, Paulo Sérgio Lucio, Javier Sigró — Atmospheric and Climate Sciences, 2014, No. 2, pp. 323-333 (11 pages)
The objective of this work is to model statistically the ultraviolet radiation index (UV Index) in order to make forecasts (extrapolate) and analyze trends. The task is relevant due to increased UV flux and a high rate of non-melanoma skin cancer cases in the northeast of Brazil. The methodology utilized an Autoregressive Distributed Lag (ADL) model, or Dynamic Linear Regression model. The monthly UV Index data were measured on the east coast of the Brazilian Northeast (city of Natal, Rio Grande do Norte). Total ozone is the single explanatory variable of the model and was obtained from the TOMS and OMI/AURA instruments. The Predictive Mean Matching (PMM) method was used to complete the missing UV Index data. The mean squared error (MSE) between the observed UV Index and the data interpolated by the model was 0.36, and for extrapolation it was 0.30, with correlations of 0.90 and 0.91 respectively. The forecast/extrapolation performed by the model for a climatological period (2012-2042) indicated a trend of increasing UV (seasonal Mann-Kendall test scored τ = 0.955 and p-value 0.001) if total ozone remains on its tendency to decrease. In those circumstances, the model indicated an increase of almost one unit of UV Index by the year 2042.
Keywords: UV flux; dynamic linear regression model; seasonal Mann-Kendall test; mean squared error; residuals
8. Improved ENSO simulation in regional coupled GCM using regressive correction method (cited by 2)
Authors: FU WeiWei (Nansen-Zhu International Research Center (NZC), Institute of Atmospheric Physics, Chinese Academy of Sciences (CAS), Beijing 100029, China), ZHOU GuangQing (State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics, CAS, Beijing 100029, China) — Science China Earth Sciences (SCIE, EI, CAS), 2007, No. 8, pp. 1258-1265 (8 pages)
A regressive correction method is presented with the primary goal of improving ENSO simulation in a regional coupled GCM. It focuses on the correction of ocean-atmosphere exchanged fluxes. On the basis of numerical experiments and analysis, the method can be described as follows: first, the ocean model is driven with heat and momentum flux computed from a long-term observation data set; the produced SST is then applied to force the AGCM as its boundary condition; after that, the AGCM's simulation and the corresponding observation can be related by a linear regressive formula. Thus the regressive correction coefficients for the simulation, with spatial and temporal variation, can be obtained by linear fitting. Finally, the coefficients are applied to redress the variables used for the calculation of the exchanged air-sea flux in the coupled model when it starts integration. This method, together with the anomaly coupling method, is tested in a regional coupled model composed of a global grid-point atmospheric general circulation model and a high-resolution tropical Pacific Ocean model. The comparison of the results shows that it is superior to anomaly coupling both in reducing the coupled model 'climate drift' and in improving the ENSO simulation in the tropical Pacific Ocean.
Keywords: Anomaly coupling; regressive correction method; regional coupled model; ENSO simulation
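The regressive correction step described above amounts to fitting a linear relation between simulated and observed fields and applying the fitted coefficients to subsequent model output. A toy one-point sketch with hypothetical SST values (the actual method fits coefficients that vary in space and time):

```python
def regression_correction(sim, obs):
    """Fit obs ≈ a * sim + b by ordinary least squares and return a
    corrector that applies the fitted coefficients to new model output."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    a = (sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
         / sum((s - ms) ** 2 for s in sim))
    b = mo - a * ms
    return lambda x: a * x + b

sim = [26.0, 27.5, 29.0, 28.0, 30.0]  # simulated SST (hypothetical)
obs = [26.5, 27.7, 29.6, 28.4, 30.5]  # observed SST (hypothetical)
correct = regression_correction(sim, obs)
print(round(correct(28.5), 2))
```

In the coupled run, the corrected value, rather than the raw simulated one, would feed the air-sea flux calculation, reducing systematic bias (the 'climate drift').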
9. A comparison of model choice strategies for logistic regression
Authors: Markku Karhunen — Journal of Data and Information Science (CSCD), 2024, No. 1, pp. 37-52 (16 pages)
Purpose: The purpose of this study is to develop and compare model choice strategies in the context of logistic regression. Model choice means the choice of the covariates to be included in the model. Design/methodology/approach: The study is based on Monte Carlo simulations. The methods are compared in terms of three measures of accuracy: specificity and two kinds of sensitivity. A loss function combining sensitivity and specificity is introduced and used for a final comparison. Findings: The choice of method depends on how much the user emphasizes sensitivity against specificity. It also depends on the sample size. For a typical logistic regression setting with a moderate sample size and a small to moderate effect size, either BIC, BICc or Lasso seems to be optimal. Research limitations: Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data; thus, more simulations are needed. Practical implications: Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper. Alternatively, they could run their own simulations and calculate the loss function. Originality/value: This is a systematic comparison of model choice algorithms and heuristics in the context of logistic regression. The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.
Keywords: Model choice; logistic regression; logit regression; Monte Carlo simulations; sensitivity; specificity
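Model choice by BIC, one of the strategies compared above, scores each candidate covariate set by penalized log-likelihood and keeps the lowest score. A sketch with hypothetical fitted log-likelihoods (a real run would refit a logistic regression for every candidate set):

```python
import math

def bic(log_lik, k, n):
    """Bayesian information criterion: -2 log L + k log n; lower is better."""
    return -2 * log_lik + k * math.log(n)

# Hypothetical maximized log-likelihoods for nested logistic models, n = 200.
n = 200
candidates = {
    "intercept only": (-138.6, 1),
    "x1":             (-120.3, 2),
    "x1 + x2":        (-118.9, 3),
    "x1 + x2 + x3":   (-118.5, 4),
}
scores = {name: bic(ll, k, n) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

Here the small likelihood gains from x2 and x3 do not offset the log(n) penalty per extra parameter, so the single-covariate model wins; this conservatism is exactly the sensitivity/specificity tradeoff the paper quantifies with its loss function.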
10. From partial to complete: Wing- and tail-feather moult sequence and intensity depend on species, life-cycle stage, and moult completeness in passerines
Authors: Santi Guallar — Avian Research (SCIE, CSCD), 2024, No. 1, pp. 98-107 (10 pages)
Passerines moult during various life-cycle stages. Some of these moults involve the retention of a variable quantity of wing and tail feathers. This prompts the question whether these partial moults are just arrested complete moults or follow different processes. To address it, I investigated whether three relevant features remain constant across partial and complete moults: 1) moult sequence (order of activation) within feather tracts (e.g., consecutive outward moult of primaries) and among tracts (e.g., starting with marginal coverts, followed by greater coverts, tertials, etc.); 2) dynamics of moult intensity (amount of feathers growing as the moult progresses); and 3) protection of wing quills by overlapping fully grown feathers. To study the effect of moult completeness on these three features, I classified the moults of 435 individuals from 61 species into 3 groups: i) complete, and partial ii) without and iii) with retention of feathers within tracts. To study the effect of life-cycle stage, I used postbreeding, postjuvenile, and prebreeding moults. I calculated phylogenetically corrected means to establish feather-moult sequence within tracts. I applied linear regression to analyse moult sequence among tracts, and polynomial regression to study the dynamics of moult intensity as moult progresses. Sequence and intensity dynamics of partial moults tended to resemble those of the complete moult as moult completeness increased. Sequence within and among feather tracts tended to shift as moult intensity within tracts and the number of tracts increased. Activation of primaries advanced in relation to the other feather tracts as the number of moulted primaries increased. Tertial quills were protected by the innermost greater covert regardless of moult completeness. These findings suggest that moult is a self-organised process that adjusts to the degree of completeness of plumage renewal. However, protection of quills and differences among species and between postjuvenile and prebreeding moult sequences also suggest an active control linked to feather function, including protection and signalling.
Keywords: Mass-gap index; moult extent; moult regulation; polynomial regression
11. Correlation between Combined Urinary Metal Exposure and Grip Strength under Three Statistical Models: A Cross-sectional Study in Rural Guangxi
Authors: LIANG Yu Jian, RONG Jia Hui, WANG Xue Xiu, CAI Jian Sheng, QIN Li Dong, LIU Qiu Mei, TANG Xu, MO Xiao Ting, WEI Yan Fei, LIN Yin Xia, HUANG Shen Xiang, LUO Ting Yu, GOU Ruo Yu, CAO Jie Jing, HUANG Chu Wu, LU Yu Fu, QIN Jian, ZHANG Zhi Yong — Biomedical and Environmental Sciences (SCIE, CAS, CSCD), 2024, No. 1, pp. 3-18 (16 pages)
Objective: This study aimed to investigate the potential relationship between the urinary metals copper (Cu), arsenic (As), strontium (Sr), barium (Ba), iron (Fe), lead (Pb) and manganese (Mn) and grip strength. Methods: We used linear regression models, quantile g-computation and Bayesian kernel machine regression (BKMR) to assess the relationship between metals and grip strength. Results: In the multimetal linear regression, Cu (β = -2.119), As (β = -1.318), Sr (β = -2.480), Ba (β = 0.781), Fe (β = 1.130) and Mn (β = -0.404) were significantly correlated with grip strength (P < 0.05). The results of the quantile g-computation showed that the risk of grip strength reduction was -1.007 (95% confidence interval: -1.362, -0.652; P < 0.001) when each quartile of the mixture of the seven metals was increased. Bayesian kernel machine regression analysis showed that mixtures of the seven metals had a negative overall effect on grip strength, with Cu, As and Sr being negatively associated with grip strength levels. In the total population, potential interactions were observed between As and Mn and between Cu and Mn (P-interactions of 0.003 and 0.018, respectively). Conclusion: In summary, this study suggests that combined exposure to metal mixtures is negatively associated with grip strength. Cu, Sr and As were negatively correlated with grip strength levels, and there were potential interactions between As and Mn and between Cu and Mn.
Keywords: Urinary metals; handgrip strength; quantile g-computation; Bayesian kernel machine regression
12. Deep Structure Optimization for Incremental Hierarchical Fuzzy Systems Using Improved Differential Evolution Algorithm
Authors: Yue Zhu, Tao Zhao — Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1139-1158 (20 pages)
The optimization of the rule base of a fuzzy logic system (FLS) based on evolutionary algorithms has achieved notable results. However, due to the diversity of deep structures in the hierarchical fuzzy system (HFS) and the correlation of each sub fuzzy system, the uncertainty of the HFS's deep structure increases. For the HFS, a large number of studies mainly use fixed structures, which cannot be selected automatically. To solve this problem, this paper proposes a novel approach for constructing the incremental HFS. During system design, the deep structure and the rule base of the HFS are encoded separately. Subsequently, the deep structure is adaptively mutated based on the fitness value, so as to realize the diversity of deep structures while ensuring reasonable competition among the structures. Finally, differential evolution (DE) is used to optimize the deep structure of the HFS and the parameters of the antecedents and consequents simultaneously. The simulation results confirm the effectiveness of the model. Specifically, the root mean square errors on the Laser dataset and the Friedman dataset are 0.0395 and 0.0725, with rule counts of 8 and 12, respectively. When compared to alternative methods, the results indicate that the proposed method offers improvements in accuracy and rule counts.
Keywords: Hierarchical fuzzy system; automatic optimization; differential evolution; regression problem
13. Improved Twin Support Vector Machine Algorithm and Applications in Classification Problems
Authors: Sun Yi, Wang Zhouyang — China Communications (SCIE, CSCD), 2024, No. 5, pp. 261-279 (19 pages)
The distribution of data has a significant impact on the results of classification. When the distribution of one class is insignificant compared to the distribution of another class, data imbalance occurs. This results in rising outlier values and noise; therefore, the speed and performance of classification can be greatly affected. Given the above problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationship between different classification formulations. Combined with the vector characteristics of the actual problem and the choice of matrix characteristics, we first analyze ordered regression to introduce slack variables that solve the constraint problem of the lone point. Then we introduce fuzzy factors, on the basis of the support vector machine, to solve the problem of the gap between isolated points. We introduce cost control to solve the problem of sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This can help to identify multitasks with feature-selected patterns without the need for additional optimizers, which addresses large-scale classification problems where a very low category distribution gap cannot be dealt with effectively.
Keywords: Fuzzy; ordered regression (OR); relaxing variables; twin support vector machine
14. A performance-based hybrid deep learning model for predicting TBM advance rate using Attention-ResNet-LSTM
Authors: Sihao Yu, Zixin Zhang, Shuaifeng Wang, Xin Huang, Qinghua Lei — Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 1, pp. 65-80 (16 pages)
The technology of the tunnel boring machine (TBM) has been widely applied for underground construction worldwide; however, how to keep the TBM tunneling process safe and efficient remains a major concern. Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly-used intelligent models, with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
Keywords: Tunnel boring machine (TBM); advance rate; deep learning; Attention-ResNet-LSTM; evolutionary polynomial regression
15. Uphill or downhill? Cropland use change and its drivers from the perspective of slope spectrum
Authors: PAN Sipei, LIANG Jiale, CHEN Wanxu, PENG Yelin — Journal of Mountain Science (SCIE, CSCD), 2024, No. 2, pp. 484-499 (16 pages)
The continuous decrease of low-slope cropland resources caused by construction land crowding poses a huge threat to regional sustainable development and food security. Slope spectrum analysis of topographic and geomorphic features is considered a digital terrain analysis method that reflects macro-topographic features by using micro-topographic factors. However, while studies have extended the slope spectrum concept from geoscience to construction land to explore its expansion law, research on the slope trend of cropland from that perspective remains rare. To address the gap, by virtue of spatial analysis and a geographically weighted regression (GWR) model, cropland use change in the Yangtze River Basin (YRB) from 2000 to 2020 was analyzed and the driving factors were explored from the perspective of the slope spectrum. Results showed that the slope spectrum curves of cropland area-frequency in the YRB exhibited a first upward and then downward trend. The slope spectrum curve of cropland in each province (municipality) exhibited various distribution patterns. Quantitative analysis of the morphological parameters of the cropland slope spectrum revealed that the further down the YRB, the stronger the flattening characteristics and the more obvious the concentration. The province that experienced the greatest downhill cropland climbing (CLC) was Shaanxi, while the province with the highest uphill CLC was Zhejiang. The most common cropland use change type in the YRB was the horizontal expansion type. The factors affecting the average cropland climbing index (ACCI) were quite stable across different periods, while population density (POP) changed from negative to positive during the study period. This research is of practical significance for the rational utilization of cropland at the watershed scale.
Keywords: Cropland climbing; Land use change; Slope spectrum; Driving factors; Geographically weighted regression; Yangtze River Basin
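The GWR model used in the study above fits a separate distance-weighted least-squares regression at every location, so regression coefficients can vary over space. A minimal pure-Python sketch with one predictor and a Gaussian distance kernel (the function name, toy coordinates, and data are illustrative, not from the paper):

```python
import math

def gwr_coeffs(points, xs, ys, loc, bandwidth):
    """Fit y = b0 + b1*x at location `loc` by weighted least squares,
    with Gaussian weights that decay with distance from `loc`."""
    w = [math.exp(-0.5 * (math.dist(p, loc) / bandwidth) ** 2) for p in points]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, xs))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = sw * swxx - swx * swx          # 2x2 normal-equation determinant
    b1 = (sw * swxy - swx * swy) / det
    b0 = (swy - b1 * swx) / sw
    return b0, b1

# Toy data: two spatial clusters with opposite x-y relationships
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10), (11, 10), (10, 11), (11, 11)]
xs = [1, 2, 3, 4, 1, 2, 3, 4]
ys = [2, 4, 6, 8, -1, -2, -3, -4]
b0, b1 = gwr_coeffs(pts, xs, ys, loc=(0.5, 0.5), bandwidth=1.5)
```

With a small bandwidth, the local fit near (0.5, 0.5) recovers the first cluster's slope while the distant cluster receives negligible weight, which is exactly how GWR exposes spatially varying drivers.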
Prediction and driving factors of forest fire occurrence in Jilin Province,China
16
Authors: Bo Gao, Yanlong Shan, Xiangyu Liu, Sainan Yin, Bo Yu, Chenxi Cui, Lili Cao. Journal of Forestry Research, SCIE/EI/CAS/CSCD, 2024, No. 1, pp. 58-71 (14 pages)
Forest fires are natural disasters that can occur suddenly and be very damaging, burning thousands of square kilometers. Since prevention is better than suppression, prediction models of forest fire occurrence were developed from the logistic regression model, the geographically weighted logistic regression model, the Lasso regression model, the random forest model, and the support vector machine model, based on historical forest fire data from 2000 to 2019 in Jilin Province. The models, along with a distribution map, are presented in this paper to provide a theoretical basis for forest fire management in this area. The results show that the prediction accuracies of the two machine learning models are higher than those of the three generalized linear regression models. The accuracies of the random forest model, the support vector machine model, the geographically weighted logistic regression model, the Lasso regression model, and the logistic model were 88.7%, 87.7%, 86.0%, 85.0%, and 84.6%, respectively. Weather is the main factor affecting forest fires, while the impacts of topographic, human, and socio-economic factors on fire occurrence were similar.
Keywords: Forest fire; Occurrence prediction; Forest fire driving factors; Generalized linear regression models; Machine learning models
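The baseline in the comparison above, logistic regression, models fire occurrence probability as a sigmoid of a linear score and can be trained by stochastic gradient ascent on the log-likelihood. A minimal pure-Python sketch; the two toy features are hypothetical stand-ins for weather drivers, not the paper's actual predictors:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient ascent; w[0] is the bias term."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            row = [1.0] + list(xi)                 # prepend constant for bias
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, row)))
            for j in range(len(w)):
                w[j] += lr * (yi - p) * row[j]     # gradient of log-likelihood
    return w

def predict_proba(w, xi):
    row = [1.0] + list(xi)
    return sigmoid(sum(wj * xj for wj, xj in zip(w, row)))

# Toy standardized features, e.g. (drought index, wind speed); 1 = fire occurred
X = [[2.0, 1.0], [1.5, 2.0], [2.5, 0.5], [-1.0, -1.0], [-2.0, 0.0], [-1.5, -0.5]]
y = [1, 1, 1, 0, 0, 0]
w = train_logistic(X, y)
```

A fitted model of this kind, evaluated on a grid of weather conditions, is what underlies the occurrence-probability distribution maps such studies produce.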
Machine learning-assisted efficient design of Cu-based shape memory alloy with specific phase transition temperature
17
Authors: Mengwei Wu, Wei Yong, Cunqin Fu, Chunmei Ma, Ruiping Liu. International Journal of Minerals, Metallurgy and Materials, SCIE/EI/CAS/CSCD, 2024, No. 4, pp. 773-785 (13 pages)
The martensitic transformation temperature is the basis for the application of shape memory alloys (SMAs), and the ability to quickly and accurately predict the transformation temperature of SMAs has very important practical significance. In this work, machine learning (ML) methods were utilized to accelerate the search for shape memory alloys with a targeted property (phase transition temperature). A group of component data was selected to design shape memory alloys using a reverse design method over numerous unexplored compositions. Component modeling and feature modeling were used to predict the phase transition temperature of the shape memory alloys. Experimental results on the designed alloys were obtained to verify the effectiveness of the support vector regression (SVR) model. The results show that the machine learning model can identify target materials more efficiently, realizing the accurate and rapid design of shape memory alloys with a specific target phase transition temperature. On this basis, the relationship between the phase transition temperature and the material descriptors is analyzed, and it is shown that the key factors affecting the phase transition temperature of shape memory alloys relate to the strength of the bond energy between atoms. This work provides new ideas for the controllable design and performance optimization of Cu-based shape memory alloys.
Keywords: machine learning; support vector regression; shape memory alloys; martensitic transformation temperature
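SVR, the model verified above, fits a regression that ignores errors inside an ε-insensitive tube and penalizes only larger deviations. A simplified linear ε-SVR trained by subgradient descent, as a pure-Python sketch (the paper's actual SVR is kernel-based with tuned hyperparameters; the toy composition descriptors and target values here are invented for illustration):

```python
def train_linear_svr(X, y, eps=0.1, lam=1e-4, lr=0.01, epochs=2000):
    """Minimize eps-insensitive loss + (lam/2)*||w||^2 by subgradient descent."""
    n = len(X[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            if abs(err) > eps:                     # outside the tube: loss is active
                g = 1.0 if err > 0 else -1.0
                for j in range(n):
                    w[j] -= lr * (g * xi[j] + lam * w[j])
                b -= lr * g
            else:                                  # inside the tube: only shrinkage
                for j in range(n):
                    w[j] -= lr * lam * w[j]
    return w, b

# Toy descriptors -> transition temperature following y = 3*x1 - 2*x2 + 5
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2], [2, 1], [1, 2]]
y = [3 * a - 2 * c + 5 for a, c in X]
w, b = train_linear_svr(X, y)
```

The ε-tube is what makes SVR robust to small measurement noise in training data, a useful property when targets come from experimental transformation-temperature measurements.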
Prediction of high-embankment settlement combining joint denoising technique and enhanced GWO-ν-SVR method
18
Authors: Qi Zhang, Qian Su, Zongyu Zhang, Zhixing Deng, De Chen. Journal of Rock Mechanics and Geotechnical Engineering, SCIE/CSCD, 2024, No. 1, pp. 317-332 (16 pages)
Reliable long-term settlement prediction of a high embankment relates to mountain infrastructure safety. This study developed a novel hybrid model (NHM) that combines a joint denoising technique with an enhanced gray wolf optimizer (EGWO)-ν-support vector regression (ν-SVR) method. High-embankment field measurements were preprocessed using the joint denoising technique, which includes complete ensemble empirical mode decomposition, singular value decomposition, and wavelet packet transform. Furthermore, high-embankment settlements were predicted using the EGWO-ν-SVR method. In this method, the standard gray wolf optimizer (GWO) was improved to obtain the EGWO, which better tunes the ν-SVR model hyperparameters. The proposed NHM was then tested in two case studies. Finally, the influences of the data division ratio and kernel function on the EGWO-ν-SVR forecasting performance and prediction efficiency were investigated. The results indicate that the NHM suppresses noise and restores details in high-embankment field measurements. Simultaneously, the NHM outperforms alternative prediction methods in accuracy and robustness. This demonstrates that the proposed NHM is effective in predicting high-embankment settlements from noisy field measurements. Moreover, the appropriate data division ratio and kernel function for EGWO-ν-SVR are 7:3 and the radial basis function, respectively.
Keywords: High embankment; Settlement prediction; Joint denoising technique; Enhanced gray wolf optimizer; Support vector regression
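The gray wolf optimizer that EGWO enhances searches by moving a pack of candidate solutions toward its three current best members (alpha, beta, delta), with an encircling coefficient that decays from exploration to exploitation. A minimal sketch of the standard GWO on a test function (illustrative only; the paper's enhancement and the ν-SVR hyperparameter objective are not reproduced here):

```python
import random

def gwo(f, dim, n_wolves=12, iters=300, lb=-5.0, ub=5.0, seed=0):
    """Standard gray wolf optimizer minimizing f over [lb, ub]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)[:]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = [w[:] for w in wolves[:3]]   # three leaders (copies)
        a = 2.0 * (1.0 - t / iters)                       # decays 2 -> 0
        for w in wolves:
            for j in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    pos += leader[j] - A * abs(C * leader[j] - w[j])
                w[j] = min(max(pos / 3.0, lb), ub)        # average of leader pulls
        cand = min(wolves, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

def sphere(x):
    return sum(v * v for v in x)

best = gwo(sphere, dim=2)
```

In a tuning setting like the paper's, `f` would instead be a cross-validation error of the ν-SVR model evaluated at candidate hyperparameter vectors.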
Operational optimization of copper flotation process based on the weighted Gaussian process regression and index-oriented adaptive differential evolution algorithm
19
Authors: Zhiqiang Wang, Dakuo He, Haotian Nie. Chinese Journal of Chemical Engineering, SCIE/EI/CAS/CSCD, 2024, No. 2, pp. 167-179 (13 pages)
Concentrate copper grade (CCG) is one of the important production indicators of copper flotation processes, and keeping the CCG at its set value is of great significance to the economic benefit of copper flotation industrial processes. This paper addresses the fluctuation problem of the CCG through an operational optimization method. Firstly, a density-based affinity propagation algorithm is proposed so that more suitable working-condition categories can be obtained for complex raw ore properties. Next, a Bayesian network (BN) is applied to explore the relationship between the operational variables and the CCG. Based on the analysis results of the BN, a weighted Gaussian process regression model is constructed to predict the CCG with higher prediction accuracy. To ensure that the predicted CCG is close to the set value with smaller operation adjustments and a smaller uncertainty in the prediction results, an index-oriented adaptive differential evolution (IOADE) algorithm is proposed, whose convergence performance is superior to that of the traditional differential evolution and adaptive differential evolution methods. Finally, the effectiveness and feasibility of the proposed methods are verified by experiments on a copper flotation industrial process.
Keywords: Weighted Gaussian process regression; Index-oriented adaptive differential evolution; Operational optimization; Copper flotation process
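Differential evolution, which IOADE builds on, mutates each candidate with the scaled difference of two other population members, crosses it with the current candidate, and keeps the trial only if it improves. A minimal DE/rand/1/bin sketch in pure Python (illustrative of the base algorithm only, not the IOADE variant or its flotation objective):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9, gens=200, seed=1):
    """DE/rand/1/bin minimizing f over box `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jrand = rng.randrange(dim)             # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=fit.__getitem__)
    return pop[k], fit[k]

def sphere(x):
    return sum(v * v for v in x)

best, fbest = differential_evolution(sphere, [(-5.0, 5.0)] * 2)
```

In the operational-optimization setting, the decision variables would be the flotation operating variables and `f` a composite index built from the predicted CCG, adjustment magnitude, and prediction uncertainty.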
Nuclear charge radius predictions by kernel ridge regression with odd-even effects
20
Authors: Lu Tang, Zhen-Hua Zhang. Nuclear Science and Techniques, SCIE/EI/CAS/CSCD, 2024, No. 2, pp. 94-102 (9 pages)
The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models: (i) the isospin-dependent A^(1/3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resultant root-mean-square deviation for the 1014 nuclei with proton number Z ≥ 8 can be significantly reduced to 0.009-0.013 fm after the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of the nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the neutron N = 126 and 82 shell closures were also calculated and could be reproduced quite well by the EKRR method.
Keywords: Nuclear charge radius; Machine learning; Kernel ridge regression method
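Kernel ridge regression, the base of the EKRR method, has the closed-form solution α = (K + λI)⁻¹ y, after which predictions are weighted sums of kernel evaluations against the training points. A pure-Python sketch on toy 1-D data (illustrative only; the paper works on residuals of nuclear-model charge radii, and the kernel, γ, and λ values here are arbitrary):

```python
import math

def rbf(x1, x2, gamma=0.5):
    return math.exp(-gamma * (x1 - x2) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3):
    """alpha = (K + lam*I)^-1 y."""
    K = [[rbf(a, b) for b in xs] for a in xs]
    for i in range(len(xs)):
        K[i][i] += lam
    return solve(K, ys)

def krr_predict(xs, alpha, x):
    return sum(ai * rbf(xi, x) for ai, xi in zip(alpha, xs))

# Toy data: learn sin(x) from 7 samples on [0, 3]
xs = [0.5 * i for i in range(7)]
ys = [math.sin(v) for v in xs]
alpha = krr_fit(xs, ys)
```

The EKRR extension in the paper adds an odd-even-effect term to the kernel so that staggering between neighboring nuclei is captured; the closed-form structure above is unchanged.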