The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is performed with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity.
The analyzed results demonstrate the superior performance of the proposed SQADEN technique, improving accuracy, sensitivity, and specificity by 3%, 3%, 2% and 3%, and reducing time and space by 13% and 15%, when compared with two state-of-the-art methods.
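The Dice-coefficient similarity step described above can be sketched as follows. This is a minimal illustration of Dice-based metric selection only, not the authors' SQADEN implementation; the binarized metric matrix, defect labels, and the 0.5 threshold are assumptions for the example.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary feature vectors: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both vectors empty: treat as identical
    return 2.0 * np.logical_and(a, b).sum() / denom

def select_relevant_metrics(metrics, target, threshold=0.5):
    """Keep metric columns whose Dice similarity with the defect label exceeds threshold."""
    return [j for j in range(metrics.shape[1])
            if dice_coefficient(metrics[:, j], target) > threshold]
```

A column whose high/low pattern closely tracks the defect label survives the filter, which is the sense in which similarity-based selection reduces the downstream classifier's workload.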
Regression and autoregressive mixed models are classical models used to analyze the relationship between a time series response variable and other covariates. The coefficients in traditional regression and autoregressive mixed models are constants. However, for complicated data, the coefficients of covariates may change with time. In this article, we propose a partial time-varying coefficient regression and autoregressive mixed model and obtain the local weighted least-squares estimators of the coefficient functions by the local polynomial technique. The asymptotic normality properties of the estimators are derived under regularity conditions, and simulation studies are conducted to empirically examine the finite-sample performance of the proposed estimators. Finally, we use real data about Lake Shasta inflow to illustrate the application of the proposed model.
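A local weighted least-squares estimator of a single time-varying coefficient, in the spirit of the local polynomial technique mentioned above, can be sketched as below. This is a one-covariate toy version: the Epanechnikov kernel, bandwidth, and model y_t = beta(t)·x_t + e_t are illustrative assumptions, not the paper's full mixed model.

```python
import numpy as np

def local_linear_coef(t_grid, t, x, y, h):
    """Local linear weighted least-squares estimate of beta(t) in
    y_t = beta(t) * x_t + e_t, using an Epanechnikov kernel with bandwidth h."""
    def kernel(u):
        return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    estimates = []
    for t0 in t_grid:
        w = kernel((t - t0) / h)
        sw = np.sqrt(w)  # weight rows of the design by sqrt(w) for weighted LS
        # local expansion: beta(t) ~ b0 + b1 * (t - t0) near t0
        X = np.column_stack([x, x * (t - t0)])
        b, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        estimates.append(b[0])  # b0 estimates beta(t0)
    return np.array(estimates)
```

With noiseless data generated from a linear beta(t), the local linear fit recovers beta(t0) exactly, which is a convenient sanity check on the estimator.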
BACKGROUND Histological changes after direct-acting antiviral (DAA) therapy in hepatitis C virus (HCV) patients have not been elucidated. Whether the predominantly progressive, indeterminate and predominantly regressive (P-I-R) score, which evaluates fibrosis activity in hepatitis B virus patients, has predictive value in HCV patients has not been investigated. AIM To identify histological changes after DAA therapy and to evaluate the predictive value of the P-I-R score in HCV patients. METHODS Chronic HCV patients with paired liver biopsy specimens before and after DAA treatment were included. Sustained virologic response (SVR) was defined as an undetectable serum HCV RNA level at 24 wk after treatment cessation. The Ishak system and P-I-R score were assessed. Inflammation improvement and fibrosis regression were defined as a ≥2-point decrease in the histology activity index (HAI) score and a ≥1-point decrease in the Ishak fibrosis score, respectively. Fibrosis progression was defined as a ≥1-point increase in the Ishak fibrosis score. Histologic improvement was defined as a ≥2-point decrease in the HAI score without worsening of the Ishak fibrosis score after DAA therapy. The P-I-R score was also assessed: "absolutely reversing or advancing" was defined as the same directionality implied by both the change in the Ishak score and the posttreatment P-I-R score, and "probably reversing or advancing" was defined as only one parameter showing directionality. RESULTS Thirty-eight chronic HCV patients with paired liver biopsy specimens before and after DAA treatment were included. The mean age of these patients was 40.9 ± 14.6 years, and 53% (20/38) were male. Thirty-four percent (13/38) of patients were cirrhotic. Eighty-two percent (31/38) of patients achieved inflammation improvement. The median HAI score decreased significantly after SVR (pretreatment 7.0 vs posttreatment 2.0, Z = -5.146, P = 0.000). Thirty-seven percent (14/38) of patients achieved fibrosis improvement. The median Ishak score decreased significantly after SVR (pretreatment 4.0 vs posttreatment 3.0, Z = -2.354, P = 0.019). Eighty-two percent (31/38) of patients showed histological improvement. The P-I-R score was evaluated in 61% (23/38) of patients. The progressive group showed lower platelet counts (P = 0.024) and higher HAI scores (P = 0.070) before treatment. Among patients with a stable Ishak stage after treatment, progressive injury was seen in 22% (4/18) of patients, 33% (6/18) were classified as indeterminate, and regressive changes were seen in 44% (8/18) of patients, who were judged as probably reversing by the Ishak and P-I-R systems. CONCLUSION Significant improvement of necroinflammation and partial remission of fibrosis occurred in HCV patients shortly after DAA therapy. The P-I-R score has potential in predicting fibrosis in HCV patients.
The primary objective of the paper is to forecast the beta values of companies listed on the Sensex, Bombay Stock Exchange (BSE). The BSE Sensex comprises the 30 top listed companies, popularly known as blue-chip companies. To meet the predefined objectives of the research, the Auto Regressive Integrated Moving Average (ARIMA) method is used to forecast future risk and returns using 10 years of historical data from April 2007 to March 2017. Validation was accomplished by comparing forecasted and actual beta values for a hold-back period of 2 years. Root Mean Square Error and Mean Absolute Error are both used for accuracy measurement. The results revealed that, of the 30 companies listed in the BSE Sensex, 10 companies exhibit high beta values, 12 companies moderate, and 8 companies low beta values. Further, it is noted that Housing Development Finance Corporation (HDFC) exhibits the most inconsistency in its beta values, although its average beta value is the lowest among the companies under study. A mixed trend is found in the forecasted beta values of the BSE Sensex. In this analysis, all the p-values are less than the F-stat values except in the cases of Tata Steel and Wipro. Therefore, the null hypotheses were rejected except for Tata Steel and Wipro. The actual and forecasted values show almost the same results, with a low error percentage. Therefore, it is concluded from the study that the ARIMA estimation is acceptable and the forecasted beta values are accurate. So far, there are many studies using the ARIMA model to forecast stock returns based on historical data, but very few studies attempt to forecast returns on the basis of beta values. The attempt made here is thus a novel approach that links risk directly with return. On the basis of the present study, the authors try to throw light on investment decisions by linking them with the beta values of the respective stocks. Further, the outcomes of the present study are undoubtedly useful to academicians, researchers, and policy makers in their respective areas of study.
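The hold-back validation above hinges on RMSE and MAE between forecasted and actual beta values. A minimal sketch of those two accuracy measures follows; the beta values in the example are made up for illustration, not the paper's data.

```python
import math

def rmse(actual, forecast):
    """Root Mean Square Error between paired actual and forecasted values."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean Absolute Error between paired actual and forecasted values."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# hypothetical actual vs ARIMA-forecasted beta values over a hold-back period
actual = [1.10, 1.05, 0.98, 1.02]
forecast = [1.08, 1.07, 1.00, 0.99]
```

RMSE penalizes large individual misses more heavily than MAE, which is why abstracts of this kind typically report both.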
Capturing the distributed platform of remotely controlled compromised machines in a botnet has been extensively analyzed by various researchers. However, certain limitations need to be addressed efficiently. Providing a detection mechanism with learning approaches offers a broadly better solution by satisfying multi-objective constraints. The bots' patterns or features over the network have to be analyzed in both a linear and a non-linear manner. The linear and non-linear features are composed of high-level and low-level features. The collected features are maintained in a Bag of Features (BoF), where the most influential features are collected and provided to the classifier model. Here, the linearity and non-linearity of the threat are evaluated with a Support Vector Machine (SVM). Next, from the collected BoF, redundant features are eliminated, as they trigger overhead in the predictor model. Finally, a novel Incoming data Redundancy Elimination-based learning model (RedE-L) is built to classify the network features and provide robustness in botnet detection. The simulation is carried out in a MATLAB environment, and the evaluation of the proposed RedE-L model is performed on various publicly accessible network traffic datasets (benchmark datasets). The proposed model shows a better tradeoff compared with existing approaches such as conventional SVM, C4.5, and RepTree. Various metrics such as accuracy, detection rate, Matthews Correlation Coefficient (MCC), and other statistical analyses are used to show the proposed RedE-L model's reliability. The F1-measure is 99.98%, precision is 99.93%, accuracy is 99.84%, TPR is 99.92%, TNR is 99.94%, FNR is 0.06, and FPR is 0.06.
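The reported scores (precision, TPR, TNR, F1, MCC) all derive from confusion-matrix counts. A small sketch of those standard formulas follows; the counts in the test are illustrative, not from the RedE-L evaluation.

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Precision, recall (TPR), TNR, F1, and Matthews Correlation Coefficient
    computed from confusion-matrix counts."""
    precision = tp / (tp + fp)
    tpr = tp / (tp + fn)                      # recall / detection rate
    tnr = tn / (tn + fp)                      # specificity
    f1 = 2 * precision * tpr / (precision + tpr)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"precision": precision, "tpr": tpr, "tnr": tnr, "f1": f1, "mcc": mcc}
```

MCC is often preferred alongside F1 for intrusion-detection data because it stays informative when the benign/malicious classes are heavily imbalanced.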
To study the sensitivity of inter-subspecific hybrid rice to climatic conditions, the spikelet fertilized rate (SFR) of four types of rice, including indica-japonica hybrid, intermediate hybrid, indica, and japonica, was analyzed during 2000-2004. The inter-subspecific hybrids showed a lower SFR and much higher fluctuation under various climatic conditions than indica and japonica rice, showing that the inter-subspecific hybrids were sensitive to ecological conditions. Among 12 climatic factors, the key factor affecting rice SFR was temperature, with the most significant factor being the average temperature of the seven days around panicle flowering (T7). A regression equation of SFR against temperature based on T7, and a comprehensive synthetic model based on four important temperature indices, were put forward. The optimum temperature for inter-subspecific hybrids was estimated to be 26.1-26.6℃, and the lower limit of safe temperature for panicle flowering to be 22.5-23.3℃, on average 0.5℃ and 1.7℃ higher, respectively, than for indica and japonica rice. This suggests that inter-subspecific hybrids require proper climatic conditions. During panicle flowering, the suitable daily average temperature was 23.3-29.0℃, with the fittest range at 26.1-26.6℃. As an application example, the optimum heading season for inter-subspecific hybrids in key rice-growing areas in China was the same as for common pure lines, while the inferior limit for the safe date of heading was about a ten-day period earlier than that of common pure lines.
The objective of this work is to statistically model the ultraviolet radiation index (UV Index) in order to forecast (extrapolate) and analyze trends. The task is relevant due to increased UV flux and the high rate of cases of non-melanoma skin cancer in the northeast of Brazil. The methodology utilized an Autoregressive Distributed Lag (ADL) model, or Dynamic Linear Regression model. The monthly UV Index data were measured on the east coast of the Brazilian Northeast (city of Natal, Rio Grande do Norte). Total Ozone is the single explanatory variable in the model and was obtained from the TOMS and OMI/AURA instruments. The Predictive Mean Matching (PMM) method was used to complete the missing UV Index data. The mean squared error (MSE) between the observed UV Index and the data interpolated by the model was 0.36, and for extrapolation it was 0.30, with correlations of 0.90 and 0.91, respectively. The forecast/extrapolation performed by the model for a climatological period (2012-2042) indicated a trend of increasing UV (seasonal Mann-Kendall test scored τ = 0.955 and p-value 0.001) if Total Ozone continues its tendency to decrease. In those circumstances, the model indicated an increase of almost one unit of UV Index by the year 2042.
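An ADL-style regression of the kind described, with the response regressed on its own lag and on current and lagged values of a single explanatory variable, can be fit by ordinary least squares. This ADL(1,1) sketch and the synthetic series in the test are assumptions for illustration; the study's lag structure may differ.

```python
import numpy as np

def fit_adl(y, x):
    """Fit an ADL(1,1) model y_t = c + phi*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t
    by ordinary least squares; returns the coefficients (c, phi, b0, b1)."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef
```

On noiseless data generated from known coefficients, the fit recovers them exactly, a useful check before applying the model to observed UV Index and Total Ozone series.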
Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters, and it is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with a low ice class often navigate in channels opened up by icebreakers. Navigation in an ice channel often depends to a large extent on good maneuvering skills and the abundant experience of the captain. The ship may get stuck if steered into ice fields off the channel. Under these circumstances, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the accuracy of the method using the corner point regression network in the second stage reaches 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
The martensitic transformation temperature is the basis for the application of shape memory alloys (SMAs), and the ability to quickly and accurately predict the transformation temperature of SMAs has very important practical significance. In this work, machine learning (ML) methods were utilized to accelerate the search for shape memory alloys with targeted properties (phase transition temperature). A group of component data was selected to design shape memory alloys using a reverse design method from numerous unexplored data. Component modeling and feature modeling were used to predict the phase transition temperature of the shape memory alloys. Experimental results for the shape memory alloys were obtained to verify the effectiveness of the support vector regression (SVR) model. The results show that the machine learning model can obtain target materials more efficiently and pertinently, and realize the accurate and rapid design of shape memory alloys with a specific target phase transition temperature. On this basis, the relationship between phase transition temperature and material descriptors is analyzed, and it is shown that the key factors affecting the phase transition temperature of shape memory alloys are based on the strength of the bond energy between atoms. This work provides new ideas for the controllable design and performance optimization of Cu-based shape memory alloys.
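A minimal SVR sketch in the spirit of the paper's model, using scikit-learn's `SVR` on made-up composition-temperature data. The single descriptor, the linear trend, and the hyperparameters are all assumptions for illustration, not the study's descriptors or settings.

```python
import numpy as np
from sklearn.svm import SVR

# hypothetical alloy descriptor (e.g. a composition fraction) with a
# made-up linear trend in transformation temperature, in kelvin
X = np.linspace(0.1, 0.9, 9).reshape(-1, 1)
y = 300.0 + 100.0 * X.ravel()

# epsilon-insensitive support vector regression on the toy data
model = SVR(kernel="linear", C=1000.0, epsilon=1.0)
model.fit(X, y)
pred = model.predict(X)
```

With a large `C` and a narrow epsilon tube, the fitted line stays within about one unit of the toy targets; in practice the paper's workflow would tune kernel and hyperparameters by cross-validation against measured transformation temperatures.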
BACKGROUND Within the normal range, elevated alanine aminotransferase (ALT) levels are associated with an increased risk of metabolic dysfunction-associated fatty liver disease (MAFLD). AIM To prospectively investigate the associations between repeated high-normal ALT measurements and the risk of new-onset MAFLD. METHODS A cohort of 3553 participants followed over four consecutive health examinations across 4 years was selected. The incidence rate, cumulative number of occurrences, and equally and unequally weighted cumulative effects of excess high-normal ALT levels (ehALT) were measured. Cox proportional hazards regression was used to analyse the association between the cumulative effects of ehALT and the risk of new-onset MAFLD. RESULTS A total of 83.13% of participants with MAFLD had normal ALT levels. The incidence rate of MAFLD showed a linearly increasing trend in the cumulative ehALT group. Compared with the low-normal ALT group, the multivariate-adjusted hazard ratios of the equally and unequally weighted cumulative effects of ehALT were 1.651 [95% confidence interval (CI): 1.199-2.273] and 1.535 (95% CI: 1.119-2.106) in the third quartile, and 1.616 (95% CI: 1.162-2.246) and 1.580 (95% CI: 1.155-2.162) in the fourth quartile, respectively. CONCLUSION Most participants with MAFLD had normal ALT levels. Long-term high-normal ALT levels were associated with a cumulatively increased risk of new-onset MAFLD.
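The equally and unequally weighted cumulative effects above can be sketched as weighted sums of per-examination ehALT indicators. The example weights below (emphasizing more recent examinations) are hypothetical, not the study's weighting scheme.

```python
def cumulative_effect(ehalt_flags, weights=None):
    """Cumulative ehALT exposure over consecutive examinations.
    With no weights, each examination counts equally; unequal weights
    can emphasise recent examinations (the weighting is an assumption)."""
    if weights is None:
        weights = [1.0] * len(ehalt_flags)
    return sum(w * f for w, f in zip(weights, ehalt_flags))
```

Either version yields a per-participant exposure score that can then enter a Cox proportional hazards model as a covariate, which is how the hazard ratios by quartile are obtained.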
China’s low-carbon development path will make significant contributions to achieving global sustainable development goals. Due to the diverse natural and economic conditions across different regions in China, there exists an imbalance in the distribution of carbon emissions. Therefore, regional cooperation serves as an effective means to attain low-carbon development. This study examined the pattern of carbon emissions and proposed a potential joint emission reduction strategy by utilizing industrial carbon emission intensity (ICEI) as a crucial factor. We utilized social network analysis and a Local Indicators of Spatial Association (LISA) space-time transition matrix to investigate the spatiotemporal connections and discrepancies of ICEI in the cities of the Pearl River Basin (PRB), China from 2010 to 2020. The primary drivers of ICEI were determined through geographical detectors and multi-scale geographically weighted regression. The results were as follows: 1) The overall ICEI in the Pearl River Basin shows a downward trend, and there is significant spatial imbalance. 2) There are numerous network connections between cities regarding ICEI, but the network structure is relatively fragile and unstable. 3) Economically developed cities such as Guangzhou, Foshan, and Dongguan are at the center of the network while playing an intermediary role. 4) Energy consumption, industrialization, per capita GDP, urbanization, science and technology, and productivity are found to be the most influential variables in the spatial differentiation of ICEI, and their combination increased the explanatory power of the geographic variation of ICEI. Finally, through the analysis of differences and connections in urban carbon emissions under different economic levels and ICEI, the study suggests joint carbon reduction strategies centered on carbon transfer, financial support, and technological assistance among cities.
A regressive correction method is presented with the primary goal of improving ENSO simulation in a regional coupled GCM. It focuses on the correction of ocean-atmosphere exchanged fluxes. On the basis of numerical experiments and analysis, the method can be described as follows: first, the ocean model is driven with heat and momentum fluxes computed from a long-term observation data set; the produced SST is then applied to force the AGCM as its boundary condition; after that, the AGCM's simulation and the corresponding observation can be correlated by a linear regressive formula. Thus the regressive correction coefficients for the simulation, with spatial and temporal variation, can be obtained by linear fitting. Finally, the coefficients are applied to redressing the variables used for the calculation of the exchanged air-sea flux in the coupled model when it starts integration. This method, together with the anomaly coupling method, is tested in a regional coupled model composed of a global grid-point atmospheric general circulation model and a high-resolution tropical Pacific Ocean model. The comparison of the results shows that it is superior to anomaly coupling both in reducing the coupled model 'climate drift' and in improving the ENSO simulation in the tropical Pacific Ocean.
Purpose: The purpose of this study is to develop and compare model choice strategies in the context of logistic regression. Model choice means the choice of the covariates to be included in the model. Design/methodology/approach: The study is based on Monte Carlo simulations. The methods are compared in terms of three measures of accuracy: specificity and two kinds of sensitivity. A loss function combining sensitivity and specificity is introduced and used for a final comparison. Findings: The choice of method depends on how much the user emphasizes sensitivity against specificity. It also depends on the sample size. For a typical logistic regression setting with a moderate sample size and a small to moderate effect size, either BIC, BICc, or Lasso seems to be optimal. Research limitations: Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data. Thus, more simulations are needed. Practical implications: Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper. Alternatively, they could run their own simulations and calculate the loss function. Originality/value: This is a systematic comparison of model choice algorithms and heuristics in the context of logistic regression. The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.
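Two of the ingredients above, BIC-based model scoring and a loss combining sensitivity and specificity, can be sketched as follows. The linear `alpha`-weighted loss form is an illustrative assumption; the paper's exact loss function may differ.

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion for a model with k free parameters
    fit to n observations (lower is better)."""
    return k * math.log(n) - 2.0 * log_likelihood

def selection_loss(sensitivity, specificity, alpha=0.5):
    """A simple loss combining sensitivity and specificity; alpha sets how
    much the user emphasizes sensitivity against specificity."""
    return alpha * (1 - sensitivity) + (1 - alpha) * (1 - specificity)
```

In a covariate-selection simulation, each candidate model gets a BIC score from its fitted likelihood, while the loss summarizes how well the chosen model recovers the true covariate set.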
In forest science and practice, total tree height is one of the basic morphometric attributes at the tree level, and it has been closely linked with important stand attributes. In the current research, sixteen nonlinear functions for height prediction were tested in terms of their fitting ability against samples of Abies borisii-regis and Pinus sylvestris trees from mountainous forests in central Greece. The fitting procedure was based on generalized nonlinear weighted regression. At the final stage, a five-quantile nonlinear height-diameter model was developed for both species through a quantile regression approach, to estimate the entire conditional distribution of tree height, enabling the evaluation of the diameter impact at various quantiles and providing a comprehensive understanding of the proposed relationship across the distribution. The results clearly showed that, employing diameter as the sole independent variable, the 3-parameter Hossfeld function and the 2-parameter Näslund function managed to explain approximately 84.0% and 81.7% of the total height variance for the King Boris fir and Scots pine species, respectively. Furthermore, the models exhibited low levels of error in both cases (2.310 m for the fir and 3.004 m for the pine), yielding unbiased predictions for both fir (-0.002 m) and pine (-0.004 m). Notably, all the required assumptions of homogeneity and normality of the associated residuals were met through the weighting procedure, while the quantile regression approach provided additional insights into the height-diameter allometry of these species. The proposed models can become valuable tools for operational forest management planning, particularly for wood production and the conservation of mountainous forest ecosystems.
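The 2-parameter Näslund height-diameter function mentioned above has a standard closed form. A sketch follows; the parameter values in the example are hypothetical, not the values fitted to the Greek samples.

```python
def naslund_height(d, a, b):
    """2-parameter Naslund height-diameter function:
    h = 1.3 + d^2 / (a + b*d)^2, with diameter d in cm and height h in m
    (1.3 m is breast height)."""
    return 1.3 + d ** 2 / (a + b * d) ** 2
```

In practice `a` and `b` would be estimated per species by nonlinear weighted regression (or per quantile in the quantile-regression variant), and the fitted curve then predicts total height from the measured diameter.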
Birch has long suffered from a lack of active forest management, leading many researchers to use material without a detailed management history. Data collected from three birch (Betula pendula Roth, B. pubescens Ehrh.) sites in southern Sweden were analyzed using regression analysis to detect any trends or differences in wood properties that could be explained by stand history, tree age, and stem form. All sites were genetics trials established in the same way. Estimates of acoustic velocity (AV) from non-destructive testing (NDT) and predicted AV had a higher correlation if data were pooled across sites and other stem form factors were considered. For a subsample of stems, radial profiles of X-ray wood density and ring width by year were created, and wood density was related to ring number from the pith and ring width. It seemed likely that wood density was negatively related to ring width for both birch species. Linear models improved slightly if site and species were included, but only the youngest site, with trees at age 15, had both birch species. This paper indicates that NDT values need to be considered separately, and any predictive models will likely be improved if they are specific to the site and birch species measured.
Incorporating aluminum metal-organic frameworks (Al-MOFs) as energetic additives in solid fuels presents a promising avenue for enhancing combustion performance. This study explores the potential benefits of an Al-MOF (MIL-53(Al)) energetic additive on the combustion performance of hydroxyl-terminated polybutadiene (HTPB) fuel. The HTPB-MOF fuel samples were manufactured using the vacuum-casting technique, followed by a comprehensive evaluation of their ignition and combustion properties in an opposed flow burner (OFB) setup utilizing gaseous oxygen as the oxidizer. To gauge the effectiveness of Al-MOFs as fuel additives, their impact is compared with that of nano-aluminum (nAl), another traditional additive in HTPB fuel. The results indicate that the addition of 15% (mass fraction) nAl to HTPB resulted in the shortest ignition delay time (136 ms), demonstrating improved ignition performance compared to pure HTPB (273 ms). The incorporation of Al-MOF in HTPB also reduced ignition delay times, to 227 ms and 189 ms. Moreover, under high oxidizer mass flux conditions (79-81 kg/(m²·s)), HTPB fuel with 15% nAl exhibited a substantial 83.2% increase in regression rate compared to the baseline HTPB fuel, highlighting the positive influence of nAl on combustion behavior. In contrast, HTPB-MOF with a 15% Al-MOF additive showed a 32.7% increase in regression rate compared to pure HTPB. These results suggest that HTPB-nAl outperforms HTPB-MOF in terms of regression rate, indicating a more vigorous and rapid burning behavior.
Passerines moult during various life-cycle stages. Some of these moults involve the retention of a variable quantity of wing and tail feathers. This prompts the question of whether these partial moults are merely arrested complete moults or follow different processes. To address this, I investigated whether three relevant features remain constant across partial and complete moults: 1) moult sequence (order of activation) within feather tracts (e.g., consecutive outward moult of primaries) and among tracts (e.g., starting with marginal coverts, followed by greater coverts, tertials, etc.); 2) dynamics of moult intensity (the number of feathers growing as moult progresses); and 3) protection of wing quills by overlapping fully grown feathers. To study the effect of moult completeness on these three features, I classified the moults of 435 individuals from 61 species into 3 groups: i) complete moults, and partial moults ii) without and iii) with retention of feathers within tracts. To study the effect of life-cycle stage, I used postbreeding, postjuvenile, and prebreeding moults. I calculated phylogenetically corrected means to establish feather-moult sequence within tracts. I applied linear regression to analyse moult sequence among tracts, and polynomial regression to study the dynamics of moult intensity as moult progresses. The sequence and intensity dynamics of partial moults tended to resemble those of the complete moult as moult completeness increased. Sequence within and among feather tracts tended to shift as moult intensity within tracts and the number of tracts increased. Activation of primaries advanced relative to the other feather tracts as the number of moulted primaries increased. Tertial quills were protected by the innermost greater covert regardless of moult completeness. These findings suggest that moult is a self-organised process that adjusts to the degree of completeness of plumage renewal. However, protection of quills and differences among species, and between postjuvenile and prebreeding moult sequences, also suggest an active control linked to feather function, including protection and signalling.
Systematically determining the discriminatory power of various rainfall properties and their combinations in identifying debris flow occurrence is crucial for early warning systems.In this study,we evaluated the discr...Systematically determining the discriminatory power of various rainfall properties and their combinations in identifying debris flow occurrence is crucial for early warning systems.In this study,we evaluated the discriminatory power of different univariate and multivariate rainfall threshold models in identifying triggering conditions of debris flow in the Jiangjia Gully,Yunnan Province,China.The univariate models used single rainfall properties as indicators,including total rainfall(R_(tot)),rainfall duration(D),mean intensity(I_(mean)),absolute energy(Eabs),storm kinetic energy(E_(s)),antecedent rainfall(R_(a)),and maximum rainfall intensity over various durations(I_(max_dur)).The evaluation reveals that the I_(max_dur)and Eabs models have the best performance,followed by the E_(s),R_(tot),and I_(mean)models,while the D and R_(a)models have poor performances.Specifically,the I_(max_dur)model has the highest performance metrics at a 40-min duration.We used logistic regression to combine at least two rainfall properties to establish multivariate threshold models.The results show that adding D or R_(a)to the models dominated by Eabs,E_(s),R_(tot),or I_(mean)generally improve their performances,specifically when D is combined with I_(mean)or when R_(a)is combined with Eabs or E_(s).Including R_(a)in the I_(max_dur)model,it performs better than the univariate I_(max_dur)model.A power-law relationship between I_(max_dur)and R_(a)or between Eabs and R_(a)has better performance than the traditional I_(mean)–D model,while the performance of the E_(s)–R_(a)model is moderate.Our evaluation reemphasizes the important role of the maximum intensity over short durations in debris flow occurrence.It also highlights the importance of systematically investigating 
the role of R_(a)in establishing rainfall thresholds for triggering debris flow.Given the regional variations in rainfall patterns worldwide,it is necessary to evaluate the findings of this study across diverse watersheds.展开更多
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction remains a major challenge. To address it, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique comprises two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature-selection step reduces the time complexity of software fault prediction. With the selected metrics, software faults are then predicted using Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function provides the final fault-prediction results. To minimize the error, the Nelder–Mead method is applied to solve the non-linear least-squares problem. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity.
The results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity (higher by 3%, 3%, 2%, and 3%) and minimum time and space (lower by 13% and 15%) when compared with the two state-of-the-art methods.
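The error-minimisation step the abstract names, Nelder–Mead applied to a non-linear least-squares objective, can be sketched generically. This is not the paper's network: the exponential model, data, and parameter names below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from a hypothetical non-linear model y = a * exp(-b * x).
rng = np.random.default_rng(2)
x = np.linspace(0, 4, 60)
y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.02, 60)

def sse(params):
    """Sum of squared residuals: the non-linear least-squares objective."""
    a, b = params
    return np.sum((y - a * np.exp(-b * x)) ** 2)

# Nelder-Mead is derivative-free, so no gradient of the objective is needed.
res = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(round(a_hat, 1), round(b_hat, 1))
```

Nelder–Mead's appeal here is that it only evaluates the objective, which suits error surfaces where gradients are awkward or unavailable.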
Abstract: Regression and autoregressive mixed models are classical models used to analyze the relationship between a time-series response variable and other covariates. The coefficients in traditional regression and autoregressive mixed models are constants. However, for complicated data, the coefficients of covariates may change with time. In this article, we propose a partial time-varying coefficient regression and autoregressive mixed model and obtain the local weighted least-squares estimators of the coefficient functions by the local polynomial technique. The asymptotic normality properties of the estimators are derived under regularity conditions, and simulation studies are conducted to empirically examine the finite-sample performance of the proposed estimators. Finally, we use real data on Lake Shasta inflow to illustrate the application of the proposed model.
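A rough illustration of the local-polynomial idea behind this abstract (not the authors' actual estimator): a kernel-weighted least-squares fit can recover a coefficient that drifts with time. The data, bandwidth, and Gaussian kernel below are all invented.

```python
import numpy as np

# Simulate y_t = beta(t) * x_t + noise, with beta(t) drifting smoothly.
rng = np.random.default_rng(3)
n = 300
t = np.linspace(0, 1, n)
x = rng.normal(1.0, 0.3, n)
beta_true = 1.0 + np.sin(2 * np.pi * t)          # time-varying coefficient
y = beta_true * x + rng.normal(0, 0.1, n)

def local_beta(t0, h=0.05):
    """Kernel-weighted least-squares estimate of beta at time t0."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)       # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)  # weighted LS solution

est = local_beta(0.25)        # true beta(0.25) = 1 + sin(pi/2) = 2.0
print(round(est, 2))
```

The bandwidth `h` plays the usual smoothing role: small values track fast coefficient changes at the cost of variance.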
Funding: The National Natural Science Foundation of China, No. 81870406; the Beijing Natural Science Foundation, No. 7182174; and the China National Science and Technology Major Project for Infectious Diseases Control during the 13th Five-Year Plan Period, No. 2017ZX10202202.
Abstract: BACKGROUND Histological changes after direct-acting antiviral (DAA) therapy in hepatitis C virus (HCV) patients have not been elucidated. Whether the predominantly progressive, indeterminate and predominantly regressive (P-I-R) score, which evaluates fibrosis activity in hepatitis B virus patients, has predictive value in HCV patients has not been investigated. AIM To identify histological changes after DAA therapy and to evaluate the predictive value of the P-I-R score in HCV patients. METHODS Chronic HCV patients with paired liver biopsy specimens before and after DAA treatment were included. Sustained virologic response (SVR) was defined as an undetectable serum HCV RNA level at 24 wk after treatment cessation. The Ishak system and P-I-R score were assessed. Inflammation improvement and fibrosis regression were defined as a ≥2-point decrease in the histology activity index (HAI) score and a ≥1-point decrease in the Ishak fibrosis score, respectively. Fibrosis progression was defined as a ≥1-point increase in the Ishak fibrosis score. Histologic improvement was defined as a ≥2-point decrease in the HAI score without worsening of the Ishak fibrosis score after DAA therapy. "Absolutely reversing or advancing" was defined as the same directionality implied by both the change in the Ishak score and the posttreatment P-I-R score; "probably reversing or advancing" was defined as only one parameter showing directionality. RESULTS Thirty-eight chronic HCV patients with paired liver biopsy specimens before and after DAA treatment were included. The mean age of these patients was 40.9 ± 14.6 years and 53% (20/38) were male. Thirty-four percent (13/38) of patients were cirrhotic. Eighty-two percent (31/38) of patients achieved inflammation improvement. The median HAI score decreased significantly after SVR (pretreatment 7.0 vs posttreatment 2.0, Z = -5.146, P = 0.000). Thirty-seven percent (14/38) of patients achieved fibrosis improvement. The median Ishak score decreased significantly after SVR (pretreatment 4.0 vs posttreatment 3.0, Z = -2.354, P = 0.019). Eighty-two percent (31/38) of patients showed histological improvement. The P-I-R score was evaluated in 61% (23/38) of patients. The progressive group showed lower platelet counts (P = 0.024) and higher HAI scores (P = 0.070) before treatment. Among patients with a stable Ishak stage after treatment, progressive injury was seen in 22% (4/18), 33% (6/18) were classified as indeterminate, and regressive changes were seen in 44% (8/18), who were judged as probably reversing by the Ishak and P-I-R systems. CONCLUSION Significant improvement of necroinflammation and partial remission of fibrosis occurred in HCV patients shortly after DAA therapy. The P-I-R score has potential in predicting fibrosis in HCV patients.
Abstract: The primary objective of this paper is to forecast the beta values of companies listed on the Sensex, Bombay Stock Exchange (BSE). The BSE Sensex comprises the 30 top-most listed companies, popularly known as blue-chip companies. To meet the research objectives, the Auto-Regressive Integrated Moving Average (ARIMA) method is used to forecast future risk and returns from 10 years of historical data, April 2007 to March 2017. Validation was accomplished by comparing forecasted and actual beta values over a hold-back period of 2 years. Root Mean Square Error and Mean Absolute Error were both used for accuracy measurement. The results revealed that, of the 30 companies listed in the BSE Sensex, 10 exhibit high beta values, 12 moderate, and 8 low. Further, Housing Development Finance Corporation (HDFC) exhibits the most inconsistency in beta values, although its average beta value is the lowest among the companies under study. A mixed trend is found in the forecasted beta values of the BSE Sensex. In this analysis, all the p-values are less than the F-stat values except in the cases of Tata Steel and Wipro; therefore, the null hypotheses were rejected except for Tata Steel and Wipro. The actual and forecasted values show almost the same results, with a low error percentage. It is therefore concluded that the ARIMA estimation is acceptable and the forecasted beta values are accurate. There are many studies that use the ARIMA model to forecast stock returns from historical data, but very few attempt to forecast returns on the basis of beta values. The attempt made here is a novel approach that links risk directly with return. On the basis of the present study, the authors shed light on investment decisions by linking them with the beta values of the respective stocks. Further, the outcomes of the present study are undoubtedly useful to academicians, researchers, and policy makers in their respective areas of study.
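The forecasting idea, fitting an autoregressive model to a history of beta values and extrapolating one step ahead, can be illustrated with a toy AR(1) fit by least squares. The paper uses full ARIMA; the series below is simulated, not Sensex data.

```python
import numpy as np

# Simulate 120 monthly beta values following an AR(1) around a mean of 1.0.
rng = np.random.default_rng(4)
n = 120
beta = np.empty(n)
beta[0] = 1.0
for i in range(1, n):
    beta[i] = 1.0 + 0.8 * (beta[i - 1] - 1.0) + rng.normal(0, 0.05)

# Fit beta_t = c + phi * beta_{t-1} by ordinary least squares.
y, y_lag = beta[1:], beta[:-1]
phi, c = np.polyfit(y_lag, y, 1)          # slope = AR coefficient, intercept
forecast = c + phi * beta[-1]             # one-step-ahead beta forecast
rmse = np.sqrt(np.mean((y - (c + phi * y_lag)) ** 2))
print(round(phi, 2), round(rmse, 3))
```

An in-sample RMSE near the innovation noise level, as here, is the same accuracy check the abstract describes for the hold-back comparison.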
Abstract: Capturing distributed platforms of remotely controlled compromised machines (botnets) has been extensively analyzed by various researchers. However, certain limitations still need to be addressed efficiently. Provisioning a detection mechanism with learning approaches provides a broader solution while satisfying multi-objective constraints. The bots' patterns or features over the network have to be analyzed in both a linear and non-linear manner. The linear and non-linear features are composed of high-level and low-level features. The collected features are maintained in a Bag of Features (BoF), where the most influential features are collected and fed into the classifier model. Here, the linearity and non-linearity of the threat are evaluated with a Support Vector Machine (SVM). Next, the redundant features in the BoF are eliminated, as they add overhead to the predictor model. Finally, a novel Incoming data Redundancy Elimination-based learning model (RedE-L) is built to classify the network features and provide robust botnet detection. The simulation is carried out in a MATLAB environment, and the proposed RedE-L model is evaluated on various publicly accessible network-traffic (benchmark) datasets. The proposed model shows a better tradeoff than existing approaches such as conventional SVM, C4.5, and RepTree. Various metrics, such as accuracy, detection rate, Matthews Correlation Coefficient (MCC), and other statistical analyses, demonstrate the RedE-L model's reliability: the F1-measure is 99.98%, precision is 99.93%, accuracy is 99.84%, TPR is 99.92%, TNR is 99.94%, FNR is 0.06, and FPR is 0.06.
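The SVM stage of such a pipeline (only that stage, not the RedE-L redundancy elimination, which the abstract does not detail) might look like the following sketch. The two flow features and their class separation are synthetic assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic traffic: two made-up flow features per sample, benign vs bot.
rng = np.random.default_rng(5)
benign = rng.normal([0.0, 0.0], 1.0, (200, 2))
bots = rng.normal([3.0, 3.0], 1.0, (200, 2))
X = np.vstack([benign, bots])
y = np.array([0] * 200 + [1] * 200)

# An RBF kernel handles the non-linear separability the abstract mentions.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"accuracy = {acc:.2f}")
```

With real traffic, the Bag-of-Features step would supply `X`; here it is simply drawn from two Gaussians.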
Abstract: To study the sensitivity of inter-subspecific hybrid rice to climatic conditions, the spikelet fertilized rate (SFR) of four types of rice, including indica-japonica hybrids, intermediate hybrids, indica and japonica, was analyzed during 2000-2004. The inter-subspecific hybrids showed a lower SFR, and much higher fluctuation under various climatic conditions, than indica and japonica rice, showing that the inter-subspecific hybrids are sensitive to ecological conditions. Among 12 climatic factors, the key factor affecting rice SFR was temperature, with the most significant index being the average temperature of the seven days around panicle flowering (T7). A regressive equation of SFR against T7, and a comprehensive synthetic model based on four important temperature indices, were put forward. The optimum temperature for inter-subspecific hybrids was estimated to be 26.1-26.6℃, and the lower limit of safe temperature for panicle flowering to be 22.5-23.3℃, higher by an average of 0.5℃ and 1.7℃, respectively, compared with indica and japonica rice. This suggests that inter-subspecific hybrids require suitable climatic conditions. During panicle flowering, the suitable daily average temperature was 23.3-29.0℃, with the optimum at 26.1-26.6℃. As an application example, the optimum heading season for inter-subspecific hybrids in key rice-growing areas in China was the same as for common pure lines, while the lower limit for the safe heading date was about ten days earlier than that of common pure lines.
Abstract: The objective of this work is to model the ultraviolet radiation index (UV Index) statistically, in order to make forecasts (extrapolations) and analyze trends. The task is relevant due to the increased UV flux and the high rate of non-melanoma skin cancer cases in the northeast of Brazil. The methodology utilized an Autoregressive Distributed Lag (ADL) model, or Dynamic Linear Regression model. The monthly UV Index data were measured on the east coast of the Brazilian Northeast (city of Natal, Rio Grande do Norte). Total ozone, the single explanatory variable in the model, was obtained from the TOMS and OMI/AURA instruments. The Predictive Mean Matching (PMM) method was used to complete the missing UV Index data. The mean squared error (MSE) between the observed UV Index and the data interpolated by the model was 0.36, and for extrapolation 0.30, with correlations of 0.90 and 0.91, respectively. The forecast/extrapolation performed by the model for a climatological period (2012-2042) indicated a trend of increasing UV (seasonal Mann-Kendall test: τ = 0.955, p-value 0.001) if total ozone maintains its downward tendency. In those circumstances, the model indicated an increase of almost one unit of UV Index by the year 2042.
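The shape of an ADL regression, the response explained by its own lag plus an explanatory variable, can be written down in a few lines. Only the model form follows the abstract; the coefficients, noise levels, and seasonal ozone series below are invented.

```python
import numpy as np

# Simulate monthly total ozone (seasonal) and a UV series that depends on
# its own lag and on current ozone: uv_t = c + a*uv_{t-1} + b*ozone_t + e.
rng = np.random.default_rng(6)
n = 240
ozone = 280 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 3, n)
uv = np.empty(n)
uv[0] = 10.0
for i in range(1, n):
    uv[i] = 4.0 + 0.5 * uv[i - 1] - 0.01 * ozone[i] + rng.normal(0, 0.2)

# Design matrix [1, uv_{t-1}, ozone_t]; fit by ordinary least squares.
X = np.column_stack([np.ones(n - 1), uv[:-1], ozone[1:]])
coef, *_ = np.linalg.lstsq(X, uv[1:], rcond=None)
print(np.round(coef, 3))   # intercept, AR coefficient, ozone coefficient
```

A negative fitted ozone coefficient reproduces the abstract's qualitative relation: less ozone, more UV.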
Funding: Financially supported by the National Key Research and Development Program (Grant No. 2022YFE0107000); the General Projects of the National Natural Science Foundation of China (Grant No. 52171259); and the High-Tech Ship Research Project of the Ministry of Industry and Information Technology (Grant No. [2021]342).
Abstract: Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters, and it is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel largely depends on the captain's maneuvering skills and experience, and the ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the method using the corner point regression network in the second stage achieves an accuracy as high as 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
Funding: Financially supported by the National Natural Science Foundation of China (No. 51974028).
Abstract: The martensitic transformation temperature is the basis for the application of shape memory alloys (SMAs), and the ability to quickly and accurately predict the transformation temperature of SMAs has very important practical significance. In this work, machine learning (ML) methods were utilized to accelerate the search for shape memory alloys with targeted properties (phase transition temperature). A group of component data was selected from numerous unexplored data to design shape memory alloys using a reverse design method. Component modeling and feature modeling were used to predict the phase transition temperature of the shape memory alloys. Experimental results for the shape memory alloys were obtained to verify the effectiveness of the support vector regression (SVR) model. The results show that the machine learning model can find target materials more efficiently and purposefully, realizing the accurate and rapid design of shape memory alloys with a specific target phase transition temperature. On this basis, the relationship between phase transition temperature and material descriptors is analyzed, and it is shown that the key factors affecting the phase transition temperature of shape memory alloys relate to the strength of the bond energy between atoms. This work provides new ideas for the controllable design and performance optimization of Cu-based shape memory alloys.
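The SVR step, mapping composition-derived descriptors to a transformation temperature, can be sketched as follows. The three descriptors, the temperature relation, and the data are made up for illustration; only the model choice (support vector regression) follows the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical descriptors (e.g. scaled bond-energy-related features) and a
# synthetic phase-transition temperature in kelvin.
rng = np.random.default_rng(7)
n = 150
descriptors = rng.uniform(0, 1, (n, 3))
temp = (300 + 80 * descriptors[:, 0] - 40 * descriptors[:, 1]
        + rng.normal(0, 2, n))

# Scaling before SVR matters because the RBF kernel is distance-based.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(descriptors, temp)
r2 = model.score(descriptors, temp)
print(f"R^2 = {r2:.2f}")
```

In a reverse-design loop, candidate compositions would be screened through `model.predict` and ranked by distance to the target temperature.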
Funding: National Natural Science Foundation of China, No. 72101236; China Postdoctoral Science Foundation, No. 2022M722900; Collaborative Innovation Project of Zhengzhou City, No. XTCX2023006; and Nursing Team Project of the First Affiliated Hospital of Zhengzhou University, No. HLKY2023005.
Abstract: BACKGROUND Within the normal range, elevated alanine aminotransferase (ALT) levels are associated with an increased risk of metabolic dysfunction-associated fatty liver disease (MAFLD). AIM To prospectively investigate the associations between repeated high-normal ALT measurements and the risk of new-onset MAFLD. METHODS A cohort of 3553 participants followed for four consecutive health examinations over 4 years was selected. The incidence rate, cumulative number, and equally and unequally weighted cumulative effects of excess high-normal ALT levels (ehALT) were measured. Cox proportional hazards regression was used to analyse the association between the cumulative effects of ehALT and the risk of new-onset MAFLD. RESULTS A total of 83.13% of participants with MAFLD had normal ALT levels. The incidence rate of MAFLD showed a linearly increasing trend in the cumulative ehALT group. Compared with those in the low-normal ALT group, the multivariate-adjusted hazard ratios of the equally and unequally weighted cumulative effects of ehALT were 1.651 [95% confidence interval (CI): 1.199-2.273] and 1.535 (95% CI: 1.119-2.106) in the third quartile and 1.616 (95% CI: 1.162-2.246) and 1.580 (95% CI: 1.155-2.162) in the fourth quartile, respectively. CONCLUSION Most participants with MAFLD had normal ALT levels. Long-term high-normal ALT levels were associated with a cumulatively increased risk of new-onset MAFLD.
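The Cox proportional-hazards idea behind those hazard ratios can be shown in miniature by maximising the partial likelihood directly. This is a deliberately simplified toy (one binary covariate standing in for an ehALT indicator, synthetic event times, no censoring), not the study's multivariate analysis, which would use standard survival software.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic cohort: exposed subjects (x = 1) have hazard exp(0.7) times higher.
rng = np.random.default_rng(8)
n = 200
x = rng.binomial(1, 0.5, n).astype(float)
t = rng.exponential(1.0 / np.exp(0.7 * x))   # event times, no censoring

order = np.argsort(t)                        # sort by event time
x_sorted = x[order]

def neg_log_partial_likelihood(beta):
    eta = beta[0] * x_sorted
    # Risk set at the i-th earliest event = subjects i..n-1 after sorting,
    # so the denominator is a reversed cumulative sum of exp(eta).
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(eta - log_risk)

res = minimize(neg_log_partial_likelihood, x0=[0.0], method="Nelder-Mead")
hazard_ratio = float(np.exp(res.x[0]))
print(round(hazard_ratio, 2))                # near exp(0.7), about 2.0
```

The exponentiated coefficient is the hazard ratio, the same quantity the abstract reports with confidence intervals.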
Funding: Under the auspices of the Philosophy and Social Science Planning Project of Guizhou, China (No. 21GZZD59).
Abstract: China's low-carbon development path will make significant contributions to achieving global sustainable development goals. Due to the diverse natural and economic conditions across different regions in China, there exists an imbalance in the distribution of carbon emissions; regional cooperation therefore serves as an effective means to attain low-carbon development. This study examined the pattern of carbon emissions and proposed a potential joint emission-reduction strategy utilizing industrial carbon emission intensity (ICEI) as a crucial factor. We utilized social network analysis and the Local Indicators of Spatial Association (LISA) space-time transition matrix to investigate the spatiotemporal connections and discrepancies of ICEI in the cities of the Pearl River Basin (PRB), China from 2010 to 2020. The primary drivers of the ICEI were determined through geographical detectors and multi-scale geographically weighted regression. The results were as follows: 1) The overall ICEI in the Pearl River Basin is showing a downward trend, and there is a significant spatial imbalance. 2) There are numerous network connections between cities regarding the ICEI, but the network structure is relatively fragile and unstable. 3) Economically developed cities such as Guangzhou, Foshan, and Dongguan are at the center of the network while playing an intermediary role. 4) Energy consumption, industrialization, per capita GDP, urbanization, science and technology, and productivity are found to be the most influential variables in the spatial differentiation of ICEI, and their combination increased the explanatory power of the geographic variation of ICEI. Finally, through the analysis of differences and connections in urban carbon emissions under different economic levels and ICEI, the study suggests joint carbon-reduction strategies centered on carbon transfer, financial support, and technological assistance among cities.
Funding: The National Natural Science Foundation of China (Grant Nos. 40523001, 40631005, and 40620130113).
Abstract: A regressive correction method is presented with the primary goal of improving ENSO simulation in a regional coupled GCM. It focuses on the correction of ocean-atmosphere exchanged fluxes. On the basis of numerical experiments and analysis, the method can be described as follows: first, the ocean model is driven with heat and momentum fluxes computed from a long-term observation dataset; the produced SST is then applied to force the AGCM as its boundary condition; after that, the AGCM's simulation and the corresponding observation are correlated by a linear regressive formula. Thus, regressive correction coefficients for the simulation, with spatial and temporal variation, can be obtained by linear fitting. Finally, the coefficients are applied to redress the variables used for the calculation of the exchanged air-sea flux in the coupled model once it starts integration. This method, together with the anomaly coupling method, is tested in a regional coupled model composed of a global grid-point atmospheric general circulation model and a high-resolution tropical Pacific Ocean model. Comparison of the results shows that it is superior to anomaly coupling both in reducing the coupled model's 'climate drift' and in improving the ENSO simulation in the tropical Pacific Ocean.
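The core of the regressive correction, fit simulated values against observations, then reuse the fitted coefficients to redress the model variable during coupling, can be shown at a single hypothetical grid point. The flux magnitudes and bias below are invented; a real implementation would fit coefficients varying in space and time.

```python
import numpy as np

# Synthetic single-point record: the model flux has a multiplicative and an
# additive bias relative to the observed flux.
rng = np.random.default_rng(11)
obs = 300 + rng.normal(0, 5, 120)               # "observed" flux (W/m^2)
sim = 0.8 * obs + 40 + rng.normal(0, 2, 120)    # biased "simulated" flux

# Linear regressive fit obs ~ slope * sim + intercept gives the correction.
slope, intercept = np.polyfit(sim, obs, 1)
corrected = slope * sim + intercept             # redressed model flux

bias_before = float(np.mean(sim - obs))
bias_after = float(np.mean(corrected - obs))
print(round(bias_before, 1), round(bias_after, 3))
```

By construction the ordinary-least-squares fit removes the mean bias, which is the "climate drift" reduction the abstract reports at the coupled-model scale.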
Abstract: Purpose: The purpose of this study is to develop and compare model-choice strategies in the context of logistic regression. Model choice means the choice of the covariates to be included in the model. Design/methodology/approach: The study is based on Monte Carlo simulations. The methods are compared in terms of three measures of accuracy: specificity and two kinds of sensitivity. A loss function combining sensitivity and specificity is introduced and used for a final comparison. Findings: The choice of method depends on how much the user emphasizes sensitivity against specificity. It also depends on the sample size. For a typical logistic regression setting with a moderate sample size and a small to moderate effect size, either BIC, BICc or Lasso seems to be optimal. Research limitations: Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data; thus, more simulations are needed. Practical implications: Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper. Alternatively, they could run their own simulations and calculate the loss function. Originality/value: This is a systematic comparison of model-choice algorithms and heuristics in the context of logistic regression. The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.
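One of the criteria the abstract compares, BIC-based covariate choice for logistic regression, can be sketched as follows: fit each candidate covariate subset and keep the one with the lowest BIC. The data-generating process and candidate list are invented; the paper evaluates several criteria, not just this one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fake data: column 0 carries the signal, columns 1-2 are pure noise.
rng = np.random.default_rng(9)
n = 500
X = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-1.5 * X[:, 0]))
y = rng.random(n) < p

def bic(cols):
    """BIC = k*log(n) - 2*loglik for the model using the given columns."""
    # Very large C makes the fit effectively unpenalized (maximum likelihood).
    model = LogisticRegression(C=1e6).fit(X[:, cols], y)
    prob = model.predict_proba(X[:, cols])[:, 1]
    loglik = np.sum(np.where(y, np.log(prob), np.log(1 - prob)))
    k = len(cols) + 1                     # coefficients plus intercept
    return k * np.log(n) - 2 * loglik

candidates = [[0], [1], [0, 1], [0, 1, 2]]
best = min(candidates, key=bic)
print(best)        # BIC typically selects the true model here
```

BIC's log(n) penalty per parameter is what makes it favor the smaller true model over subsets padded with noise covariates; sensitivity here means keeping column 0, specificity means excluding columns 1 and 2.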
Abstract: In forest science and practice, total tree height is one of the basic morphometric attributes at the tree level, and it has been closely linked with important stand attributes. In the current research, sixteen nonlinear functions for height prediction were tested in terms of their fitting ability against samples of Abies borisii-regis and Pinus sylvestris trees from mountainous forests in central Greece. The fitting procedure was based on generalized nonlinear weighted regression. At the final stage, a five-quantile nonlinear height-diameter model was developed for both species through a quantile regression approach, to estimate the entire conditional distribution of tree height, enabling the evaluation of the diameter impact at various quantiles and providing a comprehensive understanding of the proposed relationship across the distribution. The results clearly showed that, employing the diameter as the sole independent variable, the 3-parameter Hossfeld function and the 2-parameter Näslund function managed to explain approximately 84.0% and 81.7% of the total height variance in the case of King Boris fir and Scots pine, respectively. Furthermore, the models exhibited low levels of error in both cases (2.310 m for the fir and 3.004 m for the pine), yielding unbiased predictions for both fir (-0.002 m) and pine (-0.004 m). Notably, all the required assumptions for homogeneity and normality of the associated residuals were achieved through the weighting procedure, while the quantile regression approach provided additional insights into the height-diameter allometry of the specific species. The proposed models can become valuable tools for operational forest management planning, particularly for wood production and the conservation of mountainous forest ecosystems.
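The quantile-regression step can be sketched by fitting a Näslund-type height-diameter curve at several quantiles through the pinball (check) loss. The data are synthetic, not the Greek samples, and the 2-parameter curve form and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand: Naslund-type relation h = 1.3 + d^2 / (a + b*d)^2 + noise.
rng = np.random.default_rng(10)
n = 250
d = rng.uniform(10, 50, n)                                   # diameter, cm
h = 1.3 + d**2 / (3.0 + 0.18 * d) ** 2 + rng.normal(0, 1.5, n)  # height, m

def fit_quantile(q):
    """Fit (a, b) at quantile q by minimising the pinball loss."""
    def pinball(params):
        a, b = params
        resid = h - (1.3 + d**2 / (a + b * d) ** 2)
        return np.mean(np.where(resid >= 0, q * resid, (q - 1) * resid))
    return minimize(pinball, x0=[3.0, 0.2], method="Nelder-Mead").x

curves = {q: fit_quantile(q) for q in (0.1, 0.5, 0.9)}
a50, b50 = curves[0.5]
print(round(a50, 2), round(b50, 3))
```

Fitting the same curve at 0.1, 0.5, and 0.9 gives a band around the median relation, which is how a multi-quantile model describes the whole conditional height distribution rather than only its mean.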
基金financed by the research program FRAS-The Future Silviculture in Southern Sweden
Abstract: Birch has long suffered from a lack of active forest management, leading many researchers to use material without a detailed management history. Data collected from three birch (Betula pendula Roth, B. pubescens Ehrh.) sites in southern Sweden were analyzed using regression analysis to detect any trends or differences in wood properties that could be explained by stand history, tree age and stem form. All sites were genetics trials established in the same way. Estimates of acoustic velocity (AV) from non-destructive testing (NDT) and predicted AV had a higher correlation if data were pooled across sites and other stem-form factors were considered. For a subsample of stems, radial profiles of X-ray wood density and ring width by year were created, and wood density was related to ring number from the pith and to ring width. It seemed likely that wood density was negatively related to ring width for both birch species. Linear models improved slightly if site and species were included, but only the youngest site, with trees at age 15, had both birch species. This paper indicates that NDT values need to be considered separately, and any predictive models will likely be improved if they are specific to the site and birch species measured.
Abstract: Incorporating aluminum metal-organic frameworks (Al-MOFs) as energetic additives for solid fuels presents a promising avenue for enhancing combustion performance. This study explores the potential benefits of an Al-MOF (MIL-53(Al)) energetic additive on the combustion performance of hydroxyl-terminated polybutadiene (HTPB) fuel. The HTPB-MOF fuel samples were manufactured using the vacuum-casting technique, followed by a comprehensive evaluation of their ignition and combustion properties using an opposed flow burner (OFB) setup with gaseous oxygen as the oxidizer. To gauge the effectiveness of Al-MOFs as fuel additives, their impact is compared with that of nano-aluminum (nAl), another traditional additive in HTPB fuel. The results indicate that the addition of 15% (mass fraction) nAl into HTPB resulted in the shortest ignition delay time (136 ms), demonstrating improved ignition performance compared to pure HTPB (273 ms). The incorporation of Al-MOF in HTPB also reduced ignition delay times, to 227 ms and 189 ms, respectively. Moreover, under high oxidizer mass-flux conditions (79-81 kg/(m²·s)), HTPB fuel with 15% nAl exhibited a substantial 83.2% increase in regression rate compared to the baseline HTPB fuel, highlighting the positive influence of nAl on combustion behavior. In contrast, HTPB-MOF with a 15% Al-MOF additive showed a 32.7% increase in regression rate compared to pure HTPB. These results suggest that HTPB-nAl outperforms HTPB-MOF in terms of regression rates, indicating more vigorous and rapid burning behavior.
Abstract: Passerines moult during various life-cycle stages. Some of these moults involve the retention of a variable quantity of wing and tail feathers. This prompts the question of whether these partial moults are just arrested complete moults or follow different processes. To address it, I investigated whether three relevant features remain constant across partial and complete moults: 1) moult sequence (order of activation) within feather tracts (e.g., consecutive outward moult of primaries) and among tracts (e.g., starting with marginal coverts, followed by greater coverts, tertials, etc.); 2) dynamics of moult intensity (amount of feathers growing as the moult progresses); and 3) protection of wing quills by overlapping fully grown feathers. To study the effect of moult completeness on these three features, I classified the moults of 435 individuals from 61 species into 3 groups: i) complete moults, and partial moults ii) without and iii) with retention of feathers within tracts. To study the effect of life-cycle stage, I used postbreeding, postjuvenile, and prebreeding moults. I calculated phylogenetically corrected means to establish feather-moult sequence within tracts. I applied linear regression to analyse moult sequence among tracts, and polynomial regression to study the dynamics of moult intensity as moult progresses. Sequence and intensity dynamics of partial moults tended to resemble those of the complete moult as moult completeness increased. Sequence within and among feather tracts tended to shift as moult intensity within tracts and the number of tracts increased. Activation of primaries advanced relative to the other feather tracts as the number of moulted primaries increased. Tertial quills were protected by the innermost greater covert regardless of moult completeness. These findings suggest that moult is a self-organised process that adjusts to the degree of completeness of plumage renewal. However, protection of quills and differences among species and between postjuvenile and prebreeding moult sequences also suggest an active control linked to feather function, including protection and signalling.
Funding: supported by the National Key R&D Program of China (No. 2023YFC3007205), the National Natural Science Foundation of China (Nos. 42271013, 42077440), and a Project of the Department of Science and Technology of Sichuan Province (No. 2023ZHCG0012).
Abstract: Systematically determining the discriminatory power of various rainfall properties and their combinations in identifying debris-flow occurrence is crucial for early warning systems. In this study, we evaluated the discriminatory power of different univariate and multivariate rainfall threshold models in identifying the triggering conditions of debris flows in the Jiangjia Gully, Yunnan Province, China. The univariate models used single rainfall properties as indicators: total rainfall (R_tot), rainfall duration (D), mean intensity (I_mean), absolute energy (E_abs), storm kinetic energy (E_s), antecedent rainfall (R_a), and maximum rainfall intensity over various durations (I_max_dur). The evaluation reveals that the I_max_dur and E_abs models perform best, followed by the E_s, R_tot, and I_mean models, while the D and R_a models perform poorly. Specifically, the I_max_dur model attains its highest performance metrics at a 40-min duration. We used logistic regression to combine two or more rainfall properties into multivariate threshold models. The results show that adding D or R_a to models dominated by E_abs, E_s, R_tot, or I_mean generally improves their performance, particularly when D is combined with I_mean or when R_a is combined with E_abs or E_s. Including R_a in the I_max_dur model also makes it perform better than the univariate I_max_dur model. A power-law relationship between I_max_dur and R_a, or between E_abs and R_a, performs better than the traditional I_mean–D model, while the performance of the E_s–R_a model is moderate. Our evaluation reemphasizes the important role of maximum intensity over short durations in debris-flow occurrence, and highlights the importance of systematically investigating the role of R_a in establishing rainfall thresholds for debris-flow triggering. Given the regional variations in rainfall patterns worldwide, the findings of this study should be evaluated across diverse watersheds.
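The logistic-regression approach to multivariate rainfall thresholds described in the abstract can be sketched as follows. This is an illustrative, self-contained example only: the synthetic event data, the choice of I_mean and D as the two predictors, and all function names are assumptions for demonstration, not values or code from the study.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights w and intercept b by gradient descent on the log-loss."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        for j in range(d):
            w[j] -= lr * gw[j] / n
        b -= lr * gb / n
    return w, b

# Hypothetical rainfall events: [log10 I_mean (mm/h), log10 D (h)],
# labelled 1 if a debris flow was triggered, 0 otherwise (synthetic data).
X = [[0.3, 0.2], [0.5, 0.4], [0.9, 0.6], [1.1, 0.8],
     [0.2, 0.9], [0.1, 0.3], [0.4, 0.1], [1.2, 0.5]]
y = [0, 0, 1, 1, 0, 0, 0, 1]

w, b = fit_logistic(X, y)

def trigger_probability(log_i_mean, log_d):
    """Probability that an event with these rainfall properties triggers a debris flow."""
    return sigmoid(w[0] * log_i_mean + w[1] * log_d + b)
```

The decision boundary trigger_probability = 0.5 is a straight line in log–log space, which is why a fitted logistic model of this form is directly comparable to a power-law I–D threshold.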