In this paper, the estimation of a class of generalized varying-coefficient models with error-prone covariates is considered. By combining basis function approximations with some auxiliary variables, an instrumental-variable-type estimation procedure is proposed. The asymptotic properties of the estimator, such as consistency and the weak convergence rate, are obtained. The proposed procedure can attenuate the effect of measurement errors and is shown to work well for finite samples.
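As a rough illustration of the estimation idea (not the paper's exact procedure), the sketch below simulates a model Y = β(U)X + ε in which X is observed only through an error-prone surrogate W while an auxiliary instrument Z is available; expanding β(·) in a B-spline basis turns the fit into a linear instrumental-variable problem. The simulated design, noise levels, and knot choices are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n = 2000
U = rng.uniform(0, 1, n)                  # index of the varying coefficient
X = rng.normal(size=n)                    # true covariate (unobserved in practice)
Z = X + 0.3 * rng.normal(size=n)          # auxiliary instrument, correlated with X
W = X + 0.8 * rng.normal(size=n)          # error-prone measurement of X
beta = lambda u: np.sin(2 * np.pi * u)    # true varying coefficient
Y = beta(U) * X + 0.2 * rng.normal(size=n)

# cubic B-spline basis evaluated at U (clamped knots on [0, 1])
knots = np.concatenate([[0.0] * 4, np.linspace(0.1, 0.9, 9), [1.0] * 4])
B = BSpline.design_matrix(U, knots, 3).toarray()

D = B * W[:, None]     # error-prone regressors   B_k(U) * W
G = B * Z[:, None]     # instrumented regressors  B_k(U) * Z

gamma_iv = np.linalg.solve(G.T @ D, G.T @ Y)      # IV-type estimator
gamma_ls = np.linalg.lstsq(D, Y, rcond=None)[0]   # naive least squares

u = np.linspace(0.05, 0.95, 5)
Bu = BSpline.design_matrix(u, knots, 3).toarray()
print("true :", np.round(beta(u), 2))
print("IV   :", np.round(Bu @ gamma_iv, 2))
print("naive:", np.round(Bu @ gamma_ls, 2))       # attenuated toward zero
```

The naive least-squares fit on W is visibly attenuated by the measurement error, while the IV fit tracks the true coefficient curve, which is the attenuation effect the abstract refers to.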
This paper studies the estimation and inference for a class of varying-coefficient regression models with error-prone covariates. The authors focus on the situation where the covariates are unobserved, there are no repeated measurements, and the covariance matrix of the measurement errors is unknown, but some auxiliary information is available. The authors propose an instrumental-variable-type local polynomial estimator for the unknown varying-coefficient functions and show that the estimator achieves the optimal nonparametric convergence rate, is asymptotically normal, and avoids undersmoothing, which allows the bandwidths to be selected by data-driven methods. A simulation is carried out to study the finite-sample performance of the proposed estimator, and a real data set is analyzed to illustrate the usefulness of the developed methodology.
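To make the local polynomial machinery concrete, here is a toy local-linear fit of β(u) on a grid of points; the paper's estimator additionally replaces the error-prone regressors with instrumented ones inside the same weighted least-squares step. The kernel, bandwidth, and simulated design are assumptions made for illustration.

```python
# Toy local-linear fit of a varying coefficient beta(u) in Y = beta(U)*X + eps
# (covariate observed here, so no IV correction is applied).
import numpy as np

rng = np.random.default_rng(1)
n = 1500
U = rng.uniform(0, 1, n)
X = rng.normal(size=n)
Y = np.cos(np.pi * U) * X + 0.2 * rng.normal(size=n)

def local_linear_beta(u0, h):
    """Weighted LS of Y on (X, X*(U-u0)) with Epanechnikov kernel weights."""
    t = (U - u0) / h
    w = np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)  # kernel weights
    D = np.column_stack([X, X * (U - u0)])                # local design matrix
    A = D.T @ (w[:, None] * D)
    b = D.T @ (w * Y)
    return np.linalg.solve(A, b)[0]   # first component approximates beta(u0)

grid = np.linspace(0.1, 0.9, 9)
est = [local_linear_beta(u0, h=0.15) for u0 in grid]      # h: bandwidth
print(np.round(est, 3))
print(np.round(np.cos(np.pi * grid), 3))                  # true values
```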
Biometric gait recognition is a lesser-known but emerging and effective biometric method that recognizes subjects by their walking patterns. Existing research in this area has primarily focused on feature analysis through the extraction of individual features, which captures most of the information but fails to capture subtle variations in gait dynamics. Therefore, a novel feature taxonomy and an approach for relating a function of one set of gait features to another set are introduced. The gait features extracted from body halves divided by anatomical planes on vertical, horizontal, and diagonal axes are grouped to form canonical gait covariates. Canonical Correlation Analysis (CCA) is used to measure the strength of association between the canonical covariates of gait. Gait assessment and identification are thus enhanced when more semantic information is made available through CCA-based multi-feature fusion. Carnegie Mellon University's 3D gait database, which contains 32 gait samples taken at different paces, is used to analyze gait characteristics. The performance of Linear Discriminant Analysis, K-Nearest Neighbors, Naive Bayes, Artificial Neural Networks, and Support Vector Machines improved by 4% on average when the CCA-based gait identification approach was used. A maximum accuracy of 97.8% was achieved through CCA-based gait identification. Beyond that, the rates of false identifications and unrecognized gaits fell by half, demonstrating state-of-the-art performance for gait identification.
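A minimal sketch of CCA-based multi-feature fusion, with random data standing in for two gait feature groups (all dimensions and names are hypothetical): the two groups are projected onto their maximally correlated canonical directions and the scores are concatenated before classification.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
subjects, reps = 32, 10
y = np.repeat(np.arange(subjects), reps)
latent = rng.normal(size=(subjects, 5))[y]   # per-subject gait "signature"
# two feature groups (e.g. upper/lower body halves) sharing the latent signature
X_upper = latent @ rng.normal(size=(5, 20)) + 0.5 * rng.normal(size=(len(y), 20))
X_lower = latent @ rng.normal(size=(5, 18)) + 0.5 * rng.normal(size=(len(y), 18))

# project both groups onto maximally correlated canonical directions
cca = CCA(n_components=5)
U, V = cca.fit_transform(X_upper, X_lower)
fused = np.hstack([U, V])                    # CCA-based feature fusion

Xtr, Xte, ytr, yte = train_test_split(fused, y, test_size=0.25,
                                      stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
print("fused-feature accuracy:", round(clf.score(Xte, yte), 3))
```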
Library construction is a common method used to screen target genes in molecular biology. Most library constructions are not suitable for a small DNA library (<100 base pairs (bp)) and low RNA library output. To maximize the library's complexity, error-prone polymerase chain reaction (PCR) was used to increase the base mutation rate. After introducing the DNA fragments into competent cells, the library complexity could reach 10^9. The library mutation rate increased exponentially with the dilution and amplification of error-prone PCR. The error-prone PCR conditions were optimized, including deoxyribonucleotide triphosphate (dNTP) concentration, Mn^(2+) concentration, Mg^(2+) concentration, PCR cycle number, and primer length. An RNA library with high complexity can then be obtained by in vitro transcription to meet most molecular-biology screening requirements, and it can also be used for mRNA vaccine screening.
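For intuition only (the rates below are assumed, not the paper's measurements), the mean number of mutations per fragment scales with the polymerase error rate, the fragment length, and the number of template doublings, so serial dilute-and-reamplify rounds compound the mutation load while the unmutated fraction decays exponentially:

```python
from math import exp

error_rate = 2e-3   # assumed errors per base per doubling (error-prone Taq)
length_bp = 90      # small library fragment (<100 bp)
doublings = 20      # assumed effective doublings per error-prone PCR round

per_round = error_rate * length_bp * doublings   # mean mutations per fragment
for rounds in range(1, 5):
    m = per_round * rounds
    print(f"round {rounds}: ~{m:.1f} mutations/fragment, "
          f"unmutated fraction ~ {exp(-m):.3f}")   # Poisson zero class
```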
The spatial covariance matrix (SCM) is essential in many multi-antenna systems such as massive multiple-input multiple-output (MIMO). For multi-antenna systems operating at millimeter-wave bands, the hybrid analog-digital structure has been widely adopted to reduce the cost of radio frequency chains. In this situation, the signals received at the antennas are unavailable to the digital receiver, and as a consequence, the traditional sample-average approach cannot be used for SCM reconstruction in hybrid multi-antenna systems. To address this issue, the beam sweeping algorithm (BSA), which can reconstruct the SCM effectively for a hybrid uniform linear array, was proposed in our previous works. However, a direct extension of BSA to a hybrid uniform circular array (UCA) would incur a huge computational burden. To this end, a low-complexity approach is proposed in this paper. By exploiting the symmetry features of the SCM for the UCA, the number of unknowns can be reduced significantly, and the complexity of reconstruction is reduced accordingly. Furthermore, an insightful analysis is presented, showing that reducing the number of unknowns also improves the accuracy of the reconstructed SCM. Simulation results demonstrate the effectiveness of the proposed approach.
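A small numerical illustration of the unknown-counting argument (toy array and sources, not the proposed BSA variant): the SCM of an N-element UCA is Hermitian, which already halves the number of free real parameters relative to an arbitrary complex matrix; the structural symmetries exploited in the paper shrink this count further.

```python
import numpy as np

N, wavelength, radius = 16, 1.0, 1.0
phi = 2 * np.pi * np.arange(N) / N                      # UCA element angles

def steering(theta):
    # far-field steering vector of a uniform circular array
    return np.exp(2j * np.pi * radius / wavelength * np.cos(theta - phi))

thetas, powers = np.array([0.4, 1.9]), np.array([1.0, 0.6])
R = sum(p * np.outer(steering(t), steering(t).conj())
        for t, p in zip(thetas, powers))
R = R + 0.1 * np.eye(N)                                 # noise floor

print("Hermitian:", np.allclose(R, R.conj().T))
print("free real parameters, general complex NxN:", 2 * N * N)
print("free real parameters, Hermitian SCM      :", N * N)
```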
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices from missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, a numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
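A sketch of the hard thresholding estimator under assumed illustrative constants: entries of the generalized sample covariance are averaged over the pairs of coordinates observed together, and entries below a threshold of order sqrt(log d / n) are set to zero. Off-diagonal entries are unbiased here because the additive noise is independent across coordinates; the diagonal, which is inflated by the noise variance, is left unthresholded.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, p_obs = 400, 50, 0.8
# sparse (tridiagonal) true covariance
Sigma = np.eye(d) + 0.4 * (np.abs(np.subtract.outer(np.arange(d), np.arange(d))) == 1)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
W = X + 0.3 * rng.normal(size=(n, d))        # sub-Gaussian additive noise
mask = rng.random((n, d)) < p_obs            # 1 = coordinate observed
Wm = np.where(mask, W, 0.0)

# generalized sample covariance: average over samples where both coords observed
counts = mask.astype(float).T @ mask.astype(float)
S = (Wm.T @ Wm) / np.maximum(counts, 1.0)

tau = 2.0 * np.sqrt(np.log(d) / n)           # threshold level ~ sqrt(log d / n)
S_hat = np.where(np.abs(S) >= tau, S, 0.0)   # hard thresholding
np.fill_diagonal(S_hat, np.diag(S))          # never threshold the diagonal

err = lambda A: np.linalg.norm(A - Sigma, 2) # spectral-norm error
print("sample:", round(err(S), 3), " thresholded:", round(err(S_hat), 3))
```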
This paper proposes linear and nonlinear filters for a non-Gaussian dynamic system with an unknown nominal covariance of the output noise. The challenge of designing a suitable filter in the presence of an unknown covariance matrix is addressed by focusing on the output data set of the system. Considering that data generated from a Gaussian distribution exhibit ellipsoidal scattering, we first propose a weighted sum-of-norms (SON) clustering method that prioritizes nearby points, reduces the influence of distant points, and lowers the computational cost. Then, by introducing the weighted maximum likelihood, we propose a semi-definite program (SDP) to detect outliers and reduce their impact on each cluster. Determining these weights paves the way to obtaining an appropriate covariance of the output noise. Next, two filtering approaches are presented: a cluster-based robust linear filter using maximum a posteriori (MAP) estimation, and a cluster-based robust nonlinear filter assuming that the output noise distribution stems from several Gaussian noise sources according to the ellipsoidal clusters. Finally, simulation results demonstrate the effectiveness of the proposed filtering approaches.
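A compact sketch of weighted sum-of-norms clustering (the solver and the weight form are assumptions, and the paper's SDP outlier step is omitted): each point gets its own centroid, and a weighted sum of pairwise centroid norms fuses the centroids of nearby points.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0], 0.3, (15, 2)),
               rng.normal([3, 3], 0.3, (15, 2))])
n = len(X)

# weights that prioritize nearby points and damp distant ones
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Wgt = np.exp(-d2 / np.median(d2))

U = cp.Variable((n, 2))                       # one centroid per point
fit = cp.sum_squares(X - U)
fuse = sum(Wgt[i, j] * cp.norm(U[i] - U[j], 2)
           for i in range(n) for j in range(i + 1, n))
cp.Problem(cp.Minimize(fit + 0.5 * fuse)).solve()

# points whose centroids (numerically) coincide share a cluster
labels = np.unique(np.round(U.value, 1), axis=0, return_inverse=True)[1]
print("clusters found:", len(set(labels)))
```

Because the weights decay with distance, the fusion penalty is strong within each blob and weak across blobs, which is what lets the method down-weight distant points at low cost.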
The octupole deformation and collectivity in the octupole double-magic nucleus ^(144)Ba are investigated using cranking covariant density functional theory in a three-dimensional lattice space. The reduced B(E3) transition probability is implemented for the first time in the semiclassical approximation, based on the microscopically calculated electric octupole moments. The available data, including the I-ω relation and the electric transition probabilities B(E2) and B(E3), are well reproduced. Furthermore, it is shown that the ground state of ^(144)Ba exhibits axial octupole and quadrupole deformations that persist up to high spins (I ≈ 24ħ).
Covariant density functional theory (CDFT) and the five-dimensional collective Hamiltonian (5DCH) are used to analyze the experimental deformation parameters and moments of inertia (MoIs) of 12 triaxial nuclei as extracted by Allmond and Wood [J. M. Allmond and J. L. Wood, Phys. Lett. B 767, 226 (2017)]. We find that the CDFT MoIs are generally smaller than the experimental values but exhibit qualitative consistency with the irrotational flow and with the experimental data for the relative MoIs, indicating that the intermediate axis exhibits the largest MoI. Additionally, it is found that the collapse of the pairing interaction can make nuclei behave like rigid-body flow, as exhibited in the ^(186-192)Os case. Furthermore, by incorporating enhanced CDFT MoIs (a factor of f ≈ 1.55) into the 5DCH, the experimental low-lying energy spectra and deformation parameters are reproduced successfully. Compared with both CDFT and the triaxial rotor model, the 5DCH shows superior agreement with the experimental deformation parameters and low-lying energy spectra, emphasizing the importance of considering shape fluctuations.
The Internet of Things (IoT) is a growing technology that allows data to be shared with other devices across wireless networks. IoT systems are particularly vulnerable to cyberattacks because of their openness. The proposed work implements a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is first used to extract the features most highly associated with IoT intrusions. Then, a Kernel Distributed Bayes Classifier (KDBC) is built to precisely forecast attacks based on probability distribution values. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets, and the obtained results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is also performed, providing valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Environmental covariates are the basis of predictive soil mapping. Their selection determines the performance of soil mapping to a great extent, especially in cases where the number of soil samples is limited but soil spatial heterogeneity is high. In this study, we proposed an integrated method to select environmental covariates for predictive soil depth mapping. First, candidate variables that may influence the development of soil depth were selected based on pedogenetic knowledge. Second, three conventional methods (Pearson correlation analysis (PsCA), generalized additive models (GAMs), and Random Forest (RF)) were used to generate optimal combinations of environmental covariates. Finally, the three optimal combinations were integrated to produce a final combination based on the importance and occurrence frequency of each environmental covariate. We tested this method for soil depth mapping in the upper reaches of the Heihe River Basin in Northwest China. A total of 129 soil sampling sites were collected using a representative sampling strategy, and RF and support vector machine (SVM) models were used to map soil depth. The results showed that, compared with the sets of environmental covariates selected by the three conventional selection methods, the set selected by the proposed method achieved higher mapping accuracy. The combination from the proposed method obtained a root mean square error (RMSE) of 11.88 cm, which was 2.25–7.64 cm lower than the other methods, and an R^2 value of 0.76, which was 0.08–0.26 higher than the other methods. The results suggest that our method can be used as an alternative to the conventional methods for soil depth mapping and may also be effective for mapping other soil properties.
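The integration step reduces to simple bookkeeping; the sketch below (with invented covariate names, importance scores, and a two-of-three frequency rule) keeps a covariate if enough of the three selectors chose it and ranks the survivors by mean importance.

```python
import numpy as np
import pandas as pd

candidates = ["slope", "twi", "ndvi", "precip", "aspect", "curvature"]
# hypothetical importance scores from the three selection methods
picks = {
    "PsCA": {"slope": 0.61, "twi": 0.48, "precip": 0.40},
    "GAM":  {"slope": 0.35, "twi": 0.22, "ndvi": 0.18, "precip": 0.15},
    "RF":   {"slope": 0.30, "precip": 0.25, "curvature": 0.12},
}

rows = []
for cov in candidates:
    scores = [m[cov] for m in picks.values() if cov in m]
    rows.append({"covariate": cov,
                 "frequency": len(scores) / len(picks),
                 "mean_importance": np.mean(scores) if scores else 0.0})
table = pd.DataFrame(rows)

# keep covariates chosen by at least 2 of 3 methods, ranked by mean importance
final = table[table.frequency >= 2 / 3].sort_values("mean_importance",
                                                    ascending=False)
print(final)
```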
Selecting a proper set of covariates is one of the most important factors influencing the accuracy of digital soil mapping (DSM). Statistical or machine learning methods for selecting DSM covariates are not available in situations with limited samples. To solve this problem, this paper proposed a case-based method that formalizes the covariate selection knowledge contained in practical DSM applications. The proposed method trains Random Forest (RF) classifiers with DSM cases extracted from practical DSM applications and then uses the trained classifiers to determine whether each potential covariate should be used in a new DSM application. In this study, we took topographic covariates as examples and extracted 191 DSM cases from 56 peer-reviewed journal articles to evaluate the performance of the proposed case-based method by leave-one-out cross-validation. Compared with a novice's commonly used way of selecting DSM covariates, the proposed case-based method improved accuracy by more than 30% according to three quantitative evaluation indices (i.e., recall, precision, and F1-score). The proposed method could also be applied to selecting a proper set of covariates in other, similar geographical modeling domains, such as landslide susceptibility mapping and species distribution modeling.
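A sketch of the case-based idea with invented case descriptors: each historical DSM case becomes a labeled example of whether a given covariate (here "slope") was used, an RF classifier is scored by leave-one-out cross-validation, and the fitted classifier is then asked about a new application. The toy labeling rule stands in for the expert knowledge embedded in real cases.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
n_cases = 191
# hypothetical case descriptors: relief (m), mean slope (deg),
# sample density, target-is-depth flag
cases = np.column_stack([rng.uniform(10, 2000, n_cases),
                         rng.uniform(0, 35, n_cases),
                         rng.uniform(0.01, 2.0, n_cases),
                         rng.integers(0, 2, n_cases)])
used_slope = (cases[:, 1] > 8).astype(int)   # toy stand-in for expert labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, cases, used_slope, cv=LeaveOneOut()).mean()
print("leave-one-out accuracy for covariate 'slope':", round(acc, 3))

new_app = [[850.0, 12.0, 0.2, 1]]            # descriptors of a new application
print("use slope?", bool(clf.fit(cases, used_slope).predict(new_app)[0]))
```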
In WLANs, stations sharing a common wireless channel are governed by the IEEE 802.11 protocol. Many careful studies have been conducted on utilizing this precious medium efficiently. However, most of these studies have been done either under the assumption of an idealistic channel condition or with an unlimited number of retransmissions. This paper investigates the influence of limited retransmissions, and of the error level in the channel, on the network throughput, the probability of packet dropping, and the time to drop a packet. The results show that for networks using the basic access mechanism, the throughput is suppressed with an increasing amount of channel errors over the whole range of the retry limit; it is also quite sensitive to the size of the network. On the other hand, networks using the four-way handshaking mechanism have good immunity against errors over the available range of retry limits, and their throughput does not change with the size of the network over that range. However, in both DCF mechanisms the throughput does not change with the retry limit once it exceeds the maximum number of backoff stages. In both mechanisms, the probability of dropping a packet is a decreasing function of the number of retransmissions, and the time to drop a packet from a station's queue is a strong function of the retry limit, the size of the network, the medium access mechanism in use, and the amount of errors in the channel.
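The retry-limit effect on packet dropping can be seen with a few lines of arithmetic (all numbers below are assumed toy values, not the paper's model): a frame is dropped only after m+1 consecutive failed attempts, so the drop probability falls geometrically in the retry limit m, while the channel error level enters through the frame error rate computed from the BER.

```python
payload_bits = 8184
ber = 1e-5
p_err = 1 - (1 - ber) ** payload_bits   # frame error rate from the BER
p_col = 0.15                            # assumed collision probability
p = 1 - (1 - p_err) * (1 - p_col)       # per-attempt failure probability

for m in (2, 4, 6):                     # retry limits
    p_drop = p ** (m + 1)
    # mean attempts per packet, counting the give-up path
    attempts = (sum((k + 1) * p**k * (1 - p) for k in range(m + 1))
                + (m + 1) * p_drop)
    print(f"retry limit {m}: P(drop)={p_drop:.2e}, mean attempts={attempts:.2f}")
```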
A comprehensive study was presented for WLAN 802.11b over an error-prone channel. The performance of three different network sizes was evaluated theoretically and numerically at the bit rates available in the 802.11b protocol. The results show that the throughput does not change with the size of the network over a wide range of bit error rates (BERs) and that the channel bit rates play a significant role in the main characteristics of the network. A comprehensive explanation has been given for the phenomenon of packet delay suppression at relatively high BER levels, in view of the network sizes and the BERs. The effect of the length of the transmitted packets is also investigated.
In this study, a microscopic method for calculating the nuclear level density (NLD) based on covariant density functional theory (CDFT) is developed. The particle-hole state density is calculated by a combinatorial method using single-particle level schemes obtained from the CDFT, and the level densities are then obtained by considering collective effects such as vibration and rotation. Our results are compared with those of other NLD models, including phenomenological, microstatistical, and nonrelativistic Hartree–Fock–Bogoliubov combinatorial models. This comparison suggests that the general trends among these models are essentially the same, apart from some deviations among the different NLD models. In addition, the NLDs obtained using the CDFT combinatorial method with normalization are compared with experimental data, including the observed cumulative number of levels at low excitation energies and the measured NLDs. The CDFT combinatorial method yields results that are in reasonable agreement with the existing experimental data.
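The combinatorial counting step can be illustrated directly (equidistant toy levels stand in for the CDFT single-particle scheme, and the collective enhancement is omitted): enumerate particle-hole configurations and histogram their excitation energies.

```python
from itertools import combinations
from collections import Counter

spacing = 0.5                        # MeV between toy single-particle levels
holes = [-(k + 0.5) * spacing for k in range(6)]   # levels below the Fermi energy
parts = [+(k + 0.5) * spacing for k in range(6)]   # levels above the Fermi energy

density = Counter()
for nph in (1, 2):                   # 1p1h and 2p2h excitations
    for hs in combinations(holes, nph):
        for ps in combinations(parts, nph):
            Ex = sum(ps) - sum(hs)   # excitation energy of this configuration
            density[round(Ex, 3)] += 1

for Ex in sorted(density)[:8]:
    print(f"Ex = {Ex:4.1f} MeV : {density[Ex]} states")
```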
Given a sample of regression data from (Y, Z), a new diagnostic plotting method is proposed for checking the hypothesis H0: the data are from a given Cox model with the time-dependent covariates Z. It compares two estimates of the marginal distribution F_Y of Y. One is an estimate of the modified expression of F_Y under H0, based on a consistent estimate of the parameter under H0 and on the baseline distribution of the data. The other is the Kaplan-Meier estimator of F_Y, together with its confidence band. The new plot, called the marginal distribution plot, can be viewed as a test of H0. The main advantage of this test over the existing residual tests arises when the data do not satisfy any Cox model or the Cox model is mis-specified: the new test remains valid, whereas the residual tests do not and often commit a type II error with very large probability.
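A sketch of the plot's two ingredients using lifelines on simulated data (a time-fixed covariate for simplicity, whereas the paper allows time-dependent Z): the model-implied marginal survival of Y is obtained by averaging the fitted Cox model's subject-level survival curves, and it is compared against the Kaplan-Meier estimate, whose confidence band would supply the rejection region.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(6)
n = 500
z = rng.normal(size=n)
T = rng.exponential(1.0 / np.exp(0.8 * z))   # Cox model with beta = 0.8
C = rng.exponential(2.0, n)                  # independent censoring times
df = pd.DataFrame({"time": np.minimum(T, C),
                   "event": (T <= C).astype(int),
                   "z": z})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
surv = cph.predict_survival_function(df)     # one curve per subject
model_marginal = surv.mean(axis=1)           # model-implied marginal survival

km = KaplanMeierFitter().fit(df["time"], df["event"])
# under H0 the two curves should agree within the KM confidence band
print(model_marginal.head())
print(km.survival_function_.head())
```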
The consideration of time-varying covariates and time-varying coefficient effects in survival models is a plausible and robust technique. Such an analysis can be carried out with a general class of semiparametric transformation models. The aim of this article is to develop modified estimating equations under semiparametric transformation models of survival time with time-varying coefficient effects and time-varying continuous covariates. For this, it is important to organize the data in a counting-process style and to transform the time with the standard transformation classes applied in this article. When the effects of coefficients and covariates change over time, the widely used maximum likelihood estimation method becomes more complex and burdensome for obtaining consistent estimates. To overcome this problem, modified estimating equations were applied instead to estimate the unknown parameters and the unspecified monotone transformation functions. The estimating equations were modified to incorporate the time-varying effect in both coefficients and covariates. The performance of the proposed methods is tested through a simulation study. In summary, the effect of possibly time-varying covariates and time-varying coefficients was evaluated in some special cases of semiparametric transformation models, and the results show that the role of the time-varying covariate in these models is plausible and credible.
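For concreteness, this is what the counting-process (start, stop] data layout looks like for one subject whose continuous covariate changes value twice before the event (all values are invented):

```python
import pandas as pd

long_format = pd.DataFrame({
    "id":    [1, 1, 1],
    "start": [0, 3, 7],
    "stop":  [3, 7, 9],
    "x":     [1.2, 2.4, 0.7],   # time-varying continuous covariate
    "event": [0, 0, 1],         # event indicator at the end of each interval
})
print(long_format)
```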
Empirical likelihood-based inference for varying-coefficient models with missing covariates is investigated. An imputed empirical likelihood ratio function for the coefficient functions is proposed, and it is shown that its limiting distribution is standard chi-squared. The corresponding confidence intervals for the regression coefficients are then constructed. Simulations show that the proposed procedure can attenuate the effect of the missing data and performs well for finite samples.
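The core empirical likelihood computation behind such procedures is short; the sketch below evaluates Owen's -2 log empirical likelihood ratio for a scalar mean on complete data (the imputation step for missing covariates is omitted), and the statistic is referred to its chi-squared limit.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 80)

def neg2logEL(mu):
    d = x - mu
    # solve sum d_i / (1 + lam * d_i) = 0 for the Lagrange multiplier lam,
    # bracketed so that all weights 1 / (n * (1 + lam * d_i)) stay positive
    lo = (-1.0 / d.max()) + 1e-8
    hi = (-1.0 / d.min()) - 1e-8
    lam = brentq(lambda l: np.sum(d / (1 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log(1 + lam * d))

stat = neg2logEL(0.0)
print("-2 log R =", round(stat, 3),
      " chi2(1) 95% cutoff =", round(chi2.ppf(0.95, 1), 3))
```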
This research develops a comparative study of different multiplicative weights assigned to the covariance matrix that represents the background error in two hybrid assimilation schemes, 3DEnVAR and 4DEnVAR. These weights are distributed between the static, time-invariant matrix and the matrix generated from the perturbations of a previous ensemble. The assigned values are 25%, 50%, and 75%, always taking the ensemble matrix as the reference. The experiments are applied to the short-range Prediction System (SisPI), which runs operationally at the Institute of Meteorology. The impact of Tropical Storm Eta on November 7 and 8, 2020 was selected as a case study. The results suggest that giving the main weight to the ensemble matrix yields more realistic solutions, because it better represents the synoptic flow. It is also observed that the 3DEnVAR method is more sensitive to changes in the multiplicative weight of the first guess. More realistic results are obtained with the 50% and 75% weightings under the 4DEnVAR method, whereas 3DEnVAR requires a weight of 75% for the ensemble matrix.
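The weighting being compared is a one-line convex combination; the sketch below (toy dimensions and perturbations) forms B_hybrid = (1 - beta) * B_static + beta * P_ens for the three tested values of beta.

```python
import numpy as np

rng = np.random.default_rng(8)
B_static = np.eye(4)                          # static, time-invariant background B
perts = rng.normal(size=(10, 4))              # ensemble perturbations
P_ens = perts.T @ perts / (len(perts) - 1)    # flow-dependent ensemble covariance

for beta in (0.25, 0.50, 0.75):               # weight on the ensemble part
    B_hybrid = (1 - beta) * B_static + beta * P_ens
    print(f"beta={beta:.2f}  trace={np.trace(B_hybrid):.2f}")
```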