This paper studies the target controllability of multilayer complex networked systems, in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems, and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multi-layer networks is discussed. It is found that even if there exists a layer which is not target controllable, the entire multi-layer network can still be target controllable due to the inter-layer couplings. For multi-layer networks with a general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. From the derived condition, it can be seen that a system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived, which clarify the relationship between target controllability, state controllability, and output controllability. For multi-layer networks where the inter-layer couplings are directed chains and directed stars, sufficient conditions for target controllability of networked systems are given, respectively. These conditions are easier to verify than the classic criterion.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
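As a minimal illustration of the filter stage described above, the sketch below ranks features by Fisher score on a toy dataset (the data, and the exact way MBEO combines this with information gain, are assumptions made for illustration only):

```python
import numpy as np

def fisher_score(X, y):
    # ratio of between-class scatter to within-class scatter, per feature;
    # larger scores mean the feature separates the classes better
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)

# toy data: feature 0 separates the two classes, feature 1 is noise
X = np.array([[0.0, 5.0], [0.1, 4.9], [1.0, 5.1], [1.1, 5.0]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)
ranking = np.argsort(-scores)   # feature indices, best first
```

Features near the top of such a ranking would then receive a higher selection probability in the wrapper stage.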
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate each univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples of a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables are analyzed to illustrate the performance and practicability of the proposed method.
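The HDMR-plus-interpolation idea can be sketched as follows: for an additive toy function, a first-order cut-HDMR expansion around a reference point is built from univariate cuts, each interpolated through a few nodes (the function, reference point, and node placement are illustrative assumptions, and `np.polyfit` through n points stands in for Lagrange interpolation):

```python
import numpy as np

def g(x1, x2):
    # toy limit-state function; it is additive, so a first-order HDMR is exact
    return x1 ** 2 + np.sin(x2)

ref = np.array([0.5, 0.5])      # cut point (reference input)
g0 = g(ref[0], ref[1])

# univariate cuts g(x1, ref2) - g0 and g(ref1, x2) - g0, interpolated through
# 5 nodes each; polyfit with degree n-1 reproduces the Lagrange interpolant
nodes = np.linspace(-1.0, 2.0, 5)
c1 = np.polyfit(nodes, g(nodes, ref[1]) - g0, len(nodes) - 1)
c2 = np.polyfit(nodes, g(ref[0], nodes) - g0, len(nodes) - 1)

def g_hdmr(x1, x2):
    # first-order cut-HDMR surrogate: constant term plus univariate components
    return g0 + np.polyval(c1, x1) + np.polyval(c2, x2)

err = abs(g(1.2, 0.8) - g_hdmr(1.2, 0.8))
```

Because the surrogate is a sum of cheap univariate polynomials, the reliability loop can evaluate it instead of the original model, which is the decoupling benefit the abstract describes.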
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently influenced by high dimensions and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
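A minimal sketch of hard thresholding applied to a sample covariance matrix is shown below (the threshold value and the diagonal-data setup are illustrative assumptions; the paper's estimator is built on the generalized sample covariance for missing, noisy data, which is not reproduced here):

```python
import numpy as np

def hard_threshold_cov(X, tau):
    # sample covariance with small off-diagonal entries zeroed out;
    # the diagonal (the variances) is left untouched
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(0)
# sparse (here: diagonal) true covariance, so off-diagonal sample entries
# are pure estimation noise of order 1/sqrt(n)
X = rng.normal(size=(500, 20))
S_hat = hard_threshold_cov(X, tau=0.3)
```

When the true matrix is sparse, thresholding removes exactly the noise-dominated entries, which is the intuition behind the rate-optimality result.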
In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with High-Dimensional Imbalanced Classification (HDIC) problems based on data processed using the Multi-Dimensional Scaling (MDS) feature extraction technique. First, the dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, a novel OPCE algorithm is applied to classify imbalanced samples by placing optimised observation points in a low-dimensional data space. Third, optimization of the observation point mappings is carried out to obtain a reliable assessment of the unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC data sets. Experimental results show that (1) the OPCE algorithm can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) the OPCE algorithm can correctly identify samples as the number of optimised observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performances on the selected data sets in comparison with eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm to deal with HDIC problems.
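The MDS preprocessing step described above can be sketched with scikit-learn (the toy imbalanced dataset and its dimensions are assumptions for illustration; the OPCE classifier itself is not reproduced):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# imbalanced toy set: 40 majority and 8 minority samples in 50 dimensions
X_major = rng.normal(0.0, 1.0, size=(40, 50))
X_minor = rng.normal(3.0, 1.0, size=(8, 50))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 40 + [1] * 8)

# embed into 2-D while preserving pairwise distances as well as possible;
# observation points would then be optimised in this low-dimensional space
X_low = MDS(n_components=2, random_state=0).fit_transform(X)
```

Training downstream classifiers on `X_low` rather than `X` is what yields the speed-up reported in result (1).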
As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a length-fixed individual encoding. This is ineffective for high-dimensional data, because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to explore promising search space adaptively. Moreover, a dominance-based local search method is employed for further improvement. The experimental results based on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
k-means is a popular clustering algorithm because of its simplicity and scalability in handling large datasets. However, one of its setbacks is the challenge of identifying the correct k-hyperparameter value. Tuning this value correctly is critical for building effective k-means models. The use of the traditional elbow method to help identify this value has a long-standing literature. However, when using this method with certain datasets, smooth curves may appear, making it challenging to identify the k-value due to its unclear nature. On the other hand, various internal validation indexes, which are proposed as a solution to this issue, may be inconsistent. Although various techniques for solving smooth elbow challenges exist, k-hyperparameter tuning in high-dimensional spaces remains intractable and an open research issue. In this paper, we first review the existing techniques for solving smooth elbow challenges. The identified research gaps are then utilized in the development of a new technique. The new technique, referred to as the ensemble-based technique of a self-adapting autoencoder and internal validation indexes, is then validated in high-dimensional space clustering. The optimal k-value, tuned by this technique using a voting scheme, is a trade-off between the number of clusters visualized in the autoencoder's latent space, the k-value from the ensemble internal validation index score, and one that yields a value of 0, or close to 0, for the curvature derivative f‴(k)(1+f′(k)^(2))−3f′(k)f″(k)^(2) at the elbow. Experimental results based on the Cochran's Q test, ANOVA, and McNemar's score indicate a relatively good performance of the newly developed technique in k-hyperparameter tuning.
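The curvature-based elbow criterion can be sketched numerically: for a smooth, decreasing inertia curve f(k), the expression f‴(k)(1+f′(k)²) − 3f′(k)f″(k)² is the numerator of the derivative of the curvature κ(k) = f″(k)/(1+f′(k)²)^(3/2), and its zero crossings mark curvature extrema, i.e. elbow candidates. The sketch below uses a synthetic curve rather than real k-means inertias (the curve and grid are illustrative assumptions):

```python
import numpy as np

def curvature_derivative(f_vals, k_vals):
    # finite-difference estimate of f'''(k)(1 + f'(k)^2) - 3 f'(k) f''(k)^2,
    # the numerator of d(kappa)/dk for kappa(k) = f''/(1 + f'^2)^(3/2)
    f1 = np.gradient(f_vals, k_vals)
    f2 = np.gradient(f1, k_vals)
    f3 = np.gradient(f2, k_vals)
    return f3 * (1.0 + f1 ** 2) - 3.0 * f1 * f2 ** 2

# synthetic smooth inertia-like curve f(k) = 25/k; for this toy curve the
# curvature extremum (the analytic "elbow") sits exactly at k = 5
k = np.linspace(1.0, 8.0, 71)
inertia = 25.0 / k

g = curvature_derivative(inertia, k)
interior = slice(2, -2)   # drop endpoints, where one-sided differences are noisy
elbow = k[interior][np.argmin(np.abs(g[interior]))]
```

In the proposed technique this numeric vote is combined with the latent-space cluster count and the internal validation indexes rather than used alone.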
Triosephosphate isomerase (TPI) is an enzyme that functions in plant energy production, accumulation, and conversion. To understand its function in maize, we characterized a maize TPI mutant, zmtpi4. In comparison to the wild type, zmtpi4 mutants showed altered ear development, reduced kernel weight and starch content, modified starch granule morphology, and altered amylose and amylopectin content. Protein, ATP, and pyruvate contents were reduced, indicating that ZmTPI4 is involved in glycolysis. Although subcellular localization confirmed ZmTPI4 as a cytosolic rather than a plastid isoform of TPI, the zmtpi4 mutant showed reduced leaf size and chlorophyll content. Overexpression of ZmTPI4 in Arabidopsis led to enlarged leaves and increased seed weight, suggesting a positive regulatory role of ZmTPI4 in kernel weight and starch content. We conclude that ZmTPI4 functions in maize kernel development, starch synthesis, glycolysis, and photosynthesis.
The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method, and the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model could be applied for quantifying the adulteration concentration of PKO in VCO. The prediction adulteration concentration mapping of VCO with PKO was created from a calibration model that showed the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging could thus be used to quantify the adulteration of VCO with a color-level map that provides a quick, accurate, and non-destructive detection method.
A non-Maxwellian collision kernel is employed to study the evolution of wealth distribution in a multi-agent society. The collision kernel divides agents into two different groups under certain conditions. Applying the kinetic theory of rarefied gases, we construct a two-group kinetic model for the evolution of wealth distribution. Under the continuous trading limit, the Fokker–Planck equation is derived and its steady-state solution is obtained. For the non-Maxwellian collision kernel, we find a suitable redistribution operator to match the taxation. Our results illustrate that taxation and redistribution can change the Pareto index.
The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models. These are: (i) the isospin-dependent A^(1∕3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resultant root-mean-square deviation for the 1014 nuclei with proton number Z≥8 can be significantly reduced to 0.009-0.013 fm after considering the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the neutron N=126 and 82 shell closures were also calculated and could be reproduced quite well by calculations using the EKRR method.
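The residual-learning idea behind KRR-type corrections can be sketched as follows: a kernel model is trained on the difference between "experimental" values and a base formula, and its prediction is added back as a correction. All data here are synthetic stand-ins (the real study fits 1014 measured charge radii, and EKRR additionally encodes odd-even effects in the kernel, which is not reproduced):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# stand-in dataset: proton/neutron numbers and a synthetic "experimental" radius
Z = rng.integers(8, 100, size=300).astype(float)
N = rng.integers(8, 150, size=300).astype(float)
X = np.column_stack([Z, N])

theory = 1.2 * (Z + N) ** (1.0 / 3.0)                       # A^(1/3)-type formula
residual = 0.05 * np.sin(Z / 10.0) + 0.03 * np.cos(N / 15.0)  # smooth model defect
experiment = theory + residual

# KRR learns the experiment-minus-theory residual and corrects the base model
krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-3)
krr.fit(X, experiment - theory)
corrected = theory + krr.predict(X)

rms_before = np.sqrt(np.mean((experiment - theory) ** 2))
rms_after = np.sqrt(np.mean((experiment - corrected) ** 2))
```

The drop from `rms_before` to `rms_after` mirrors the reported reduction of the root-mean-square deviation after the EKRR modification.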
Adjusting agronomic measures to alleviate the kernel position effect in maize is important for ensuring high yields. In order to clarify whether the combined application of organic fertilizer and chemical fertilizer (CAOFCF) can alleviate the kernel position effect of summer maize, field experiments were conducted during the 2019 and 2020 growing seasons, and five treatments were assessed: CF, 100% chemical fertilizer; OFCF1, 15% organic fertilizer + 85% chemical fertilizer; OFCF2, 30% organic fertilizer + 70% chemical fertilizer; OFCF3, 45% organic fertilizer + 55% chemical fertilizer; and OFCF4, 60% organic fertilizer + 40% chemical fertilizer. Compared with the CF treatment, the OFCF1 and OFCF2 treatments significantly alleviated the kernel position effect by increasing the weight ratio of inferior kernels to superior kernels and reducing the weight gap between the superior and inferior kernels. These effects were largely due to the improved filling and starch accumulation of inferior kernels. However, there were no obvious differences in the kernel position effect among plants treated with CF, OFCF3, or OFCF4 in most cases. Leaf area indexes, post-silking photosynthetic rates, and net assimilation rates were higher in plants treated with OFCF1 or OFCF2 than in those treated with CF, reflecting an enhanced photosynthetic capacity and improved post-silking dry matter accumulation (DMA) in the plants treated with OFCF1 or OFCF2. Compared with the CF treatment, the OFCF1 and OFCF2 treatments increased post-silking N uptake by 66.3 and 75.5%, respectively, which was the major factor driving post-silking photosynthetic capacity and DMA. Moreover, the increases in root DMA and zeatin riboside content observed following the OFCF1 and OFCF2 treatments resulted in reduced root senescence, which is associated with an increased post-silking N uptake. Analyses showed that post-silking N uptake, DMA, and grain yield in summer maize were negatively correlated with the kernel position effect. In conclusion, the combined application of 15-30% organic fertilizer and 70-85% chemical fertilizer alleviated the kernel position effect in summer maize by improving post-silking N uptake and DMA. These results provide new insights into how CAOFCF can be used to improve maize productivity.
Although it is recognized that the post-harvest system is most responsible for the loss of soybean quality, the real impact of this loss is still unknown. Brazilian regulation allows 15% and 30% of broken soybeans for group I and group II (quality groups), respectively. However, the industry is not informed about the loss in the quality parameters of soybeans and its impacts during long-term storage. Therefore, the objective was to evaluate the effect of the broken kernel percentage on soybeans stored for 12 months. A content of 15% broken kernels did not affect soybean quality. However, a content of 30% broken kernels significantly affected soybean quality, which was evidenced by an increase of up to 75% in moldy soybeans, 72% in acidity, 50% in leached solids, and 27% in electrical conductivity, and a decrease of up to 12% in soluble protein, 6.4% in germination, and 3.5% in thousand kernel weight after 8 months of storage. Although the legislation establishes a percentage limit, it is recommended to store soybeans with up to 15% broken kernels. Values higher than that can cause a significant reduction in soybean quality, resulting in commercial losses.
The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. Specifically, IoT systems are vulnerable to cyberattacks due to their openness. The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used first to extract the features highly associated with the IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to precisely forecast attacks based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets. The obtained results are then compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Metal trace elements (MTE) are among the most harmful micropollutants of natural waters. Eliminating them helps improve the quality and safety of drinking water and protect human health. In this work, we used mango kernel powder (MKP) as a bioadsorbent material for the removal of Cr(VI) from water. UV-visible spectroscopy was used to monitor and quantify Cr(VI) during processing using the Beer-Lambert law. Parameters such as pH, MKP mass, and contact time were optimized to determine the adsorption capacity and chromium removal rate. Adsorption kinetics, equilibrium, isotherms, and thermodynamic parameters such as ΔG˚, ΔH˚, and ΔS˚, as well as FTIR, were studied to better understand the Cr(VI) removal process by MKP. The adsorption capacity reached 94.87 mg/g for an optimal contact time of 30 min at 298 K. The obtained results are consistent with pseudo-second-order kinetics and the Freundlich adsorption isotherm model. Finally, FTIR was used to monitor the evolution of absorption bands, while Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) were used to evaluate the surface properties and morphology of the adsorbent.
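The pseudo-second-order kinetic analysis can be sketched with a linearized fit. The time series below is generated from the model itself using the reported capacity qe = 94.87 mg/g; the rate constant k2 is an assumed value for illustration, not one from the study:

```python
import numpy as np

# hypothetical kinetic series (t in minutes, q in mg/g) generated from the
# pseudo-second-order model q(t) = k2*qe^2*t / (1 + k2*qe*t)
t = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
qe_true, k2_true = 94.87, 0.01
q = (k2_true * qe_true ** 2 * t) / (1.0 + k2_true * qe_true * t)

# linearized form: t/q = 1/(k2*qe^2) + t/qe, so a straight-line fit of
# t/q against t recovers both qe (from the slope) and k2 (from the intercept)
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k2_fit = slope ** 2 / intercept
```

In practice one fits measured uptake data the same way and checks the linearity of t/q versus t to justify the pseudo-second-order description.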
We provide a kernel-regularized method to give theoretical solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the viewpoint of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm has better uniform convergence as the number of samples grows. The research can be regarded as an application of kernel-regularized semi-supervised learning.
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI), sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of time delays and the maximum time delay, while the guaranteed cost upper bound is only dependent on the maximum time delay but independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
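A cost function of the kind described above, penalizing neighboring state errors and control effort, typically takes the following form (a sketch only; the paper's exact weighting matrices and delay terms are not shown here, and the symbols Q, R, w_ij are assumptions):

```latex
J_C = \int_{0}^{\infty} \Big[ \sum_{i=1}^{N} \sum_{j \in N_i} w_{ij}\,
      \big(x_i(t)-x_j(t)\big)^{\mathsf T} Q \,\big(x_i(t)-x_j(t)\big)
      + \sum_{i=1}^{N} u_i(t)^{\mathsf T} R\, u_i(t) \Big]\, dt ,
\qquad Q \succeq 0,\; R \succ 0 .
```

Guaranteed cost consensus then means that consensus is achieved while J_C stays below some computable bound J*, which is the "guaranteed cost upper bound" referred to in the abstract.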
Latent factor (LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to achieve a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks in the context of LF analysis to make the resulting model represent an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to LF analysis of HiDS matrices, which is especially desired for industrial applications demanding highly efficient models.
The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data differences in sparse and noisy dimensions occupy a large proportion of the similarity, making the measured dissimilarities between any two results nearly indistinguishable. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
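A toy sketch of the interval idea is given below; the interval count, the adjacency rule, and the normalization are illustrative assumptions rather than the paper's exact definition:

```python
import numpy as np

def lattice_similarity(x, y, lo, hi, n_bins=10):
    # similarity in [0, 1]: only components that fall in the same or an
    # adjacent interval of a per-dimension grid contribute, so far-off
    # (sparse or noisy) dimensions are excluded from the measure
    width = (hi - lo) / n_bins
    bx = np.clip(((x - lo) / width).astype(int), 0, n_bins - 1)
    by = np.clip(((y - lo) / width).astype(int), 0, n_bins - 1)
    close = np.abs(bx - by) <= 1
    if not close.any():
        return 0.0
    d = np.abs(x - y)[close] / (hi - lo)[close]   # normalized per-dimension gap
    return float(np.mean(1.0 - d))

lo = np.zeros(5)
hi = np.ones(5)
a = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
b = np.array([0.12, 0.22, 0.95, 0.41, 0.52])   # one far-off, noisy dimension
sim = lattice_similarity(a, b, lo, hi)
```

The third dimension of `b` lands far from `a`'s interval and is simply dropped, so the noisy coordinate no longer dominates the similarity, which is the behavior the abstract describes.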
Parallel multi-thread processing in advanced intelligent processors is the core to realizing high-speed and high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from the microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve the digit classification of different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization feasibility with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
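The MZI building block mentioned above can be checked numerically. The sketch below uses a common two-coupler MZI parameterization (assumed here for illustration, not taken from the paper) and verifies that its transfer matrix is unitary, i.e. power-conserving:

```python
import numpy as np

def mzi(theta, phi):
    # 2x2 MZI transfer matrix: an external phase shifter (phi) followed by
    # two 50:50 couplers enclosing an internal phase shifter (theta)
    bs = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)   # 50:50 coupler
    inner = np.diag([np.exp(1.0j * theta), 1.0])
    outer = np.diag([np.exp(1.0j * phi), 1.0])
    return bs @ inner @ bs @ outer

T = mzi(0.7, 0.3)
# unitarity check: T^H T should equal the identity, so total optical power
# entering the two ports equals the power leaving them
err = np.max(np.abs(T.conj().T @ T - np.eye(2)))
```

Meshes of such unitary blocks are what let the MZI network implement the linear layer of the ONN without optical loss in the ideal case.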
Funding: supported by the National Natural Science Foundation of China (U1808205) and the Hebei Natural Science Foundation (F2000501005).
Funding: supported by the Innovation Fund Project of the Gansu Education Department (Grant No. 2021B-099).
文摘The objective of reliability-based design optimization(RBDO)is to minimize the optimization objective while satisfying the corresponding reliability requirements.However,the nested loop characteristic reduces the efficiency of RBDO algorithm,which hinders their application to high-dimensional engineering problems.To address these issues,this paper proposes an efficient decoupled RBDO method combining high dimensional model representation(HDMR)and the weight-point estimation method(WPEM).First,we decouple the RBDO model using HDMR and WPEM.Second,Lagrange interpolation is used to approximate a univariate function.Finally,based on the results of the first two steps,the original nested loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations.Two numerical examples of a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables are analyzed to illustrate the performance and practicability of the proposed method.
Abstract: The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently influenced by high dimensions and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
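The hard-thresholding idea can be sketched in a few lines: zero out small off-diagonal entries of the sample covariance, which removes the spurious noise-driven entries when the true matrix is sparse. This is a minimal sketch of plain hard thresholding on complete data; the paper's estimator additionally corrects for missingness and noise:

```python
import numpy as np

def hard_threshold_cov(X, tau):
    """Sample covariance with off-diagonal entries below tau set to zero."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))   # keep the variances untouched
    return T

rng = np.random.default_rng(1)
# true covariance is the identity (maximally sparse); small-sample noise
# creates spurious small off-diagonal entries that thresholding removes
X = rng.normal(size=(200, 5))
S_hat = hard_threshold_cov(X, tau=0.3)
off_diag = S_hat - np.diag(np.diag(S_hat))
```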
Funding: National Natural Science Foundation of China, Grant/Award Number: 61972261; Basic Research Foundations of Shenzhen, Grant/Award Numbers: JCYJ20210324093609026, JCYJ20200813091134001.
Abstract: In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with High-Dimensional Imbalanced Classification (HDIC) problems based on data processed using the Multi-Dimensional Scaling (MDS) feature extraction technique. First, the dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, a novel OPCE algorithm is applied to classify imbalanced samples by placing optimised observation points in a low-dimensional data space. Third, optimisation of the observation point mappings is carried out to obtain a reliable assessment of the unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC data sets. Experimental results show that (1) the OPCE algorithm can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) the OPCE algorithm can correctly identify samples as the number of optimised observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performance on the selected data sets in comparison with eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm for dealing with HDIC problems.
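The distance-preserving dimensionality reduction in the first step is classical MDS, which can be sketched via double centering and an eigendecomposition. This is the standard textbook algorithm, shown here on a toy configuration, not the paper's full pipeline:

```python
import numpy as np

def classical_mds(D, k):
    """Classical MDS: embed n points in k dimensions from an n x n
    Euclidean distance matrix, preserving pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]             # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# points that already lie in a plane: a 2-D embedding is (near-)exact
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
Y = classical_mds(D, k=2)
D_embedded = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```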
Funding: Supported in part by the National Natural Science Foundation of China (62172065, 62072060).
Abstract: As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a length-fixed individual encoding. This is ineffective for high-dimensional data, because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to explore promising search spaces adaptively. Moreover, a dominance-based local search method is employed for further improvement. Experimental results based on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
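The Pareto dominance relation that drives both the sorting and the length-change operator can be sketched directly; the helper names and toy objective pairs below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b, with both objectives
    minimized (classification error, number of selected features)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (error rate, number of features) for four candidate feature subsets
cands = [(0.10, 30), (0.12, 10), (0.10, 40), (0.20, 50)]
front = pareto_front(cands)
```

Here (0.10, 40) is dominated by (0.10, 30) (same error, fewer features), and (0.20, 50) is dominated outright, so only two trade-off solutions survive.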
Abstract: k-means is a popular clustering algorithm because of its simplicity and scalability to handle large datasets. However, one of its setbacks is the challenge of identifying the correct k-hyperparameter value. Tuning this value correctly is critical for building effective k-means models. The use of the traditional elbow method to help identify this value has a long-standing literature. However, when using this method with certain datasets, smooth curves may appear, making it challenging to identify the k-value. On the other hand, various internal validation indexes, which are proposed as a solution to this issue, may be inconsistent. Although various techniques for solving smooth elbow challenges exist, k-hyperparameter tuning in high-dimensional spaces still remains intractable and an open research issue. In this paper, we first review the existing techniques for solving smooth elbow challenges. The identified research gaps are then utilized in the development of a new technique. The new technique, referred to as the ensemble-based technique of a self-adapting autoencoder and internal validation indexes, is then validated in high-dimensional space clustering. The optimal k-value, tuned by this technique using a voting scheme, is a trade-off between the number of clusters visualized in the autoencoder's latent space, the k-value from the ensemble internal validation index score, and the one that drives the derivative of the curvature of the inertia curve f(k), [f'''(k)(1 + f'(k)^2) − 3f'(k)f''(k)^2] / (1 + f'(k)^2)^(5/2), to zero or close to zero at the elbow. Experimental results based on Cochran's Q test, ANOVA, and McNemar's score indicate a relatively good performance of the newly developed technique in k-hyperparameter tuning.
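The curvature-based elbow criterion can be sketched numerically: estimate the derivatives of the inertia curve with finite differences and pick the k where the curvature peaks (i.e. where its derivative crosses zero). The synthetic inertia curve and function name below are illustrative assumptions, not the paper's voting ensemble:

```python
import numpy as np

def elbow_by_curvature(ks, inertia):
    """Pick k where the curvature of the inertia curve f(k) is maximal;
    the curvature's derivative crosses zero at this maximum."""
    f1 = np.gradient(inertia, ks)
    f2 = np.gradient(f1, ks)
    kappa = np.abs(f2) / (1.0 + f1 ** 2) ** 1.5   # curvature of f(k)
    return ks[int(np.argmax(kappa))]

# synthetic inertia curve with a clear kink at k = 4
ks = np.arange(1, 11)
inertia = np.where(ks <= 4, 100 - 20 * (ks - 1), 40 - 2 * (ks - 4))
best_k = elbow_by_curvature(ks, inertia.astype(float))
```

On a coarse integer grid the detected maximum lands at or immediately after the kink; a denser grid or smoothed curve sharpens the estimate.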
Funding: Supported by the Major Public Welfare Projects of Henan Province (201300111100 to Yuling Li), Zhongyuan Scholars in Henan Province (22400510003 to Yuling Li), the Tackle Program of Agricultural Seed in Henan Province (2022010201 to Yuling Li), the Technical System of the Maize Industry in Henan Province (HARS-2202-S to Yuling Li), and the State Key Laboratory of Wheat and Maize Crop Science (SKL2023ZZ05).
Abstract: Triosephosphate isomerase (TPI) is an enzyme that functions in plant energy production, accumulation, and conversion. To understand its function in maize, we characterized a maize TPI mutant, zmtpi4. In comparison to the wild type, zmtpi4 mutants showed altered ear development, reduced kernel weight and starch content, modified starch granule morphology, and altered amylose and amylopectin content. Protein, ATP, and pyruvate contents were reduced, indicating that ZmTPI4 is involved in glycolysis. Although subcellular localization confirmed ZmTPI4 as a cytosolic rather than a plastid isoform of TPI, the zmtpi4 mutant showed reduced leaf size and chlorophyll content. Overexpression of ZmTPI4 in Arabidopsis led to enlarged leaves and increased seed weight, suggesting a positive regulatory role of ZmTPI4 in kernel weight and starch content. We conclude that ZmTPI4 functions in maize kernel development, starch synthesis, glycolysis, and photosynthesis.
Funding: Supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (PHD/0225/2561) and the Faculty of Engineering, Kamphaeng Saen Campus, Kasetsart University, Thailand.
Abstract: The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region of 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method; the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model can be applied to quantify the adulteration concentration of PKO in VCO. The predicted adulteration concentration mapping of VCO with PKO was created from a calibration model that shows the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging can thus be used to quantify the adulteration of VCO with a color-level map that provides a quick, accurate, and non-destructive detection method.
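The three figures of merit reported above (R² of prediction, RMSEP, and RPD) are standard chemometric quantities and can be computed as follows; the toy adulteration data are invented for illustration:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """R^2, root mean square error of prediction (RMSEP), and residual
    prediction deviation (RPD = SD of reference values / RMSEP)."""
    resid = y_true - y_pred
    rmsep = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rpd = y_true.std(ddof=1) / rmsep
    return r2, rmsep, rpd

# adulteration levels (%) with near-perfect model predictions
y_true = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
y_pred = y_true + np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
r2, rmsep, rpd = prediction_metrics(y_true, y_pred)
```

An RPD above roughly 8-10 is conventionally read as an excellent quantitative model, which is consistent with the 10.37 reported in the abstract.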
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 11471263), the Natural Science Foundation of Xinjiang Uygur Autonomous Region, China (Grant No. 2021D01B09), the Initial Research Foundation of Kashi University (Grant No. 022024076), and the "Mathematics and Finance Research Centre Funding Project" of the Dazhou Social Science Federation (Grant No. SCMF202305).
Abstract: A non-Maxwellian collision kernel is employed to study the evolution of wealth distribution in a multi-agent society. The collision kernel divides agents into two different groups under certain conditions. Applying the kinetic theory of rarefied gases, we construct a two-group kinetic model for the evolution of wealth distribution. Under the continuous trading limit, the Fokker–Planck equation is derived and its steady-state solution is obtained. For the non-Maxwellian collision kernel, we find a suitable redistribution operator to match the taxation. Our results illustrate that taxation and redistribution can change the Pareto index.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 11875027, 11975096).
Abstract: The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models. These are: (i) the isospin-dependent A^(1/3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resultant root-mean-square deviation for the 1014 nuclei with proton number Z ≥ 8 can be significantly reduced to 0.009-0.013 fm after considering the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of the nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the neutron N = 126 and 82 shell closures were also calculated and could be reproduced quite well by calculations using the EKRR method.
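The KRR backbone underlying the method can be sketched in closed form: solve (K + λI)α = y on the training set and predict with the cross kernel. This is plain Gaussian-kernel ridge regression on a toy 1-D function, not the paper's EKRR with odd-even extension; the hyperparameters are illustrative:

```python
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, sigma=1.0, lam=1e-6):
    """Kernel ridge regression with a Gaussian (RBF) kernel:
    alpha = (K + lam*I)^{-1} y,  y_hat = K_test @ alpha."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf(X_test, X_train) @ alpha

# learn a smooth 1-D function from a few samples, predict between nodes
X_train = np.linspace(0, 3, 16)[:, None]
y_train = np.sin(X_train[:, 0])
y_hat = krr_fit_predict(X_train, y_train, np.array([[1.5]]))
```

In the nuclear application, the same machinery learns the *residuals* of each model's charge-radius prediction, so the correction decays smoothly toward zero away from the training nuclei.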
基金financially supported by the HAAFS Science and Technology Innovation Special Project China(2022KJCXZX-LYS-9)the Natural Science Foundation of Hebei Province China(C2021301004)the Key Research and Dvelopment Program of Hebei Province China(20326401D)。
Abstract: Adjusting agronomic measures to alleviate the kernel position effect in maize is important for ensuring high yields. To clarify whether the combined application of organic fertilizer and chemical fertilizer (CAOFCF) can alleviate the kernel position effect of summer maize, field experiments were conducted during the 2019 and 2020 growing seasons, and five treatments were assessed: CF, 100% chemical fertilizer; OFCF1, 15% organic fertilizer + 85% chemical fertilizer; OFCF2, 30% organic fertilizer + 70% chemical fertilizer; OFCF3, 45% organic fertilizer + 55% chemical fertilizer; and OFCF4, 60% organic fertilizer + 40% chemical fertilizer. Compared with the CF treatment, the OFCF1 and OFCF2 treatments significantly alleviated the kernel position effect by increasing the weight ratio of inferior kernels to superior kernels and reducing the weight gap between the superior and inferior kernels. These effects were largely due to the improved filling and starch accumulation of inferior kernels. However, there were no obvious differences in the kernel position effect among plants treated with CF, OFCF3, or OFCF4 in most cases. Leaf area indexes, post-silking photosynthetic rates, and net assimilation rates were higher in plants treated with OFCF1 or OFCF2 than in those treated with CF, reflecting an enhanced photosynthetic capacity and improved post-silking dry matter accumulation (DMA) in the plants treated with OFCF1 or OFCF2. Compared with the CF treatment, the OFCF1 and OFCF2 treatments increased post-silking N uptake by 66.3 and 75.5%, respectively, which was the major factor driving post-silking photosynthetic capacity and DMA. Moreover, the increases in root DMA and zeatin riboside content observed following the OFCF1 and OFCF2 treatments resulted in reduced root senescence, which is associated with increased post-silking N uptake. Analyses showed that post-silking N uptake, DMA, and grain yield in summer maize were negatively correlated with the kernel position effect. In conclusion, the combined application of 15-30% organic fertilizer and 70-85% chemical fertilizer alleviated the kernel position effect in summer maize by improving post-silking N uptake and DMA. These results provide new insights into how CAOFCF can be used to improve maize productivity.
Funding: Financed in part by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES), Finance Code 001; Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul (FAPERGS), Finance Codes 17/2551-0000935-5, 22/2551-0001051-2, 21/2551-0002255-8; and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Finance Codes 205518/2018-4, 312603/2018-5.
Abstract: Although it is recognized that the post-harvest system is most responsible for the loss of soybean quality, the real impact of this loss is still unknown. Brazilian regulation allows 15% and 30% of broken soybeans for group I and group II (quality groups), respectively. However, the industry is not informed about the loss in the quality parameters of soybeans and its impacts during long-term storage. Therefore, the objective was to evaluate the effect of the broken kernel percentage of soybeans stored for 12 months. A content of 15% broken kernels did not affect soybean quality. However, a content of 30% broken kernels significantly affected soybean quality, which was evidenced by the increase of up to 75% in moldy soybeans, 72% in acidity, 50% in leached solids, and 27% in electrical conductivity, and the decrease of up to 12% in soluble protein, 6.4% in germination, and 3.5% in thousand kernel weight after 8 months of storage. Although the legislation establishes a percentage limit, it is recommended to store soybeans with up to 15% broken kernels. On the contrary, values higher than that can cause a significant reduction in soybean quality, resulting in commercial losses.
Abstract: The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. IoT systems are particularly vulnerable to cyberattacks due to their openness. The proposed work implements a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used first to extract the features highly associated with IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to precisely forecast attacks based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets. The obtained results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Abstract: Metal trace elements (MTE) are among the most harmful micropollutants of natural waters. Eliminating them helps improve the quality and safety of drinking water and protect human health. In this work, we used mango kernel powder (MKP) as a bioadsorbent material for the removal of Cr(VI) from water. UV-visible spectroscopy was used to monitor and quantify Cr(VI) during processing using the Beer-Lambert law. Parameters such as pH, MKP mass, and contact time were optimized to determine the adsorption capacity and chromium removal rate. Adsorption kinetics, equilibrium, isotherms, and thermodynamic parameters such as ΔG°, ΔH°, and ΔS°, as well as FTIR, were studied to better understand the Cr(VI) removal process by MKP. The adsorption capacity reached 94.87 mg/g for an optimal contact time of 30 min at 298 K. The obtained results are consistent with pseudo-second-order kinetics and the Freundlich adsorption isotherm model. Finally, FTIR was used to monitor the evolution of absorption bands, while Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDS) were used to evaluate the surface properties and morphology of the adsorbent.
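The pseudo-second-order kinetic fit mentioned above is usually done on the linearized form t/q_t = 1/(k2·qe²) + t/qe, so a simple straight-line fit recovers the equilibrium capacity qe and rate constant k2. The synthetic uptake data below are generated from assumed parameters for illustration, not the paper's measurements:

```python
import numpy as np

def pseudo_second_order_fit(t, q_t):
    """Linearized pseudo-second-order fit: t/q = 1/(k2*qe**2) + t/qe.
    Returns the equilibrium capacity qe (mg/g) and rate constant k2."""
    slope, intercept = np.polyfit(t, t / q_t, 1)   # slope = 1/qe
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept                    # since intercept = 1/(k2*qe^2)
    return qe, k2

# synthetic uptake curve generated from qe = 95 mg/g, k2 = 0.01 g/(mg*min)
qe_true, k2_true = 95.0, 0.01
t = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0, 60.0])
q = (k2_true * qe_true ** 2 * t) / (1.0 + k2_true * qe_true * t)
qe_est, k2_est = pseudo_second_order_fit(t, q)
```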
Abstract: We provide a kernel-regularized method to approximate solutions of the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the viewpoint of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm has better uniform convergence as the number of samples increases. The research can be regarded as an application of kernel-regularized semi-supervised learning.
基金supported by Shaanxi Province Natural Science Foundation of Research Projects(2016JM6014)the Innovation Foundation of High-Tech Institute of Xi’an(2015ZZDJJ03)the Youth Foundation of HighTech Institute of Xi’an(2016QNJJ004)
Abstract: Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and the control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI), sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of time delays and the maximum time delay, while the guaranteed cost upper bound is only dependent on the maximum time delay but independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
基金supported in part by the National Natural Science Foundation of China (6177249391646114)+1 种基金Chongqing research program of technology innovation and application (cstc2017rgzn-zdyfX0020)in part by the Pioneer Hundred Talents Program of Chinese Academy of Sciences
Abstract: Latent factor (LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to achieve a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis to make the resulting model represent an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to the LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
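The randomized-learning principle, that one factor need not be trained iteratively, can be illustrated with a randomized range-finder sketch: draw a random test matrix, form a sketch of the target, and obtain the second factor in a single least-squares solve. This is a generic sketch of the idea on a small dense matrix, not the paper's RLF training rule for sparse data:

```python
import numpy as np

rng = np.random.default_rng(42)

# a low-rank matrix standing in for an industrial HiDS matrix
# (kept small and dense so the sketch is self-contained)
true_rank = 3
M = rng.normal(size=(40, true_rank)) @ rng.normal(size=(true_rank, 30))

# randomized learning: the left factor comes from a cheap random sketch
# of M, the right factor from one least-squares solve; no iterations
k = 10
Omega = rng.normal(size=(30, k))            # random test matrix
P = M @ Omega                               # captures M's column space
Q = np.linalg.lstsq(P, M, rcond=None)[0]    # one-shot solve
err = np.linalg.norm(M - P @ Q) / np.linalg.norm(M)
```

Because rank(M) ≤ k, the random sketch spans M's column space almost surely, so the single solve reconstructs M essentially exactly; the speed gain over iterative optimizers comes from replacing many epochs with one linear solve.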
基金Supported by the National Natural Science Foundation of China(No.61502475)the Importation and Development of High-Caliber Talents Project of the Beijing Municipal Institutions(No.CIT&TCD201504039)
Abstract: The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that data differences in sparse and noisy dimensionalities occupy a large proportion of the similarity, so that any two results become almost equally dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
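The interval-mapping idea can be sketched as follows: bin each dimension into a fixed grid and score only the dimensions whose two components land in the same or adjacent bins, so far-apart (noise-dominated) dimensions no longer swamp the similarity. The scoring rule inside the close dimensions is my own simple choice for illustration and is not taken from the paper:

```python
import numpy as np

def lattice_similarity(a, b, lo=0.0, hi=1.0, n_bins=10):
    """Similarity in [0, 1]: map each dimension onto an interval grid
    and use only dimensions whose components fall in the same or an
    adjacent interval; the rest are ignored as noise."""
    width = (hi - lo) / n_bins
    ia = np.clip(((a - lo) / width).astype(int), 0, n_bins - 1)
    ib = np.clip(((b - lo) / width).astype(int), 0, n_bins - 1)
    close = np.abs(ia - ib) <= 1
    if not np.any(close):
        return 0.0
    # per-dimension closeness, averaged over the retained dimensions
    return float(np.mean(1.0 - np.abs(a[close] - b[close]) / (hi - lo)))

# dimension 1 differs wildly (noise) and is excluded from the score
sim_close = lattice_similarity(np.array([0.11, 0.21, 0.91]),
                               np.array([0.14, 0.81, 0.95]))
# no dimension is close: similarity collapses to 0
sim_far = lattice_similarity(np.array([0.05, 0.05, 0.05]),
                             np.array([0.95, 0.95, 0.95]))
```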
基金Peng Xie acknowledges the support from the China Scholarship Council(Grant no.201804910829).
Abstract: Parallel multi-thread processing in advanced intelligent processors is the core of realizing high-speed and high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from the microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve the digit classification of different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization feasibility with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.