Journal Articles
18,868 articles found
1. Target Controllability of Multi-Layer Networks With High-Dimensional Nodes
Authors: Lifu Wang, Zhaofei Li, Ge Guo, Zhi Kong. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 9: 1999-2010 (12 pages).
This paper studies the target controllability of multilayer complex networked systems, in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems, and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multi-layer networks is discussed. It is found that even if there exists a layer which is not target controllable, the entire multi-layer network can still be target controllable due to the inter-layer couplings. For the multi-layer networks with general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. By the derived condition, it can be found that the system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived, which clarify the relationship between target controllability, state controllability and output controllability. For the multi-layer networks where the inter-layer couplings are directed chains and directed stars, sufficient conditions for target controllability of networked systems are given, respectively. These conditions are easier to verify than the classic criterion.
Keywords: high-dimensional nodes; inter-layer couplings; multi-layer networks; target controllability
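The classical rank test that target controllability is related to in this abstract can be sketched numerically: check output controllability of a chosen target node via the rank of C[B, AB, ..., A^(n-1)B] for a toy single-layer LTI network. This is only the standard output-controllability criterion, not the paper's multi-layer necessary-and-sufficient condition; the chain graph and target choice below are illustrative.

```python
import numpy as np

def output_controllability_matrix(A, B, C):
    """Stack C[B, AB, ..., A^(n-1)B]; full row rank of this matrix is the
    classical output-controllability test for the nodes selected by C."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.hstack(blocks)

# Toy directed chain x1 -> x2 -> x3 with the input injected at node 1
# and node 3 chosen as the target (output) node.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])

Q = output_controllability_matrix(A, B, C)
print(np.linalg.matrix_rank(Q) == C.shape[0])   # True: the target node can be steered
```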
2. Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition
Authors: Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2: 1957-1975 (19 pages).
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and MBEO is appropriate for high-dimensional English SER.
Keywords: speech emotion recognition; filter-wrapper; high-dimensional feature selection; equilibrium optimizer; multi-objective
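The filter stage described above (ranking features by information gain and Fisher score before the wrapper search) can be sketched as follows; the rank-sum aggregation and the synthetic data are illustrative choices, not the paper's exact scoring scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    """Per-feature Fisher score: weighted between-class variance of the class
    means divided by the weighted within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

X, y = make_classification(n_samples=300, n_features=200, n_informative=15, random_state=0)
ig = mutual_info_classif(X, y, random_state=0)          # information-gain-style filter
fs = fisher_score(X, y)

rank_ig = np.argsort(np.argsort(-ig))                   # 0 = best under information gain
rank_fs = np.argsort(np.argsort(-fs))                   # 0 = best under Fisher score
top10 = np.argsort(rank_ig + rank_fs)[:10]              # simple rank-sum aggregation
print("top 10 candidate features:", top10)
```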
3. An Efficient Reliability-Based Optimization Method Utilizing High-Dimensional Model Representation and Weight-Point Estimation Method
Authors: Xiaoyi Wang, Xinyue Chang, Wenxuan Wang, Zijie Qiao, Feng Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5: 1775-1796 (22 pages).
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples of a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables are analyzed to illustrate the performance and practicability of the proposed method.
Keywords: reliability-based design optimization; high-dimensional model decomposition; point estimation method; Lagrange interpolation; aviation hydraulic piping system
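The Lagrange-interpolation step mentioned in the abstract, approximating a univariate component function, can be sketched as follows; the function g and the node placement are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.interpolate import lagrange

# Hypothetical univariate component g(x); in an HDMR expansion each such
# one-dimensional term would be interpolated separately.
def g(x):
    return np.exp(-0.5 * x) * np.sin(2.0 * x)

nodes = np.linspace(-1.0, 1.0, 5)        # five interpolation points
poly = lagrange(nodes, g(nodes))         # degree-4 Lagrange polynomial

x = np.linspace(-1.0, 1.0, 201)
print(f"max interpolation error on [-1, 1]: {np.max(np.abs(poly(x) - g(x))):.2e}")
```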
4. Optimal Estimation of High-Dimensional Covariance Matrices with Missing and Noisy Data
Authors: Meiyin Wang, Wanzhou Ye. Advances in Pure Mathematics, 2024, No. 4: 214-227 (14 pages).
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently influenced by high dimensions and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The result shows that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
Keywords: high-dimensional covariance matrix; missing data; sub-Gaussian noise; optimal estimation
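A minimal sketch of the hard-thresholding idea on a sparse covariance matrix is shown below; the threshold constant, the noise level, and the omission of the missing-data correction are simplifying assumptions, so this is not the paper's estimator.

```python
import numpy as np

def hard_threshold_covariance(X, tau):
    """Sample covariance with off-diagonal entries below tau in magnitude set to zero."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))              # keep the diagonal untouched
    return T

rng = np.random.default_rng(0)
p, n = 50, 200
Sigma = np.eye(p) + 0.4 * np.eye(p, k=1) + 0.4 * np.eye(p, k=-1)   # sparse, banded truth
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X += 0.1 * rng.standard_normal(X.shape)          # additive (sub-Gaussian) observation noise

tau = 2.0 * np.sqrt(np.log(p) / n)               # threshold of the usual sqrt(log p / n) order
S_hat = hard_threshold_covariance(X, tau)
off_diag_kept = (np.count_nonzero(S_hat) - p) / (p * (p - 1))
print(f"fraction of off-diagonal entries kept: {off_diag_kept:.3f}")
```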
5. Observation points classifier ensemble for high-dimensional imbalanced classification (Cited: 1)
Authors: Yulin He, Xu Li, Philippe Fournier-Viger, Joshua Zhexue Huang, Mianjie Li, Salman Salloum. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 2: 500-517 (18 pages).
In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with High-Dimensional Imbalanced Classification (HDIC) problems based on data processed using the Multi-Dimensional Scaling (MDS) feature extraction technique. First, dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, a novel OPCE algorithm is applied to classify imbalanced samples by placing optimised observation points in a low-dimensional data space. Third, optimization of the observation point mappings is carried out to obtain a reliable assessment of the unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC data sets. Experimental results show that (1) the OPCE algorithm can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) the OPCE algorithm can correctly identify samples as the number of optimised observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performances on the selected data sets in comparison with eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm to deal with HDIC problems.
Keywords: classifier ensemble; feature transformation; high-dimensional data classification; imbalanced learning; observation point mechanism
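The distance-preserving MDS step that the abstract builds on can be sketched like this; the toy imbalanced data and the 3-dimensional embedding size are assumptions, and the observation-point ensemble itself is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist

# Imbalanced, high-dimensional toy data (class 1 is the rare class).
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)

# Metric MDS embedding that tries to preserve pairwise distances; the classifier
# ensemble would then be built in this low-dimensional space.
X_low = MDS(n_components=3, random_state=0).fit_transform(X)

# Rough check of distance preservation: correlation between pairwise distances.
corr = np.corrcoef(pdist(X), pdist(X_low))[0, 1]
print(f"pairwise-distance correlation after MDS: {corr:.3f}")
```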
6. A Length-Adaptive Non-Dominated Sorting Genetic Algorithm for Bi-Objective High-Dimensional Feature Selection
Authors: Yanlu Gong, Junhai Zhou, Quanwang Wu, MengChu Zhou, Junhao Wen. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 9: 1834-1844 (11 pages).
As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a length-fixed individual encoding. It is ineffective for high-dimensional data, because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to explore promising search space adaptively. Moreover, a dominance-based local search method is employed for further improvement. The experimental results based on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
Keywords: bi-objective optimization; feature selection (FS); genetic algorithm; high-dimensional data; length-adaptive
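The bi-objective view of feature selection described above hinges on Pareto dominance; a minimal sketch of extracting the non-dominated front from (error, subset size) pairs, with made-up objective values, is given below.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when both objectives are minimized
    (here: classification error and number of selected features)."""
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# (error, n_features) for five hypothetical feature subsets
objs = np.array([[0.08, 40], [0.10, 12], [0.09, 25], [0.12, 8], [0.08, 55]])
print("non-dominated subsets:", pareto_front(objs))    # the last subset is dominated
```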
7. K-Hyperparameter Tuning in High-Dimensional Space Clustering: Solving Smooth Elbow Challenges Using an Ensemble Based Technique of a Self-Adapting Autoencoder and Internal Validation Indexes
Authors: Rufus Gikera, Jonathan Mwaura, Elizaphan Muuro, Shadrack Mambo. Journal on Artificial Intelligence, 2023, No. 1: 75-112 (38 pages).
k-means is a popular clustering algorithm because of its simplicity and scalability to handle large datasets. However, one of its setbacks is the challenge of identifying the correct k-hyperparameter value. Tuning this value correctly is critical for building effective k-means models. The use of the traditional elbow method to help identify this value has a long-standing literature. However, when using this method with certain datasets, smooth curves may appear, making it challenging to identify the k-value due to its unclear nature. On the other hand, various internal validation indexes, which are proposed as a solution to this issue, may be inconsistent. Although various techniques for solving smooth elbow challenges exist, k-hyperparameter tuning in high-dimensional spaces still remains intractable and an open research issue. In this paper, we have first reviewed the existing techniques for solving smooth elbow challenges. The identified research gaps are then utilized in the development of the new technique. The new technique, referred to as the ensemble-based technique of a self-adapting autoencoder and internal validation indexes, is then validated in high-dimensional space clustering. The optimal k-value, tuned by this technique using a voting scheme, is a trade-off between the number of clusters visualized in the autoencoder's latent space, the k-value from the ensemble internal validation index score, and one that generates a value of 0 or close to 0 on the derivative of the elbow-curve curvature, [f‴(k)(1 + f′(k)^2) − 3f″(k)^2 f′(k)] / (1 + f′(k)^2)^(5/2), at the elbow. Experimental results based on Cochran's Q test, ANOVA, and McNemar's score indicate a relatively good performance of the newly developed technique in k-hyperparameter tuning.
Keywords: k-hyperparameter tuning; high-dimensional; smooth elbow
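The curvature criterion quoted in the abstract can be sketched numerically: compute the inertia curve f(k), estimate its derivatives by finite differences, and look for the k where the curvature peaks (equivalently, where the curvature derivative crosses zero). The blob data, the normalisation of f, and the use of np.gradient are illustrative choices under these assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)
ks = np.arange(1, 11)
inertia = np.array([KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                    for k in ks])

# Normalise so the curvature is not dominated by the raw scale of the inertia.
f = (inertia - inertia.min()) / (inertia.max() - inertia.min())
f1 = np.gradient(f, ks)                       # f'(k)
f2 = np.gradient(f1, ks)                      # f''(k)
f3 = np.gradient(f2, ks)                      # f'''(k)

curvature = f2 / (1 + f1**2) ** 1.5
curvature_deriv = (f3 * (1 + f1**2) - 3 * f2**2 * f1) / (1 + f1**2) ** 2.5
best = np.argmax(np.abs(curvature))
print("curvature-peak elbow estimate: k =", ks[best])
print("|curvature derivative| at that k:", abs(curvature_deriv[best]))
```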
8. The cytosolic isoform of triosephosphate isomerase, ZmTPI4, is required for kernel development and starch synthesis in maize (Zea mays L.)
Authors: Wenyu Li, Han Wang, Qiuyue Xu, Long Zhang, Yan Wang, Yongbiao Yu, Xiangkun Guo, Zhiwei Zhang, Yongbin Dong, Yuling Li. The Crop Journal (SCIE, CSCD), 2024, No. 2: 401-410 (10 pages).
Triosephosphate isomerase (TPI) is an enzyme that functions in plant energy production, accumulation, and conversion. To understand its function in maize, we characterized a maize TPI mutant, zmtpi4. In comparison to the wild type, zmtpi4 mutants showed altered ear development, reduced kernel weight and starch content, modified starch granule morphology, and altered amylose and amylopectin content. Protein, ATP, and pyruvate contents were reduced, indicating ZmTPI4 was involved in glycolysis. Although subcellular localization confirmed ZmTPI4 as a cytosolic rather than a plastid isoform of TPI, the zmtpi4 mutant showed reduced leaf size and chlorophyll content. Overexpression of ZmTPI4 in Arabidopsis led to enlarged leaves and increased seed weight, suggesting a positive regulatory role of ZmTPI4 in kernel weight and starch content. We conclude that ZmTPI4 functions in maize kernel development, starch synthesis, glycolysis, and photosynthesis.
Keywords: maize; kernel; starch; weight; photosynthesis
9. Quantification of the adulteration concentration of palm kernel oil in virgin coconut oil using near-infrared hyperspectral imaging
Authors: Phiraiwan Jermwongruttanachai, Siwalak Pathaveerat, Sirinad Noypitak. Journal of Integrative Agriculture (SCIE, CSCD), 2024, No. 1: 298-309 (12 pages).
The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method, and the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model could be applied for quantifying the adulteration concentration of PKO in VCO. The prediction adulteration concentration mapping of VCO with PKO was created from a calibration model that showed the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging could be clearly used to quantify the adulteration of VCO with a color level map that provides a quick, accurate, and non-destructive detection method.
Keywords: virgin coconut oil; adulteration; contamination; palm kernel oil; hyperspectral imaging
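The modelling pipeline the abstract reports (SNV pre-processing followed by partial least squares regression, evaluated with R², RMSEP and RPD) can be sketched as below; the synthetic two-component spectra, the 8 latent variables, and the noise level are stand-ins for the real hyperspectral data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Synthetic stand-in for NIR spectra (rows: samples, columns: wavelengths in 900-1650 nm)
# mixed according to the adulteration level in percent.
rng = np.random.default_rng(0)
wl = np.linspace(900, 1650, 200)
conc = rng.uniform(0, 100, size=150)
band_vco = np.exp(-((wl - 1200) / 120) ** 2)
band_pko = np.exp(-((wl - 1400) / 90) ** 2)
spectra = (np.outer(100 - conc, band_vco) + np.outer(conc, band_pko)) / 100
spectra += 0.02 * rng.standard_normal(spectra.shape)

X_tr, X_te, y_tr, y_te = train_test_split(snv(spectra), conc, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(mean_squared_error(y_te, y_hat))
rpd = y_te.std() / rmsep                      # residual prediction deviation
print(f"R2 = {r2_score(y_te, y_hat):.3f}, RMSEP = {rmsep:.2f}%, RPD = {rpd:.1f}")
```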
10. A wealth distribution model with a non-Maxwellian collision kernel
Authors: 孟俊, 周霞, 赖绍永. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 7: 224-231 (8 pages).
A non-Maxwellian collision kernel is employed to study the evolution of wealth distribution in a multi-agent society. The collision kernel divides agents into two different groups under certain conditions. Applying the kinetic theory of rarefied gases, we construct a two-group kinetic model for the evolution of wealth distribution. Under the continuous trading limit, the Fokker–Planck equation is derived and its steady-state solution is obtained. For the non-Maxwellian collision kernel, we find a suitable redistribution operator to match the taxation. Our results illustrate that taxation and redistribution have the property to change the Pareto index.
Keywords: kinetic theory; non-Maxwellian collision kernel; wealth distribution; Pareto index
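As a rough numerical companion to kinetic wealth-exchange models of this kind, the sketch below runs a Monte Carlo of pairwise trades with a uniform saving propensity; the trade rule is a generic one, not the paper's non-Maxwellian kernel or its two-group structure, and the heavy upper tail it produces only illustrates how binary exchanges shape the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trades, saving = 1000, 200_000, 0.3
w = np.ones(N)                                   # everyone starts with unit wealth

for _ in range(trades):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    pool = (1 - saving) * (w[i] + w[j])          # tradable part of the pooled wealth
    share = rng.random()
    w[i] = saving * w[i] + share * pool
    w[j] = saving * w[j] + (1 - share) * pool    # total wealth is conserved

top_1pct_share = np.sort(w)[-N // 100:].sum() / w.sum()
print(f"mean wealth: {w.mean():.2f}, share held by top 1%: {top_1pct_share:.2%}")
```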
11. Nuclear charge radius predictions by kernel ridge regression with odd-even effects
Authors: Lu Tang, Zhen-Hua Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 2: 94-102 (9 pages).
The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models. These are: (i) the isospin-dependent A^(1/3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resultant root-mean-square deviation for the 1014 nuclei with proton number Z≥8 can be significantly reduced to 0.009-0.013 fm after considering the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the neutron N=126 and 82 shell closures were also calculated and could be reproduced quite well by calculations using the EKRR method.
Keywords: nuclear charge radius; machine learning; kernel ridge regression method
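The residual-learning idea behind KRR-type corrections, training a kernel ridge regressor on the difference between a baseline model and the data while feeding the odd-even information in as extra inputs, can be sketched as follows; the synthetic "measured" radii, the r0·A^(1/3) baseline, and the kernel hyperparameters are all assumptions, not the EKRR of the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
Z = rng.integers(8, 100, size=400)
N = (Z * rng.uniform(1.0, 1.6, size=400)).astype(int)
A = Z + N

r_baseline = 1.2 * A ** (1 / 3)                          # simple A^(1/3) baseline (fm)
# Synthetic "measurements": baseline plus an isospin correction and an odd-even term.
r_exp = r_baseline + 0.05 * (N - Z) / A + 0.01 * (N % 2) + 0.02 * rng.standard_normal(400)

X = np.column_stack([Z, N, N % 2, Z % 2])                # odd-even effects as extra features
krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
krr.fit(X, r_exp - r_baseline)                           # learn only the residual

rms = np.sqrt(np.mean((r_baseline + krr.predict(X) - r_exp) ** 2))
print(f"RMS deviation after the KRR correction: {rms:.4f} fm")
```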
12. Combined application of organic fertilizer and chemical fertilizer alleviates the kernel position effect in summer maize by promoting post-silking nitrogen uptake and dry matter accumulation
Authors: Lichao Zhai, Lihua Zhang, Yongzeng Cui, Lifang Zhai, Mengjing Zheng, Yanrong Yao, Jingting Zhang, Wanbin Hou, Liyong Wu, Xiuling Jia. Journal of Integrative Agriculture (SCIE, CAS, CSCD), 2024, No. 4: 1179-1194 (16 pages).
Adjusting agronomic measures to alleviate the kernel position effect in maize is important for ensuring high yields. In order to clarify whether the combined application of organic fertilizer and chemical fertilizer (CAOFCF) can alleviate the kernel position effect of summer maize, field experiments were conducted during the 2019 and 2020 growing seasons, and five treatments were assessed: CF, 100% chemical fertilizer; OFCF1, 15% organic fertilizer + 85% chemical fertilizer; OFCF2, 30% organic fertilizer + 70% chemical fertilizer; OFCF3, 45% organic fertilizer + 55% chemical fertilizer; and OFCF4, 60% organic fertilizer + 40% chemical fertilizer. Compared with the CF treatment, the OFCF1 and OFCF2 treatments significantly alleviated the kernel position effect by increasing the weight ratio of inferior kernels to superior kernels and reducing the weight gap between the superior and inferior kernels. These effects were largely due to the improved filling and starch accumulation of inferior kernels. However, there were no obvious differences in the kernel position effect among plants treated with CF, OFCF3, or OFCF4 in most cases. Leaf area indexes, post-silking photosynthetic rates, and net assimilation rates were higher in plants treated with OFCF1 or OFCF2 than in those treated with CF, reflecting an enhanced photosynthetic capacity and improved post-silking dry matter accumulation (DMA) in the plants treated with OFCF1 or OFCF2. Compared with the CF treatment, the OFCF1 and OFCF2 treatments increased post-silking N uptake by 66.3 and 75.5%, respectively, which was the major factor driving post-silking photosynthetic capacity and DMA. Moreover, the increases in root DMA and zeatin riboside content observed following the OFCF1 and OFCF2 treatments resulted in reduced root senescence, which is associated with an increased post-silking N uptake. Analyses showed that post-silking N uptake, DMA, and grain yield in summer maize were negatively correlated with the kernel position effect. In conclusion, the combined application of 15-30% organic fertilizer and 70-85% chemical fertilizer alleviated the kernel position effect in summer maize by improving post-silking N uptake and DMA. These results provide new insights into how CAOFCF can be used to improve maize productivity.
Keywords: chemical fertilizer; dry matter accumulation; kernel position effect; N uptake; organic fertilizer
13. Influence of broken kernels content on soybean quality during storage
Authors: Lazaro da Costa Correa Canizares, Cesar Augusto Gaioso, Newiton da Silva Timm, Silvia Leticia Rivero Meza, Adriano Hirsch Ramos, Maurício de Oliveira, Everton Lutz, Moacir Cardoso Elias. Grain & Oil Science and Technology (CAS), 2024, No. 2: 105-112 (8 pages).
Although it is recognized that the post-harvest system is most responsible for the loss of soybean quality, the real impact of this loss is still unknown. Brazilian regulation allows 15% and 30% of broken soybean for group I and group II (quality groups), respectively. However, the industry is not informed about the loss in the quality parameters of soybeans and its impacts during long-term storage. Therefore, the objective was to evaluate the effect of the breakage kernel percentage of soybean stored for 12 months. A content of 15% of breakage kernels did not affect soybean quality. However, a content of 30% of breakage kernels significantly affected soybean quality, which was evidenced by the increase of up to 75% in moldy soybeans, 72% in acidity, 50% in leached solids, 27% in electrical conductivity, and the decrease of up to 12% in soluble protein, 6.4% in germination and 3.5% in thousand kernel weight after 8 months of storage. Although the legislation establishes a percentage limit, it is recommended to store soybeans with up to 15% breakage kernels. On the contrary, values higher than that can cause a significant reduction in soybean quality, resulting in commercial losses.
Keywords: soybean quality; breakage kernels; storage problems; grain defects; quality parameters
14. CL2ES-KDBC: A Novel Covariance Embedded Selection Based on Kernel Distributed Bayes Classifier for Detection of Cyber-Attacks in IoT Systems
Authors: Talal Albalawi, P. Ganeshkumar. Computers, Materials & Continua (SCIE, EI), 2024, No. 3: 3511-3528 (18 pages).
The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. Specifically, IoT systems are vulnerable to cyberattacks due to their openness. The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used at first to extract the features highly associated with the IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to forecast attacks based on the probability distribution value precisely. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight value for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets. The obtained results are then compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. Computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Keywords: IoT security; attack detection; covariance linear learning embedding selection; kernel distributed Bayes classifier; Mongolian Gazellas Optimization
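As a rough illustration of a kernel-based Bayes classifier of the kind named above, the sketch below fits one kernel density estimate per class and classifies by the maximum posterior; the synthetic data, the fixed bandwidth, and the omission of the CL2ES feature selection and MGO weight optimisation are all simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KernelDensity

# Stand-in for an intrusion dataset: two classes (benign / attack), ten features.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classes = np.unique(y_tr)
kdes = {c: KernelDensity(bandwidth=0.8).fit(X_tr[y_tr == c]) for c in classes}
priors = {c: np.mean(y_tr == c) for c in classes}

# Bayes rule in log space: class-conditional log-density plus log-prior.
log_post = np.column_stack([kdes[c].score_samples(X_te) + np.log(priors[c]) for c in classes])
y_pred = classes[np.argmax(log_post, axis=1)]
print("hold-out accuracy:", round(float(np.mean(y_pred == y_te)), 3))
```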
15. Hexavalent Chromium Cr (VI) Removal from Water by Mango Kernel Powder
Authors: Amadou Sarr Gning, Cheikh Gaye, Antoine Blaise Kama, Pape Abdoulaye Diaw, Diène Diégane Thiare, Modou Fall. Journal of Materials Science and Chemical Engineering, 2024, No. 1: 84-103 (20 pages).
Metal trace elements (MTE) are among the most harmful micropollutants of natural waters. Eliminating them helps improve the quality and safety of drinking water and protect human health. In this work, we used mango kernel powder (MKP) as a bioadsorbent material for removal of Cr (VI) from water. UV-visible spectroscopy was used to monitor and quantify Cr (VI) during processing using the Beer-Lambert formula. Some parameters such as pH, mango powder mass, and contact time were optimized to determine the adsorption capacity and chromium removal rate. Adsorption kinetics, equilibrium, isotherms, and thermodynamic parameters such as ΔG°, ΔH°, and ΔS°, as well as FTIR, were studied to better understand the Cr (VI) removal process by MKP. The adsorption capacity reached 94.87 mg/g for an optimal contact time of 30 min at 298 K. The obtained results are in accordance with a pseudo-second-order kinetic model and the Freundlich adsorption isotherm. Finally, FTIR was used to monitor the evolution of absorption bands, while Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) were used to evaluate the surface properties and morphology of the adsorbent.
Keywords: adsorption; chromium; mango kernel powder; spectroscopy analysis; water treatment
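The two fits named in the abstract, a pseudo-second-order kinetic curve and a Freundlich isotherm, can be sketched with nonlinear least squares; the data points and initial guesses below are made up for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t): adsorbed amount versus contact time."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

def freundlich(Ce, KF, n):
    """qe = KF * Ce^(1/n): equilibrium uptake versus equilibrium concentration."""
    return KF * Ce ** (1 / n)

t = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)        # min
q_t = np.array([52, 71, 80, 86, 92, 94, 95], dtype=float)     # mg/g
(qe, k2), _ = curve_fit(pseudo_second_order, t, q_t, p0=[95, 0.01])

Ce = np.array([2, 5, 10, 20, 40], dtype=float)                # mg/L
q_e = np.array([30, 48, 62, 78, 95], dtype=float)             # mg/g
(KF, n), _ = curve_fit(freundlich, Ce, q_e, p0=[20, 2])

print(f"pseudo-second-order: qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
print(f"Freundlich: KF = {KF:.1f}, n = {n:.2f}")
```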
16. Solving Neumann Boundary Problem with Kernel-Regularized Learning Approach
Authors: Xuexue Ran, Baohuai Sheng. Journal of Applied Mathematics and Physics, 2024, No. 4: 1101-1125 (25 pages).
We provide a kernel-regularized method to give theory solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the view of semi-supervised learning, and bound the upper bounds for the learning rates. The theory analysis shows that the learning algorithm has better uniform convergence according to the number of samples. The research can be regarded as an application of kernel-regularized semi-supervised learning.
Keywords: Neumann boundary value; kernel-regularized approach; reproducing kernel Hilbert space; the unit ball; the unit sphere
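A generic instance of kernel-regularized learning in an RKHS, which this abstract specialises to spherical harmonics on the unit ball, is the regularized least-squares fit below; the Gaussian kernel on [0, 1], the target function, and the regularization parameter are illustrative assumptions.

```python
import numpy as np

def gauss_kernel(x, z, sigma=0.2):
    """Gaussian RKHS kernel evaluated between two 1-D point sets."""
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * sigma ** 2))

# Regularized least squares in an RKHS: by the representer theorem the minimiser of
# sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 is f(.) = sum_i alpha_i K(x_i, .),
# with coefficients solving (K + lam * I) alpha = y.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)   # noisy samples of the target

lam = 1e-3
alpha = np.linalg.solve(gauss_kernel(x, x) + lam * np.eye(len(x)), y)

x_test = np.linspace(0, 1, 5)
f_test = gauss_kernel(x_test, x) @ alpha
print(np.round(f_test, 2), "vs", np.round(np.sin(2 * np.pi * x_test), 2))
```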
17. Guaranteed Cost Consensus for High-dimensional Multi-agent Systems With Time-varying Delays (Cited: 8)
Authors: Zhong Wang, Ming He, Tang Zheng, Zhiliang Fan, Guangbin Liu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, No. 1: 181-189 (9 pages).
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI) method, sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of time delays and the maximum time delay, while the guaranteed cost upper bound is only dependent on the maximum time delay but independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate theoretical results.
Keywords: guaranteed cost consensus; high-dimensional multi-agent system; time-varying delay
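The cost function described in this abstract, built from state errors among neighboring agents plus the control effort of all agents, can be evaluated along a simple simulated trajectory; the first-order agent model, the ring graph, the fixed (rather than time-varying) delay, and the gain below are illustrative choices, not the paper's LMI-based design.

```python
import numpy as np

# Ring of four single-integrator agents running a delayed consensus protocol.
A_adj = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
Lap = np.diag(A_adj.sum(axis=1)) - A_adj          # graph Laplacian

dt, gain, delay_steps, T = 0.01, 0.5, 20, 4000    # 0.2 s input delay
x = [np.array([1.0, -2.0, 0.5, 3.0])] * (delay_steps + 1)
J = 0.0                                           # accumulated guaranteed-cost-style cost

for _ in range(T):
    u = -gain * (Lap @ x[-1 - delay_steps])       # protocol acts on delayed states
    edge_err = sum((x[-1][i] - x[-1][j]) ** 2
                   for i in range(4) for j in range(i + 1, 4) if A_adj[i, j])
    J += dt * (edge_err + u @ u)                  # state-error term + control-effort term
    x.append(x[-1] + dt * u)

print("final states:", np.round(x[-1], 3), "| accumulated cost J:", round(J, 3))
```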
18. Randomized Latent Factor Model for High-dimensional and Sparse Matrices from Industrial Applications (Cited: 13)
Authors: Mingsheng Shang, Xin Luo, Zhigang Liu, Jia Chen, Ye Yuan, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1: 131-141 (11 pages).
Latent factor (LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse (HiDS) matrices which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to achieve a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks in the context of LF analysis to make the resulting model represent an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to LF analysis of HiDS matrices, which is especially desired for industrial applications demanding highly efficient models.
Keywords: big data; high-dimensional and sparse matrix; latent factor analysis; latent factor model; randomized learning
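The randomized-learning principle referred to above, fixing one factor matrix at random and obtaining the other analytically instead of by iterative optimization, can be sketched as follows on a synthetic sparse matrix; the dimensions, the regularization, and the evaluation on observed entries only are illustrative, and the paper's additional extension for representing an HiDS matrix correctly is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank, lam = 200, 150, 10, 0.1

# Synthetic HiDS matrix: low-rank ground truth observed on ~5% of its entries.
R = rng.standard_normal((n_users, rank)) @ rng.standard_normal((rank, n_items))
mask = rng.random(R.shape) < 0.05

# Randomized scheme: item factors Q are drawn once and never trained; the user
# factors P are obtained in closed form by regularized least squares per user.
Q = rng.standard_normal((n_items, rank)) / np.sqrt(rank)
P = np.zeros((n_users, rank))
for u in range(n_users):
    obs = np.where(mask[u])[0]
    if obs.size == 0:
        continue
    Qo = Q[obs]
    P[u] = np.linalg.solve(Qo.T @ Qo + lam * np.eye(rank), Qo.T @ R[u, obs])

fit_rmse = np.sqrt(np.mean((P @ Q.T - R)[mask] ** 2))
print("RMSE on the observed entries (no iterative training):", round(float(fit_rmse), 3))
```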
19. Similarity measurement method of high-dimensional data based on normalized net lattice subspace (Cited: 4)
Authors: 李文法, Wang Gongming, Li Ke, Huang Su. High Technology Letters (EI, CAS), 2017, No. 2: 179-184 (6 pages).
The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that data difference between sparse and noisy dimensionalities occupies a large proportion of the similarity, leading to the dissimilarities between any results. A similarity measurement method of high-dimensional data based on normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding interval. Only the component in the same or adjacent interval is used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental result indicates that the relative difference of the method is increasing with the dimensionality and is approximately two or three orders of magnitude higher than the conventional method. In addition, the similarity range of this method in different dimensions is [0,1], which is fit for similarity analysis after dimensionality reduction.
Keywords: high-dimensional data; the curse of dimensionality; similarity; normalization; subspace; NPsim
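The interval-mapping rule spelled out in the abstract (split each dimension's range into intervals and let only components that land in the same or an adjacent interval contribute) can be sketched as below; the per-dimension contribution formula and the averaging are illustrative choices, not necessarily the exact NPsim definition.

```python
import numpy as np

def lattice_similarity(a, b, mins, maxs, n_bins=10):
    """Map each component to its interval; only components in the same or an
    adjacent interval contribute, here as 1 minus the normalised gap."""
    bins_a = np.floor((a - mins) / (maxs - mins + 1e-12) * n_bins).astype(int)
    bins_b = np.floor((b - mins) / (maxs - mins + 1e-12) * n_bins).astype(int)
    close = np.abs(bins_a - bins_b) <= 1                  # same or adjacent interval
    if not np.any(close):
        return 0.0
    gap = np.abs(a - b) / (maxs - mins + 1e-12)
    return float(np.mean(np.where(close, 1.0 - gap, 0.0)))

rng = np.random.default_rng(0)
X = rng.random((100, 50))                                  # high-dimensional samples
mins, maxs = X.min(axis=0), X.max(axis=0)
print(round(lattice_similarity(X[0], X[1], mins, maxs), 3))   # stays within [0, 1]
```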
20. Chip-Based High-Dimensional Optical Neural Network (Cited: 5)
Authors: Xinyu Wang, Peng Xie, Bohan Chen, Xingcai Zhang. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2022, No. 12: 570-578 (9 pages).
Parallel multi-thread processing in advanced intelligent processors is the core to realize high-speed and high-capacity signal processing systems. Optical neural network (ONN) has the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demand of big data. Here, we demonstrate the dual-layer ONN with Mach-Zehnder interferometer (MZI) network and nonlinear layer, while the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from the microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve the digit classification of different frequency components by demultiplexing the output signal and testing power distribution. Efficient parallelization feasibility with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
Keywords: integrated optics; optical neural network; high-dimension; Mach-Zehnder interferometer; nonlinear activation function; parallel high-capacity analog computing