Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item. From ancient times to the present, the security of secret or vital information has always been a significant problem. The development of secure communication methods that keep recipient-only data transmissions secret has always been an area of interest. Therefore, several approaches, including steganography, have been developed by researchers over time to enable safe data transit. In this review, we have discussed image steganography based on the Discrete Cosine Transform (DCT) algorithm, among other transform-domain methods. We have also discussed image steganography based on multiple hashing algorithms like the Rivest–Shamir–Adleman (RSA) method, the Blowfish technique, and the hash-least significant bit (LSB) approach. In this review, a novel method of hiding information in images has been developed with minimal variance in image bits, making our method secure and effective. A cryptography mechanism was also used in this strategy. Before encoding the data and embedding it into a carrier image, this review verifies that it has been encrypted. Usually, embedded text in photos conveys crucial signals about the content. This review employs hash table encryption on the message before hiding it within the picture to provide a more secure method of data transport. If the message is ever intercepted by a third party, there are several ways to stop this operation. A second level of security is implemented by encrypting and decrypting steganography images using different hashing algorithms.
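The hash-plus-LSB embedding described above can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the reviewed method itself: it encrypts the message bytes with a keystream derived from chained SHA-256 hashes of a passphrase (a stand-in for the review's hash-table encryption), then hides the ciphertext bits in the least significant bit of a cover array standing in for image pixels. All function names and the keystream construction are illustrative.

```python
import hashlib
import numpy as np

def keystream(passphrase: str, n: int) -> bytes:
    """Derive n pseudo-random bytes by chained SHA-256 hashing (illustrative, not a vetted cipher)."""
    out, block = b"", passphrase.encode()
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def embed_lsb(cover: np.ndarray, message: bytes, passphrase: str) -> np.ndarray:
    """Hide an encrypted message in the least significant bits of a uint8 cover array."""
    cipher = bytes(m ^ k for m, k in zip(message, keystream(passphrase, len(message))))
    bits = np.unpackbits(np.frombuffer(cipher, dtype=np.uint8))
    flat = cover.flatten()
    assert bits.size <= flat.size, "cover too small for message"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the LSB of each pixel
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int, passphrase: str) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    cipher = np.packbits(bits).tobytes()
    return bytes(c ^ k for c, k in zip(cipher, keystream(passphrase, n_bytes)))

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a grayscale image
stego = embed_lsb(cover, b"secret payload", "pass")
print(extract_lsb(stego, len(b"secret payload"), "pass"))          # b'secret payload'
```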
In the generalized continuum mechanics (GCM) theory framework, asymmetric wave equations encompass the characteristic scale parameters of the medium, accounting for microstructure interactions. This study integrates two theoretical branches of the GCM, the modified couple stress theory (M-CST) and the one-parameter second-strain-gradient theory, to form a novel asymmetric wave equation in a unified framework. Numerical modeling of the asymmetric wave equation in a unified framework accurately describes subsurface structures, with vital implications for subsequent seismic wave inversion and imaging endeavors. However, employing finite-difference (FD) methods for numerical modeling may introduce numerical dispersion, adversely affecting the accuracy of numerical modeling. The design of an optimal FD operator is crucial for enhancing the accuracy of numerical modeling and emphasizing the scale effects. Therefore, this study devises a hybrid scheme called the dung beetle optimization (DBO) algorithm with a simulated annealing (SA) algorithm, denoted as the SA-based hybrid DBO (SDBO) algorithm. An FD operator optimization method under the SDBO algorithm was developed and applied to the numerical modeling of asymmetric wave equations in a unified framework. Integrating the DBO and SA algorithms mitigates the risk of convergence to a local extreme. The numerical dispersion outcomes underscore that the proposed SDBO algorithm yields FD operators with precision errors constrained to 0.5‱ while encompassing a broader spectrum coverage. This result confirms the efficacy of the SDBO algorithm. Ultimately, the numerical modeling results demonstrate that the new FD method based on the SDBO algorithm effectively suppresses numerical dispersion and enhances the accuracy of elastic wave numerical modeling, thereby accentuating scale effects. This result is significant for extracting wavefield perturbations induced by complex microstructures in the medium and the analysis of scale effects. Funding: supported by projects XJZ2023050044, A2309002, and XJZ2023070052.
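Neither the asymmetric wave equation nor the SDBO algorithm is reproduced here, but the underlying idea — tuning FD coefficients by stochastic search so the operator's spectral response tracks the exact wavenumber over a wide band, instead of relying on Taylor-series coefficients — can be sketched with plain simulated annealing. The stencil length, wavenumber band, and cooling schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_error(c, beta):
    """Max relative error of the stencil's spectral response over the normalized wavenumber band."""
    m = np.arange(1, len(c) + 1)
    resp = 2.0 * np.sin(np.outer(beta, m)) @ c      # FD approximation of k*h for each beta = k*h
    return np.max(np.abs(resp - beta) / beta)

beta = np.linspace(0.01, 0.85 * np.pi, 400)         # wavenumber band the operator should cover
cur = np.array([3 / 4, -3 / 20, 1 / 60])            # 6th-order Taylor coefficients (central 1st derivative)
cur_err = dispersion_error(cur, beta)
best, best_err = cur.copy(), cur_err

T = 1e-3                                            # simulated-annealing "temperature"
for _ in range(20000):
    cand = cur + rng.normal(scale=1e-3, size=cur.size)
    err = dispersion_error(cand, beta)
    if err < cur_err or rng.random() < np.exp((cur_err - err) / T):
        cur, cur_err = cand, err                    # accept improving (or occasionally worse) moves
        if err < best_err:
            best, best_err = cand.copy(), err
    T *= 0.9995                                     # geometric cooling

print("Taylor coefficients :", [3 / 4, -3 / 20, 1 / 60])
print("optimized           :", np.round(best, 6), " max band error:", f"{best_err:.2e}")
```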
This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by the DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm’s capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm’s configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings of this study not only confirm the practical applicability and theoretical soundness of the DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights into the optimization domain, offering a promising direction for future inquiries and technological innovations.
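The abstract does not include the authors' implementation, but the general surrogate-assisted multi-objective pattern it relies on can be sketched: evaluate a modest design-of-experiments with the expensive objectives, train a neural-network surrogate on those evaluations, screen a much larger candidate population with the cheap surrogate, and keep only its non-dominated set for expensive re-evaluation. The toy objectives, network size, and sampling below are assumptions, and the non-dominated filter stands in for the full NSGA-II machinery.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def expensive_objectives(x):
    """Stand-in for a costly simulation with two competing objectives (illustrative only)."""
    f1 = np.sum(x**2, axis=1)
    f2 = np.sum((x - 1.0)**2, axis=1)
    return np.column_stack([f1, f2])

def non_dominated(F):
    """Boolean mask of Pareto-optimal rows of objective matrix F (minimization)."""
    keep = np.ones(len(F), dtype=bool)
    for i, fi in enumerate(F):
        dominators = np.all(F <= fi, axis=1) & np.any(F < fi, axis=1)
        keep[i] = not dominators.any()
    return keep

# 1) small design of experiments evaluated with the expensive model
X_train = rng.uniform(-1, 2, size=(200, 4))
F_train = expensive_objectives(X_train)

# 2) neural-network surrogate trained on the expensive evaluations
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X_train, F_train)

# 3) cheap surrogate screening of a much larger candidate population
X_cand = rng.uniform(-1, 2, size=(5000, 4))
F_pred = surrogate.predict(X_cand)
pareto = X_cand[non_dominated(F_pred)]

# 4) only the promising candidates are re-checked with the expensive model
print(len(pareto), "surrogate-Pareto candidates; verified objectives of the first few:")
print(np.round(expensive_objectives(pareto[:5]), 3))
```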
An algorithm to track multiple sharply maneuvering targets without prior knowledge about new target birth is proposed. These targets are capable of achieving sharp maneuvers within a short period of time, such as drones and agile missiles. The probability hypothesis density (PHD) filter, which propagates only the first-order statistical moment of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems. However, the standard PHD filter operates on a single dynamic model and requires prior information about the target birth distribution, which leads to many limitations in terms of practical applications. In this paper, we introduce a nonzero mean, white noise turn rate dynamic model and generalize jump Markov systems to the multitarget case to accommodate sharply maneuvering dynamics. Moreover, to adaptively estimate newborn targets' information, a measurement-driven method based on the recursive random sampling consensus (RANSAC) algorithm is proposed. Simulation results demonstrate that the proposed method achieves significant improvement in tracking multiple sharply maneuvering targets with adaptive birth estimation. Funding: supported by the National Natural Science Foundation of China (61773142).
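The excerpt does not spell out the nonzero-mean turn-rate model, so the sketch below only shows the standard (nearly) coordinated-turn state transition that such maneuvering-target models build on: it propagates a planar state [x, vx, y, vy] under a turn rate omega, which a filter such as the PHD recursion would combine with process noise and an estimated turn rate. Values are illustrative.

```python
import numpy as np

def coordinated_turn_F(omega: float, T: float) -> np.ndarray:
    """State transition for state [x, vx, y, vy] turning at rate omega (rad/s) over step T (s)."""
    if abs(omega) < 1e-9:                      # limit case: straight-line constant velocity
        return np.array([[1, T, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, T],
                         [0, 0, 0, 1]], float)
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega],
                     [0, c,               0, -s],
                     [0, (1 - c) / omega, 1,  s / omega],
                     [0, s,               0,  c]], float)

# a sharp 9 deg/s turn sampled at 1 Hz, starting at 100 m/s along +x
x = np.array([0.0, 100.0, 0.0, 0.0])
F = coordinated_turn_F(np.deg2rad(9.0), 1.0)
for _ in range(10):                            # ten propagation steps of the deterministic part
    x = F @ x
print(np.round(x, 1))
```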
A hybrid identification model based on multilayer artificial neural networks (ANNs) and the particle swarm optimization (PSO) algorithm is developed to improve the simultaneous identification efficiency of the thermal conductivity and effective absorption coefficient of semitransparent materials. For the direct model, the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing, emitting, and non-scattering 2D axisymmetric gray medium in the background of the laser flash method. For the identification part, firstly, the temperature field and the incident radiation field in different positions are chosen as observables. Then, a traditional identification model based on the PSO algorithm is established. Finally, multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process. The results show that compared with the traditional identification model, the time cost of the hybrid identification model is reduced by about 1000 times. Besides, the hybrid identification model remains at a high level of accuracy even with measurement errors. Funding: supported by the Fundamental Research Funds for the Central Universities (No. 3122020072) and the Multi-investment Project of Tianjin Applied Basic Research (No. 23JCQNJC00250).
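The coupled conduction-radiation solver is not reproduced here; in the sketch below a toy two-parameter forward model stands in for it, an sklearn MLP surrogate is fitted to forward-model samples, and a plain PSO searches for the parameter pair whose surrogate-predicted observables best match the "measured" ones. The toy model, bounds, and PSO settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def forward_model(params):
    """Toy stand-in for the conduction-radiation solver: maps (k, kappa) to three observables."""
    k, kappa = params[..., 0], params[..., 1]
    return np.stack([np.exp(-kappa) * k, k / (1 + kappa), np.sqrt(k) + 0.1 * kappa], axis=-1)

# train the ANN surrogate on forward-model samples
P = rng.uniform([0.5, 0.1], [5.0, 2.0], size=(2000, 2))
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0).fit(P, forward_model(P))

truth = np.array([2.3, 0.7])
measured = forward_model(truth)                      # pretend measurement

def misfit(candidates):                              # objective evaluated with the cheap surrogate
    return np.sum((surrogate.predict(candidates) - measured) ** 2, axis=1)

# plain particle swarm optimization over the two unknown properties
n, w, c1, c2 = 40, 0.7, 1.5, 1.5
pos = rng.uniform([0.5, 0.1], [5.0, 2.0], size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), misfit(pos)
gbest = pbest[np.argmin(pbest_f)]
for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [0.5, 0.1], [5.0, 2.0])
    f = misfit(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("identified (via surrogate):", np.round(gbest, 3), " true:", truth)
```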
BACKGROUND: The spread of the severe acute respiratory syndrome coronavirus 2 outbreak worldwide has caused concern regarding the mortality rate caused by the infection. The determinants of mortality on a global scale cannot be fully understood due to lack of information. AIM: To identify key factors that may explain the variability in case lethality across countries. METHODS: We identified 21 potential risk factors for the coronavirus disease 2019 (COVID-19) case fatality rate for all the countries with available data. We examined univariate relationships of each variable with the case fatality rate (CFR) and all independent variables to identify candidate variables for our final multiple model. A multiple regression analysis technique was used to assess the strength of relationships. RESULTS: The mean COVID-19 mortality was 1.52% ± 1.72%. There was a statistically significant inverse correlation of health expenditure and the number of computed tomography scanners per 1 million population with CFR, and a significant direct correlation of literacy and air pollution with CFR. The final model can predict approximately 97% of the changes in CFR. CONCLUSION: The current study identifies some new predictors that explain variability in the mortality rate and could thus help decision-makers develop health policies to fight COVID-19.
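The country-level dataset itself is not part of the excerpt, so the sketch below only illustrates the reported workflow — regressing CFR on candidate predictors such as health expenditure, CT scanners per million, literacy, and air pollution with ordinary least squares — on synthetic data whose correlation directions are set to match those reported; all numbers are therefore illustrative, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 150                                                   # synthetic "countries"

# synthetic predictors loosely mimicking the variables named in the abstract (illustrative only)
df = pd.DataFrame({
    "health_exp": rng.uniform(2, 12, n),                  # % of GDP
    "ct_per_million": rng.uniform(1, 40, n),
    "literacy": rng.uniform(50, 100, n),
    "air_pollution": rng.uniform(5, 60, n),               # e.g. PM2.5
})
# synthetic CFR built with the correlation directions reported in the abstract, plus noise
df["cfr"] = (3.0 - 0.12 * df.health_exp - 0.03 * df.ct_per_million
             + 0.01 * df.literacy + 0.02 * df.air_pollution + rng.normal(0, 0.3, n))

X = sm.add_constant(df[["health_exp", "ct_per_million", "literacy", "air_pollution"]])
model = sm.OLS(df["cfr"], X).fit()
print(model.summary().tables[1])                          # coefficients and p-values
print("R^2:", round(model.rsquared, 3))
```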
Rural electrification remains a critical challenge in achieving equitable access to electricity, a cornerstone for poverty alleviation, economic growth, and improved living standards. Capacitor Coupled Substations (CCS) offer a promising solution for delivering cost-effective electricity to these underserved areas. However, the integration of multiple CCS units along a transmission network introduces complex interactions that can significantly impact voltage, current, and power flow. This study presents a detailed mathematical model to analyze the effects of varying distances and configurations of multiple CCS units on a transmission network, with a focus on voltage stability, power quality, and reactive power fluctuations. Furthermore, the research addresses the phenomenon of ferroresonance, a critical issue in networks with multiple CCS units, by developing and validating suppression strategies to ensure stable operation. Through simulation and practical testing, the study provides insights into optimizing CCS deployment, ultimately contributing to more reliable and efficient rural electrification solutions.
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a statistical modeling technique commonly used in various applications to find relationships between variables of interest. When estimating linear regression parameters, which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm repeatedly finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed. This is called maximum likelihood or maximum a posteriori (MAP) estimation. Using the present estimate as input, the expectation (E) step constructs a log-likelihood function. Finding the parameters that maximize the anticipated log-likelihood, as determined in the E step, is the job of the maximization (M) phase. This study examined how well the EM algorithm worked on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
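As a rough illustration of the comparison described above — not the study's own code or data — the sketch below imputes missing entries in a synthetic compositional dataset with mean imputation, k-NN imputation, and an EM-like iterative imputer, then scores each against the true compositions using the Aitchison distance (Euclidean distance after a centered log-ratio transform). The iterative imputer is a MICE-style stand-in for the EM procedure, and the missingness rate and Dirichlet parameters are assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

rng = np.random.default_rng(4)

def clr(x):
    """Centered log-ratio transform; Aitchison distance = Euclidean distance in clr space."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

# synthetic 4-part compositions (rows sum to 1), then knock out 15% of entries at random
X_true = rng.dirichlet(alpha=[4, 3, 2, 1], size=300)
X_miss = X_true.copy()
X_miss[rng.random(X_true.shape) < 0.15] = np.nan

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "k-NN": KNNImputer(n_neighbors=5),
    "EM-like iterative": IterativeImputer(max_iter=20, random_state=0),
}
for name, imp in imputers.items():
    X_hat = np.clip(imp.fit_transform(X_miss), 1e-6, None)     # keep parts strictly positive
    X_hat = X_hat / X_hat.sum(axis=1, keepdims=True)           # re-close the compositions
    d = np.linalg.norm(clr(X_hat) - clr(X_true), axis=1).mean()
    print(f"{name:18s} mean Aitchison distance: {d:.4f}")
```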
A multiple model tracking algorithm based on neural network and multiple-process noise soft-switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network are running in parallel. The neural network algorithm is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and then the multiple model tracking algorithm of the multiple processing switch is used to improve the precision of tracking maneuvering targets. The modified algorithm is proved to be effective by simulation.
In order to reduce average arterial vehicle delay, a novel distributed and coordinated traffic control algorithm is developed using a multi-agent system and reinforcement learning (RL). The RL is used to minimize the average delay of arterial vehicles by training the interaction ability between agents and exterior environments. The Robertson platoon dispersion model is embedded in the RL algorithm to precisely predict platoon movements on arterials, and the reward function is then developed based on the dispersion model and the delay equations given in HCM 2000. The performance of the algorithm is evaluated in a Matlab environment, and comparisons between the algorithm and the conventional coordination algorithm are conducted in three different traffic load scenarios. Results show that the proposed algorithm outperforms the conventional algorithm in all the scenarios. Moreover, with the increase in saturation degree, the performance is improved more significantly. The results verify the feasibility and efficiency of the established algorithm. Funding: the National Key Technology R&D Program during the 11th Five-Year Plan Period of China (No. 2009BAG17B02), the National High Technology Research and Development Program of China (863 Program) (No. 2011AA110304), and the National Natural Science Foundation of China (No. 50908100).
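The RL agents and reward function are not reproduced here, but the Robertson platoon dispersion recursion the abstract embeds for predicting downstream arrivals is simple enough to sketch: q_d(t+T) = F·q_u(t) + (1−F)·q_d(t+T−1), with T = β·(travel time) and F = 1/(1 + α·β·(travel time)). The α and β values below are the commonly quoted TRANSYT-style defaults and, like the platoon profile, are illustrative.

```python
import numpy as np

def robertson_dispersion(q_up, travel_time, alpha=0.35, beta=0.8):
    """Robertson platoon dispersion: predict downstream arrivals from an upstream departure profile.

    q_down[t + T] = F * q_up[t] + (1 - F) * q_down[t + T - 1],
    with T = beta * travel_time and F = 1 / (1 + alpha * beta * travel_time).
    """
    T = max(1, int(round(beta * travel_time)))
    F = 1.0 / (1.0 + alpha * beta * travel_time)
    q_pad = np.concatenate([np.asarray(q_up, float), np.zeros(3 * T)])  # let the platoon tail decay
    q_down = np.zeros(len(q_pad) + T)
    for t, q in enumerate(q_pad):
        q_down[t + T] = F * q + (1.0 - F) * q_down[t + T - 1]
    return q_down

# a compact platoon (veh per step) released by the upstream signal, six steps of travel time downstream
platoon = [0, 0, 8, 10, 10, 9, 4, 0, 0, 0]
print(np.round(robertson_dispersion(platoon, travel_time=6), 2))  # arrives later and more spread out
```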
To avoid missing track caused by target maneuvers in automatic target tracking systems, a new maneuvering target tracking technique called threshold interacting multiple model (TIMM) is proposed. This algorithm is based on the interacting multiple model (IMM) method and applies a threshold controller to improve tracking accuracy. It is also applicable to other advanced algorithms of IMM. In this research, we also compare the position and velocity root mean square (RMS) errors of the TIMM and IMM algorithms with two different examples. Simulation results show that the TIMM algorithm is superior to the traditional IMM algorithm in estimation accuracy.
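The threshold controller itself is not described in the excerpt, so the sketch below only shows the standard IMM bookkeeping that TIMM builds on: mixing probabilities computed from a Markov model-transition matrix and the model-probability update driven by each model filter's measurement likelihood. The transition matrix and likelihood values are illustrative.

```python
import numpy as np

# two-model IMM bookkeeping (e.g. constant-velocity vs. maneuver model) -- standard equations
P_trans = np.array([[0.95, 0.05],       # Markov model transition probabilities p_ij
                    [0.10, 0.90]])
mu = np.array([0.8, 0.2])               # current model probabilities

# 1) mixing probabilities mu_{i|j} = p_ij * mu_i / c_j, used to mix the per-model states
c = P_trans.T @ mu                      # predicted model probabilities c_j
mix = (P_trans * mu[:, None]) / c[None, :]

# 2) after filtering, each model reports the likelihood of the new measurement
likelihood = np.array([0.02, 0.11])     # illustrative values from the two Kalman filters
mu_new = likelihood * c
mu_new /= mu_new.sum()                  # updated model probabilities

print("mixing matrix:\n", np.round(mix, 3))
print("updated model probabilities:", np.round(mu_new, 3))
```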
To solve the complex weight matrix derivative problem when using the weighted least squares method to estimate the parameters of the mixed additive and multiplicative random error model (MAM error model), we use an improved artificial bee colony algorithm without derivatives and the bootstrap method to estimate the parameters and evaluate the accuracy of the MAM error model. The improved artificial bee colony algorithm can update individuals in multiple dimensions and improve the cooperation ability between individuals by constructing a new search equation based on the idea of quasi-affine transformation. The experimental results show that, based on the weighted least squares criterion, the algorithm can obtain results consistent with the weighted least squares method without multiple formula derivations. The parameter estimation and accuracy evaluation method based on the bootstrap method can get better parameter estimates and more reasonable accuracy information than existing methods, which provides a new idea for the theory of parameter estimation and accuracy evaluation of the MAM error model. Funding: supported by the National Natural Science Foundation of China (No. 42174011 and No. 41874001).
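The MAM error model and the quasi-affine-transformation search equation are not reproduced here; the sketch below only illustrates the bootstrap side of the approach on a plain least-squares stand-in: resample the residuals, rebuild pseudo-observations, re-estimate the parameters, and take the spread of the bootstrap estimates as the accuracy measure. The model, noise level, and number of replicates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# stand-in estimation problem: y = A @ x + noise (the MAM model itself is not reproduced here)
A = rng.normal(size=(100, 3))
x_true = np.array([2.0, -1.0, 0.5])
y = A @ x_true + rng.normal(scale=0.1, size=100)

def estimate(A, y):
    return np.linalg.lstsq(A, y, rcond=None)[0]

x_hat = estimate(A, y)
residuals = y - A @ x_hat

# residual bootstrap: resample residuals, rebuild pseudo-observations, re-estimate
B = 2000
boot = np.empty((B, x_hat.size))
for b in range(B):
    y_star = A @ x_hat + rng.choice(residuals, size=residuals.size, replace=True)
    boot[b] = estimate(A, y_star)

print("estimate:      ", np.round(x_hat, 4))
print("bootstrap std: ", np.round(boot.std(axis=0), 4))               # accuracy (precision) measure
print("95% intervals:\n", np.round(np.percentile(boot, [2.5, 97.5], axis=0).T, 4))
```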
Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN to reduce the prevalence and delay the development of DN. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. The machine learning approach provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain the genes related to DN tubular tissues and normal tubular tissues of mice. Then machine learning algorithms, random forest, LASSO logistic regression, and principal component analysis were used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). Then, the genetic algorithm-optimized backpropagation neural network (GA-BPNN) was used to improve the DN diagnostic model. Results: The AUC value of the GA-BPNN model in the training dataset was 0.83, and the AUC value of the model in the validation dataset was 0.81, while the AUC values of the SVM model in the training dataset and external validation dataset were 0.756 and 0.650, respectively. Thus, the GA-BPNN gave better values than the traditional SVM model. This diagnosis model may support personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that the tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue of DN mice was decreased. Conclusion: The GA-BPNN model has better accuracy than the traditional SVM model and may provide an effective tool for diagnosing DN. Funding: the National Natural Science Foundation of China (Grant Number: 81970631 to W.L.).
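The authors' GA-BPNN and the gene-expression data are not available in this excerpt. As a loose, hedged illustration of coupling a genetic algorithm with a backpropagation classifier scored by AUC, the sketch below evolves just two hyperparameters (hidden units and learning rate) of an sklearn MLPClassifier on synthetic data; the dataset, encoding, and GA operators are all assumptions and the example is not equivalent to the paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(ind):
    """Validation AUC of a backpropagation net whose hyperparameters are encoded in the individual."""
    hidden, lr = int(ind[0]), float(ind[1])
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=400, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])

# tiny genetic algorithm over individuals [hidden_units, learning_rate]
pop = np.column_stack([rng.integers(4, 64, 8), rng.uniform(1e-4, 1e-1, 8)]).astype(float)
for gen in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]                      # selection: keep the best half
    children = []
    for _ in range(4):                                          # crossover + mutation
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        child = np.array([a[0], b[1]])
        child[0] = np.clip(child[0] + rng.normal(0, 4), 4, 64)
        child[1] = np.clip(child[1] * np.exp(rng.normal(0, 0.3)), 1e-4, 1e-1)
        children.append(child)
    pop = np.vstack([parents, children])

scores = np.array([fitness(ind) for ind in pop])
best = pop[np.argmax(scores)]
print(f"best individual: {int(best[0])} hidden units, lr={best[1]:.4f}, AUC={scores.max():.3f}")
```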
Model parameters estimation is a pivotal issue for runoff modeling in ungauged catchments. The nonlinear relationship between model parameters and catchment descriptors is a major obstacle for parameter regionalization, which is the most widely used approach. Runoff modeling was studied in 38 catchments located in the Yellow–Huai–Hai River Basin (YHHRB). The values of the Nash–Sutcliffe efficiency coefficient (NSE), coefficient of determination (R2), and percent bias (PBIAS) indicated the acceptable performance of the soil and water assessment tool (SWAT) model in the YHHRB. Nine descriptors belonging to the categories of climate, soil, vegetation, and topography were used to express the catchment characteristics related to the hydrological processes. The quantitative relationships between the parameters of the SWAT model and the catchment descriptors were analyzed by six regression-based models, including linear regression (LR) equations, support vector regression (SVR), random forest (RF), k-nearest neighbor (kNN), decision tree (DT), and radial basis function (RBF). Each of the 38 catchments was assumed to be an ungauged catchment in turn. Then, the parameters in each target catchment were estimated by the constructed regression models based on the remaining 37 donor catchments. Furthermore, the similarity-based regionalization scheme was used for comparison with the regression-based approach. The results indicated that the runoff with the highest accuracy was modeled by the SVR-based scheme in ungauged catchments. Compared with the traditional LR-based approach, the accuracy of the runoff modeling in ungauged catchments was improved by the machine learning algorithms because of the outstanding capability to deal with nonlinear relationships. The performances of different approaches were similar in humid regions, while the advantages of the machine learning techniques were more evident in arid regions. When the study area contained nested catchments, the best result was calculated with the similarity-based parameter regionalization scheme because of the high catchment density and short spatial distance. The new findings could improve flood forecasting and water resources planning in regions that lack observed data. Funding: funded by the National Key Research and Development Program of China (2017YFA0605002, 2017YFA0605004, and 2016YFA0601501), the National Natural Science Foundation of China (41961124007, 51779145, and 41830863), and the "Six Top Talents" program in Jiangsu Province (RJFW-031).
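The SWAT parameters and catchment descriptors are not included in the excerpt, so the sketch below uses synthetic stand-ins to illustrate the regression-based regionalization loop described above: each catchment in turn plays the ungauged target, an SVR is fitted on the remaining donors' descriptor-to-parameter relation, and the target's parameter is predicted from its own descriptors. The catchment and descriptor counts match the abstract; everything else is assumed.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# 38 catchments x 9 descriptors (climate, soil, vegetation, topography) -- synthetic stand-ins
n_catch, n_desc = 38, 9
D = rng.normal(size=(n_catch, n_desc))
# one "calibrated" model parameter per catchment, nonlinearly related to the descriptors
theta = np.tanh(D[:, 0]) + 0.5 * D[:, 1] * D[:, 2] + 0.1 * rng.normal(size=n_catch)

pred = np.empty(n_catch)
for i in range(n_catch):                          # leave-one-out: catchment i plays "ungauged"
    donors = np.delete(np.arange(n_catch), i)
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
    model.fit(D[donors], theta[donors])
    pred[i] = model.predict(D[i:i + 1])[0]

rmse = np.sqrt(np.mean((pred - theta) ** 2))
print(f"leave-one-out RMSE of the SVR-regionalized parameter: {rmse:.3f}")
```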
Combining interacting multiple model (IMM) and unscented particle filter (UPF), a new multiple model filtering algorithm is presented. Multiple models can be adapted to targets' high maneuvering. The particle filter can be used to deal with nonlinear or non-Gaussian problems, and the unscented Kalman filter (UKF) can improve the approximation accuracy. Compared with other interacting multiple model algorithms in the simulations, the results demonstrate the validity of the new filtering method.
Interacting multiple models is a hotspot in the research of maneuvering target models at present. A hierarchical idea is introduced into the IMM algorithm. The method is that the whole set of models is organized as two levels to co-work, and each cell model is an improved "current" statistical model. In the improved model, a kind of nonlinear fuzzy membership function is presented to overcome the limitation of the original model, which cannot track weakly maneuvering targets precisely. At last, simulation experiments prove the efficiency of the novel algorithm in tracking precision compared to the interacting multiple model and hierarchical interacting multiple model algorithms based on the original "current" statistical model.
Constraint-based multicast routing, which aims at identifying a path that satisfies a set of quality of service (QoS) constraints, has become a very important research issue in the areas of networks and distributed systems. In general, multi-constrained path selection, with or without optimization, is an NP-complete problem that cannot be solved exactly in polynomial time. Hence, accurate constraint-based routing algorithms with a fast running time are scarce, perhaps even non-existent. The expected impact of such a constraint-based routing algorithm has resulted in the proposal of numerous heuristics and a few exact QoS algorithms. This paper aims to give a thorough, concise and fair evaluation of the most important multiple constraint-based QoS multicast routing algorithms known today, and it provides a descriptive overview and simulation results of these multi-constrained routing algorithms.
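None of the surveyed algorithms is reproduced here, but the difficulty they all face can be shown on a toy example: below, a Dijkstra search over a single aggregated weight α·delay + (1−α)·jitter is run for several α values on a three-path topology in which only one path meets both QoS constraints, and no choice of α ever returns that path — a standard illustration of why simple scalarization heuristics can miss feasible multi-constrained routes. The topology and constraint values are made up for the demonstration.

```python
import heapq

# toy topology; each edge carries (delay_ms, jitter_ms) -- values are illustrative
edges = {
    ("s", "a"): (0.5, 5.0), ("a", "t"): (0.5, 5.0),
    ("s", "b"): (5.0, 0.5), ("b", "t"): (5.0, 0.5),
    ("s", "c"): (3.0, 3.0), ("c", "t"): (3.0, 3.0),
}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))

def dijkstra_aggregated(src, dst, alpha):
    """Shortest path under the single aggregated weight alpha*delay + (1-alpha)*jitter."""
    pq, seen = [(0.0, src, [src], (0.0, 0.0))], set()
    while pq:
        cost, u, path, acc = heapq.heappop(pq)
        if u == dst:
            return path, acc
        if u in seen:
            continue
        seen.add(u)
        for v, (d, j) in graph.get(u, []):
            heapq.heappush(pq, (cost + alpha * d + (1 - alpha) * j, v,
                                path + [v], (acc[0] + d, acc[1] + j)))
    return None, (float("inf"), float("inf"))

MAX_DELAY, MAX_JITTER = 7.0, 7.0   # the two QoS constraints
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    path, (d, j) = dijkstra_aggregated("s", "t", alpha)
    ok = d <= MAX_DELAY and j <= MAX_JITTER
    print(f"alpha={alpha:4.2f}  path={'-'.join(path)}  delay={d:4.1f}  jitter={j:4.1f}  feasible={ok}")
# only s-c-t satisfies both constraints, yet no alpha makes the aggregated weight select it
```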
Input-output data fitting methods are often used for unknown-structure nonlinear system modeling. Based on model-on-demand tactics, a multiple model approach to modeling for nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, some data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. With the change of working points, multiple local models are built, which realize exact modeling of the global system. Compared with other methods, the simulation results show good performance in terms of simple, effective, and reliable estimation. Funding: this project was supported by the National Natural Science Foundation (No. 69934020).
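As a small sketch of the model-on-demand idea (not the paper's implementation): for each query working point, pick the k nearest historical input-output samples and fit a locally weighted polynomial to them, so the global model is simply the collection of these on-demand local fits. The toy system, neighborhood size, and tricube weighting below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# historical input-output data from an unknown nonlinear system (toy stand-in)
u_hist = rng.uniform(-3, 3, 2000)
y_hist = np.sin(u_hist) + 0.1 * u_hist**2 + 0.05 * rng.normal(size=u_hist.size)

def local_poly_predict(u_query, k=60, degree=2):
    """Model-on-demand: fit a weighted degree-`degree` polynomial to the k nearest samples."""
    d = np.abs(u_hist - u_query)
    idx = np.argsort(d)[:k]                              # the k samples closest to the working point
    w = (1 - (d[idx] / d[idx].max())**3)**3              # tricube weights (local smoothing kernel)
    coeffs = np.polyfit(u_hist[idx], y_hist[idx], deg=degree, w=w)
    return np.polyval(coeffs, u_query)

for u in (-2.0, 0.5, 2.5):
    print(f"u={u:4.1f}  local model: {local_poly_predict(u):.3f}  truth: {np.sin(u) + 0.1*u*u:.3f}")
```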
This paper studies the algorithm of the adaptive grid and fuzzy interacting multiple model (AGFIMM) for maneuvering target tracking, focusing on the problems that the cost-efficiency ratio of the fixed structure multiple model (FSMM) algorithm is not high and that the Markov transition probability of the interacting multiple model (IMM) algorithm is difficult to determine exactly. This algorithm realizes an adaptive model set by adaptive grid adjustment and obtains each model's matching degree in the model set by fuzzy logic inference. The simulation results show that the AGFIMM algorithm can effectively improve the accuracy and cost-efficiency ratio of the multiple model algorithm, and as a result it is suitable for engineering applications. Funding: supported by the National Natural Science Foundation of China (No. 61074053, 61374114) and the Applied Basic Research Program of the Ministry of Transport of China (No. 2011-329-225-390).
This study aims to realize the sharing of near-infrared analysis models of lignin and holocellulose content in pulp wood on two different batches of spectrometers and proposes the combined algorithms SPA-DS, MCUVE-DS, and SiPLS-DS. The Successive Projection Algorithm (SPA), Monte Carlo Uninformative Variable Elimination (MCUVE), and Synergy Interval Partial Least Squares (SiPLS) algorithms are respectively used to reduce the adverse effects of redundant information in the transmission process of the full-spectrum DS algorithm model. These three algorithms can improve model transfer accuracy and efficiency and reduce the manpower and material consumption required for modeling. The results show that the modeling effects of the characteristic wavelengths screened by the SPA, MCUVE, and SiPLS algorithms are all greatly improved compared with full-spectrum modeling, among which SPA-PLS yields the best prediction, with RPDs above 6.5 for both components. The three wavelength selection methods combined with the DS algorithm are used to transfer the models between the two instruments. Among them, MCUVE combined with the DS algorithm has the best transfer effect. After the model transfer, the RMSEP of lignin is 0.701 and the RMSEP of holocellulose is 0.839, improved significantly from the full-spectrum model transfer values of 0.759 and 0.918. Funding: supported by the Fundamental Research Funds of the Research Institute of Forest New Technology, CAF (CAFYBB2019SY039).
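The wavelength-selection steps (SPA, MCUVE, SiPLS) are not reproduced here; the sketch below only illustrates the direct standardization (DS) core of the transfer: estimate a transfer matrix F from paired transfer-standard spectra measured on both instruments via a pseudoinverse, then map a spectrum measured on the slave instrument into the master instrument's space so the master's calibration can be reused. The synthetic low-rank spectra, gain/offset distortion, and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_std, n_wl, n_comp = 30, 200, 3                 # transfer standards, wavelengths, chemical components

# low-rank synthetic spectra: mixtures of a few smooth pure-component bands
wl = np.linspace(0, 1, n_wl)
pure = np.stack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.25, 0.5, 0.75)])
C_std = rng.uniform(0, 1, size=(n_std, n_comp))
X_master = C_std @ pure

# the "slave" instrument sees the same samples through a gain, offset, and small noise
gain = 1.0 + 0.05 * np.sin(2 * np.pi * wl)
offset = 0.02 * wl
X_slave = X_master * gain + offset + 1e-3 * rng.normal(size=X_master.shape)

# direct standardization: transfer matrix F with  X_slave @ F ~= X_master
F = np.linalg.pinv(X_slave) @ X_master

# a new sample measured only on the slave instrument
c_new = rng.uniform(0, 1, n_comp)
x_new_master = c_new @ pure                      # what the master would have measured
x_new_slave = x_new_master * gain + offset + 1e-3 * rng.normal(size=n_wl)
x_transferred = x_new_slave @ F

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE vs master spectrum before DS: {rmse(x_new_slave, x_new_master):.4f}")
print(f"RMSE vs master spectrum after  DS: {rmse(x_transferred, x_new_master):.4f}")
```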