Abstract: This work investigates a multi-product parallel disassembly line balancing problem considering multi-skilled workers. A mathematical model for the parallel disassembly line is established to achieve maximized disassembly profit and minimized workstation cycle time. Based on a product's AND/OR graph, matrices for task-skill, worker-skill, precedence relationships, and disassembly correlations are developed. A multi-objective discrete chemical reaction optimization algorithm is designed. To enhance solution diversity, improvements are made to the four elementary reactions (decomposition, synthesis, inter-molecular ineffective collision, and on-wall ineffective collision), completing the evolution of molecular individuals. The established model and improved algorithm are applied to combinations of ball pens, flashlights, washing machines, and radios. Introducing a Collaborative Resource Allocation (CRA) strategy based on a decomposition-based multi-objective evolutionary algorithm, the experimental results are compared with four classical algorithms: MOEA/D, MOEAD-CRA, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), and the Non-dominated Sorting Genetic Algorithm III (NSGA-III). This validates the feasibility and superiority of the proposed algorithm for parallel disassembly production lines.
Funding: Supported by the National Key Research and Development Program of China (No. 2020YFB1901900), the National Natural Science Foundation of China (Nos. U20B2011 and 12175138), and the Shanghai Rising-Star Program.
Abstract: The heterogeneous variational nodal method (HVNM) has emerged as a potential approach for solving high-fidelity neutron transport problems. However, achieving accurate results with HVNM in large-scale problems using high-fidelity models has been challenging due to the prohibitive computational costs. This paper presents an efficient parallel algorithm tailored for HVNM based on the Message Passing Interface (MPI) standard. The algorithm evenly distributes the response matrix sets among processors during the matrix formation process, thus enabling independent construction without communication. Once the formation tasks are completed, a collective operation merges and shares the matrix sets among the processors. For the solution process, the problem domain is decomposed into subdomains assigned to specific processors, and the red-black Gauss-Seidel iteration is employed within each subdomain to solve the response matrix equation. Point-to-point communication is conducted between adjacent subdomains to exchange data along the boundaries. The accuracy and efficiency of the parallel algorithm are verified using the KAIST and JRR-3 test cases. Numerical results obtained with multiple processors agree well with those obtained from Monte Carlo calculations. The parallelization of HVNM results in eigenvalue errors of 31 pcm/-90 pcm and fission-rate RMS errors of 1.22%/0.66%, respectively, for the 3D KAIST problem and the 3D JRR-3 problem. In addition, the parallel algorithm significantly reduces computation time, with an efficiency of 68.51% using 36 processors in the KAIST problem and 77.14% using 144 processors in the JRR-3 problem.
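The communication pattern described in this abstract, namely independent construction of response-matrix sets followed by a collective merge before the subdomain sweeps, can be illustrated with a minimal MPI sketch. The sketch below is not the paper's implementation: the set count, set size, and matrix contents are placeholder values, and the nodal solver itself is only indicated in a comment.

```cpp
// Minimal sketch of the pattern: each rank builds its share of response matrix
// sets without communication, then a collective call merges them on every rank.
// nSets, entriesPerSet and the values stored are hypothetical placeholders.
#include <mpi.h>
#include <algorithm>
#include <numeric>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nSets = 96;          // total number of response matrix sets (placeholder)
    const int entriesPerSet = 16;  // flattened size of one set (placeholder)

    // Evenly distribute set indices over the ranks (block distribution).
    int base = nSets / size, rem = nSets % size;
    int myCount = base + (rank < rem ? 1 : 0);
    int myFirst = rank * base + std::min(rank, rem);

    // "Form" the local sets independently; no communication is needed here.
    std::vector<double> local(myCount * entriesPerSet);
    for (int s = 0; s < myCount; ++s)
        for (int e = 0; e < entriesPerSet; ++e)
            local[s * entriesPerSet + e] = (myFirst + s) + 0.001 * e;  // dummy values

    // Gather the per-rank entry counts, then merge and share all sets collectively.
    std::vector<int> counts(size), displs(size);
    int myEntries = myCount * entriesPerSet;
    MPI_Allgather(&myEntries, 1, MPI_INT, counts.data(), 1, MPI_INT, MPI_COMM_WORLD);
    std::partial_sum(counts.begin(), counts.end() - 1, displs.begin() + 1);

    std::vector<double> all(nSets * entriesPerSet);
    MPI_Allgatherv(local.data(), myEntries, MPI_DOUBLE,
                   all.data(), counts.data(), displs.data(), MPI_DOUBLE,
                   MPI_COMM_WORLD);

    // From here, each rank would solve its own subdomain with red-black
    // Gauss-Seidel sweeps and exchange interface data with neighbouring
    // subdomains via point-to-point calls such as MPI_Sendrecv.
    MPI_Finalize();
    return 0;
}
```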
Abstract: This study explores the application of parallel algorithms to enhance large-scale sorting, focusing on the QuickSort method. Implemented in both sequential and parallel forms, the paper provides a detailed comparison of their performance. The study investigates the efficacy of both techniques through the lens of array generation and pivot selection to manage datasets of varying sizes. It meticulously documents the performance metrics, recording 16,499.2 milliseconds for the serial implementation and 16,339 milliseconds for the parallel implementation when sorting an array, with timing taken using the C++ chrono library. These results suggest that while the performance gains of the parallel approach over its serial counterpart are not immediately pronounced for smaller datasets, the benefits are expected to be more substantial as the dataset size increases.
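As a point of reference for the serial-versus-parallel timing comparison described above, the sketch below shows one common way to parallelize QuickSort in C++, recursing into the two partitions on separate tasks via std::async down to a cutoff depth, and timing both variants with the chrono library. It is a generic illustration, not the study's code; the cutoff depth, array size, and pivot rule are arbitrary choices.

```cpp
// Generic serial vs. task-parallel QuickSort timed with <chrono>.
// Illustrative only: cutoff depth and data size are arbitrary.
#include <algorithm>
#include <chrono>
#include <future>
#include <iostream>
#include <random>
#include <vector>

void quicksort(std::vector<int>& a, int lo, int hi, bool parallel, int depth = 0) {
    if (lo >= hi) return;
    int pivot = a[lo + (hi - lo) / 2];            // middle-element pivot selection
    int i = lo, j = hi;
    while (i <= j) {                              // swap-based partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    // Spawn a task for one half while the current thread handles the other,
    // but only near the top of the recursion tree to limit task overhead.
    if (parallel && depth < 3) {
        auto left = std::async(std::launch::async,
                               [&] { quicksort(a, lo, j, parallel, depth + 1); });
        quicksort(a, i, hi, parallel, depth + 1);
        left.wait();
    } else {
        quicksort(a, lo, j, parallel, depth);
        quicksort(a, i, hi, parallel, depth);
    }
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> data(1 << 22);
    for (int& x : data) x = static_cast<int>(rng());

    for (bool parallel : {false, true}) {
        std::vector<int> a = data;
        auto t0 = std::chrono::steady_clock::now();
        quicksort(a, 0, static_cast<int>(a.size()) - 1, parallel);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << (parallel ? "parallel: " : "serial:   ")
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
    }
    return 0;
}
```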
Abstract: Ray tracing is a computer graphics method that renders images realistically. As the name suggests, this technique primarily traces the path of light rays interacting with objects in a scene [1], permitting the calculation of lighting and reflection effects [2]. As ray tracing is a time-consuming process, the need arises to parallelize it. One downside of this solution is the existence of race conditions. In this work, we explore and experiment with different, well-known solutions to this race condition. Starting with the introduction and the background section, a brief overview of the topic is followed by a detailed discussion of how race conditions may occur in the ray tracing algorithm. Continuing with the methods and results section, we use OpenMP to parallelize the ray tracing algorithm with the compiler directives critical, atomic, and firstprivate. We conclude that critical and atomic are not efficient solutions for producing a good-quality picture, whereas firstprivate succeeds in producing a high-quality picture.
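The trade-off reported here, critical and atomic serializing shared updates versus firstprivate giving each thread its own copy, can be seen in a stripped-down OpenMP loop. The sketch below is not the paper's ray tracer; shade() merely stands in for tracing one ray so that the three synchronization patterns are visible on a toy workload.

```cpp
// Toy OpenMP loops contrasting critical, atomic, and firstprivate when many
// threads update shared state. Not a ray tracer; shade() is a placeholder.
#include <omp.h>
#include <cstdio>
#include <vector>

static double shade(int pixel, int sample) {             // stand-in for tracing one ray
    return 0.001 * ((pixel * 31 + sample * 17) % 97);
}

int main() {
    const int nPixels = 1 << 16, nSamples = 16;
    std::vector<double> image(nPixels, 0.0);

    // Render once; each pixel is independent, so this loop itself is race-free.
    #pragma omp parallel for
    for (int p = 0; p < nPixels; ++p) {
        double sum = 0.0;
        for (int s = 0; s < nSamples; ++s) sum += shade(p, s);
        image[p] = sum / nSamples;
    }

    // Variant 1: critical section around a shared accumulator (correct but serializing).
    double sumCritical = 0.0;
    #pragma omp parallel for
    for (int p = 0; p < nPixels; ++p) {
        #pragma omp critical
        sumCritical += image[p];
    }

    // Variant 2: atomic update (cheaper than critical, still one serialized add per pixel).
    double sumAtomic = 0.0;
    #pragma omp parallel for
    for (int p = 0; p < nPixels; ++p) {
        #pragma omp atomic
        sumAtomic += image[p];
    }

    // Variant 3: firstprivate gives each thread its own copy of the accumulator,
    // so the hot loop touches no shared variable; one synchronized add per thread.
    double sumFirstprivate = 0.0, localSum = 0.0;
    #pragma omp parallel firstprivate(localSum)
    {
        #pragma omp for nowait
        for (int p = 0; p < nPixels; ++p) localSum += image[p];
        #pragma omp critical
        sumFirstprivate += localSum;
    }

    std::printf("critical = %.3f, atomic = %.3f, firstprivate = %.3f\n",
                sumCritical, sumAtomic, sumFirstprivate);
    return 0;
}
```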
Funding: The National Natural Science Foundation of China (Grant No. 61573264).
Abstract: This study focuses on the scheduling problem of unrelated parallel batch processing machines (BPM) with release times, a scenario derived from the moulding process in a foundry. In this process, a batch is initially formed, placed in a sandbox, and the sandbox is then positioned on a BPM for moulding. The complexity of the scheduling problem increases due to the consideration of BPM capacity and sandbox volume. To minimize the makespan, a new cooperated imperialist competitive algorithm (CICA) is introduced. In CICA, the number of empires is not a parameter, and four empires are maintained throughout the search process. Two types of assimilation are used: the strongest and weakest empires cooperate in their assimilation, while the remaining two empires, which have close normalized total costs, combine in their assimilation. A new form of imperialist competition is proposed to prevent insufficient competition, and the unique features of the problem are effectively utilized. Computational experiments are conducted across several instances, and extensive experimental results show that the new strategies of CICA are effective, indicating promising advantages for the considered BPM scheduling problems.
Funding: Supported by the Natural Science Foundation of Chongqing (General Program, No. CSTB2022NSCQ-MSX0884) and the Discipline Teaching Special Project of Yangtze Normal University (csxkjx14).
Abstract: In this paper, we prove that Euclid's algorithm, Bezout's equation, and the Division algorithm are equivalent to each other. Our result shows that Euclid had preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
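For readers who want the connection between the three notions made concrete: the extended form of Euclid's algorithm returns, along with gcd(a, b), the coefficients x and y of Bezout's equation ax + by = gcd(a, b), and each loop iteration is one application of the Division algorithm. The short sketch below is an illustration only, not taken from the paper.

```cpp
// Extended Euclidean algorithm: each iteration is one Division-algorithm step
// oldR = q*r + (oldR - q*r), and the maintained coefficients give Bezout's equation.
#include <cstdio>
#include <tuple>

std::tuple<long long, long long, long long> extendedGcd(long long a, long long b) {
    long long oldR = a, r = b;
    long long oldX = 1, x = 0;     // running coefficients of a
    long long oldY = 0, y = 1;     // running coefficients of b
    while (r != 0) {
        long long q = oldR / r;    // quotient from the Division algorithm
        std::tie(oldR, r) = std::make_tuple(r, oldR - q * r);
        std::tie(oldX, x) = std::make_tuple(x, oldX - q * x);
        std::tie(oldY, y) = std::make_tuple(y, oldY - q * y);
    }
    return {oldR, oldX, oldY};     // gcd(a, b) and Bezout coefficients x, y
}

int main() {
    auto [g, x, y] = extendedGcd(240, 46);
    // Bezout's equation: 240*x + 46*y == gcd(240, 46) == 2
    std::printf("gcd = %lld, x = %lld, y = %lld, check = %lld\n",
                g, x, y, 240 * x + 46 * y);
    return 0;
}
```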
Abstract: Due to the complex high-temperature characteristics of hydrocarbon fuel, the long-term operation of parallel channel structures under variable working conditions, especially under high heat-mass ratios, has not been systematically studied. In this paper, the heat transfer and flow characteristics of the relevant high-temperature fuels are studied using a typical engine parallel channel structure. Through numerical simulation and systematic experimental verification, the flow and heat transfer characteristics of parallel channels under typical working conditions are obtained, and the effectiveness of a high-precision calculation method is preliminarily established. The results show that the stabilization time required for a hot start of the regenerative cooling engine is about 50 s, and that the flow resistance of the parallel channel structure first increases and then decreases with increasing equivalence ratio (denoted Φ below), with a flow-resistance peak in the range Φ = 0.5 to 0.8. This is mainly caused by the coupled effects of the high-temperature physical properties, flow rate, and pressure of the fuel in the parallel channels. The cooling and heat transfer characteristics of parallel channels under some high heat-mass-ratio conditions are also obtained, and the main factors affecting the heat transfer of parallel channels, such as improving surface roughness and enhancing heat transfer, are identified. In the experiments, when Φ is less than 0.9, local heat transfer enhancement and deterioration are clearly observed, and the temperature rise of local structures exceeds 200 °C, posing a risk of structural damage. Therefore, the reliability of long-term parallel channel structures under high heat-mass-ratio conditions should be fully considered in structural design.
Funding: Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676) and Shanxi Soft Science Program Research Fund Project (2016041008-6).
Abstract: In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first used mathematical methods to describe the relationships between cloud-based web services and the constraints on system resources. Then, a light-induced plant growth simulation algorithm was established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51677058.
Abstract: Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues and achieve precise and effective SOH detection, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Funding: Supported by the Yunnan Provincial Basic Research Project (202401AT070344, 202301AT070443), the National Natural Science Foundation of China (62263014, 52207105), the Yunnan Lancang-Mekong International Electric Power Technology Joint Laboratory (202203AP140001), and the Major Science and Technology Projects in Yunnan Province (202402AG050006).
Abstract: An accurate short-term wind power forecasting technique plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecasting approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve the model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. The comparative results with six other state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
Funding: Funded by the Ministry of Higher Education (MoHE) Malaysia under the Transdisciplinary Research Grant Scheme (TRGS/1/2019/UKM/01/4/2).
Abstract: The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics follow different implementation strategies. However, comparisons between them are lacking in the literature, and previous works have not highlighted the beneficial and detrimental implementation methods of different components. The question is how to effectively employ them to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and classification of the algorithms is presented. The analysis highlights efficient and inefficient methods in eight key components: search points, search phases, heuristic selection, move acceptance, feedback, Tabu mechanism, restart mechanism, and low-level heuristic parameter control. The review analyzes these components with reference to the competition's final leaderboard and discusses future research directions for them. The effective approaches, identified as having the highest quality index, are mixed search points, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. Findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
Funding: Supported by the Serbian Ministry of Education and Science under Grant No. TR35006 and COST Action CA23155 (A Pan-European Network of Ocean Tribology, OTC). The research of B. Rosic and M. Rosic was supported by the Serbian Ministry of Education and Science under Grant TR35029.
Abstract: This paper introduces a hybrid multi-objective optimization algorithm, designated HMODESFO, which amalgamates the exploratory prowess of Differential Evolution (DE) with the rapid convergence attributes of the Sailfish Optimization (SFO) algorithm. The primary objective is to address multi-objective optimization challenges within mechanical engineering, with a specific emphasis on planetary gearbox optimization. The algorithm is able to dynamically select the optimal mutation operator, contingent upon an adaptive normalized population-spacing parameter. The efficacy of HMODESFO has been substantiated through rigorous validation against established benchmarks, including the Zitzler-Deb-Thiele (ZDT) and Deb-Thiele-Laumanns-Zitzler (DTLZ) problem suites, where it exhibited superior performance. The outcomes underscore the algorithm's markedly enhanced optimization capabilities relative to existing methods, particularly in tackling highly intricate multi-objective planetary gearbox optimization problems. Additionally, the performance of HMODESFO is evaluated against selected well-known mechanical engineering test problems, further accentuating its adeptness in resolving complex optimization challenges within this domain.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U21A20464 and 62066005, and the Innovation Project of Guangxi Graduate Education under Grant No. YCSW2024313.
Abstract: Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment problems uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have now modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in actual sensor networks for information collection. The deployment problem of the SDFWSN is modeled as a multi-objective optimization problem, with the network life cycle, spatiotemporal coverage, detection rate, and false alarm rate used as objectives for optimizing the placement of network nodes. This paper proposes an enhanced multi-objective mongoose optimization algorithm (EMODMOA) to solve the SDFWSN deployment problem. First, to overcome the shortcomings of the DMOA algorithm, such as its slow convergence and tendency to get stuck in local optima, an encircling and hunting strategy is introduced into the original algorithm to yield the EDMOA algorithm. The EDMOA algorithm is then extended to the EMODMOA algorithm by selecting reference points with the K-Nearest Neighbor (KNN) algorithm. To verify the effectiveness of the proposed algorithm, EMODMOA was tested on the CEC 2020 benchmark and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and the optimization results of the objective functions show that the proposed algorithm outperforms the other algorithms in the SDFWSN deployment results. To further demonstrate the superiority of the algorithm, simulations of diverse test cases were also performed, and good results were obtained.
Funding: Supported by the National Key S&T Special Projects of Marine Carbonate (No. 2008ZX05000-004) and CNPC Projects (No. 2008E-0610-10).
Abstract: With the development of parallel computing technology, the efficiency of non-linear inversion calculations has been improving. However, for single-point-search-based non-linear inversion methods, implementing parallel algorithms is difficult. We introduce the idea of group search into the single-point-search-based non-linear inversion algorithm, take the quantum Monte Carlo method as an example for two-dimensional seismic wave velocity inversion and practical impedance inversion, and test the calculation efficiency with different numbers of nodes. The results show that the parallel algorithm is feasible and effective for both theoretical and practical data inversion, and that it has good versatility. The algorithm efficiency increases with the number of nodes, but the rate of increase gradually declines as the number of nodes grows.
Funding: The National High Technology Research and Development Program of China (863 Program) (Nos. 2009AA01Z235 and 2006AA01Z263) and the Research Fund of the National Mobile Communications Research Laboratory of Southeast University (No. 2008A10).
Abstract: An improved parallel weighted bit-flipping (PWBF) algorithm is presented. To accelerate the information exchange between check nodes and variable nodes, the bit-flipping step and the check-node updating step of the original algorithm are parallelized. Simulation experiments demonstrate that the improved PWBF algorithm provides about 0.1 to 0.3 dB of coding gain over the original PWBF algorithm, and it achieves a higher convergence rate. The choice of the threshold used to decide whether a bit should be flipped during each iteration is also discussed: an appropriate threshold ensures that most erroneous bits are flipped while the correct ones are left untouched. The improvement is particularly effective for decoding quasi-cyclic low-density parity-check (QC-LDPC) codes.
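The role of the flipping threshold mentioned above can be sketched for a generic bit-flipping decoder: in each iteration every bit receives a flipping metric from its unsatisfied checks, and all bits whose metric reaches the threshold are flipped in the same step. The sketch below uses a tiny parity-check matrix, hard decisions, and an unweighted metric; it is a simplification for illustration, not the improved PWBF algorithm of the paper.

```cpp
// Toy parallel bit-flipping decoder for a small binary parity-check matrix H.
// Each iteration: compute syndromes, give every bit a metric equal to the number
// of unsatisfied checks it participates in, then flip all bits whose metric
// reaches the threshold. Weighted (PWBF-style) variants refine this metric with
// channel reliabilities; that refinement is omitted here.
#include <cstdio>
#include <vector>

int main() {
    // H for the (7,4) Hamming code, rows = checks, columns = bits.
    const std::vector<std::vector<int>> H = {
        {1, 1, 0, 1, 1, 0, 0},
        {1, 0, 1, 1, 0, 1, 0},
        {0, 1, 1, 1, 0, 0, 1}};
    std::vector<int> word = {1, 0, 1, 1, 0, 1, 0};   // a valid codeword
    word[3] ^= 1;                                     // inject a single bit error

    const int threshold = 3;   // flip bits with at least 3 unsatisfied checks (toy choice)
    for (int iter = 0; iter < 10; ++iter) {
        // Syndrome of every check.
        std::vector<int> syndrome(H.size(), 0);
        for (size_t c = 0; c < H.size(); ++c)
            for (size_t b = 0; b < word.size(); ++b)
                syndrome[c] ^= H[c][b] & word[b];

        bool allSatisfied = true;
        for (int s : syndrome) allSatisfied = allSatisfied && (s == 0);
        if (allSatisfied) { std::printf("all checks satisfied at iteration %d\n", iter); break; }

        // Flipping metric: count of unsatisfied checks per bit; flip in parallel.
        for (size_t b = 0; b < word.size(); ++b) {
            int metric = 0;
            for (size_t c = 0; c < H.size(); ++c)
                metric += H[c][b] & syndrome[c];
            if (metric >= threshold) word[b] ^= 1;
        }
    }
    for (int bit : word) std::printf("%d", bit);
    std::printf("\n");
    return 0;
}
```

As the abstract notes, the threshold matters: set too low, correct bits get flipped along with the erroneous ones; set too high, errors survive the iteration.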
Abstract: Using the method of mathematical morphology, this paper performs filtering, segmentation, and extraction of morphological features of satellite cloud images. It also presents the corresponding algorithms, which are implemented in parallel C based on Transputer networks. The method has been successfully used to process typhoon and low-vortex cloud images and will be used in weather forecasting.
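As background for readers unfamiliar with the operations mentioned, a minimal sketch of binary morphological erosion and dilation (and their composition into an opening) follows. It is a generic illustration with a 3x3 square structuring element, not the Transputer implementation described above.

```cpp
// Minimal binary erosion/dilation with a 3x3 square structuring element.
// Generic illustration of the morphological operators named in the abstract.
#include <cstdio>
#include <vector>

using Image = std::vector<std::vector<int>>;

Image morph(const Image& in, bool dilate) {
    int rows = static_cast<int>(in.size()), cols = static_cast<int>(in[0].size());
    Image out(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            // Dilation: output 1 if ANY neighbour is 1. Erosion: only if ALL are 1.
            bool any = false, all = true;
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc) {
                    int rr = r + dr, cc = c + dc;
                    int v = (rr >= 0 && rr < rows && cc >= 0 && cc < cols) ? in[rr][cc] : 0;
                    any = any || (v != 0);
                    all = all && (v != 0);
                }
            out[r][c] = dilate ? any : all;
        }
    return out;
}

int main() {
    Image img = {{0,0,0,0,0},
                 {0,1,1,1,0},
                 {0,1,1,1,0},
                 {0,1,1,1,0},
                 {0,0,0,0,0}};
    Image opened = morph(morph(img, false), true);   // opening = erosion then dilation
    for (const auto& row : opened) {
        for (int v : row) std::printf("%d", v);
        std::printf("\n");
    }
    return 0;
}
```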
Funding: The Weaponry Equipment Foundation of PLA Equipment Ministry (No. 51406020105JB8103).
Abstract: To decrease the time needed to generate a closure, a parallel algorithm for generating the closure of a resource description framework schema (RDFS) source is presented. In the algorithm, the RDFS triples in the source are classified according to the forms of the triples in the entailment rules, which reduces the scope of the search for specific triples. The dependence among the classes of triples is analyzed. Based on the classification, the initial RDFS source is partitioned into several subsets. The subsets are distributed to the processes, and the closure is generated in parallel by applying the RDFS entailment rules. Generating the closure of an RDFS source in parallel takes less time and increases efficiency.
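The core of the approach, classifying triples by the rule patterns they can match and then applying the entailment rules until no new triples appear, can be illustrated with a single rule. The sketch below runs the rdfs11 rule (transitivity of rdfs:subClassOf) to a fixpoint on one small set of triples; it is a simplification for illustration, not the paper's parallel algorithm, and the example URIs are invented.

```cpp
// Fixpoint application of one RDFS entailment rule (rdfs11, subClassOf transitivity)
// on a partition of triples. A full implementation classifies triples by the rule
// patterns they can feed and runs the partitions on separate processes.
#include <cstdio>
#include <set>
#include <string>
#include <tuple>

using Triple = std::tuple<std::string, std::string, std::string>;

int main() {
    const std::string kSubClassOf = "rdfs:subClassOf";
    std::set<Triple> triples = {
        {"ex:Car",      kSubClassOf, "ex:Vehicle"},
        {"ex:Vehicle",  kSubClassOf, "ex:Artifact"},
        {"ex:Artifact", kSubClassOf, "ex:Thing"}};

    // rdfs11: (A subClassOf B) and (B subClassOf C) entail (A subClassOf C).
    bool changed = true;
    while (changed) {
        changed = false;
        std::set<Triple> derived;
        for (const auto& [a, p1, b] : triples)
            for (const auto& [b2, p2, c] : triples)
                if (p1 == kSubClassOf && p2 == kSubClassOf && b == b2)
                    derived.insert({a, kSubClassOf, c});
        for (const auto& t : derived)
            changed = triples.insert(t).second || changed;   // repeat until nothing new
    }

    for (const auto& [s, p, o] : triples)
        std::printf("%s %s %s\n", s.c_str(), p.c_str(), o.c_str());
    return 0;
}
```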
Funding: Jointly sponsored by the National Natural Science Foundation of China (Grant No. 41374078), the Geological Survey Projects of the Ministry of Land and Resources of China (Grant Nos. 12120113086100 and 12120113101300), and the Beijing Higher Education Young Elite Teacher Project.
Abstract: Traditional two-dimensional (2D) complex resistivity forward modeling is based on Poisson's equation, but spectral induced polarization (SIP) data are the coproducts of the induced polarization (IP) and electromagnetic induction (EMI) effects. This is especially true at high frequencies, where the EMI effect can exceed the IP effect. 2D inversion that considers only the IP effect therefore reduces the reliability of the inverted data. In this paper, we derive the governing differential equations from Maxwell's equations. With the introduction of the Cole-Cole model, we use the finite-element method to conduct 2D SIP forward modeling that considers the EMI and IP effects simultaneously. The data-space Occam method, in which different constraints on the model smoothness and parameter boundaries are introduced, is then used to simultaneously obtain the four parameters of the Cole-Cole model from multi-array electric field data. This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity. To improve the computational efficiency, Message Passing Interface programming was used to accelerate the 2D SIP forward modeling and inversion. Synthetic datasets were tested using both serial and parallel algorithms, and the tests suggest that the proposed parallel algorithm is robust and efficient.
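For context, the four Cole-Cole parameters recovered by such an inversion are conventionally the zero-frequency resistivity, the chargeability, the time constant, and the frequency dependence. A commonly used form of the model, stated here for reference with notation that may differ from the paper's, is

\[
\rho(\omega) = \rho_{0}\left[\,1 - m\left(1 - \frac{1}{1 + (\mathrm{i}\omega\tau)^{c}}\right)\right],
\]

where \(\rho_{0}\) is the zero-frequency resistivity, \(m\) the chargeability, \(\tau\) the time constant, and \(c\) the frequency dependence.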
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51175029) and the Beijing Municipal Natural Science Foundation of China (Grant No. 3132019).
Abstract: Dimensional synthesis is one of the most difficult issues in the field of parallel robots with actuation redundancy. To deal with the optimal design of a redundantly actuated parallel robot used for ankle rehabilitation, a methodology of dimensional synthesis based on multi-objective optimization is presented. First, the dimensional synthesis of the redundant parallel robot is formulated as a nonlinear constrained multi-objective optimization problem. Then four objective functions, separately reflecting occupied space, input/output transmission, and torque performance, together with multi-criteria constraints such as dimension, interference, and kinematics, are defined. In consideration of the passive plantar/dorsiflexion exercise requiring a large output moment, a torque index is proposed. To cope with the actuation redundancy of the parallel robot, a new output transmission index is defined as well. The multi-objective optimization problem is solved using a modified Differential Evolution (DE) algorithm, which is characterized by new selection and mutation strategies, and a special penalty method is presented to handle the multi-criteria constraints. Finally, numerical experiments with different optimization algorithms are implemented. The computational results show that the proposed output transmission and torque indices and the constraint handling are effective for the redundant parallel robot, and that the modified DE algorithm is superior to the other tested algorithms in terms of global search ability and the number of non-dominated solutions. The proposed multi-objective optimization methodology can also be applied to the dimensional synthesis of other redundantly actuated parallel robots with purely rotational movements.
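For orientation, a plain DE/rand/1/bin generation step (mutation, binomial crossover, greedy selection) is sketched below on a toy unconstrained objective. The paper's modified selection and mutation strategies and its penalty handling of the multi-criteria constraints are not reproduced here; population size, scale factor, and crossover rate are arbitrary textbook values.

```cpp
// Classic DE/rand/1/bin on a toy objective (sphere function), for illustration only.
// The modified mutation/selection and constraint-penalty scheme of the paper are not shown.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int NP = 20, D = 4;             // population size and problem dimension
    const double F = 0.5, CR = 0.9;       // scale factor and crossover rate
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, NP - 1), pickDim(0, D - 1);

    auto objective = [](const std::vector<double>& x) {   // toy objective: sum of squares
        double s = 0.0;
        for (double v : x) s += v * v;
        return s;
    };

    // Random initial population in [-5, 5]^D.
    std::vector<std::vector<double>> pop(NP, std::vector<double>(D));
    for (auto& x : pop) for (double& v : x) v = -5.0 + 10.0 * unif(rng);

    for (int gen = 0; gen < 100; ++gen) {
        for (int i = 0; i < NP; ++i) {
            int r1, r2, r3;                               // three distinct partners, all != i
            do { r1 = pick(rng); } while (r1 == i);
            do { r2 = pick(rng); } while (r2 == i || r2 == r1);
            do { r3 = pick(rng); } while (r3 == i || r3 == r1 || r3 == r2);

            std::vector<double> trial = pop[i];
            int jRand = pickDim(rng);                     // guarantee one mutated dimension
            for (int j = 0; j < D; ++j)
                if (unif(rng) < CR || j == jRand)
                    trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j]);  // rand/1 mutation

            if (objective(trial) <= objective(pop[i]))    // greedy selection
                pop[i] = trial;
        }
    }

    int best = 0;
    for (int i = 1; i < NP; ++i) if (objective(pop[i]) < objective(pop[best])) best = i;
    std::printf("best objective after 100 generations: %.6f\n", objective(pop[best]));
    return 0;
}
```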
Funding: Supported by the National Basic Research Program of China (973 Program) (61320).
Abstract: The method of establishing data structures plays an important role in the efficiency of the parallel multilevel fast multipole algorithm (PMLFMA). Considering the main components of multilevel fast multipole algorithm (MLFMA) memory, a new parallelization strategy and a modified data octree construction scheme are proposed to further reduce communication and thereby improve parallel efficiency. For the far interactions, a new scheme called dynamic memory allocation is developed. To analyze the workload balancing performance of a parallel implementation, the concept of a workload balancing factor is introduced and verified by numerical examples. Numerical results show that the above measures improve the parallel efficiency and are suitable for the analysis of electrically large scattering objects.