Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to use test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as the high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
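To make the selection stage above concrete, the sketch below clusters stand-in code-representation vectors and keeps the highest bug-likelihood case per cluster; k-means, the cluster count, and the random data are illustrative assumptions, since the abstract does not fix the clustering algorithm.

```python
# Hypothetical sketch of C-CORE's selection stage: cluster test cases by their
# code representations, then keep the top bug-likelihood case from each cluster.
# K-means with k=3 and the random vectors are assumptions, not the paper's setup.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
reps = rng.normal(size=(30, 8))   # stand-in code-representation vectors (30 cases)
bug_scores = rng.random(30)       # stand-in predicted bug-trigger likelihoods

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reps)
selected = [int(np.where(labels == c)[0][np.argmax(bug_scores[labels == c])])
            for c in range(3)]
print("prioritized test cases:", selected)   # one representative per cluster
```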
With the growing need for renewable energy, wind farms are playing an important role in generating clean power from wind resources. The arrangement of wind turbines in a wind farm has a major influence on the energy extraction efficiency. This paper describes a unique strategy for optimizing wind turbine locations on a wind farm that combines the capabilities of particle swarm optimization (PSO) and artificial neural networks (ANNs). The PSO method was used to explore the solution space and develop preliminary turbine layouts, and the ANN model was used to fine-tune the placements based on the predicted energy generation. The proposed hybrid technique seeks to increase energy output while considering site-specific wind patterns and topographical limits. The efficacy and superiority of the hybrid PSO-ANN methodology are demonstrated through comprehensive simulations and comparisons with existing approaches, giving exciting prospects for developing more efficient and sustainable wind farms. The integration of ANNs and PSO in our methodology is of paramount importance because it leverages the complementary strengths of both techniques. Furthermore, this novel methodology harnesses historical data through ANNs to identify optimal turbine positions that align with the wind speed and direction and enhance energy extraction efficiency. A notable increase in power generation is observed across various scenarios, ranging from approximately 7.7% to 11.1%. Owing to its versatility and adaptability to site-specific conditions, the hybrid model offers promising prospects for advancing the field of wind farm layout optimization and contributing to a greener and more sustainable energy future.
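A minimal sketch of the hybrid loop described above, with a toy spacing score standing in for the trained ANN energy predictor; the real model, wind data, and site constraints are not given in the abstract.

```python
# PSO over turbine layouts, scored by a surrogate in place of the ANN predictor.
# The spacing-based objective and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def predicted_energy(layout):            # placeholder for the trained ANN predictor
    xy = layout.reshape(-1, 2)
    d = [np.linalg.norm(a - b) for i, a in enumerate(xy) for b in xy[i + 1:]]
    return min(d)                        # toy proxy: reward well-spaced turbines

n_particles, dim = 20, 2 * 5             # 5 turbines in a 2-D site
pos = rng.uniform(0, 1000, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pval = np.array([predicted_energy(p) for p in pos])
gbest = pbest[np.argmax(pval)]

for _ in range(50):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1000)    # keep turbines inside the site bounds
    val = np.array([predicted_energy(p) for p in pos])
    improved = val > pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pval)]

print("best minimum turbine spacing:", round(pval.max(), 1), "m")
```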
Blockchain technology has witnessed a burgeoning integration into diverse realms of economic and societal development. Nevertheless, scalability challenges, characterized by diminished broadcast efficiency, heightened communication overhead, and escalated storage costs, have significantly constrained the broad-scale application of blockchain. This paper introduces a novel Encode-and-CRT-based Scalability Scheme (ECSS), meticulously refined to enhance both block broadcasting and storage. First, ECSS categorizes nodes into distinct domains, thereby reducing the network diameter and augmenting transmission efficiency. Second, ECSS streamlines block transmission through a compact block protocol and robust RS coding, which not only reduces the size of broadcast blocks but also ensures transmission reliability. Finally, ECSS utilizes the Chinese remainder theorem, designating the block body as the compression target and mapping it to multiple moduli to achieve efficient storage, thereby alleviating the storage burden on nodes. To evaluate ECSS's performance, we established an experimental platform and conducted comprehensive assessments. Empirical results demonstrate that ECSS attains superior network scalability and stability, reducing communication overhead by an impressive 72% and total storage costs by a substantial 63.6%.
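The Chinese-remainder storage step can be illustrated directly: represent the block body (as an integer) by its residues under pairwise-coprime moduli, so each node stores only one small residue. The moduli and payload below are toy values; ECSS's actual parameters are not specified above.

```python
# CRT round trip: split a block body into residues, then reconstruct it.
# The four primes and the tiny payload are illustrative, not ECSS's real choices.
from math import prod

moduli = [10007, 10009, 10037, 10039]          # pairwise-coprime (all prime) moduli
body = int.from_bytes(b"txdata", "big")        # toy block body; must be < prod(moduli)
residues = [body % m for m in moduli]          # what each storage node would keep

M = prod(moduli)                               # CRT reconstruction from the residues
recovered = sum(r * (M // m) * pow(M // m, -1, m)
                for r, m in zip(residues, moduli)) % M
assert recovered == body
print("residues:", residues)
```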
Many young elite athletes do not meet their daily energy and nutrient requirements. However, little research has been done on why these athletes do not meet their daily needs. The aim was to research the barriers and motivators of young Dutch elite athletes to optimize their nutritional intake. Quantitative and qualitative research was conducted among 8 handball and 4 volleyball players at the Dutch National Sports Center (17.2 ± 0.8 years). First, the nutritional intake was tracked through food diaries and analyzed in Nutritics. Thereupon, five semi-structured interviews based on the COM-B model were carried out. The interviews were transcribed and coded. The athletes had a reduced intake of energy, carbohydrates, vitamins A, C, E, D, calcium, potassium, zinc, and iron compared to their requirements. Seven themes for optimizing their nutritional intake emerged in the interviews: needs assessment, practical translation, portion size, lack of time, involvement, individuality, and food distribution. Barriers that the athletes experienced were that they did not know what their total daily nutritional needs were or how these translate into practice. In addition, the portion size at dinner was too small. They also had little time to eat a full meal due to time pressure from training and school. On the other hand, motivators were receiving meal options that translate their needs into practice, with a distribution of the moments when they need to eat. Covering these topics in nutritional workshops in which athletes actively participate, with more individual focus, could contribute to the optimization of their nutritional intake.
This study delves into biodiesel synthesis from non-edible oils and algae oil sources using Response Surface Methodology (RSM) and an Artificial Neural Network (ANN) model to optimize biodiesel yield. A blend of C. vulgaris and Karanja oils is utilized, aiming to reduce the free fatty acid content to 1% through single-step transesterification. Optimization reveals the peak biodiesel yield conditions: 1% catalyst quantity, 91.47 min reaction time, 56.86 °C reaction temperature, and 8.46:1 methanol-to-oil molar ratio. The ANN model outperforms RSM in yield prediction accuracy. Environmental impact assessment yields an E-factor of 0.0251 at maximum yield, indicating responsible production with minimal waste. Economic analysis reveals significant cost savings: a 30%-50% reduction in raw material costs by using non-edible oils, a 10%-15% increase in production efficiency, a 20% reduction in catalyst costs, and 15%-20% savings in energy consumption. The optimized process reduces waste disposal costs by 10%-15%, enhancing overall economic viability. Overall, the widespread adoption of biodiesel offers economic, environmental, and social benefits to a diverse range of stakeholders, including farmers, producers, consumers, governments, environmental organizations, and the transportation industry. Collaboration among these stakeholders is essential for realizing the full potential of biodiesel as a sustainable energy solution.
Atom-level modulation of the coordination environment of single-atom catalysts (SACs) is considered an effective strategy for elevating catalytic performance. For the MN_x site, breaking the symmetrical geometry and charge distribution by introducing relatively weakly electronegative atoms into the first/second shell is an efficient approach, but elucidating the underlying interaction mechanism remains challenging. Herein, a practical strategy is reported to rationally design single cobalt atoms coordinated with both phosphorus and nitrogen atoms in a hierarchically porous carbon derived from metal-organic frameworks. X-ray absorption spectra reveal that the atomically dispersed Co sites are coordinated with four N atoms in the first shell and varying numbers of P atoms in the second shell (denoted as Co-N/P-C). The prepared catalyst exhibits excellent oxygen reduction reaction (ORR) activity as well as zinc-air battery performance. The introduction of P atoms into the Co-SACs weakens the interaction between Co and N, significantly promoting the adsorption of *OOH; the resulting acceleration of the reaction kinetics and reduction of the thermodynamic barrier are responsible for the increased intrinsic activity. Our discovery provides insights into the design of single-atom catalysts with adjustable electrocatalytic activities for efficient electrochemical energy conversion.
Grey Wolf Optimization (GWO) is a nature-inspired metaheuristic algorithm that has gained popularity for solving optimization problems. In GWO, the success of the algorithm heavily relies on the efficient updating of the agents’ positions relative to the leader wolves. In this paper, we provide a brief overview of the Grey Wolf Optimization technique and its significance in solving complex optimization problems. Building upon the foundation of GWO, we introduce a novel technique for updating agents’ positions, which aims to enhance the algorithm’s effectiveness and efficiency. To evaluate the performance of our proposed approach, we conduct comprehensive experiments and compare the results with the original Grey Wolf Optimization technique. Our comparative analysis demonstrates that the proposed technique achieves superior optimization outcomes. These findings underscore the potential of our approach in addressing optimization challenges effectively and efficiently, making it a valuable contribution to the field of optimization algorithms.
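For reference, the baseline position update that the proposed variant builds on is the standard GWO rule, sketched below; the paper's novel update itself is not detailed in the abstract, so only the original mechanism is shown, on a toy sphere objective.

```python
# Standard GWO: each wolf moves toward the average of pulls from the three
# leaders (alpha, beta, delta), with the factor a decaying linearly from 2 to 0.
import numpy as np

rng = np.random.default_rng(2)

def gwo_step(wolves, alpha, beta, delta, a):
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        moves = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(x.shape) - a      # exploration/exploitation factor
            C = 2 * rng.random(x.shape)
            moves.append(leader - A * np.abs(C * leader - x))
        new[i] = np.mean(moves, axis=0)              # average pull toward the leaders
    return new

f = lambda x: np.sum(x ** 2, axis=-1)                # toy sphere objective (minimize)
wolves = rng.uniform(-5, 5, (15, 4))
for t in range(100):
    order = np.argsort(f(wolves))                    # best three wolves lead the pack
    a = 2 - 2 * t / 100                              # linear decay from 2 to 0
    wolves = gwo_step(wolves, *wolves[order[:3]], a)
print("best value:", f(wolves).min())
```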
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication—a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT)—this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique’s broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
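The dynamic-programming kernel in question is the classic chained matrix multiplication recurrence, shown below in plain Python for reference; the paper's contribution, reorganizing how the cost table is laid out in GPU memory so that neighboring threads access contiguous entries, is only paraphrased in the comments.

```python
# Matrix-chain DP: m[i][j] is the minimal scalar-multiplication cost of
# multiplying matrices i..j. Each "diagonal" (fixed chain length) is a set of
# independent cells; the paper's CUDA layout stores such diagonals contiguously.
def matrix_chain_cost(dims):
    n = len(dims) - 1                       # matrix i has shape dims[i] x dims[i+1]
    m = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # one diagonal of the table per length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j]
                          + dims[i] * dims[k + 1] * dims[j + 1]
                          for k in range(i, j))
    return m[0][n - 1]

print(matrix_chain_cost([30, 35, 15, 5, 10, 20, 25]))   # classic example: 15125
```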
Objective: This paper aims to explore the impact of optimizing details in the operating room on surgeons’ level of knowledge, attitude, and practice regarding hospital infection prevention and control, as well as the effectiveness of infection control. Methods: From January 2022 to June 2023, a total of 120 patients were screened and randomly divided into a control group (routine care and hospital infection management) and a study group (optimizing details in the operating room). Results: Significant differences were found between the two groups in surgeons’ level of knowledge, attitude, and practice in hospital infection prevention and control, infection rates, and nursing satisfaction, with the study group showing better results (P < 0.05). Conclusion: Optimizing details in the operating room can effectively improve surgeons’ level of knowledge, attitude, and practice in hospital infection prevention and control, reduce infection occurrence, and is worth promoting.
The large-scale optimization problem requires some optimization techniques, and the Metaheuristics approach is highly useful for solving difficult optimization problems in practice. The purpose of this research is to optimize the transportation system with the help of this approach. We selected forest vehicle routing data as the case study to minimize the total cost and the distance of the forest transportation system. MATLAB software is used to find the best solution for this case by applying three Metaheuristic algorithms: Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Extended Great Deluge (EGD). The results show that GA, compared to ACO and EGD, provides the best solution for the cost and the route length in our case study. EGD is the second-best approach, and ACO ranks last.
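As a flavor of the GA setup on such routing data, the sketch below minimizes the length of a closed route over random distances; it uses tournament selection with swap mutation only, a deliberately simplified operator set rather than the study's actual configuration.

```python
# Compact GA-style search for route-length minimization over a random distance
# matrix (stand-in for the forest routing data; no crossover, swap mutation only).
import random

random.seed(3)
n = 12
dist = [[0 if i == j else random.uniform(1, 10) for j in range(n)] for i in range(n)]
route_len = lambda r: sum(dist[r[i]][r[(i + 1) % n]] for i in range(n))

pop = [random.sample(range(n), n) for _ in range(40)]
for _ in range(200):
    parent = min(random.sample(pop, 3), key=route_len)      # tournament selection
    child = parent[:]
    i, j = random.sample(range(n), 2)
    child[i], child[j] = child[j], child[i]                 # swap mutation
    pop.remove(max(pop, key=route_len))                     # steady-state replacement
    pop.append(child)
print("best route length:", round(min(map(route_len, pop)), 2))
```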
This study addresses the critical need for efficient routing in Mobile Ad Hoc Networks (MANETs), whose dynamic topologies pose great challenges because of node mobility. The main objective was to delve into and refine the application of Dijkstra’s algorithm in this context, a method conventionally esteemed for its efficiency in static networks. This paper carries out a comparative theoretical analysis against the Bellman-Ford algorithm, considering adaptation to the dynamic network conditions typical of MANETs. Detailed algorithmic analysis shows that Dijkstra’s algorithm, when adapted for dynamic updates, yields a very workable solution to the problem of real-time routing in MANETs. The results indicate that with these changes, Dijkstra’s algorithm performs much better computationally and achieves 30% better routing optimization than Bellman-Ford on sparse network configurations. The adapted theoretical framework, with Dijkstra’s algorithm modified for dynamically changing network topologies, is novel in this work and quite different from traditional applications. The adaptation should offer more efficient routing and less computational overhead, which is most apt in the resource-limited environment of MANETs. From these findings, one may conclude that the proposed version of Dijkstra’s algorithm is the best and most feasible choice of routing protocol for MANETs given all pertinent key performance and resource consumption indicators, and that it offers a marked improvement over traditional methods. This paper therefore operationalizes the theoretical model into practical scenarios and motivates further research with empirical simulations to better understand its operational effectiveness.
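A minimal sketch of the adapted idea: run Dijkstra's algorithm, then recompute when a link changes. The paper's incremental-update details are not given in the abstract, so the sketch simply re-invokes the algorithm on the updated graph.

```python
# Dijkstra on an adjacency-dict graph, re-run after a simulated mobility event.
import heapq

def dijkstra(graph, src):
    d = {v: float("inf") for v in graph}
    d[src] = 0
    pq = [(0, src)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue                      # stale queue entry
        for v, w in graph[u].items():
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(g, "A"))                   # routes before the mobility event
g["A"]["C"] = g["C"]["A"] = 1             # node movement shortens link A-C
print(dijkstra(g, "A"))                   # recomputed routes afterwards
```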
Objective: To analyze the effect of optimizing the emergency nursing process in the resuscitation of patients with acute chest pain and the impact on the resuscitation success rate. Methods: 66 patients with acute chest pain received by the emergency department of our hospital from January 2022 to December 2023 were selected as the study subjects and divided into two groups according to the differences in the emergency nursing process: 33 patients receiving routine emergency care were included in the control group, and 33 patients receiving the optimized emergency nursing process intervention were included in the observation group. The resuscitation effect and satisfaction with nursing care of the two groups were compared. Results: The observation group’s consultation assessment time, reception time, time from admission to the start of resuscitation, and resuscitation time were shorter than those of the control group, the resuscitation success rate was higher than that of the control group, and the incidence of adverse events was lower than that of the control group, with statistically significant differences (P < 0.05); the observation group’s satisfaction with nursing care was also higher than that of the control group, with statistically significant differences (P < 0.05). Conclusion: Optimizing the emergency nursing process in the resuscitation of acute chest pain patients can greatly shorten the rescue time and improve the success rate of resuscitation, with higher patient satisfaction.
The paper addresses the challenge of transmitting a large number of files stored in a data center (DC), encrypting them with compilers, and sending them through a network in an acceptable time. Faced with a large number of files, a single compiler may not be sufficient to encrypt the data in an acceptable time. In this paper, we consider the problem of several compilers, and the objective is to find an algorithm that gives an efficient schedule for the given files to be compiled by the compilers. The main objective of the work is to minimize the gap in the total size of assigned files between compilers. This minimization ensures a fair distribution of files to the different compilers. This problem is considered to be very hard. This paper presents two research axes. The first axis is related to architecture: we propose a novel pre-compiler architecture in this context. The second axis is algorithmic development: we develop six algorithms to solve the problem. These algorithms are based on the dispatching-rules method, a decomposition method, and an iterative approach, and they give approximate solutions for the studied problem. An experimental study is implemented to show the performance of the algorithms. Several indicators are used to measure performance, and five classes are proposed to test the algorithms, with a total of 2350 instances. A comparison between the proposed algorithms is presented in different tables and discussed to show the performance of each algorithm. The results show that the best algorithm is the Iterative-mixed Smallest-Longest Heuristic (ISL), with a percentage of 97.7% and an average running time of 0.148 s. All other algorithms did not exceed 22%. The best algorithm excluding ISL is the Iterative-mixed Longest-Smallest Heuristic (ILS), with a percentage of 21.4% and an average running time of 0.150 s.
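The dispatching-rule flavor of these heuristics can be sketched as follows: assign each file, largest first, to the least-loaded compiler, then measure the gap between the heaviest and lightest compiler. The exact ISL/ILS rules are not fully specified in the abstract, so this shows only the general pattern.

```python
# Longest-first dispatching of file sizes onto compilers, minimizing the load gap.
# File sizes and the compiler count are illustrative stand-ins.
import random

random.seed(4)
files = [random.randint(1, 500) for _ in range(40)]       # file sizes (e.g., MB)
loads = [0] * 4                                           # four compilers

for size in sorted(files, reverse=True):                  # longest-first rule
    loads[loads.index(min(loads))] += size                # give it to the least loaded
print("loads:", loads, "gap:", max(loads) - min(loads))
```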
Weighted vertex cover (WVC) is one of the most important combinatorial optimization problems. In this paper, we provide a new game-based optimization to achieve efficient and timely solutions to the WVC problem on weighted networks. We first model the WVC problem as a general game on weighted networks. Under the framework of the game, we newly define several cover states to describe the WVC problem. Moreover, we reveal the relationship among these cover states of the weighted network and the strict Nash equilibria (SNEs) of the game. Then, we propose a game-based asynchronous algorithm (GAA), which theoretically guarantees that the cover states of all vertices converge to an SNE in polynomial time. Subsequently, we improve the GAA by adding 2-hop and 3-hop adjustment mechanisms, yielding the improved game-based asynchronous algorithm (IGAA), which we prove obtains a better solution to the WVC problem than the GAA. Finally, numerical simulations demonstrate that the proposed IGAA obtains a better approximate solution in promising computation time compared with existing representative algorithms.
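A toy best-response dynamic in the spirit of the GAA is sketched below: every vertex starts in the cover, drops out (saving its weight) whenever all of its edges remain covered by neighbors, and would rejoin if an incident edge went uncovered. This is a simplification; the paper's exact payoffs and 2-/3-hop adjustments are not given in the abstract.

```python
# Asynchronous best responses for weighted vertex cover on a small toy graph.
# Starting from the all-in cover, the set stays a valid cover throughout.
import random

random.seed(5)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (3, 4), (2, 4)]
w = {v: random.randint(1, 9) for v in range(5)}                  # vertex weights
nbrs = {v: {u for e in edges for u in e if v in e and u != v} for v in range(5)}
in_cover = {v: True for v in range(5)}

changed = True
while changed:
    changed = False
    for v in sorted(w, key=w.get, reverse=True):     # heavy vertices try to leave first
        neighbor_out = any(not in_cover[u] for u in nbrs[v])
        if in_cover[v] and not neighbor_out:         # leaving keeps all edges covered
            in_cover[v] = False
            changed = True
        elif not in_cover[v] and neighbor_out:       # rejoin to cover an exposed edge
            in_cover[v] = True
            changed = True

cover = [v for v in in_cover if in_cover[v]]
print("cover:", cover, "weight:", sum(w[v] for v in cover))
```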
To reduce the comprehensive costs of the construction and operation of microgrids and to minimize the power fluctuations caused by randomness and intermittency in distributed generation, a double-layer optimizing configuration method for a hybrid energy storage microgrid based on improved grey wolf optimization (IGWO) is proposed. First, a microgrid system containing a wind-solar power station and an electric-hydrogen coupling hybrid energy storage system is built. Second, the minimum comprehensive cost of the construction and operation of the microgrid is taken as the outer objective function, and the minimum peak-to-valley difference of the microgrid’s daily output is taken as the inner objective function. By iterating through the outer and inner layers, the system improves operational stability while achieving an economic configuration. Then, using the energy self-smoothness of the microgrid as the evaluation index, a double-layer optimizing configuration method for the microgrid is constructed. Finally, to remedy the disadvantages of grey wolf optimization (GWO), such as slow convergence in the later period and easily falling into local optima, an IGWO with excellent global performance is proposed by introducing a nonlinear adjustment strategy for the convergence factor and a Cauchy mutation operator. After testing with typical test functions, the superiority of IGWO is verified, and IGWO is then used to solve the double-layer model. The case analysis shows that compared to GWO and particle swarm optimization (PSO), IGWO reduced the comprehensive cost by 15.6% and 18.8%, respectively. Therefore, the proposed double-layer optimization method for the capacity configuration of a microgrid with wind-solar-hybrid energy storage based on IGWO can effectively improve the independence and stability of the microgrid and significantly reduce the comprehensive cost.
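The two IGWO ingredients named above can be shown in isolation; the exact nonlinear schedule for the convergence factor is not given in the abstract, so the cosine form below is an assumption, as are the mutation scale and the toy vector.

```python
# Isolated sketches of IGWO's two modifications: a nonlinear convergence factor
# and Cauchy mutation of a candidate solution (forms and constants are assumed).
import numpy as np

rng = np.random.default_rng(6)

def convergence_factor(t, T):
    return 2 * np.cos(np.pi * t / (2 * T))    # nonlinear decay from 2 to 0 (assumed form)

def cauchy_mutate(x, scale=0.5):
    # Heavy-tailed jumps occasionally move far, helping escape local optima.
    return x + scale * rng.standard_cauchy(x.shape)

T = 100
print([round(convergence_factor(t, T), 2) for t in (0, 50, 100)])   # 2.0 -> 1.41 -> 0.0
best = np.array([0.2, -0.1, 0.4])
print("mutated candidate:", cauchy_mutate(best))
```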
The diversity of software and hardware forces programmers to spend a great deal of time optimizing their source code, which often requires specific treatment for each platform. The problem becomes critical on embedded devices, where computational and memory resources are strictly constrained. Compilers play an essential role in deploying source code on a target device through the backend. In this work, a novel backend for the Open Neural Network Compiler (ONNC) is proposed, which exploits machine learning to optimize code for the ARM Cortex-M device. The backend requires minimal changes to Open Neural Network Exchange (ONNX) models. Several novel optimization techniques are also incorporated in the backend, such as quantizing the ONNX model’s weights and automatically tuning the dimensions of operators in computations. The performance of the proposed framework is evaluated for two applications: handwritten digit recognition on the Modified National Institute of Standards and Technology (MNIST) dataset and model, and image classification on the Canadian Institute For Advanced Research 10 (CIFAR-10) dataset with the AlexNet-Light model. The system achieves 98.90% and 90.55% accuracy for handwritten digit recognition and image classification, respectively. Furthermore, the proposed architecture is significantly more lightweight than other state-of-the-art models in terms of both computation time and generated source code complexity. From the system perspective, this work provides a novel approach to deploying direct computations from available ONNX models to target devices by optimizing compilers while maintaining high accuracy.
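As an illustration of the weight-quantization step, the sketch below performs a generic 8-bit affine round trip on a random tensor; the backend's actual quantization scheme and calibration procedure are not described in the abstract.

```python
# Generic uint8 affine quantization: map floats to [0, 255] with a scale and
# zero point, then dequantize and check the approximation error.
import numpy as np

w = np.random.default_rng(7).normal(0, 0.3, (4, 4)).astype(np.float32)
scale = (w.max() - w.min()) / 255.0
zero_point = np.round(-w.min() / scale)

q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
w_hat = (q.astype(np.float32) - zero_point) * scale       # dequantized approximation
print("max abs error:", np.abs(w - w_hat).max())
```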
Cloud computing makes dynamic resource provisioning more accessible. Monitoring a functioning service is crucial, and changes are made when particular criteria are surpassed. This research explores the decentralized multi-cloud environment for allocating resources and ensuring the Quality of Service (QoS), estimating the required resources, and modifying allotted resources depending on workload and resource-level parallelism. Resource allocation is a complex challenge due to the versatile service providers and resource providers, whose engagement requires a cooperation strategy for a sustainable quality of service. The objective of a coherent and rational resource allocation is to attain the quality of service; this also includes identifying critical parameters to develop a resource allocation mechanism. A framework is proposed based on the specified parameters to formulate a resource allocation process in a decentralized multi-cloud environment. The three main parameters of the proposed framework are data accessibility, optimization, and collaboration. Using an optimization technique, these three segments are further divided into subsets for resource allocation and long-term service quality. The CloudSim simulator has been used to validate the suggested framework. Several experiments have been conducted to find the configurations best suited for enhancing collaboration and resource allocation to achieve sustained QoS. The results support the suggested structure for a decentralized multi-cloud environment and the parameters that have been determined.
In this current century, most industries are moving towards automation, where human intervention is dramatically reduced. This revolution leads to the fourth industrial revolution (Industry 4.0), which uses the Internet of Things (IoT) and wireless sensor networks (WSNs). With its associated applications, an IoT device is used to compute the WSN data received from devices and transfer it to remote locations for assistance. In general, in WSNs, gateways that are a long distance from the base station (BS) communicate through the gateways nearer to the BS. At the gateway closest to the BS, energy drains faster because of the heavy load, which leads to energy issues around the BS. Since the sensors are battery-operated, neither replacement nor recharging of the sensor node batteries is possible after deployment to their corresponding areas. In that situation, energy plays a vital role in sensor survival. To reduce the network energy consumption and increase the network lifetime, this paper proposes efficient cluster head selection using Improved Social Spider Optimization with a Rough Set (ISSRS) and routing path selection to reduce the network load using the Improved Grey Wolf Optimization (IGWO) approach: (i) using ISSRS, the initial clusters are formed with the local nodes and the cluster head is chosen; (ii) load balancing is performed through routing path selection using IGWO. The simulation results prove that the proposed optimization-based approaches efficiently reduce energy consumption through load balancing compared to existing systems in terms of energy efficiency, packet delivery ratio, network throughput, and packet loss percentage.
The integrated circuit (IC) manufacturing process is capital intensive and complex. The production process of a unit product (or die, as it is commonly called) takes several weeks. Semiconductor factories (fabs) continuously attempt to improve their productivity, as measured in output and cycle time (or mean flow time). The conflicting objective of producing maximum units at minimal production cycle time and at the highest quality, as measured by die yield, is discussed in this paper. The inter-related effects are characterized, and a model is proposed to address this multi-objective function. We then show that, with this model, die cost can be optimized for any given operating conditions of a fab. A numerical example is provided to illustrate the practicality of the model and the proposed optimization method.
Fusing satellite (remote sensing) images is an interesting topic in satellite image processing. The fused image is obtained by combining information from spectral and panchromatic images for sharpening. In this paper, a new algorithm based on the Artificial Bee Colony (ABC) algorithm with peak signal-to-noise ratio (PSNR) index optimization is proposed for fusing remote sensing images. First, the wavelet transform is used to split the input images into components over the high- and low-frequency domains. Then, two fusing rules are used to obtain the fused images. The first rule is that the high-frequency components are fused by using average values. The second rule is that the low-frequency components are fused by using a combining rule with a parameter, and this parameter is determined by the ABC algorithm through PSNR index optimization. The experimental results on different input images show that the proposed algorithm is better than some recent methods.
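The two fusing rules can be sketched directly, with the ABC search over the low-frequency weight abbreviated to a plain scan scored by PSNR; the arrays below stand in for real wavelet subbands of the spectral and panchromatic inputs.

```python
# Fusion rules from the abstract: average the high-frequency parts, mix the
# low-frequency parts with a weight alpha, and score candidates by PSNR.
import numpy as np

rng = np.random.default_rng(8)
ref = rng.random((32, 32))                       # stand-in reference image
lo_ms, lo_pan = ref * 0.9, ref * 1.1             # toy low-frequency subbands
hi_ms, hi_pan = rng.normal(0, .01, (2, 32, 32))  # toy high-frequency subbands

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)              # peak value 1.0 for these arrays

def fuse(alpha):
    return alpha * lo_ms + (1 - alpha) * lo_pan + (hi_ms + hi_pan) / 2

best = max(np.linspace(0, 1, 101), key=lambda a: psnr(ref, fuse(a)))
print("best alpha:", round(best, 2))             # a scan stands in for the ABC search
```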