High-dimensional and incomplete (HDI) matrices arise in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which causes accuracy loss in the resultant LFA model. To address this issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. SPSO is then incorporated into an LFA model to implement efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, enabling practical and accurate representation learning for HDI matrices.
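The momentum-style velocity update described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the coefficients, the `spso_minimize` name, and the exact form of the state-migration term are assumptions, not the paper's actual SPSO formulation.

```python
import random

def spso_minimize(f, dim=2, n=10, iters=200, seed=0):
    """Toy PSO whose velocity update adds a generalized-momentum term built
    from each particle's latest state migration (its last position shift).
    Coefficients and the exact rule are illustrative, not the paper's SPSO."""
    rng = random.Random(seed)
    w, c1, c2, mu = 0.5, 1.5, 1.5, 0.2  # mu weights the migration term (assumed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    prev = [p[:] for p in pos]          # previous positions
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            old = pos[i][:]
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d])
                             + mu * (pos[i][d] - prev[i][d]))  # state migration
                pos[i][d] += vel[i][d]
            prev[i] = old
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = spso_minimize(sphere)
```

Here the migration term simply reuses each particle's last position shift as extra momentum; the paper's generalized momentum method is more elaborate.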
Restructuring of the power system has brought significant transformation to the power sector by ending the dominance of single consolidated utilities. However, the collaboration of various manufacturing agencies, autonomous power producers, and buyers has created complex installation processes. The continually varying load, and the lack of well-coordinated measures among the varied participants, pose a serious hazard: any sudden load deviation gives rise to immediate changes in frequency and tie-line power errors. It is essential to keep every zone's frequency and tie-line power within permitted limits following fluctuations in load. This can be accomplished by implementing load frequency control under the bilateral case, stabilizing the power and frequency deviations within the interconnected power grid. Balancing the net deviation across multiple areas is possible by minimizing the imbalance of bilateral contracts with the help of a proportional-integral controller and advanced controllers such as the Harris Hawks Optimizer. We propose a Harris Hawks Optimizer-based controller model and validate it on a test bench. The experimental results show a delay time of 0.0029 s and a settling time of only 20.86 s. This model can also be leveraged to examine the decision boundaries of the bilateral case.
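The proportional-integral control loop that the abstract relies on can be illustrated on a toy one-area model. Everything below (the first-order plant, its constants, and the gains) is an assumed stand-in for the paper's test bench, which additionally uses a Harris Hawks Optimizer to tune the controller.

```python
def pi_lfc_step_response(kp=2.0, ki=1.0, dt=0.01, steps=5000):
    """Discrete PI controller driving the frequency deviation of a toy
    first-order area model back to zero after a sudden load change.
    Plant constants are illustrative, not the paper's test bench."""
    M, D = 10.0, 1.0        # inertia and damping of the area (assumed)
    d_load = 0.1            # sudden load deviation (p.u.)
    df, integ = 0.0, 0.0    # frequency deviation and PI integrator
    for _ in range(steps):
        err = -df                      # drive the deviation to zero
        integ += err * dt
        u = kp * err + ki * integ      # PI control effort
        # swing-equation-style update: M * df' = u - d_load - D * df
        df += dt * (u - d_load - D * df) / M
    return df

final_dev = pi_lfc_step_response()
```

The integral term is what removes the steady-state frequency error after the load step; a proportional-only controller would settle at a nonzero offset.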
A large-scale dynamically weighted directed network (DWDN) involving numerous entities and massive dynamic interactions is an essential data source in many big-data-related applications, such as a terminal interaction pattern analysis system (TIPAS). It can be represented by a high-dimensional and incomplete (HDI) tensor whose entries are mostly unknown. Yet such an HDI tensor contains a wealth of knowledge regarding various desired patterns, such as potential links in a DWDN. A latent factorization-of-tensors (LFT) model proves to be highly efficient in extracting such knowledge from an HDI tensor, which is commonly achieved via a stochastic gradient descent (SGD) solver. However, an SGD-based LFT model suffers from slow convergence that impairs its efficiency on large-scale DWDNs. To address this issue, this work proposes a proportional-integral-derivative (PID)-incorporated LFT model. It constructs an adjusted instance error based on the PID control principle, and then substitutes it into an SGD solver to improve the convergence rate. Empirical studies on two DWDNs generated by a real TIPAS show that, compared with state-of-the-art models, the proposed model achieves a significant efficiency gain as well as highly competitive prediction accuracy when handling the task of missing link prediction for a given DWDN.
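The PID-adjusted instance error can be sketched on a tiny matrix-factorization example (a matrix being the two-way special case of a tensor). Only the idea of substituting a PID-reshaped error into the SGD step comes from the abstract; the gains, the leaky integral, and the toy data are illustrative assumptions.

```python
import random

def pid_sgd_factorize(ratings, n_users, n_items, rank=2, epochs=500,
                      lr=0.05, kp=1.0, ki=0.05, kd=0.05, seed=0):
    """SGD latent-factor training where each instance error is reshaped
    PID-style before the gradient step. Gains and the leaky integral
    are illustrative guesses, not the paper's settings."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.3) for _ in range(rank)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.3) for _ in range(rank)] for _ in range(n_items)]
    acc = {k: 0.0 for k in ratings}   # leaky integral of past errors
    prev = {k: 0.0 for k in ratings}  # previous error (derivative term)
    for _ in range(epochs):
        for (u, i), r in ratings.items():
            e = r - sum(P[u][f] * Q[i][f] for f in range(rank))
            acc[(u, i)] = 0.9 * acc[(u, i)] + e   # leak avoids windup (a choice)
            adj = kp * e + ki * acc[(u, i)] + kd * (e - prev[(u, i)])
            prev[(u, i)] = e
            for f in range(rank):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * adj * qi          # gradient step driven by
                Q[i][f] += lr * adj * pu          # the adjusted error
    return P, Q

ratings = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (1, 2): 2, (2, 1): 1, (2, 2): 4}
P, Q = pid_sgd_factorize(ratings, 3, 3)
rmse = (sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
            for (u, i), r in ratings.items()) / len(ratings)) ** 0.5
```

The integral term keeps pushing on persistently mispredicted entries while the derivative term damps oscillation, which is the intuition behind the faster convergence claimed above.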
The recent development of channel technology promises to reduce the transaction verification time in blockchain operations. When transactions are transmitted through the channels created by nodes, the nodes need to cooperate with each other; if one party refuses to do so, the channel is unstable. A stable channel is thus required. Because nodes may show uncooperative behavior, they may have a negative impact on the stability of such channels. To address this issue, this work proposes a dynamic evolutionary game model based on node behavior. The model considers the cost of various defense strategies and the attack success ratio under each of them. Nodes can dynamically adjust their strategies according to the behavior of attackers to achieve an effective defense. The equilibrium stability of the proposed model can be achieved, and the model can be applied to general channel networks. It is compared with two state-of-the-art blockchain channels: the Lightning network and Spirit channels. The experimental results show that the proposed model can be used to improve a channel's stability and keep it in a good cooperative stable state. Thus, its use enables a blockchain to enjoy a higher transaction success ratio and lower transaction transmission delay than its two peers.
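The dynamic strategy adjustment above is the territory of textbook replicator dynamics, which can be sketched for the share of cooperating channel nodes. The payoff matrix below is invented for illustration; the paper's game additionally models defense costs and attack success ratios.

```python
def replicator_share(x0=0.5, a_cc=3.0, a_cd=0.5, a_dc=2.0, a_dd=1.0,
                     steps=500, dt=0.05):
    """Replicator dynamics for the share x of cooperating nodes.
    a_cc..a_dd are illustrative payoffs: a_cc is a cooperator's payoff
    when meeting another cooperator, a_cd when meeting a defector, etc."""
    x = x0
    for _ in range(steps):
        f_c = a_cc * x + a_cd * (1 - x)   # expected payoff, cooperators
        f_d = a_dc * x + a_dd * (1 - x)   # expected payoff, defectors
        avg = x * f_c + (1 - x) * f_d
        x += dt * x * (f_c - avg)         # replicator equation, Euler step
        x = min(max(x, 0.0), 1.0)
    return x

x_final = replicator_share()
```

With these payoffs cooperation pays whenever more than a third of nodes cooperate, so a half-cooperative population evolves to the fully cooperative stable state the abstract aims for.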
Dear editor, this letter presents a deep learning-based prediction model for the quality-of-service (QoS) of cloud services. Specifically, to improve the QoS prediction accuracy of cloud services, a new QoS prediction model is proposed, based on multi-staged multi-metric feature fusion with individual evaluations. The multi-metric features include global, local, and individual ones. Experimental results show that the proposed model provides more accurate QoS predictions for cloud services than several state-of-the-art methods.
A cyber-physical system (CPS) is a complex system that integrates sensing, computation, control, and networking into physical processes and objects over the Internet. It plays a key role in modern industry since it connects the physical and cyber worlds. To meet ever-changing industrial requirements, its structures and functions are constantly improved. Meanwhile, new security issues have arisen. A ubiquitous problem is that cyber attacks can cause significant damage to industrial systems, and this has gained increasing attention from researchers and practitioners. This paper presents a survey of state-of-the-art results on cyber attacks against cyber-physical systems. First, as typical system models are employed to study these systems, time-driven and event-driven systems are reviewed. Then, recent advances on three types of attacks, i.e., those on availability, integrity, and confidentiality, are discussed. In particular, detailed studies on availability and integrity attacks are introduced from the perspectives of attackers and defenders; namely, both attack and defense strategies are discussed based on different system models. Some challenges and open issues are indicated to guide future research and inspire further exploration of this increasingly important area.
Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It originates from an important industrial process, i.e., the wire rod and bar rolling process in steel production systems. Two objective functions, i.e., the number of late jobs and the total setup time, are minimized. A mixed integer linear program is established to describe the problem. To obtain its Pareto solutions, we present a memetic algorithm that integrates a population-based nondominated sorting genetic algorithm II and two single-solution-based improvement methods, i.e., an insertion-based local search and an iterated greedy algorithm. Computational results on extensive industrial data at the scale of a one-week schedule show that the proposed algorithm performs well on the concerned problem and outperforms its peers. Its high accuracy and efficiency imply great potential for solving industrial-size group scheduling problems.
An increasing number of enterprises have adopted cloud computing to manage their important business applications in distributed green cloud (DGC) systems for low response time and high cost-effectiveness in recent years. Task scheduling and resource allocation in DGCs have gained attention in both academia and industry, as DGCs are costly to manage because of their high energy consumption. Many factors in DGCs, e.g., power grid prices and the amount of green energy, exhibit strong spatial variations. The dramatic increase in arriving tasks makes it a big challenge to minimize the energy cost of a DGC provider in a market where all the above factors possess spatial variations. This work adopts a G/G/1 queuing system to analyze the performance of servers in DGCs. Based on it, a single-objective constrained optimization problem is formulated and solved by a proposed simulated-annealing-based bees algorithm (SBA). SBA minimizes the energy cost of a DGC provider by optimally allocating tasks of heterogeneous applications among multiple DGCs, and by specifying the running speed of each server and the number of powered-on servers in each GC, while strictly meeting the response time limits of tasks of all applications. Experimental results based on realistic data prove that SBA achieves lower energy cost than several benchmark scheduling methods.
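The simulated-annealing ingredient of SBA can be sketched for a single decision variable, the running speed of one server, with an invented energy-plus-penalty cost. The real SBA also allocates tasks across clouds and uses G/G/1 response-time analysis rather than the M/M/1-style delay assumed here.

```python
import math, random

def sa_speed(cost, lo=0.5, hi=4.0, iters=2000, t0=1.0, seed=0):
    """Simulated-annealing search for one server speed minimizing an
    energy-cost function; a bare-bones stand-in for the paper's SBA."""
    rng = random.Random(seed)
    s = rng.uniform(lo, hi)
    best = s
    t = t0
    for k in range(iters):
        cand = min(max(s + rng.gauss(0, 0.2), lo), hi)
        d = cost(cand) - cost(s)
        if d < 0 or rng.random() < math.exp(-d / t):  # accept worse moves early
            s = cand
        if cost(s) < cost(best):
            best = s
        t = t0 * 0.999 ** k   # geometric cooling schedule
    return best

# Assumed cost model: energy grows with speed^2, and a penalty enforces a
# response-time limit (M/M/1-style sojourn time 1/(speed - load) <= 1 s).
load = 1.0
def cost(speed):
    resp = 1.0 / (speed - load) if speed > load else float("inf")
    return speed ** 2 + (0.0 if resp <= 1.0 else 1e3)

s_star = sa_speed(cost)
```

Under this toy model the optimum is the slowest speed that still meets the delay limit (speed = 2.0), and the annealer settles near that boundary.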
A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
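The momentum half of MPSGD (idea b above) can be sketched as follows; the parallel data-splitting half is omitted, and the hyper-parameters and toy data are assumptions rather than the paper's settings.

```python
import random

def momentum_sgd_lf(ratings, n_users, n_items, rank=2, epochs=500,
                    lr=0.02, beta=0.5, seed=1):
    """Latent-factor training accelerated with classical momentum,
    echoing MPSGD's second idea; all constants are illustrative."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.3) for _ in range(rank)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.3) for _ in range(rank)] for _ in range(n_items)]
    vP = [[0.0] * rank for _ in range(n_users)]   # momentum buffers
    vQ = [[0.0] * rank for _ in range(n_items)]
    for _ in range(epochs):
        for (u, i), r in ratings.items():
            e = r - sum(P[u][f] * Q[i][f] for f in range(rank))
            for f in range(rank):
                gP, gQ = e * Q[i][f], e * P[u][f]      # ascent directions
                vP[u][f] = beta * vP[u][f] + lr * gP   # momentum update
                vQ[i][f] = beta * vQ[i][f] + lr * gQ
                P[u][f] += vP[u][f]
                Q[i][f] += vQ[i][f]
    return P, Q

ratings = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (1, 2): 2, (2, 1): 1, (2, 2): 4}
P, Q = momentum_sgd_lf(ratings, 3, 3)
rmse = (sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
            for (u, i), r in ratings.items()) / len(ratings)) ** 0.5
```

The momentum buffers accumulate past gradients, which smooths the stochastic updates and speeds convergence, the same effect the abstract credits for MPSGD's efficiency gain.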
In this paper, a deadlock prevention policy for robotic manufacturing cells with uncontrollable and unobservable events is proposed based on a Petri net formalism. First, a Petri net for the deadlock control of such systems is defined, and its admissible markings and first-met inadmissible markings (FIMs) are introduced. Next, place invariants are designed via an integer linear program (ILP) to allow all admissible markings and prohibit all FIMs, keeping the underlying system from reaching deadlocks, livelocks, bad markings, and the markings that may evolve into them through the firing of uncontrollable transitions. The ILP also ensures that the obtained deadlock-free supervisor does not observe any unobservable transition. In addition, the supervisor is guaranteed to be admissible and structurally minimal in terms of both control places and added arcs. The condition under which the supervisor is maximally permissive in behavior is given. Finally, experimental results comparing the proposed method with existing ones are given to show its effectiveness.
It is well recognized that obsolete or discarded products can cause serious environmental pollution if they are poorly handled. They contain reusable resources that can be recycled to generate desired economic benefits. Therefore, their efficient disassembly is highly important for green manufacturing and sustainable economic development. Typical examples are electronic appliances and electromechanical/mechanical products. This paper presents a survey on the state of the art of disassembly sequence planning. It can help new researchers or decision makers to search for the right solution for optimal disassembly planning. It reviews the disassembly theory and methods that are applied to the processing, repair, and maintenance of obsolete/discarded products. The paper discusses the recent progress of disassembly sequence planning in four major aspects: product disassembly modeling methods, mathematical programming methods, artificial intelligence methods, and uncertainty handling. This survey should stimulate readers to engage in the research, development, and application of disassembly and remanufacturing methodologies in the Industry 4.0 era.
Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework that reimplements one of the state-of-the-art algorithms, i.e., CoFex, using MapReduce. To do so, an in-depth analysis of CoFex's limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework improves computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Recently, multimodal multiobjective optimization problems (MMOPs) have received increasing attention. Their goal is to find a Pareto front and as many equivalent Pareto optimal solutions as possible. Although some evolutionary algorithms have been proposed for them, they mainly focus on the convergence rate in the decision space while ignoring solution diversity. In this paper, we propose a new multiobjective fireworks algorithm for MMOPs, which is able to balance exploitation and exploration in the decision space. We first extend a recent single-objective fireworks algorithm to handle MMOPs. Then we improve it by incorporating an adaptive strategy and special archive guidance, where a special archive is established for each firework, and two strategies (i.e., explosion and random strategies) are adaptively selected to update the positions of sparks generated by fireworks under the guidance of the special archives. Finally, we compare the proposed algorithm with eight state-of-the-art multimodal multiobjective algorithms on all 22 MMOPs from CEC 2019 and several imbalanced distance minimization problems. Experimental results show that the proposed algorithm is superior to the compared algorithms in solving them, and its runtime is less than that of its peers.
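A single-objective, stripped-down fireworks loop conveys the explosion-amplitude idea behind the algorithm above: better fireworks explode with smaller amplitude (exploitation), worse ones with larger amplitude (exploration). The archives, adaptive strategy selection, and multiobjective machinery of the actual algorithm are omitted, and all constants are invented.

```python
import random

def fireworks_minimize(f, dim=2, n_fw=5, sparks=10, iters=100, seed=0):
    """Toy single-objective fireworks search with fitness-scaled explosion
    amplitudes and elitist selection; not the paper's multiobjective method."""
    rng = random.Random(seed)
    fw = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fw)]
    for _ in range(iters):
        fits = [f(x) for x in fw]
        worst, bestf = max(fits), min(fits)
        pool = list(fw)
        for x, fit in zip(fw, fits):
            # better fireworks get smaller amplitude (more exploitation)
            amp = 0.1 + 2.0 * (fit - bestf) / (worst - bestf + 1e-12)
            for _ in range(sparks):
                pool.append([v + rng.uniform(-amp, amp) for v in x])
        pool.sort(key=f)
        fw = pool[:n_fw]   # elitist selection of the next fireworks
    return fw[0]

sphere = lambda x: sum(v * v for v in x)
best = fireworks_minimize(sphere)
```

The amplitude scaling is what lets one loop serve both search roles; the paper's adaptive strategy selection decides, per firework, when to switch between explosion and random sparks instead.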
Power grids include entities such as home-microgrids (H-MGs), consumers, and retailers, each of which has a unique and sometimes contradictory objective compared with the others while exchanging electricity and heat with other H-MGs. Therefore, a smart structure is needed to handle this new situation. This paper proposes a bilevel hierarchical structure for designing and planning distributed energy resources (DERs) and energy storage in H-MGs while considering demand response (DR). In general, the upper-level structure is based on H-MG generation competition to maximize individual and/or group income in the process of forming a coalition with other H-MGs. The upper-level problem is decomposed into a set of lower-level market clearing problems. Both electricity and heat markets are simultaneously modeled in this paper. DERs, including wind turbines (WTs), combined heat and power (CHP) systems, electric boilers (EBs), electric heat pumps (EHPs), and electric energy storage systems, participate in the electricity market. In addition, CHP systems, gas boilers (GBs), EBs, EHPs, solar thermal panels, and thermal energy storage systems participate in the heat market. Results show that the formation of a coalition among H-MGs in one grid not only has a significant effect on programming and regulating the power generated by the generation resources, but also impacts the demand consumption and behavior of consumers participating in the DR program, with a cheaper market clearing price.
Evolutionary computation is a rapidly evolving field, and the related algorithms have been successfully used to solve various real-world optimization problems. The past decade has also witnessed fast progress in applying them to a class of challenging optimization problems called high-dimensional expensive problems (HEPs), whose objective fitness evaluation requires expensive resources due to time-consuming physical experiments or computer simulations. Moreover, it is hard to traverse the huge search space within a reasonable resource budget as the problem dimension increases. Traditional evolutionary algorithms (EAs) tend to fail to solve HEPs competently because they need to conduct many such expensive evaluations before achieving satisfactory results. To reduce such evaluations, many novel surrogate-assisted algorithms have emerged in recent years to cope with HEPs. Yet a thorough review of the state of the art in this specific and important area is lacking. This paper provides a comprehensive survey of these evolutionary algorithms for HEPs. We start with a brief introduction to the research status and basic concepts of HEPs. Then, we present surrogate-assisted evolutionary algorithms for HEPs from four main aspects. We also give comparative results of some representative algorithms and application examples. Finally, we indicate open challenges and several promising directions to advance progress in evolutionary optimization algorithms for HEPs.
This study presents an autoencoder-embedded optimization (AEO) algorithm which involves a bi-population cooperative strategy for medium-scale expensive problems (MEPs). A huge search space can be compressed to an informative low-dimensional space by using an autoencoder as a dimension reduction tool. The search conducted in this low-dimensional space facilitates fast convergence of the population towards the optima. To strike a balance between exploration and exploitation during optimization, two phases of a tailored teaching-learning-based optimization (TTLBO) are adopted to coevolve solutions in a distributed fashion, wherein one is assisted by the autoencoder and the other undergoes a regular evolutionary process. Also, a dynamic size adjustment scheme according to problem dimension and evolutionary progress is proposed to promote information exchange between these two phases and accelerate convergence. The proposed algorithm is validated on benchmark functions with dimensions varying from 50 to 200. As indicated in our experiments, TTLBO is suitable for dealing with medium-scale problems and is thus incorporated into the AEO framework as a base optimizer. Compared with the state-of-the-art algorithms for MEPs, AEO shows extraordinarily high efficiency for these challenging problems, thus opening new directions for various evolutionary algorithms under AEO to tackle MEPs and greatly advancing the field of medium-scale computationally expensive optimization.
Ru/CeO_2 [RC] and Ru/CeO_2/ethylene glycol (EG) [RCE] nanoparticles were produced by performing a simple hydrothermal reaction at 200 ℃ for 24 h and found to have two distinct morphologies. The RC nanoparticles are phase-pure CeO_2; triangular, highly crystalline CeCO_3OH nanoparticles are formed from the solution containing EG under the same hydrothermal reaction conditions at pH 8.5. EG plays an important role in the formation of the triangular CeCO_3OH nanoparticles. The polycrystalline CeCO_3OH nanoparticles retain their triangular structure even after calcination at 600 ℃ in air but are transformed into a pure CeO_2 phase. The room-temperature photoluminescence of the RC and RCE nanoparticles and of RCE calcined at 600 ℃ [RCE-600] was also investigated. It was found that the highly crystalline triangular RCE-600 sample exhibits the highest photoluminescence intensity.
Mechanistic studies of palladium catalyst activation with halide additives are of great importance to palladium catalysis. In this work, XAFS spectroscopy and cyclic voltammetry were utilized to study the real structures of Pd(OAc)_2 in solution with different inorganic halogen additives. XAFS results demonstrate that Pd maintained the +2 oxidation state in the presence of excess LiCl, LiBr, ZnCl_2, ZnBr_2, and NaI. Fitting results of the EXAFS spectra revealed that Pd's first coordination shell is partially or completely replaced by halogen, depending on the halogen species. DFT calculations were conducted to identify the most reliable solvated structures. The combined experimental and computational studies elucidate the critical role of inorganic halide additives in Pd chemistry.
Funding: supported in part by the National Natural Science Foundation of China (62372385, 62272078, 62002337), the Chongqing Natural Science Foundation (CSTB2022NSCQ-MSX1486, CSTB2023NSCQ-LZX0069), and the Deanship of Scientific Research at King Abdulaziz University, Jeddah, Saudi Arabia (RG-12-135-43).
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under grant no. FP-221-43.
Funding: supported in part by the National Natural Science Foundation of China (61772493), the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2020-004B), the Natural Science Foundation of Chongqing of China (cstc2019jcyjjqX0013), the Pioneer Hundred Talents Program of Chinese Academy of Sciences, and the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia (FP-165-43).
Funding: supported by the National Natural Science Foundation of China (61872006), the Scientific Research Activities Foundation of Academic and Technical Leaders and Reserve Candidates in Anhui Province (2020H233), the Top-notch Discipline (Specialty) Talents Foundation in Colleges and Universities of Anhui Province (gxbj2020057), the Startup Foundation for Introducing Talent of NUIST, and Institutional Fund Projects from the Ministry of Education and the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), Jeddah, Saudi Arabia (IFPDP-216-22).
Funding: Supported by the National Natural Science Foundation of China (61872006), the Startup Foundation for New Talents of NUIST, Institutional Fund Projects (IFPNC-001-135-2020), and the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia under grant no. GCV19-37-1441.
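The kind of evolutionary dynamics involved can be illustrated with toy replicator dynamics for a two-strategy game between channel nodes (cooperate vs. defect). The payoffs and the defense penalty that makes cooperation dominant are invented for illustration, not taken from the paper's model.

```python
# Replicator dynamics: the cooperating share grows when cooperator fitness
# exceeds defector fitness; a defense penalty lowers defection payoffs.
R_CC, R_CD = 3.0, 0.0      # cooperator's payoff vs. a cooperator / defector
R_DC, R_DD = 2.0, -1.0     # defector's payoff after a defense penalty

x, dt = 0.5, 0.01          # initial share of cooperating nodes, step size
for _ in range(10000):
    f_c = x * R_CC + (1 - x) * R_CD        # cooperator fitness
    f_d = x * R_DC + (1 - x) * R_DD        # defector fitness
    x += dt * x * (1 - x) * (f_c - f_d)    # replicator update

final_share = x
```

With these assumed payoffs cooperation is always fitter, so the population converges to the cooperative stable state, mirroring the qualitative outcome the paper reports.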
Abstract: Dear editor, this letter presents a deep learning-based prediction model for the quality of service (QoS) of cloud services. Specifically, to improve the QoS prediction accuracy of cloud services, a new QoS prediction model is proposed, which is based on multi-staged multi-metric feature fusion with individual evaluations. The multi-metric features include global, local, and individual ones. Experimental results show that the proposed model provides more accurate QoS predictions for cloud services than several state-of-the-art methods.
Funding: Supported by Institutional Fund Projects (IFPNC-001-135-2020), with technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: A cyber-physical system (CPS) is a complex system that integrates sensing, computation, control, and networking into physical processes and objects over the Internet. It plays a key role in modern industry since it connects the physical and cyber worlds. To meet ever-changing industrial requirements, its structures and functions are constantly improved. Meanwhile, new security issues have arisen. A ubiquitous problem is that cyber attacks can cause significant damage to industrial systems, and they have thus gained increasing attention from researchers and practitioners. This paper presents a survey of state-of-the-art results on cyber attacks against cyber-physical systems. First, as typical system models are employed to study these systems, time-driven and event-driven systems are reviewed. Then, recent advances on three types of attacks, i.e., those on availability, integrity, and confidentiality, are discussed. In particular, detailed studies on availability and integrity attacks are introduced from the perspectives of attackers and defenders; that is, both attack and defense strategies are discussed based on different system models. Some challenges and open issues are indicated to guide future research and inspire further exploration of this increasingly important area.
Funding: This work was supported by a China Scholarship Council scholarship, the National Key Research and Development Program of China (2017YFB0306400), the National Natural Science Foundation of China (62073069), and the Deanship of Scientific Research (DSR) at King Abdulaziz University (RG-48-135-40).
Abstract: Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It originates from an important industrial process, i.e., the wire rod and bar rolling process in steel production systems. Two objective functions, i.e., the number of late jobs and the total setup time, are minimized. A mixed integer linear program is established to describe the problem. To obtain its Pareto solutions, we present a memetic algorithm that integrates a population-based nondominated sorting genetic algorithm II and two single-solution-based improvement methods, i.e., an insertion-based local search and an iterated greedy algorithm. The computational results on extensive industrial data at the scale of a one-week schedule show that the proposed algorithm performs well on the concerned problem and outperforms its peers. Its high accuracy and efficiency imply its great potential for solving industrial-size group scheduling problems.
Funding: Supported in part by the National Natural Science Foundation of China (61802015, 61703011), the Major Science and Technology Program for Water Pollution Control and Treatment of China (2018ZX07111005), the National Defense Pre-Research Foundation of China (41401020401, 41401050102), and the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah (D-422-135-1441).
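The nondominated sorting step inside NSGA-II-style algorithms can be sketched on toy (late-job count, total setup time) objective pairs; the candidate schedules below are invented for illustration.

```python
# Peel a set of bi-objective points into successive Pareto fronts.

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    # Simple O(n^2)-per-front version: repeatedly extract nondominated points.
    remaining, fronts = list(points), []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# (number of late jobs, total setup time) for five candidate schedules
schedules = [(2, 30), (1, 45), (3, 25), (2, 40), (4, 20)]
fronts = nondominated_fronts(schedules)
```

Here (2, 40) falls into the second front because (2, 30) is no worse on both objectives and strictly better on setup time.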
Abstract: An increasing number of enterprises have adopted cloud computing to manage their important business applications in distributed green cloud (DGC) systems for low response time and high cost-effectiveness in recent years. Task scheduling and resource allocation in DGCs have gained attention in both academia and industry, as DGCs are costly to manage because of their high energy consumption. Many factors in DGCs, e.g., power grid prices and the amount of green energy, exhibit strong spatial variations. The dramatic increase in arriving tasks makes it a big challenge to minimize the energy cost of a DGC provider in a market where the above factors all possess spatial variations. This work adopts a G/G/1 queuing system to analyze the performance of servers in DGCs. Based on it, a single-objective constrained optimization problem is formulated and solved by a proposed simulated-annealing-based bees algorithm (SBA). SBA can minimize the energy cost of a DGC provider by optimally allocating tasks of heterogeneous applications among multiple DGCs, and by specifying the running speed of each server and the number of powered-on servers in each GC while strictly meeting the response time limits of tasks of all applications. Realistic data-based experimental results prove that SBA achieves lower energy cost than several benchmark scheduling methods do.
Funding: Supported in part by the National Natural Science Foundation of China (61772493), the Deanship of Scientific Research (DSR) at King Abdulaziz University (RG-48-135-40), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), and the Natural Science Foundation of Chongqing (cstc2019jcyjjqX0013).
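The simulated-annealing ingredient of SBA can be shown in a bare-bones loop. The one-dimensional "energy cost" of a server speed and the cooling schedule are stand-ins for the paper's DGC cost model, not the authors' formulation.

```python
import math
import random

random.seed(1)

def energy_cost(speed):
    # Toy convex cost: too slow risks response-time limits, too fast wastes power.
    return (speed - 3.0) ** 2 + 1.0

state, temp, best = 10.0, 5.0, 10.0
for _ in range(5000):
    cand = state + random.uniform(-0.5, 0.5)          # neighborhood move
    delta = energy_cost(cand) - energy_cost(state)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
    if energy_cost(state) < energy_cost(best):
        best = state
    temp *= 0.999                                     # geometric cooling

best_cost = energy_cost(best)
```

The probabilistic acceptance of worse moves at high temperature is what lets the search escape local minima before the cooling schedule freezes it near the optimum.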
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Funding: Supported by the National Natural Science Foundation of China (61773206), the Natural Science Foundation of Jiangsu Province of China (BK20170131), the Jiangsu Overseas Visiting Scholar Program for University Prominent Young & Middle-aged Teachers and Presidents (2019-19), and the Deanship of Scientific Research (DSR) at King Abdulaziz University (RG-20-135-38).
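The momentum half of MPSGD can be sketched for a latent-factor model; the parallel data-splitting half is omitted, and the matrix, rank, and hyper-parameters below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# Momentum-accelerated SGD for a toy latent-factor (matrix factorization) model.
rng = np.random.default_rng(42)
R = np.array([[5.0, 1.0], [1.0, 5.0]])      # toy user-item rating matrix
P = rng.normal(scale=0.1, size=(2, 2))      # user latent factors
Q = rng.normal(scale=0.1, size=(2, 2))      # item latent factors
vP, vQ = np.zeros_like(P), np.zeros_like(Q) # momentum (velocity) buffers
LR, BETA = 0.01, 0.5                        # learning rate, momentum factor

for _ in range(2000):
    for u in range(2):
        for i in range(2):
            e = R[u, i] - P[u] @ Q[i]
            vP[u] = BETA * vP[u] + LR * e * Q[i]  # accumulate velocity
            vQ[i] = BETA * vQ[i] + LR * e * P[u]
            P[u] = P[u] + vP[u]                   # step with momentum
            Q[i] = Q[i] + vQ[i]

momentum_rmse = float(np.sqrt(np.mean((R - P @ Q.T) ** 2)))
```

The velocity buffers smooth successive gradients in a common direction, which is the convergence-acceleration effect the abstract refers to.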
Abstract: In this paper, a deadlock prevention policy for robotic manufacturing cells with uncontrollable and unobservable events is proposed based on a Petri net formalism. First, a Petri net for the deadlock control of such systems is defined, and its admissible markings and first-met inadmissible markings (FIMs) are introduced. Next, place invariants are designed via an integer linear program (ILP) to preserve all admissible markings and prohibit all FIMs, keeping the underlying system from reaching deadlocks, livelocks, bad markings, and the markings that may evolve into them by firing uncontrollable transitions. The ILP also ensures that the obtained deadlock-free supervisor does not observe any unobservable transition. In addition, the supervisor is guaranteed to be admissible and structurally minimal in terms of both control places and added arcs. The condition under which the supervisor is maximally permissive in behavior is given. Finally, experimental results comparing the proposed method with existing ones are given to show its effectiveness.
Funding: Supported by the Research Foundation of China (L2019027), the Liaoning Revitalization Talents Program (XLYC1907166), and the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah (KEP-2-135-39).
Abstract: It is well recognized that obsolete or discarded products can cause serious environmental pollution if they are poorly handled. They contain reusable resources that can be recycled and used to generate desired economic benefits. Therefore, their efficient disassembly is highly important in green manufacturing and sustainable economic development. Typical examples are electronic appliances and electromechanical/mechanical products. This paper presents a survey on the state of the art of disassembly sequence planning. It can help new researchers or decision makers to search for the right solution for optimal disassembly planning. It reviews the disassembly theory and methods that are applied to the processing, repair, and maintenance of obsolete/discarded products. This paper discusses the recent progress of disassembly sequence planning in four major aspects: product disassembly modeling methods, mathematical programming methods, artificial intelligence methods, and uncertainty handling. This survey should stimulate readers to engage in the research, development, and application of disassembly and remanufacturing methodologies in the Industry 4.0 era.
Funding: This work was supported in part by the National Natural Science Foundation of China (61772493), the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2020-004B), the Natural Science Foundation of Chongqing, China (cstc2019jcyjjqX0013), the Chongqing Research Program of Technology Innovation and Application (cstc2019jscx-fxydX0024, cstc2019jscx-fxydX0027, cstc2018jszx-cyzdX0041), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), the Pioneer Hundred Talents Program of the Chinese Academy of Sciences, and the Deanship of Scientific Research (DSR) at King Abdulaziz University (G-21-135-38).
Abstract: Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework by reimplementing one of the state-of-the-art algorithms, i.e., CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework can improve computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Funding: Supported in part by the National Natural Science Foundation of China (62071230, 62061146002), the Natural Science Foundation of Jiangsu Province (BK20211567), and the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia (FP-147-43).
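The memory-saving idea behind a tree-based structure for sequence data can be illustrated with a tiny prefix tree (trie): shared prefixes are stored only once. This generic sketch does not reproduce CoFex's actual structure, and the 3-mers below are invented.

```python
# A dict-based trie storing short protein subsequences compactly.

def insert(trie, seq):
    node = trie
    for ch in seq:
        node = node.setdefault(ch, {})   # shared prefixes reuse nodes
    node["$"] = True                     # end-of-sequence marker

def contains(trie, seq):
    node = trie
    for ch in seq:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = {}
for kmer in ["MKV", "MKL", "MTA"]:       # toy subsequences of a protein
    insert(trie, kmer)

found, missing = contains(trie, "MKL"), contains(trie, "MKA")
```

All three 3-mers share the root node for "M", and "MKV"/"MKL" additionally share the "K" node, so storage grows sublinearly with redundant sequence data.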
Abstract: Recently, multimodal multiobjective optimization problems (MMOPs) have received increasing attention. Their goal is to find a Pareto front and as many equivalent Pareto optimal solutions as possible. Although some evolutionary algorithms for them have been proposed, they mainly focus on the convergence rate in the decision space while ignoring solution diversity. In this paper, we propose a new multiobjective fireworks algorithm for MMOPs, which is able to balance exploitation and exploration in the decision space. We first extend a recent single-objective fireworks algorithm to handle MMOPs. Then we improve it by incorporating an adaptive strategy and special-archive guidance, where a special archive is established for each firework, and two strategies (i.e., explosion and random strategies) are adaptively selected to update the positions of sparks generated by fireworks under the guidance of the special archives. Finally, we compare the proposed algorithm with eight state-of-the-art multimodal multiobjective algorithms on all 22 MMOPs from CEC 2019 and several imbalanced distance minimization problems. Experimental results show that the proposed algorithm is superior to the compared algorithms in solving them, and its runtime is less than its peers'.
Funding: Funded partially by the National Science Foundation (NSF) (No. 1917308) and the British Council (No. IND/CONT/GA/18-19/22).
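The explosion operator central to fireworks algorithms can be shown in a toy loop: sparks are sampled around a firework within an explosion amplitude and the best location is kept. The sphere objective, fixed amplitude, and spark count are illustrative; the paper's adaptive strategy and special archives are not reproduced here.

```python
import random

random.seed(0)

def sphere(x):
    return sum(v * v for v in x)

def explode(firework, amplitude, n_sparks):
    # Generate sparks uniformly within the explosion amplitude per dimension.
    return [[v + random.uniform(-amplitude, amplitude) for v in firework]
            for _ in range(n_sparks)]

best = [2.0, -2.0]
for _ in range(200):
    candidates = explode(best, amplitude=0.5, n_sparks=10) + [best]
    best = min(candidates, key=sphere)   # elitist selection

best_fitness = sphere(best)
```

In the full algorithm, amplitude and spark count vary with firework quality (better fireworks search more finely), which is the adaptivity this sketch leaves out.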
Abstract: Power grids include entities such as home microgrids (H-MGs), consumers, and retailers, each of which has a unique and sometimes contradictory objective compared with the others while exchanging electricity and heat with other H-MGs. Therefore, a smart structure is needed to handle this new situation. This paper proposes a bilevel hierarchical structure for designing and planning distributed energy resources (DERs) and energy storage in H-MGs by considering demand response (DR). In general, the upper-level structure is based on H-MG generation competition to maximize their individual and/or group income in the process of forming a coalition with other H-MGs. The upper-level problem is decomposed into a set of lower-level market clearing problems. Both electricity and heat markets are simultaneously modeled in this paper. DERs, including wind turbines (WTs), combined heat and power (CHP) systems, electric boilers (EBs), electric heat pumps (EHPs), and electric energy storage systems, participate in the electricity market. In addition, CHP systems, gas boilers (GBs), EBs, EHPs, solar thermal panels, and thermal energy storage systems participate in the heat market. Results show that the formation of a coalition among H-MGs in one grid not only has a significant effect on scheduling and regulating the power generated by the generation resources, but also impacts the consumption and behavior of consumers participating in the DR program through a cheaper market clearing price.
Funding: Supported in part by the Natural Science Foundation of Jiangsu Province (BK20230923, BK20221067), the National Natural Science Foundation of China (62206113, 62203093), Institutional Fund Projects provided by the Ministry of Education and King Abdulaziz University (IFPIP-1532-135-1443), and FDCT (Fundo para o Desenvolvimento das Ciencias e da Tecnologia) (0047/2021/A1).
Abstract: Evolutionary computation is a rapidly evolving field, and the related algorithms have been successfully used to solve various real-world optimization problems. The past decade has also witnessed their fast progress in solving a class of challenging optimization problems called high-dimensional expensive problems (HEPs), whose objective fitness evaluation requires expensive resources due to time-consuming physical experiments or computer simulations. Moreover, it is hard to traverse the huge search space within reasonable resources as the problem dimension increases. Traditional evolutionary algorithms (EAs) tend to fail to solve HEPs competently because they need to conduct many such expensive evaluations before achieving satisfactory results. To reduce such evaluations, many novel surrogate-assisted algorithms have emerged to cope with HEPs in recent years, yet a thorough review of the state of the art in this specific and important area is lacking. This paper provides a comprehensive survey of these evolutionary algorithms for HEPs. We start with a brief introduction to the research status and the basic concepts of HEPs. Then, we present surrogate-assisted evolutionary algorithms for HEPs from four main aspects. We also give comparative results of some representative algorithms and application examples. Finally, we indicate open challenges and several promising directions to advance the progress of evolutionary optimization algorithms for HEPs.
Funding: Supported in part by the National Natural Science Foundation of China (72171172, 62088101), in part by the Shanghai Science and Technology Major Special Project of the Shanghai Development and Reform Commission (2021SHZDZX0100), in part by the Shanghai Commission of Science and Technology (19511132100, 19511132101), in part by the China Scholarship Council, and in part by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia (FP-146-43).
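The surrogate-assisted loop common to the surveyed algorithms can be sketched minimally: a cheap model screens many candidates so that only the most promising one receives a (pretend-)expensive true evaluation per iteration. The 1-D objective, quadratic surrogate, and budgets are toy assumptions, far simpler than the high-dimensional models the survey covers.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive(x):
    # Stand-in for a time-consuming simulation or physical experiment.
    return (x - 1.7) ** 2

X = list(rng.uniform(-5, 5, 6))          # small initial design
y = [expensive(x) for x in X]

for _ in range(15):
    coef = np.polyfit(X, y, 2)           # fit a cheap quadratic surrogate
    cand = rng.uniform(-5, 5, 50)        # large pool of cheap candidates
    pick = cand[np.argmin(np.polyval(coef, cand))]   # surrogate screening
    X.append(float(pick))
    y.append(expensive(float(pick)))     # single true evaluation per iteration

best_x = X[int(np.argmin(y))]
```

The key economy is visible in the budget: 50 candidates are screened per iteration, but only one expensive evaluation is spent.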
Abstract: This study presents an autoencoder-embedded optimization (AEO) algorithm which involves a bi-population cooperative strategy for medium-scale expensive problems (MEPs). A huge search space can be compressed to an informative low-dimensional space by using an autoencoder as a dimension-reduction tool. The search conducted in this low-dimensional space facilitates fast convergence of the population towards the optima. To strike a balance between exploration and exploitation during optimization, two phases of a tailored teaching-learning-based optimization (TTLBO) are adopted to coevolve solutions in a distributed fashion, wherein one is assisted by an autoencoder and the other undergoes a regular evolutionary process. Also, a dynamic size-adjustment scheme according to problem dimension and evolutionary progress is proposed to promote information exchange between these two phases and accelerate the evolutionary convergence speed. The proposed algorithm is validated by testing benchmark functions with dimensions varying from 50 to 200. As indicated in our experiments, TTLBO is suitable for dealing with medium-scale problems and is thus incorporated into the AEO framework as a base optimizer. Compared with the state-of-the-art algorithms for MEPs, AEO shows extraordinarily high efficiency for these challenging problems, thus opening new directions for various evolutionary algorithms under AEO to tackle MEPs and greatly advancing the field of medium-scale computationally expensive optimization.
Funding: Supported by King Abdulaziz City for Science and Technology (KACST) through the Science & Technology Unit at King Fahd University of Petroleum & Minerals (KFUPM), project No. AT-32-21.
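AEO's dimension-reduction idea can be sketched with a linear stand-in for the autoencoder (PCA via SVD): promising solutions are encoded into a low-dimensional code space, perturbed there, and decoded back. The 50-D population with a hidden 2-D structure is a constructed toy example, not the paper's neural autoencoder.

```python
import numpy as np

rng = np.random.default_rng(7)
DIM, CODE = 50, 2

basis = rng.normal(size=(CODE, DIM))         # hidden low-dim structure
pop = rng.normal(size=(100, CODE)) @ basis   # 100 candidate solutions

mean = pop.mean(axis=0)
_, _, vt = np.linalg.svd(pop - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ vt[:CODE].T          # 50-D solution -> 2-D code

def decode(z):
    return z @ vt[:CODE] + mean              # 2-D code -> 50-D solution

x = pop[0]
z = encode(x) + rng.normal(scale=0.01, size=CODE)   # search in code space
x_new = decode(z)
recon_err = float(np.linalg.norm(decode(encode(x)) - x))
```

Because this toy population truly lives in a 2-D subspace, encoding and decoding is nearly lossless; small moves in the code space correspond to meaningful moves in the full 50-D search space.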
Abstract: Ru/CeO_2 [RC] and Ru/CeO_2/ethylene glycol (EG) [RCE] nanoparticles were produced by performing a simple hydrothermal reaction at 200℃ for 24 h and were found to have two distinct morphologies. The RC nanoparticles are phase-pure CeO_2; triangular, highly crystalline CeCO_3OH nanoparticles are formed from the solution containing EG under the same hydrothermal reaction conditions at pH 8.5. EG plays an important role in the formation of the triangular CeCO_3OH nanoparticles. The polycrystalline CeCO_3OH nanoparticles retain their triangular structure even after calcination at 600℃ in air but are transformed into a pure CeO_2 phase. The room-temperature photoluminescence of the RC and RCE nanoparticles and of RCE calcined at 600℃ [RCE-600] was also investigated. It was found that the highly crystalline triangular RCE-600 sample exhibits the highest photoluminescence intensity.
Funding: Supported by the National Natural Science Foundation of China (No. 22031008) and the Science Foundation of Wuhan (No. 2020010601012192).
Abstract: Mechanistic studies of palladium catalyst activation with halide additives are of great importance to palladium catalysis. In this work, XAFS spectroscopy and cyclic voltammetry were utilized to study the real structures of Pd(OAc)_2 in solution with different inorganic halogen additives. XAFS results demonstrate that Pd maintains the +2 oxidation state in the presence of excess LiCl, LiBr, ZnCl_(2), ZnBr_(2), and NaI. Fitting results of the EXAFS spectra reveal that Pd's first coordination shell is replaced by halogen partially or completely, depending on the halogen species. DFT calculations were conducted to identify the most reliable solvated structures. The combined experimental and computational studies elucidate the critical role of inorganic halide additives in Pd chemistry.