With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to realize flexible scheduling of computing power. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of the issues of computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated, and applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, the monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), in contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms the benchmarks in terms of average synchronization latency.
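The abstract does not spell out ISAPA's internals, so the following is only a generic simulated-annealing sketch of the placement idea: digital twins are assigned to hosts, a candidate move reassigns one twin, and worse moves are occasionally accepted with a temperature-controlled probability. The `latency` cost function and all parameter values are illustrative assumptions, not the paper's.

```python
import math
import random

def sa_placement(num_twins, num_hosts, latency, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated-annealing sketch for DT placement.

    latency(placement) -> average synchronization latency of a placement,
    where placement[i] is the host chosen for digital twin i.
    """
    rng = random.Random(seed)
    cur = [rng.randrange(num_hosts) for _ in range(num_twins)]
    cur_cost = latency(cur)
    best, best_cost = cur[:], cur_cost
    t = t0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(num_twins)] = rng.randrange(num_hosts)  # move one twin
        cost = latency(cand)
        # accept improvements always; accept worse moves with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        t *= cooling
    return best, best_cost
```

With a toy cost where each host has a fixed delay, the search settles on the lowest-delay host for every twin; the actual ISAPA additionally handles the DT error constraint, which this sketch omits.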
In the era of the Internet of Things (IoT), mobile edge computing (MEC) and wireless power transfer (WPT) provide a prominent solution for computation-intensive applications to enhance computation capability and achieve a sustainable energy supply. A wireless-powered mobile edge computing (WPMEC) system consisting of a hybrid access point (HAP) combined with MEC servers and many users is considered in this paper. In particular, a novel multi-user cooperation scheme based on orthogonal frequency division multiple access (OFDMA) is provided to improve computation performance, where, with the aid of a helper, users can split their computation tasks into parts for local computing, offloading to the corresponding helper, and offloading to the HAP for remote execution. Specifically, we aim at maximizing the weighted sum computation rate (WSCR) by jointly optimizing time assignment, computation-task allocation, and transmission power while maintaining energy neutrality. We transform the original non-convex optimization problem into a convex one and then obtain a semi-closed-form expression of the optimal solution using convex optimization techniques. Simulation results demonstrate that the proposed multi-user cooperation-assisted WPMEC scheme greatly improves the WSCR of all users compared with existing schemes. In addition, the OFDMA protocol increases fairness and decreases delay among the users compared with the TDMA protocol.
A novel dynamic software allocation algorithm suitable for pervasive computing environments is proposed to minimize the power consumption of mobile devices. Considering the power cost incurred by the computation, communication, and migration of software components, a power consumption model of component assignments between a mobile device and a server is set up. The mobility of components and the mobility relationships between components are also taken into account in software allocation. By using network flow theory, the optimization problem of power conservation is transformed into the optimal bipartition problem of a flow network, which can be partitioned by the max-flow min-cut algorithm. Simulation results show that the proposed algorithm can save significantly more energy than existing algorithms.
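The max-flow min-cut bipartition idea above follows the classic two-processor assignment construction: each component is a node, edges to the source (device) and sink (server) carry the execution cost on the *opposite* side, and communication costs become inter-component edges, so a minimum s-t cut equals the minimum total power. The sketch below is a minimal illustration of that construction with a hand-rolled Edmonds-Karp max-flow; the cost numbers and function names are invented for the example, not taken from the paper.

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow; returns (flow value, source-side node set of a min cut).
    cap: dict mapping u -> {v: capacity}; mutated into the residual graph."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q:                       # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, set(parent)   # residual-reachable nodes = source side of min cut
        aug, v = float("inf"), t       # bottleneck capacity along the path
        while parent[v] is not None:
            u = parent[v]
            aug = min(aug, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:   # push flow, update residual edges
            u = parent[v]
            cap[u][v] -= aug
            cap.setdefault(v, {})
            cap[v][u] = cap[v].get(u, 0) + aug
            v = u
        flow += aug

def assign_components(device_cost, server_cost, comm):
    """Stone-style assignment: a min cut splits components between device 'D' and server 'S'."""
    cap = {"D": {}, "S": {}}
    n = len(device_cost)
    for i in range(n):
        cap["D"][i] = server_cost[i]                  # cut iff i runs on the server
        cap.setdefault(i, {})["S"] = device_cost[i]   # cut iff i runs on the device
    for (i, j), c in comm.items():                    # cut iff i and j are separated
        cap[i][j] = cap[i].get(j, 0) + c
        cap[j][i] = cap[j].get(i, 0) + c
    total, src_side = max_flow_min_cut(cap, "D", "S")
    return total, [i for i in range(n) if i in src_side]
```

For two components where each side is cheap for one of them and separating them costs 2, the cut correctly keeps one on each side at total cost 4.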
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The analysis shows that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration needs of computing power and network in various scenarios, such as user-oriented services, government and enterprise services, and open computing power.
The construction of new power systems presents higher requirements for Power Internet of Things (PIoT) technology. The“source-grid-load-storage”architecture of a new power system requires PIoT to have a stronger ability to fuse multi-source heterogeneous data. Native graph databases have great advantages in dealing with multi-source heterogeneous data, which makes them suitable for an increasing number of analytical computing tasks. However, only a few existing graph database products natively support matrix-operation-related interfaces or functions, resulting in low efficiency when handling the matrix calculations commonly encountered in power grids. In this paper, the matrix computation process is expressed by a strategy called graph description, which relies on the natural connection between a matrix and the structure of a graph. Based on that, we implement matrix operations on a graph database, including matrix multiplication, matrix decomposition, etc. Specifically, only the nodes relevant to the computation and their neighbors are involved in the process, which prunes the influence of zero elements in the matrix and avoids useless iterations compared with conventional matrix computation. Based on the graph description, a series of power grid computations can be implemented on a graph database, which reduces redundant data import and export operations while leveraging the parallel computing capability of the graph database. This improves the efficiency of PIoT when handling multi-source heterogeneous data. A comprehensive experimental study over two power system datasets of different scales compares the proposed method with Python and MATLAB baselines. The results reveal the superior performance of our proposed method in both power flow and N-1 contingency computations.
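The zero-pruning effect of the graph description can be seen in miniature with a sparse matrix-vector product: if the matrix is stored as a graph adjacency structure, only existing edges (nonzero entries) are ever visited. The paper's graph-database implementation is not shown in the abstract; this plain-Python adjacency-dict version is only meant to illustrate the idea.

```python
def spmv(adj, x):
    """Sparse matrix-vector product y = A x, with A stored as a graph:
    adj[i] maps neighbor j -> weight A[i][j]. Zero entries are simply absent,
    so only existing edges are traversed (the pruning the graph description exploits)."""
    return {i: sum(w * x.get(j, 0.0) for j, w in nbrs.items())
            for i, nbrs in adj.items()}
```

A power-flow-style iteration over an admittance matrix reduces to repeated calls of this kind, which is why expressing it as graph traversal avoids touching the (mostly zero) dense entries.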
The unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) architecture is expected to be a powerful technique to facilitate 5G-and-beyond ubiquitous wireless connectivity and diverse vertical applications and services, anytime and anywhere. Wireless power transfer (WPT) is another promising technology to prolong the operation time of low-power wireless devices in the era of the Internet of Things (IoT). However, the integration of WPT with UAV-enabled MEC systems is far from well studied, especially in dynamic environments. To tackle this issue, this paper investigates stochastic computation offloading and trajectory scheduling for a UAV-enabled wireless powered MEC system, in which a UAV offers both RF wireless power transmission and computation services for IoT devices. Considering stochastic task arrivals and random channel conditions, a long-term average energy-efficiency (EE) minimization problem is formulated. Due to the non-convexity and time-domain coupling of the variables in the formulated problem, a low-complexity online computation offloading and trajectory scheduling algorithm (OCOTSA) is proposed by exploiting Lyapunov optimization. Simulation results verify that there exists a balance between EE and service delay, and demonstrate that the system EE performance obtained by the proposed scheme outperforms other benchmark schemes.
With the development of the smart grid, the electric power supervisory control and data acquisition (SCADA) system is limited by traditional IT infrastructure, leading to low resource utilization and poor scalability. Information islands form due to poor system interoperability, the development of innovative applications is limited, the launch period of new business is long, management costs and risks increase, and equipment utilization declines. To address these issues, a professional private cloud solution is introduced to integrate the electric power SCADA system, and an experimental study of its applicability, reliability, security, and real-time performance is conducted. The experimental results show that the professional private cloud solution is technically and commercially feasible, meeting the requirements of the electric power SCADA system.
Benefiting from wireless power transfer (WPT) and mobile-edge computing (MEC), wireless powered MEC systems have attracted widespread attention. In this work, we design an online offloading scheme based on deep reinforcement learning that maximizes the computation rate and minimizes the energy consumption of all wireless devices (WDs). Extensive results validate that the proposed scheme achieves a better tradeoff between energy consumption and computation delay.
Traditional digital processing approaches are based on semiconductor transistors, which suffer from high power consumption that worsens with technology node scaling. To solve this problem definitively, a number of emerging non-volatile nanodevices are under intense investigation. Meanwhile, novel computing circuits are being invented to exploit the full potential of these nanodevices. The combination of non-volatile nanodevices with suitable computing paradigms has many merits compared with structures based on complementary metal-oxide-semiconductor (CMOS) transistor technology, such as zero standby power, ultra-high density, non-volatility, and acceptable access speed. In this paper, we overview and compare the computing paradigms based on the emerging nanodevices towards ultra-low dissipation.
Ubiquitous computing must incorporate a certain level of security. For severely resource-constrained applications, energy-efficient and small-size implementation of cryptographic algorithms is a critical problem. Hardware implementations of the Advanced Encryption Standard (AES) for authentication and encryption are presented. An energy consumption variable is derived to evaluate low-power design strategies for battery-powered devices. It shows that compact AES architectures fail to optimize AES hardware energy, whereas reducing invalid switching activities and implementing power-optimized sub-modules are the reasonable methods. Implementations of different substitution box (S-Box) structures are presented with a 0.25 μm 1.8 V CMOS (complementary metal oxide semiconductor) standard cell library, and the trade-offs among area, security, and power are explored. The experimental results show that Galois-field composite S-Boxes have smaller size and the highest security but consume considerably more power, whereas decoder-switch-encoder S-Boxes have the best power characteristics with disadvantages in size and security. Combining these two types of S-Boxes instead of using homogeneous S-Boxes in an AES circuit leads to optimal schemes. The technique of latch-dividing the data path is analyzed, and quantitative simulation results demonstrate that this approach diminishes glitches effectively at a very low hardware cost.
In order to lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented that abstracts the task scheduling problem into a virtual machine deployment issue via virtualization technology. Second, the future workload of the system is predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which combines the current state and resource distribution of the system to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation keep up with demand, improving the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
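Cubic exponential smoothing here presumably refers to Brown's triple exponential smoothing, which smooths the series three times and extrapolates with a quadratic in the forecast horizon. The paper's conservative-control modification is not detailed in the abstract, so this sketch shows only the standard Brown forecast; the initialization choice and `alpha` value are assumptions.

```python
def cubic_smoothing_forecast(series, alpha=0.5, m=1):
    """Brown's cubic (triple) exponential smoothing: returns the m-step-ahead
    forecast a + b*m + c*m^2 after one pass over the series."""
    s1 = s2 = s3 = series[0]                     # a common initialization choice
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1        # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2       # second smoothing
        s3 = alpha * s2 + (1 - alpha) * s3       # third smoothing
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (2 * (1 - alpha) ** 2)) * (s1 - 2 * s2 + s3)
    return a + b * m + c * m * m
```

On a constant workload the forecast reproduces the constant, and on a steadily growing workload it extrapolates the trend, which is what lets the pre-allocation stage power hosts on ahead of demand.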
With the expansion of cloud computing, optimizing the energy efficiency and cost of the cloud paradigm is considered significantly important, since it directly affects providers’ revenue and customers’ payment. Thus, providing prediction information about cloud services can be very beneficial for service providers, as they need to carefully predict their business growth and efficiently manage their resources. To optimize the use of cloud services, predictive mechanisms can be applied to improve resource utilization and reduce energy-related costs. However, such mechanisms need to be provided with energy awareness not only at the level of the Physical Machine (PM) but also at the level of the Virtual Machine (VM) in order to make improved cost decisions. Therefore, this paper presents a comprehensive literature review on the subject of energy-related cost issues and prediction models in cloud computing environments, along with an overall discussion of the closely related works. The outcomes of this research can be used and incorporated by predictive resource management techniques to make improved cost decisions assisted by energy awareness and to leverage cloud resources efficiently.
The paper presents a computer code system, 'SRDAAR-QNPP', for real-time dose assessment of an accidental release at the Qinshan Nuclear Power Plant. It includes three parts: the real-time data acquisition system, the assessment computer, and the assessment operating code system. In SRDAAR-QNPP, the wind fields of the surface and the lower levels are determined hourly using a mass-consistent three-dimensional diagnosis model with a topography-following coordinate system. A Lagrangian puff model under changing meteorological conditions is adopted for atmospheric dispersion; corrections for dry and wet deposition, physical decay, partial plume penetration of the top inversion, and the deviation of the plume axis caused by complex terrain have been taken into account. The calculation domain consists of three square grid areas with side lengths of 10 km, 40 km, and 160 km and grid intervals of 0.5 km, 2.0 km, and 8.0 km, respectively. Three exposure pathways are considered: external exposure from the immersion cloud and passing puff, internal exposure from inhalation, and external exposure from contaminated ground. The system is able to provide concentration and dose distributions within 10 minutes after the data have been input.
Since the rise of cloud computing, web service applications have expanded rapidly. However, the data centers of cloud computing also raise the problem of power consumption, and their resources are usually not used effectively. Decreasing power consumption and enhancing resource utilization have become main issues in cloud computing environments. In this paper, we propose a method called MBFDP (modified best fit decreasing packing) to decrease the power consumption and enhance the resource utilization of cloud computing servers. Experimental results show that the proposed solution can reduce power consumption effectively and enhance the utilization of server resources.
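MBFDP builds on best-fit-decreasing bin packing: VMs are sorted by demand in descending order and each is placed on the active server with the least remaining capacity that still fits, so fewer servers stay powered on. The abstract does not describe MBFDP's modification, so this is only the plain best-fit-decreasing baseline with a single capacity dimension assumed.

```python
def best_fit_decreasing(vm_demands, capacity):
    """Best-fit-decreasing VM placement. Returns a list of servers, each a list
    of the VM demands packed onto it; unused servers can be powered off."""
    remaining = []   # remaining capacity per active server
    placement = []   # VM demands assigned to each server
    for d in sorted(vm_demands, reverse=True):
        # best fit: the tightest active server that still has room for d
        best = min((i for i, r in enumerate(remaining) if r >= d),
                   key=lambda i: remaining[i], default=None)
        if best is None:
            remaining.append(capacity - d)   # power on a new server
            placement.append([d])
        else:
            remaining[best] -= d
            placement[best].append(d)
    return placement
```

Packing demands 5, 4, 3, 2, 1 onto capacity-8 servers needs only two machines, one of them completely full, which is exactly the consolidation that reduces idle power draw.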
Edge computing refers to the computing paradigm in which processing power, communication capabilities, and intelligence are pushed down to the edge of the networking system, such as gateways and devices, where the data originates. In doing so, edge computing enables an infrastructure for processing data directly from devices with low latency, battery consumption, and bandwidth cost. With opportunities for research and advanced applications such as augmented reality and wearable cognitive assistance come new challenges. This special issue reports current research on various topics related to edge computing, addressing the challenges in the enabling technologies and practical implementations.
Accurate forecasting of photovoltaic power generation is one of the key enablers for the integration of solar photovoltaic systems into power grids. Existing deep-learning-based methods can perform well if there are sufficient training data and enough computational resources. However, there are challenges in building models through centralized shared data due to data privacy concerns and industry competition. Federated learning is a new distributed machine learning approach which enables training models across edge devices while data reside locally. In this paper, we propose an efficient semi-asynchronous federated learning framework for short-term solar power forecasting and evaluate the framework performance using a CNN-LSTM model. We design a personalization technique and a semi-asynchronous aggregation strategy to improve the efficiency of the proposed federated forecasting approach. Thorough evaluations using a real-world dataset demonstrate that the federated models can achieve significantly higher forecasting performance than fully local models while protecting data privacy, and that the proposed semi-asynchronous aggregation and personalization technique make the forecasting framework more robust in real-world scenarios.
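At the heart of any federated forecasting framework is the server-side aggregation step. The paper's semi-asynchronous strategy and personalization technique are not specified in the abstract; the sketch below shows only the standard FedAvg building block, a sample-count-weighted average of client model parameters, which a staleness-aware variant would then reweight.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of client model parameters.
    client_weights: one flat parameter list per client;
    client_sizes: the number of local training samples at each client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
            for k in range(n_params)]
```

Clients with more local data pull the global model harder, so a plant with three times the samples contributes three times the weight to each aggregated parameter.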
Voltage fluctuation in electric circuits has been identified as a key issue in different electric systems. As electricity usage grows rapidly, higher fluctuations arise in power flow. To maintain the flow or stability of power in an electric circuit, many circuit models have been discussed in the literature; however, they struggle to maintain the output voltage and are not capable of maintaining power stability. To improve power stabilization, an efficient IC-pattern-based power factor maximization model (ICPFMM) is proposed in this article. The model focuses on improving power stability with the use of IC (Inductor and Conductor) patterns, identifying the most efficient circuit for the current duty cycle according to the input voltage, the voltage in the capacitor, and the required output voltage. The model, with a boost converter, diverts the incoming voltage through a number of conductors and inductors. By triggering a specific inductor, a specific capacitor gets charged and a particular circuit is switched on. The model maintains a number of IC patterns through which the power flow occurs. According to the available pattern, the MOSFET controls the level of power to be regulated through any circuit. From the pattern, the model computes the circuit switching loss and circuit conduction loss for various circuits. Based on the input voltage, the model estimates the Circuit Power Stabilization Support (CPSS) according to the voltage available in any capacitor and the input voltage. Using the CPSS value, the model triggers the optimal number of circuits to maintain voltage stability; more than one circuit can be triggered to maintain the output voltage and to be charged. The proposed model not only maintains power stability but also reduces the wastage of voltage that is not utilized, improving voltage stability with less switching loss.
The most dangerous places in ships are their power plants. In particular, they are very unsafe for operators carrying out necessary operation and maintenance activities. For this reason, ship machinery should be designed to ensure maximum safety for its operators. This is a very difficult task, and it cannot be solved by the conventional design methods used for uncomplicated technical equipment. One possible way of solving this problem is to provide appropriate tools that allow the operator's safety to be taken into account during the design process, especially at its early stages. A computer-aided system supporting the design of safe ship power plants could be such a tool. This paper deals with the development process of a prototype computer-aided system for hazard zone identification in ship power plants.
Funding (computing power network survey): supported by the National Science Foundation of China under Grants 62271062 and 62071063, and by the Zhijiang Laboratory Open Project Fund 2020LCOAB01.
Funding (WCPN digital twin): supported by the National Natural Science Foundation of China under Grant 62272391, and in part by the Key Industry Innovation Chain of Shaanxi under Grant 2021ZDLGY05-08.
Funding (WPMEC multi-user cooperation): supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 62071306, and in part by the Shenzhen Science and Technology Program under Grants JCYJ20200109113601723, JSGG20210802154203011, and JSGG20210420091805014.
Funding (dynamic software allocation): supported by the National Natural Science Foundation of China (No. 60503041) and the Science and Technology Commission of Shanghai International Cooperation Project (No. 05SN07114).
Funding (6G computing power network): supported by the National Key R&D Program of China under Grant No. 2019YFB1802800.
基金supported by the National Key R&D Program of China(2020YFB0905900).
文摘Abstract: The construction of new power systems places higher requirements on Power Internet of Things (PIoT) technology. The "source-grid-load-storage" architecture of a new power system requires PIoT to have a stronger ability to fuse multi-source heterogeneous data. Native graph databases have great advantages in dealing with multi-source heterogeneous data, which makes them suitable for an increasing number of analytical computing tasks. However, few existing graph database products natively support matrix-operation-related interfaces or functions, resulting in low efficiency when handling the matrix calculations commonly encountered in power grids. In this paper, the matrix computation process is expressed by a strategy called graph description, which relies on the natural connection between a matrix and the structure of a graph. Based on that, we implement matrix operations on a graph database, including matrix multiplication, matrix decomposition, etc. Specifically, only the nodes relevant to the computation and their neighbors are involved in the process, which prunes the influence of zero elements in the matrix and avoids useless iterations compared with conventional matrix computation. Based on the graph description, a series of power grid computations can be implemented on a graph database, which reduces redundant data import and export operations while leveraging the parallel computing capability of the graph database. This promotes the efficiency of PIoT when handling multi-source heterogeneous data. A comprehensive experimental study over two power system datasets of different scales compares the proposed method with Python and MATLAB baselines. The results reveal the superior performance of our proposed method in both power flow and N-1 contingency computations.
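The zero-element pruning behind the graph description can be sketched as a sparse matrix product over adjacency dictionaries, mirroring how a graph database stores each node's outgoing edges. This is a minimal illustration, not the paper's graph-database implementation.

```python
def graph_matmul(A, B):
    """Multiply two sparse matrices stored as adjacency dicts.

    A[i] maps column j -> nonzero value, like a node's outgoing edges
    in a graph. Only nonzero entries are visited, so zero elements of
    the matrix never contribute an iteration.
    """
    C = {}
    for i, row in A.items():                  # each "node" i
        acc = {}
        for j, a_ij in row.items():           # its neighbors j
            for k, b_jk in B.get(j, {}).items():
                acc[k] = acc.get(k, 0) + a_ij * b_jk
        if acc:
            C[i] = acc
    return C
```

In a real graph database the inner loops become neighbor traversals that the engine can parallelize across nodes, which is where the efficiency gain over dense matrix routines comes from.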
基金supported in part by the U.S. National Science Foundation under Grant CNS-2007995in part by the National Natural Science Foundation of China under Grant 92067201,62171231in part by Jiangsu Provincial Key Research and Development Program under Grant BE2020084-1。
文摘Abstract: The unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) architecture is expected to be a powerful technique to facilitate 5G and beyond ubiquitous wireless connectivity and diverse vertical applications and services, anytime and anywhere. Wireless power transfer (WPT) is another promising technology to prolong the operation time of low-power wireless devices in the era of the Internet of Things (IoT). However, the integration of WPT and UAV-enabled MEC systems is far from well studied, especially in dynamic environments. To tackle this issue, this paper investigates stochastic computation offloading and trajectory scheduling for the UAV-enabled wireless powered MEC system. A UAV offers both RF wireless power transmission and computation services for IoT devices. Considering stochastic task arrivals and random channel conditions, a long-term average energy-efficiency (EE) minimization problem is formulated. Due to the non-convexity and the time-domain coupling of the variables in the formulated problem, a low-complexity online computation offloading and trajectory scheduling algorithm (OCOTSA) is proposed by exploiting Lyapunov optimization. Simulation results verify that there exists a balance between EE and service delay, and demonstrate that the system EE performance obtained by the proposed scheme outperforms other benchmark schemes.
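While OCOTSA itself is not reproduced here, the per-slot decision rule that Lyapunov optimization reduces a long-term average problem to can be sketched as follows; the action set and arrival model are hypothetical.

```python
def lyapunov_step(queue, arrival, V, candidates):
    """One slot of a drift-plus-penalty decision.

    queue:      current virtual-queue backlog (e.g., unfinished task bits).
    arrival:    stochastic task arrival in this slot.
    V:          tradeoff weight between the penalty (energy) and stability.
    candidates: list of (service_rate, energy_cost) actions, e.g. offloading
                choices under the current channel state (hypothetical model).
    Returns the chosen action and the updated queue backlog.
    """
    # Greedily minimize the per-slot drift-plus-penalty bound:
    # V * energy - queue * service_rate.
    best = min(candidates, key=lambda a: V * a[1] - queue * a[0])
    service, energy = best
    new_queue = max(queue - service, 0) + arrival
    return best, new_queue
```

A larger V pushes the controller toward energy savings at the cost of longer queues, which is the EE-versus-delay balance observed in the paper's simulations.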
文摘Abstract: With the development of the smart grid, the electric power supervisory control and data acquisition (SCADA) system is limited by traditional IT infrastructure, leading to low resource utilization and poor scalability. Information islands are formed due to poor system interoperability. The development of innovative applications is limited, and the launch period of new businesses is long. Management costs and risks increase, and equipment utilization declines. To address these issues, a professional private cloud solution is introduced to integrate the electric power SCADA system, and an experimental study of its applicability, reliability, security, and real-time performance is conducted. The experimental results show that the professional private cloud solution is technically and commercially feasible, meeting the requirements of the electric power SCADA system.
基金Funding: National Natural Science Foundation of China (No. 61902060); Fundamental Research Fund for the Central Universities, China (No. 2232019D3-51); Shanghai Sailing Program, China (No. 19YF1402100).
文摘Abstract: Benefiting from wireless power transfer (WPT) and mobile edge computing (MEC), wireless powered MEC systems have attracted widespread attention. Specifically, we design an online offloading scheme based on deep reinforcement learning that maximizes the computation rate and minimizes the energy consumption of all wireless devices (WDs). Extensive results validate that the proposed scheme achieves a better tradeoff between energy consumption and computation delay.
文摘Abstract: Traditional digital processing approaches are based on semiconductor transistors, which suffer from high power consumption, a problem aggravated by technology node scaling. To definitively solve this problem, a number of emerging non-volatile nanodevices are under intense investigation. Meanwhile, novel computing circuits are being invented to tap the full potential of these nanodevices. The combination of non-volatile nanodevices with suitable computing paradigms has many merits compared with structures based on complementary metal-oxide-semiconductor (CMOS) transistor technology, such as zero standby power, ultra-high density, non-volatility, and acceptable access speed. In this paper, we overview and compare the computing paradigms based on the emerging nanodevices toward ultra-low dissipation.
基金the "863" High Technology Research and Development Program of China (2006AA01Z226)the Scientific Research Foundation of Huazhong University of Science and Technology (2006Z011B)the Program for New Century Excellent Talents in University (NCET-07-0328).
文摘Abstract: Ubiquitous computing must incorporate a certain level of security. For severely resource-constrained applications, energy-efficient and small-size implementation of cryptographic algorithms is a critical problem. Hardware implementations of the Advanced Encryption Standard (AES) for authentication and encryption are presented. An energy consumption variable is derived to evaluate low-power design strategies for battery-powered devices. It proves that compact AES architectures fail to optimize AES hardware energy, whereas reducing invalid switching activities and implementing power-optimized sub-modules are the reasonable methods. Implementations of different substitution box (S-Box) structures are presented with a 0.25 μm 1.8 V CMOS (complementary metal oxide semiconductor) standard cell library. The comparisons and trade-offs among area, security, and power are explored. The experimental results show that Galois-field composite S-Boxes have smaller size and the highest security but consume considerably more power, whereas decoder-switch-encoder S-Boxes have the best power characteristics with disadvantages in terms of size and security. The combination of these two types of S-Boxes, instead of homogeneous S-Boxes, in an AES circuit leads to optimal schemes. The technique of latch-dividing the data path is analyzed, and the quantitative simulation results demonstrate that this approach diminishes glitches effectively at a very low hardware cost.
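For reference, the value implemented by any S-Box structure is defined by FIPS 197 as the multiplicative inverse in GF(2^8) followed by an affine transform. The sketch below models that mathematics in software only; it says nothing about the hardware structures compared in the paper.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return p

def gf_inv(a):
    """Multiplicative inverse via a^254 in GF(2^8); inv(0) is defined as 0."""
    if a == 0:
        return 0
    r = 1
    for _ in range(254):      # a^254 == a^(-1) since the group order is 255
        r = gf_mul(r, a)
    return r

def rotl8(x, n):
    """Rotate an 8-bit value left by n positions."""
    return ((x << n) | (x >> (8 - n))) & 0xFF

def sbox(x):
    """AES forward S-Box: GF(2^8) inverse followed by the affine transform."""
    b = gf_inv(x)
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63
```

A composite-field hardware S-Box decomposes `gf_inv` into GF((2^4)^2) arithmetic to shrink area, while a decoder-switch-encoder S-Box is essentially a hardwired lookup of these 256 values; both compute the same function.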
基金supported by the National Natural Science Foundation of China(6147219261202004)+1 种基金the Special Fund for Fast Sharing of Science Paper in Net Era by CSTD(2013116)the Natural Science Fund of Higher Education of Jiangsu Province(14KJB520014)
文摘Abstract: In order to lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem into a virtual machine deployment issue using virtualization technology. Secondly, the future workload of the system is predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which, combined with the current state and resource distribution of the system, calculates the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation keep up with demand, improving the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
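As a rough illustration of the prediction stage, Brown's third-order (cubic) exponential smoothing produces a one-step-ahead forecast from three cascaded smoothing passes. The conservative-control adjustment of CESCC is not modeled here; this is the generic smoothing formula only.

```python
def cubic_smoothing_forecast(series, alpha=0.5):
    """One-step-ahead forecast via Brown's third-order exponential smoothing.

    series: past workload observations, oldest first.
    alpha:  smoothing factor in (0, 1); larger values track recent data.
    """
    s1 = s2 = s3 = series[0]
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1       # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2      # second pass (on s1)
        s3 = alpha * s2 + (1 - alpha) * s3      # third pass (on s2)
    # Standard Brown coefficients, evaluated for a horizon of T = 1.
    a = 3 * s1 - 3 * s2 + s3
    denom = (1 - alpha) ** 2
    b = (alpha / (2 * denom)) * ((6 - 5 * alpha) * s1
                                 - 2 * (5 - 4 * alpha) * s2
                                 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / denom) * (s1 - 2 * s2 + s3)
    return a + b + c / 2
```

A conservative-control variant would typically inflate this forecast by a safety margin so that pre-allocation errs toward keeping slightly more capacity online.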
文摘Abstract: With the expansion of cloud computing, optimizing the energy efficiency and cost of the cloud paradigm is considered significantly important, since it directly affects providers' revenue and customers' payments. Thus, providing prediction information about cloud services can be very beneficial for service providers, as they need to carefully predict their business growth and efficiently manage their resources. To optimize the use of cloud services, predictive mechanisms can be applied to improve resource utilization and reduce energy-related costs. However, such mechanisms need to be provided with energy awareness not only at the level of the Physical Machine (PM) but also at the level of the Virtual Machine (VM) in order to make improved cost decisions. Therefore, this paper presents a comprehensive literature review on energy-related cost issues and prediction models in cloud computing environments, along with an overall discussion of closely related work. The outcomes of this research can be used and incorporated by predictive resource management techniques to make improved cost decisions assisted by energy awareness and to leverage cloud resources efficiently.
文摘Abstract: The paper presents a computer code system, SRDAAR-QNPP, for real-time dose assessment of an accidental release at the Qinshan Nuclear Power Plant. It includes three parts: the real-time data acquisition system, the assessment computer, and the assessment operating code system. In SRDAAR-QNPP, the wind fields of the surface and the lower levels are determined hourly using a mass-consistent three-dimensional diagnosis model with a topography-following coordinate system. A Lagrangian puff model under changing meteorological conditions is adopted for atmospheric dispersion; corrections for dry and wet deposition, physical decay, partial plume penetration of the top inversion, and the deviation of the plume axis caused by complex terrain have been taken into account. The calculation domain comprises three square grid areas with sidelines of 10 km, 40 km, and 160 km and grid intervals of 0.5 km, 2.0 km, and 8.0 km, respectively. Three exposure pathways are taken into account: external exposure from the immersion cloud and passing puff, internal exposure from inhalation, and external exposure from contaminated ground. The system is able to provide concentration and dose distributions within 10 minutes after the data have been input.
文摘Abstract: Since the rise of cloud computing, the applications of web services have expanded rapidly. However, the data centers of cloud computing also raise the problem of power consumption, and their resources are usually not used effectively. Decreasing power consumption and enhancing resource utilization have become main issues in the cloud computing environment. In this paper, we propose a method, called MBFDP (modified best-fit decreasing packing), to decrease the power consumption and enhance the resource utilization of cloud computing servers. The results of experiments show that the proposed solution can reduce power consumption effectively and enhance the utilization of server resources.
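The baseline that MBFDP modifies, best-fit decreasing bin packing, can be sketched for a single resource dimension as follows; the paper's specific modifications are not reproduced here.

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Pack VM resource demands onto as few hosts as possible.

    Best-fit decreasing: place each demand (largest first) on the active
    host with the tightest remaining capacity that still fits it; open a
    new host only when no active host fits. Returns per-host VM lists.
    """
    hosts = []                                # each entry: [remaining, [vms]]
    for demand in sorted(vm_demands, reverse=True):
        best = None
        for host in hosts:
            if host[0] >= demand and (best is None or host[0] < best[0]):
                best = host                   # tightest feasible host so far
        if best is None:                      # nothing fits: power on a host
            best = [host_capacity, []]
            hosts.append(best)
        best[0] -= demand
        best[1].append(demand)
    return [h[1] for h in hosts]
```

Fewer opened hosts translates directly into powered-down servers, which is the energy lever this family of consolidation methods pulls.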
文摘Abstract: Edge computing refers to the computing paradigm in which processing power, communication capabilities, and intelligence are pushed down to the edge of the networking system, such as gateways and devices, where the data originates. In doing so, edge computing enables an infrastructure for processing data directly from devices with low latency, battery consumption, and bandwidth cost. With opportunities for research and advanced applications such as augmented reality and wearable cognitive assistance come new challenges. This special issue reports current research on various topics related to edge computing, addressing the challenges in the enabling technologies and practical implementations.
基金Funding: The research is supported by the National Natural Science Foundation of China (62072469); the National Key R&D Program of China (2018AAA0101502); the Shandong Natural Science Foundation (ZR2019MF049); the West Coast Artificial Intelligence Technology Innovation Center (2019-1-5, 2019-1-6); the Opening Project of the Shanghai Trusted Industrial Control Platform (TICPSH202003015-ZC).
文摘Abstract: Accurate forecasting of photovoltaic power generation is one of the key enablers for the integration of solar photovoltaic systems into power grids. Existing deep-learning-based methods can perform well if there are sufficient training data and enough computational resources. However, there are challenges in building models through centralized shared data due to data privacy concerns and industry competition. Federated learning is a new distributed machine learning approach which enables training models across edge devices while the data reside locally. In this paper, we propose an efficient semi-asynchronous federated learning framework for short-term solar power forecasting and evaluate the framework's performance using a CNN-LSTM model. We design a personalization technique and a semi-asynchronous aggregation strategy to improve the efficiency of the proposed federated forecasting approach. Thorough evaluations using a real-world dataset demonstrate that the federated models can achieve significantly higher forecasting performance than fully local models while protecting data privacy, and that the proposed semi-asynchronous aggregation and personalization technique make the forecasting framework more robust in real-world scenarios.
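One common way to realize semi-asynchronous aggregation is to accept client updates as they arrive within a round and down-weight the stale ones. The sketch below illustrates that general idea with a hypothetical 1/(1 + staleness) weighting; it is not the paper's exact aggregation strategy.

```python
def staleness_weighted_average(global_params, updates, base_lr=1.0):
    """Aggregate client updates that arrive with different staleness.

    global_params: current global model as a flat list of floats.
    updates:       list of (client_params, staleness) pairs, where
                   staleness counts how many rounds behind the client is.
    Stale updates are down-weighted by 1 / (1 + staleness), a common
    heuristic in asynchronous federated learning.
    """
    weights = [base_lr / (1.0 + s) for _, s in updates]
    total = sum(weights)
    new_params = []
    for i, g in enumerate(global_params):
        # Weighted average of the client deltas applied to the global value.
        delta = sum(w * (p[i] - g) for (p, _), w in zip(updates, weights))
        new_params.append(g + delta / total)
    return new_params
```

A personalization layer, as in the paper, would then keep some parameters client-local instead of folding them into this global average.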
文摘Abstract: Voltage fluctuation in electric circuits has been identified as a key issue in different electric systems. As electricity usage grows rapidly, there are higher fluctuations in power flow. To maintain the flow or stability of power in an electric circuit, many circuit models have been discussed in the literature. However, they struggle to maintain the output voltage and are not capable of maintaining power stability. To improve performance in power stabilization, an efficient IC-pattern-based power factor maximization model (ICPFMM) is presented in this article. The model focuses on improving power stability with the use of ICs (inductors and conductors), identifying the most efficient circuit for the current duty cycle according to the input voltage, the voltage in the capacitor, and the required output voltage. The model, with a boost converter, diverts the incoming voltage through a number of conductors and inductors. By triggering a specific inductor, a specific capacitor gets charged and a particular circuit switches on. The model maintains a number of IC (inductor and conductor) patterns through which the power flow occurs. According to the available pattern, the MOSFET controls the level of power to be regulated through any circuit. From the pattern, the model computes the circuit switching loss and circuit conduction loss for the various circuits. According to the input voltage, the model estimates the Circuit Power Stabilization Support (CPSS) based on the voltage available in any capacitor and the input voltage. Using the value of CPSS, the model triggers the optimal number of circuits to maintain voltage stability. In this approach, more than one circuit is triggered to maintain the output voltage and to get charged. The proposed model not only maintains power stability but also reduces the wastage of voltage that would otherwise go unutilized, and improves voltage stability with less switching loss.
文摘Abstract: The most dangerous places in ships are their power plants. In particular, they are very unsafe for operators carrying out various necessary operation and maintenance activities. For this reason, ship machinery should be designed to ensure maximum safety for its operators. This is a very difficult task, and it cannot be solved by means of the conventional design methods used for uncomplicated technical equipment. One possible way of solving this problem is to provide appropriate tools that allow the operator's safety to be taken into account during the design process, especially at its early stages. A computer-aided system supporting the design of safe ship power plants could be such a tool. This paper deals with the development process of a prototype of a computer-aided system for hazard-zone identification in ship power plants.