Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 62371082 and 62001076, in part by the National Key R&D Program of China under Grant 2021YFB1714100, and in part by the Natural Science Foundation of Chongqing under Grants CSTB2023NSCQ-MSX0726 and cstc2020jcyjmsxmX0878.
Abstract: Fog computing is considered a solution to accommodate the booming requirements from a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from the FNs to participate in the consensus process. Using the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithm and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault-tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithms can efficiently reduce the network overhead and obtain a considerable performance improvement over related algorithms in the previous literature.
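As an illustration of the public-key identity check described above, the following minimal Python sketch shows textbook RSA signature verification: a node signs a challenge with its private key, and any fog node verifies it with the node's registered public key. The hash-then-exponentiate scheme, the toy key pair, and the challenge format are illustrative assumptions; the abstract does not specify padding or key sizes, and a real deployment would use a vetted library with proper padding such as RSASSA-PSS.

```python
# Illustrative sketch only: textbook RSA sign/verify with a toy key pair.
# Real systems must use a vetted crypto library and proper padding.
import hashlib

def h(msg: bytes, n: int) -> int:
    # Hash the message and reduce modulo n so it fits the RSA modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes, d: int, n: int) -> int:
    return pow(h(msg, n), d, n)             # signature = H(m)^d mod n

def verify(msg: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == h(msg, n)      # check H(m) == sig^e mod n

n, e, d = 3233, 17, 2753                    # classic toy key (p=61, q=53)
challenge = b"node-42:epoch-7"              # hypothetical consensus challenge
sig = sign(challenge, d, n)
print(verify(challenge, sig, e, n))         # True: identity confirmed
print(verify(b"forged message", sig, e, n)) # False (with overwhelming odds)
```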
Funding: This research was funded by the National Natural Science Foundation of China (61906036) and the Science and Technology Project of State Grid Jiangsu Power Supply Company (No. J2021034).
Abstract: With the rapid development of the Internet of Things (IoT), electricity consumption data can be captured and recorded in the IoT cloud center. This provides a credible data source for enterprise credit scoring, one of the most vital elements in the financial decision-making process. Accordingly, this paper proposes to use deep learning to train an enterprise credit scoring model on electricity consumption data. Instead of predicting a credit rating, our method generates an absolute credit score via a novel deep ranking model, the ranking extreme gradient boosting net (rankXGB). To boost performance, the rankXGB model combines several weak ranking models into a strong model. Because of the high computational cost and the vast amounts of data, we design an edge computing framework to reduce the latency of enterprise credit evaluation. Specifically, we design a two-stage deep learning task architecture comprising cloud-based weak credit ranking and edge-based credit score calculation. In the first stage, we send the electricity consumption data of the evaluated enterprise to the cloud server, where multiple weak-ranking networks are executed in parallel to produce multiple weak-ranking results. In the second stage, the edge device fuses the ranking results generated in the cloud to produce a more reliable ranking, which is used to calculate an absolute credit score by score normalization. The experiments demonstrate that our method achieves accurate enterprise credit evaluation quickly.
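To make the two-stage flow concrete, here is a minimal sketch of the edge-side step: fusing several weak-ranking outputs and converting the fused value into an absolute credit score on a 0-100 scale by min-max normalization. The mean fusion, the population score range, and all numbers are assumptions for illustration; the actual rankXGB fusion is a boosted combination that the abstract does not detail.

```python
# Minimal sketch: fuse weak-ranking scores (simple mean here) and min-max
# normalize against an assumed reference population to get an absolute score.
def fuse(weak_scores):
    return sum(weak_scores) / len(weak_scores)

def credit_score(fused, pop_min, pop_max, scale=100.0):
    clipped = min(max(fused, pop_min), pop_max)   # guard out-of-range values
    return scale * (clipped - pop_min) / (pop_max - pop_min)

weak = [0.62, 0.71, 0.66, 0.69]   # hypothetical outputs of parallel rankers
print(credit_score(fuse(weak), pop_min=0.2, pop_max=0.9))   # ~67.1
```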
Funding: Supported by the Interdisciplinary Center of Smart Mobility and Logistics at King Fahd University of Petroleum and Minerals (Grant No. INML2400).
Abstract: The Internet of Things (IoT) links various devices to digital services and significantly improves the quality of our lives. However, as IoT connectivity grows rapidly, so do the risks of network vulnerabilities and threats. Many intrusion detection systems (IDSs) based on machine learning (ML) techniques have been presented to overcome this problem. Given the resource limitations of fog computing environments, a lightweight IDS is essential. This paper introduces a hybrid deep learning (DL) method that combines convolutional neural networks (CNN) and long short-term memory (LSTM) to build an energy-aware, anomaly-based IDS. We test this system on a recent dataset, focusing on reducing overhead while maintaining high accuracy and a low false alarm rate. We evaluate the proposed IDS model on the CICIoT2023, KDD-99, and NSL-KDD datasets using key metrics, including latency, energy consumption, false alarm rate, and detection rate. Our findings show an accuracy rate over 92% and a false alarm rate below 0.38%. These results demonstrate that our system provides strong security without excessive resource use. The practicality of deploying an IDS with limited resources is demonstrated by the successful implementation of IDS functionality on a Raspberry Pi acting as a fog node. The proposed lightweight model, with a maximum power consumption of 6.12 W, demonstrates its potential to operate effectively on energy-limited devices such as low-power fog nodes or edge devices. We prioritize energy efficiency while maintaining high accuracy, distinguishing our scheme from existing approaches. Extensive experiments demonstrate a significant reduction in false positives, ensuring accurate identification of genuine security threats while minimizing unnecessary alerts.
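A minimal PyTorch sketch of the kind of hybrid CNN-LSTM detector described: 1-D convolutions extract local patterns from each window of traffic features, an LSTM models their temporal order, and a sigmoid head emits an anomaly probability. The layer sizes, the 46-feature input width, and the window length are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CnnLstmIDS(nn.Module):
    def __init__(self, n_features=46, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))   # -> (batch, 32, time/2)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1, :])    # anomaly probability per window

model = CnnLstmIDS()
window = torch.randn(8, 20, 46)            # batch of 8 twenty-step windows
print(model(window).shape)                 # torch.Size([8, 1])
```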
Abstract: Reliability, QoS, and energy consumption are three important concerns of cloud service providers. Most current research on reliable task deployment in cloud computing focuses on only one or two of these three concerns. However, the three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon a single server failure. We propose a QoS-constrained, reliable, and energy-efficient task replica deployment (QSRE) algorithm for the problem, combining task replication and re-execution. For each task in a job that cannot finish within the deadline by re-execution, we initiate two replicas: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before the main task fails. Different from the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE effectively reduces the number of servers used while ensuring the reliability and QoS of job execution.
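A sketch of the replica-ordering idea attributed to QSRE: sort replicas by execution time, then assign the longest and the shortest remaining replicas to backup servers in turn. For simplicity this version packs at most two replicas per server against a single per-server time budget standing in for the deadline-derived QoS constraint; the paper's actual constraint handling is richer.

```python
# Pair the longest remaining replica with the shortest one, opening a new
# backup server per pair; a replica runs alone if pairing breaks the budget.
def place_replicas(replica_times, budget):
    order = sorted(replica_times, reverse=True)
    servers, lo, hi = [], 0, len(order) - 1
    while lo <= hi:
        if lo == hi:
            servers.append([order[lo]])               # odd one out runs alone
        elif order[lo] + order[hi] <= budget:
            servers.append([order[lo], order[hi]])    # longest with shortest
            hi -= 1
        else:
            servers.append([order[lo]])               # longest exceeds budget
        lo += 1
    return servers

# Hypothetical replica times (s) and a 10 s per-server budget.
print(place_replicas([7, 2, 5, 3, 6, 1], budget=10))  # [[7, 1], [6, 2], [5, 3]]
```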
Funding: Project supported by the National Science and Technology Major Project of China (Grant No. 2017ZX02301007-002), the National Key R&D Plan of China (Grant No. 2017YFB0701701), and the National Natural Science Foundation of China (Grant Nos. 61774068 and 51772113). The authors acknowledge the support from the Hubei Key Laboratory of Advanced Memories and the Hubei Engineering Research Center on Microelectronics.
Abstract: Phase-change material (PCM) is generating widespread interest as a new candidate for artificial synapses in bioinspired computer systems. However, the amorphization process of PCM devices tends to be abrupt, unlike continuous synaptic depression. The relatively large power consumption and poor analog behavior of PCM devices greatly limit their applications. Here, we fabricate a GeTe/Sb2Te3 superlattice-like PCM device that allows a progressive RESET process. Our devices feature low-power operation and potential for high-density integration, and can effectively simulate biological synaptic characteristics. The programming energy can be further reduced by properly selecting the resistance range and operating method. The fabricated devices are implemented in both artificial neural network (ANN) and convolutional neural network (CNN) simulations, demonstrating high accuracy in brain-like pattern recognition.
Funding: Supported by the National Key R&D Program of China under Grant No. 2018YFB1800805, the National Natural Science Foundation of China under Grant Nos. 61772345, 61902257, and 61972261, and the Shenzhen Science and Technology Program under Grant Nos. RCYX20200714114645048, JCYJ20190808142207420, and GJHZ20190822095416463.
Abstract: Handling the massive amount of data generated by Smart Mobile Devices (SMDs) is a challenging computational problem. Edge Computing is an emerging computation paradigm employed to address this problem. It can bring computation power closer to the end devices to reduce their computation latency and energy consumption. Therefore, this paradigm increases the computational ability of SMDs through collaboration with edge servers. This is achieved by computation offloading from the mobile devices to the edge nodes or servers. However, not all applications benefit from computation offloading, which is only suitable for certain types of tasks. Task properties, SMD capability, wireless channel state, and other factors must be taken into account when making computation offloading decisions. Hence, optimization methods are important tools in scheduling computation offloading tasks in Edge Computing networks. In this paper, we review six types of optimization methods: Lyapunov optimization, convex optimization, heuristic techniques, game theory, machine learning, and others. For each type, we focus on the objective functions, application areas, types of offloading methods, evaluation methods, and the time complexity of the proposed algorithms. We also discuss a few research problems that are still open. Our purpose for this review is to provide a concise summary that can help new researchers get started with their computation offloading research for Edge Computing networks.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 62171113 and 61941113, and in part by the Fundamental Research Funds for the Central Universities under Grants N2116003 and N2116011.
Abstract: Mobile Edge Computing (MEC)-based computation offloading is a promising application paradigm for serving large numbers of users with various delay and energy requirements. In this paper, we propose a flexible MEC-based requirement-adaptive partial offloading model to accommodate each user's specific preference regarding delay and energy consumption. To address the dimensional differences between time and energy, we introduce two normalized parameters and then derive the computational overhead of processing tasks. Different from existing works, this paper considers practical variations in user request patterns and exploits a flexible partial offloading mode to minimize computation overheads subject to tolerable delay, task workload, and power constraints. Since the resulting problem is non-convex, we decouple it into two convex subproblems and present an iterative algorithm to obtain a feasible offloading solution. Numerical experiments show that our proposed scheme achieves a significant improvement in computation overheads compared with existing schemes.
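One plausible form of the normalized overhead described above, sketched in Python: scale delay and energy by their tolerable maxima so the two dimensions become comparable, then weight them by a user preference parameter. The weighted-sum form and the constraint that the two weights sum to 1 are assumptions; the abstract does not give the exact definition.

```python
# Assumed weighted-sum overhead: lam_t + lam_e = 1 expresses each user's
# delay-vs-energy preference; dividing by the maxima removes the units.
def overhead(delay, energy, delay_max, energy_max, lam_t):
    lam_e = 1.0 - lam_t
    return lam_t * delay / delay_max + lam_e * energy / energy_max

# A delay-sensitive user (lam_t = 0.8) scoring one candidate partial-offload
# split; all numbers are hypothetical.
print(overhead(delay=0.12, energy=0.9, delay_max=0.5, energy_max=2.0, lam_t=0.8))
```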
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work under project number 442/204.
Abstract: In this paper, the Internet of Medical Things (IoMT) is identified as a promising solution that integrates with the cloud computing environment to provide remote health monitoring solutions and improve the quality of service (QoS) in the healthcare sector. However, problems with present architectural models, such as those related to energy consumption, service latency, execution cost, and resource usage, remain a major concern for adopting IoMT applications. To address these problems, this work presents a four-tier IoMT-edge-fog-cloud architecture along with an optimization model formulated using Mixed Integer Linear Programming (MILP), with the objective of efficiently processing and placing IoMT applications in the edge-fog-cloud computing environment while maintaining certain quality standards (e.g., energy consumption, service latency, network utilization). A modeling environment is used to assess and validate the proposed model under different traffic loads and processing requirements. In comparison with other existing models, the performance analysis of the proposed approach shows a maximum saving of 38% in energy consumption and a 73% reduction in service latency. The results also highlight that the benefit of offloading IoMT applications to the edge and fog nodes rather than the cloud depends strongly on the trade-off between the network journey time saved and the extra power consumed by edge or fog resources.
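To illustrate the flavor of such a placement MILP, here is a deliberately tiny sketch using the PuLP modeler: binary variables place each application on one tier, minimizing total energy subject to a latency cap and per-tier capacities. The three applications, the cost/latency/capacity numbers, and the single-objective form are all invented for illustration; the paper's model is far larger and multi-criteria.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

apps = ["ecg_monitor", "fall_detect", "imaging"]   # hypothetical IoMT apps
tiers = ["edge", "fog", "cloud"]
energy = {"edge": 2.0, "fog": 3.5, "cloud": 6.0}   # J per app (assumed)
latency = {"edge": 5, "fog": 15, "cloud": 80}      # ms per app (assumed)
capacity = {"edge": 1, "fog": 2, "cloud": 3}       # apps per tier (assumed)
latency_cap = 30                                   # ms QoS bound (assumed)

prob = LpProblem("iomt_placement", LpMinimize)
x = {(a, t): LpVariable(f"x_{a}_{t}", cat=LpBinary) for a in apps for t in tiers}

prob += lpSum(energy[t] * x[a, t] for a in apps for t in tiers)  # total energy
for a in apps:
    prob += lpSum(x[a, t] for t in tiers) == 1     # each app placed exactly once
    prob += lpSum(latency[t] * x[a, t] for t in tiers) <= latency_cap
for t in tiers:
    prob += lpSum(x[a, t] for a in apps) <= capacity[t]

prob.solve()
for (a, t), var in x.items():
    if var.value() == 1:
        print(a, "->", t)
```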
Funding: National Natural Science Foundation of China (Nos. 41101556, 71173212, 71203215).
Abstract: This paper examined the impacts of the total energy consumption control policy and energy quota allocation plans on China's regional economy. The research analyzed the influences of energy quota allocation plans with various weights on equity and efficiency, using a dynamic computable general equilibrium (CGE) model for 30 province-level administrative regions. The results show that the efficiency-first allocation plan costs the least but widens the regional income gap, whereas the outcomes of the equity-first allocation plan and the intensity-target-based allocation plan are similar and are both opposite to the efficiency-first plan's outcome. The plan featuring a balance between efficiency and equity is more feasible, as it spreads regional economic losses evenly and prevents massive interregional migration of energy-related industries. Furthermore, the effects of possible induced energy technology improvements under the different allocation plans were studied. Induced energy technology improvements add feasibility to all allocation plans under the total energy consumption control policy. In the long term, if the total energy consumption control policy continues and more market-based tools are implemented to allocate energy quotas, the positive consequences of induced energy technology improvements will become much more obvious.
Funding: Supported by the NSFC (61602126) and the scientific and technological project of Henan Province (162102210214).
Abstract: Fog computing is an emerging paradigm of cloud computing intended to meet the growing computation demands of mobile applications. It can help mobile devices overcome resource constraints by offloading computationally intensive tasks to cloud servers. The challenge for the cloud is to minimize the time of data transfer and task execution for the user, whose location changes owing to mobility, as well as the energy consumption of the mobile device. Providing satisfactory computation performance is particularly challenging in the fog computing environment. In this paper, we propose a novel fog computing model and an offloading policy that can effectively bring fog computing power closer to the mobile user. The fog computing model consists of remote cloud nodes and local cloud nodes, which are attached to the wireless access infrastructure. We give a task offloading policy that takes into account execution time, energy consumption, and other expenses. We finally evaluate the performance of our method through experimental simulations. The experimental results show that this method significantly reduces the execution time of tasks and the energy consumption of mobile devices.
Funding: Supported by the National Natural Science Foundation of China (61202004, 61272084), the National Key Basic Research Program of China (973 Program) (2011CB302903), the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003), the China Postdoctoral Science Foundation Funded Project (2011M500095, 2012T50514), the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426), the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C), the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007), and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
Abstract: How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the purpose of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
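The winner tree mentioned above is a tournament tree: each internal node records which of its two children currently "wins" (here, the data node with the lowest projected power cost), so the global winner is read at the root in O(1) and restored in O(log n) replays after an assignment changes one leaf. The sketch below assumes lower cost wins; the paper's actual selection criterion combines more factors, including the task comparison coefficient.

```python
# Minimal winner (tournament) tree over data-node costs; lowest cost wins.
class WinnerTree:
    def __init__(self, costs):
        self.n = len(costs)
        self.costs = list(costs)
        size = 1
        while size < self.n:
            size *= 2
        self.size = size
        self.tree = [None] * (2 * size)    # indices of winning leaves
        for i in range(size):
            self.tree[size + i] = i if i < self.n else None
        for i in range(size - 1, 0, -1):
            self.tree[i] = self._winner(self.tree[2 * i], self.tree[2 * i + 1])

    def _winner(self, a, b):
        if a is None: return b
        if b is None: return a
        return a if self.costs[a] <= self.costs[b] else b

    def winner(self):
        return self.tree[1]                # global winner in O(1)

    def update(self, leaf, new_cost):      # replay matches up to the root
        self.costs[leaf] = new_cost
        i = (self.size + leaf) // 2
        while i >= 1:
            self.tree[i] = self._winner(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

# Usage: pick the node with the least incremental power for each task.
tree = WinnerTree([3.2, 1.7, 2.9, 4.1])    # hypothetical per-node power costs
node = tree.winner()                        # -> index 1
tree.update(node, 2.5)                      # cost rises after the assignment
```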
Funding: The National S&T Major Project (No. 2018ZX03001011), the National Key R&D Program (No. 2018YFB1801102), the National Natural Science Foundation of China (No. 61671072), and the Beijing Natural Science Foundation (No. L192025).
Abstract: With increasing maritime activities and a rapidly developing maritime economy, the fifth-generation (5G) mobile communication system is expected to be deployed at sea. New technologies need to be explored to meet the requirements of ultra-reliable and low-latency communications (URLLC) in the maritime communication network (MCN). Mobile edge computing (MEC) can achieve high energy efficiency in MCN at the cost of high control plane latency and low reliability. To address this issue, the mobile edge communications, computing, and caching (MEC3) technology is proposed to sink mobile computing, network control, and storage to the edge of the network. New methods that enable resource-efficient configurations and reduce redundant data transmissions can enable the reliable implementation of computation-intensive and latency-sensitive applications. The key technologies of MEC3 to enable URLLC are analyzed and optimized in MCN. The best response-based offloading algorithm (BROA) is adopted to optimize task offloading. The simulation results show that task latency can be decreased by 26.5 ms and the energy consumption of terminal users can be reduced to 66.6%.
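The abstract names a best response-based offloading algorithm; the toy sketch below shows best-response dynamics in a simple congestion game, which is the standard shape of such algorithms: each user repeatedly switches to whichever action (local vs. offload) minimizes its own cost given the others' current choices, until no one wants to deviate. The cost model and all numbers are invented; BROA's actual utility functions are not given in the abstract.

```python
# Offloading cost grows with the number of users sharing the edge server;
# iterate best responses until a pure Nash equilibrium is reached.
def best_response_offloading(local_cost, offload_base, congestion_penalty,
                             max_rounds=100):
    n = len(local_cost)
    offload = [False] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(offload) - offload[i]
            cost_off = offload_base[i] + congestion_penalty * others
            best = cost_off < local_cost[i]
            if best != offload[i]:
                offload[i] = best
                changed = True
        if not changed:        # no user wants to deviate: equilibrium
            break
    return offload

# Hypothetical costs: users 0 and 2 have expensive local execution.
print(best_response_offloading([9.0, 2.0, 8.0], [3.0, 3.0, 3.0], 2.0))
# -> [True, False, True]
```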
Funding: Supported by the DFG (German Research Foundation) Priority Program Nano Security, Project MemCrypto (Projektnummer 439827659, funding IDs DU 1896/2-1 and PO 1220/15-1), and by the Fraunhofer Internal Programs under Grant No. Attract 600768.
Abstract: Emerging memristive devices offer enormous advantages for applications such as non-volatile memories and in-memory computing (IMC), and there is also rising interest in using memristive technologies for security applications in the era of the Internet of Things (IoT). In this review article, low-power design techniques based on emerging memristive technology for hardware security primitives and systems, aimed at achieving secure hardware systems in IoT, are presented. By reviewing the state of the art in three highlighted memristive application areas, i.e., memristive non-volatile memory, memristive reconfigurable logic computing, and memristive artificial intelligence computing, their application-level impacts on novel implementations of secret key generation, crypto functions, and machine learning attacks are explored, respectively. For low-power security applications in IoT, it is essential to understand how best to realize cryptographic circuitry using memristive circuitry, to assess the implications of memristive crypto implementations for security, and to develop novel computing paradigms that will enhance their security. This review article aims to help researchers explore security solutions, analyze new possible threats, and develop corresponding protections for secure hardware systems based on low-cost memristive circuit designs.
Funding: Supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Paper in Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Abstract: In order to lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented, abstracting the task scheduling problem into a virtual machine deployment issue via virtualization technology. Second, the future workload of the system is predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which combines the current state and resource distribution of the system to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation keep up with demand, improving the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
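For reference, a minimal sketch of third-order ("cubic") exponential smoothing in the classic Brown formulation, which forecasts the next period's workload from three cascaded smoothers; the CESCC conservative-control correction described in the abstract is not reproduced here. The smoothing constant and workload series are illustrative.

```python
# Brown's third-order exponential smoothing: cascade three smoothers, then
# combine them into level/trend/curvature terms for an m-step-ahead forecast.
def cubic_smoothing_forecast(series, alpha, m=1):
    s1 = s2 = s3 = series[0]            # initialize all three smoothers
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m ** 2  # forecast m steps ahead

workload = [120, 132, 129, 141, 155, 148, 163]  # hypothetical request counts
print(cubic_smoothing_forecast(workload, alpha=0.5))
```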
Funding: Supported by the FCT (PTDC/ECM/65731/2006) and the European Union 7th Framework Program HYLOW Project (Grant No. 212423).
Abstract: This study is the result of ongoing research for a European Union 7th Framework Program project on energy converters for very low heads, and aims to analyze and optimize new cost-effective hydraulic turbine designs for possible implementation in water supply systems (WSSs) or in other pressurized water pipe infrastructures, such as irrigation, wastewater, or drainage systems. A new methodology is presented based on a theoretical, technical, and economic analysis. Viability studies focusing on small power outputs for different pipe systems were conducted. Detailed analyses of alternative typical volumetric energy converters were carried out on the basis of mathematical and physical fundamentals as well as computational fluid dynamics (CFD), taking into account the interaction between the flow conditions and the system operation. Important constraints (e.g., size, stability, efficiency, and continuous steady flow conditions) can be identified, and a search for alternative rotary volumetric converters is being conducted. As promising cost-effective solutions for the coming years, adapted rotor-dynamic turbomachines and non-conventional axial propeller devices were analyzed based on the basic principles of pumps operating as turbines, as well as through an extensive comparison between simulations and experimental tests.
Abstract: Traditional digital processing approaches are based on semiconductor transistors, which suffer from high power consumption that worsens with technology node scaling. To solve this problem definitively, a number of emerging non-volatile nanodevices are under intense investigation. Meanwhile, novel computing circuits are being invented to exploit the full potential of these nanodevices. The combination of non-volatile nanodevices with suitable computing paradigms has many merits compared with structures based on complementary metal-oxide-semiconductor (CMOS) transistor technology, such as zero standby power, ultra-high density, non-volatility, and acceptable access speed. In this paper, we overview and compare the computing paradigms based on the emerging nanodevices towards ultra-low dissipation.
Abstract: With the expansion of cloud computing, optimizing the energy efficiency and cost of the cloud paradigm is considered significantly important, since it directly affects providers' revenue and customers' payments. Thus, providing prediction information about cloud services can be very beneficial for service providers, as they need to carefully predict their business growth and efficiently manage their resources. To optimize the use of cloud services, predictive mechanisms can be applied to improve resource utilization and reduce energy-related costs. However, such mechanisms need to be provided with energy awareness not only at the level of the physical machine (PM) but also at the level of the virtual machine (VM) in order to make improved cost decisions. Therefore, this paper presents a comprehensive literature review of energy-related cost issues and prediction models in cloud computing environments, along with an overall discussion of the closely related work. The outcomes of this research can be used and incorporated by predictive resource management techniques to make improved cost decisions assisted by energy awareness and to leverage cloud resources efficiently.
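As one concrete instance of the PM- and VM-level energy awareness the review calls for, here is a small sketch using the common linear server power model and a proportional attribution of PM energy to hosted VMs. Both the model constants and the proportional-sharing rule (including spreading idle power across VMs) are assumptions, not taken from any single surveyed paper.

```python
# Linear PM power model: idle draw plus a utilization-proportional term.
def pm_power(util, p_idle=100.0, p_max=250.0):
    """Power draw (W) of a physical machine at CPU utilization util in [0,1]."""
    return p_idle + (p_max - p_idle) * util

# Attribute PM energy to VMs in proportion to their CPU shares (assumed rule).
def vm_energy_share(vm_cpu_shares, hours, p_idle=100.0, p_max=250.0):
    total = sum(vm_cpu_shares)
    pm_energy = pm_power(total, p_idle, p_max) * hours   # watt-hours
    return [pm_energy * s / total for s in vm_cpu_shares]

print(vm_energy_share([0.2, 0.5, 0.1], hours=24))   # [1320.0, 3300.0, 660.0]
```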
Abstract: Since the rise of cloud computing, the applications of web services have expanded rapidly. However, the data centers of cloud computing also cause the problem of high power consumption, and their resources are often not used effectively. Decreasing power consumption and enhancing resource utilization have become main issues in the cloud computing environment. In this paper, we propose a method, called MBFDP (modified best fit decreasing packing), to decrease the power consumption and enhance the resource utilization of cloud computing servers. Experimental results show that the proposed solution can reduce power consumption effectively and enhance the utilization of server resources.
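For orientation, a sketch of the plain best-fit-decreasing (BFD) baseline that MBFDP modifies: sort VM demands in decreasing order and place each VM on the active server with the tightest remaining fit, powering on a new server only when nothing fits. The abstract does not detail MBFDP's modifications, so only the baseline is shown; demands and capacity are illustrative.

```python
# Best-fit-decreasing packing of VM demands onto identical servers.
def best_fit_decreasing(vm_demands, server_capacity):
    servers = []                          # remaining capacity per server
    placement = {}
    for vm, demand in sorted(enumerate(vm_demands),
                             key=lambda kv: kv[1], reverse=True):
        fits = [i for i, free in enumerate(servers) if free >= demand]
        if fits:
            best = min(fits, key=lambda i: servers[i])   # tightest fit
        else:
            servers.append(server_capacity)              # power on new server
            best = len(servers) - 1
        servers[best] -= demand
        placement[vm] = best
    return placement, len(servers)

# Five VMs packed onto 2 servers instead of 5: fewer powered-on machines.
print(best_fit_decreasing([0.5, 0.7, 0.1, 0.4, 0.3], server_capacity=1.0))
```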
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61274113, 11204212, 61404091, 51502203, and 51502204), the Tianjin Natural Science Foundation, China (Grant Nos. 14JCZDJC31500 and 14JCQNJC00800), and the Tianjin Science and Technology Developmental Funds of Universities and Colleges, China (Grant No. 20130701).
Abstract: In this letter, Ta/HfO2/BN/TiN resistive switching devices are fabricated, and they each exhibit low power consumption and high uniformity. The reset current is reduced for the HfO2/BN bilayer device compared with that of the Ta/HfO2/TiN structure. Furthermore, the reset current decreases with increasing BN thickness. The HfO2 layer is the dominant switching layer, while the low-permittivity, high-resistivity BN layer acts as a barrier to electron injection into the TiN electrode. The current conduction mechanism of the low-resistance state in the HfO2/BN bilayer device is space-charge-limited current (SCLC), while it is Ohmic conduction in the HfO2 device.
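SCLC versus Ohmic conduction is conventionally identified from the slope of the log J versus log V curve; for reference, the textbook trap-free Mott-Gurney form (standard literature, not taken from this letter) is:

```latex
% Trap-free space-charge-limited current (Mott-Gurney law): current density J
% scales with V^2 (slope 2 on a log-log plot), whereas Ohmic conduction gives
% J \propto V (slope 1). \varepsilon: permittivity, \mu: carrier mobility,
% d: switching-layer thickness.
\[
  J_{\mathrm{SCLC}} = \frac{9}{8}\,\varepsilon\mu\,\frac{V^{2}}{d^{3}}
\]
```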
Funding: Supported by the Natural Science Foundation of China under Grant Nos. 61376024 and 61306024, the Natural Science Foundation of Guangdong Province under Grant No. S2013040014366, and the Basic Research Programme of Shenzhen under Grant Nos. JCYJ20140417113430642 and JCYJ20140901003939020.
Abstract: To address the reliability and power consumption issues of Ethernet data transmission based on field programmable gate arrays (FPGAs), a low-power design method suitable for FPGA implementation is proposed. To reduce the dynamic power consumption of the integrated circuit (IC) design, the proposed method adopts dynamic control of the clock frequency. Most of the time, when the port is in an idle or lower-rate state, the reading clock frequency can be reduced or even turned off, lowering the clock flip frequency and hence the dynamic power consumption. When the receiving rate is high, the reading clock frequency is raised in time to ensure that no data are lost. Simulated and verified with ModelSim, the proposed method can dynamically control the clock frequency, including dynamic switching between high-speed and low-speed clock flip rates and stopping the clock flip entirely.
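A toy behavioral model of the rate-adaptive read-clock policy, in Python rather than HDL: the read-clock frequency is chosen from the FIFO fill level each step, gating the clock when the port is idle, running a slow clock at low occupancy, and switching to the fast clock before the FIFO can overflow. Thresholds, frequencies, and the drain model are invented for illustration; the actual design is RTL verified in ModelSim.

```python
# Pick the read-clock frequency (arbitrary MHz units) from the fill level.
def select_read_clock(fill_level, depth):
    if fill_level == 0:
        return 0          # clock gated off: no dynamic power
    if fill_level < depth // 4:
        return 25         # low-speed clock for light traffic
    return 125            # high-speed clock guarantees no data loss

def simulate(arrivals, depth=64):
    fill, log = 0, []
    for arriving in arrivals:
        fill = min(depth, fill + arriving)
        f = select_read_clock(fill, depth)
        fill = max(0, fill - f // 25)      # words drained scale with frequency
        log.append((f, fill))
    return log

# A traffic burst followed by an idle port.
print(simulate([4, 8, 12, 2, 0, 0, 0]))
```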