Abstract: The evolution of current networks faces challenges of programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation, in turn, created the controller placement problem: how to effectively place controllers within a network topology so that the control plane can manage the data plane devices. This study was designed to empirically evaluate and compare two controller placement algorithms: the Pareto optimal combination (POCO) and the multi-objective combination (MOCO). The methodology adopted combines explorative and comparative investigation techniques. The study evaluated the performance of the POCO and MOCO algorithms in relation to calibrated controller positions within a software-defined network; the network environment and measurement metrics were held constant for both models during the evaluation, and the strengths and weaknesses of each model were justified. For the GoodNet network, the results showed latencies of 3100 ms for POCO and 2500 ms for MOCO. In Switch to Controller Average Case latency, the performance was 2598 ms for POCO and 2769 ms for MOCO; in Worst Case Switch to Controller latency, 2776 ms for POCO and 2987 ms for MOCO.
For the Savvis network, the latencies of the two algorithms compared as follows: 2912 ms (POCO) and 2784 ms (MOCO) in Switch to Controller Average Case latency; 3129 ms (POCO) and 3017 ms (MOCO) in Worst Case Switch to Controller latency; 2789 ms (POCO) and 2693 ms (MOCO) in Average Case Controller to Controller latency; and 2873 ms (POCO) and 2756 ms (MOCO) in Worst Case Controller to Controller latency. For the AARNet network, the latencies compared as follows: 2473 ms (POCO) and 2129 ms (MOCO) in Switch to Controller Average Case latency; 2198 ms (POCO) and 2268 ms (MOCO) in Worst Case Switch to Controller latency; 2598 ms (POCO) and 2471 ms (MOCO) in Average Case Controller to Controller latency; and 2689 ms (POCO) and 2814 ms (MOCO) in Worst Case Controller to Controller latency. The Average Case and Worst Case latencies for Switch to Controller and Controller to Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, while the MOCO model appears to be more resilient than the POCO model.
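To illustrate the underlying controller placement problem the abstract describes, the sketch below evaluates average-case and worst-case switch-to-controller latency for candidate placements. The 5-node latency matrix and the exhaustive search are hypothetical illustrations only; they implement neither POCO nor MOCO, just the latency objectives those algorithms optimise.

```python
import itertools

# Hypothetical 5-node topology: LAT[i][j] is the shortest-path
# latency (ms) between nodes i and j. Values are illustrative only.
LAT = [
    [0, 10, 25, 40, 30],
    [10, 0, 15, 30, 20],
    [25, 15, 0, 15, 35],
    [40, 30, 15, 0, 20],
    [30, 20, 35, 20, 0],
]

def placement_cost(controllers, lat):
    """Average- and worst-case switch-to-controller latency when every
    switch attaches to its nearest controller."""
    per_switch = [min(lat[s][c] for c in controllers) for s in range(len(lat))]
    return sum(per_switch) / len(per_switch), max(per_switch)

def best_placement(k, lat):
    """Exhaustively search all k-controller placements, minimising the
    worst-case switch-to-controller latency."""
    return min(itertools.combinations(range(len(lat)), k),
               key=lambda cs: placement_cost(cs, lat)[1])

best = best_placement(2, LAT)
avg, worst = placement_cost(best, LAT)
```

Exhaustive search is only feasible for small topologies; placement heuristics such as POCO exist precisely because the number of candidate placements grows combinatorially with network size.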
Abstract: Performance measurement (PM) generates useful data for process control, facilitates communication between different sectors, and helps to align efforts on the most important aspects of the business. Thus, PM plays a key role in the management of projects and organizations. PM is also important in the implementation of lean production principles and methods, such as reducing the share of non-value-adding activities, increasing process transparency, building continuous improvement into the process, and benchmarking. Moreover, the adoption of the lean production philosophy requires changes in PM. Despite its importance, limited studies have been conducted on the use of PM systems for assessing the impact of lean production programs in construction projects, and on how lean companies (or projects) use performance measurement and to what extent the indicators adopted reflect the results of the actions that have been undertaken. This study proposes a set of requirements for PM systems of construction projects from the perspective of lean production, along with a taxonomy of performance metrics for lean production systems. Five empirical studies have been carried out on construction companies from South America involved in the implementation of lean production systems. The scope of this investigation is limited to construction projects as production systems, rather than PM at the level of construction organizations.
Abstract: Quantitative security metrics are desirable for measuring the performance of information security controls. Security metrics help in making functional and business decisions to improve the performance and cost of security controls. However, defining enterprise-level security metrics has long been listed among the hard problems on the InfoSec Research Council's hard problems list, and efforts to define absolute security metrics for enterprise security have largely proved unfruitful. At the same time, with the maturing of the security industry, there has been continuous emphasis from regulatory bodies on establishing measurable security metrics. This paper addresses this need and proposes a relative security metric model that derives three quantitative security metrics, named Attack Resiliency Measure (ARM), Performance Improvement Factor (PIF), and Cost/Benefit Measure (CBM), for measuring the performance of security controls. Virtualization technologies are rapidly changing the landscape of the computing world, and devising security metrics for virtualized environments is even more challenging; secure virtual machine (VM) migration in particular is an evolving area for which no standard protocol is yet available. For the effectiveness evaluation of the proposed metrics, this paper therefore took the secure VM migration protocol as the target of assessment and applied the proposed relative security metric model to measure its Attack Resiliency Measure, Performance Improvement Factor, and Cost/Benefit Measure.
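The abstract does not give the actual formulas for ARM, PIF, and CBM, but the idea of a *relative* metric model can be sketched as ratios comparing an enhanced control against a baseline. Everything below (the field names, the ratio definitions, the numbers) is a hypothetical illustration of that idea, not the paper's definitions.

```python
def relative_metrics(baseline, enhanced):
    """Illustrative relative security metrics comparing an enhanced
    control against a baseline control. The formulas are hypothetical
    stand-ins for ARM, PIF, and CBM, not the paper's exact definitions."""
    # ARM-like ratio: attacks resisted by the enhanced control vs baseline.
    arm = enhanced["attacks_resisted"] / baseline["attacks_resisted"]
    # PIF-like ratio: relative overhead change (lower overhead -> PIF > 1).
    pif = baseline["overhead_ms"] / enhanced["overhead_ms"]
    # CBM-like ratio: security gain per unit of added cost
    # (guard against a zero cost difference).
    cbm = (arm - 1.0) / max(enhanced["cost"] - baseline["cost"], 1e-9)
    return arm, pif, cbm

# Hypothetical measurements for a baseline and a hardened VM-migration setup.
base = {"attacks_resisted": 8, "overhead_ms": 120.0, "cost": 10.0}
newc = {"attacks_resisted": 12, "overhead_ms": 100.0, "cost": 14.0}
arm, pif, cbm = relative_metrics(base, newc)
```

The appeal of relative metrics, as the abstract argues, is that each value is meaningful without an absolute security scale: a ratio above 1 indicates improvement over the chosen baseline.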
Funding: Zulqar and Kim’s research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2021R1A6A1A03039493), and in part by the NRF grant funded by the Korea government (MSIT) (NRF-2022R1A2C1004401). Mekala’s research was supported in part by the Basic Science Research Program of the Ministry of Education (NRF-2018R1A2B6005105) and in part by the NRF grant funded by the Korea Government (MSIT) (no. 2019R1A5A8080290).
Abstract: Fog computing brings computational services near the network edge to meet the latency constraints of cyber-physical system (CPS) applications. Edge devices have limited computational capacity and energy availability, which hampers end-user performance. We designed a novel performance measurement index to gauge a device’s resource capacity. This work addresses offloading mechanism issues, where the end user (EU) offloads part of its workload to a nearby edge server (ES); the ES may in turn offload the workload to another ES or a cloud server to achieve reliable performance under limited resources (such as storage and computation). The manuscript aims to reduce the service offloading rate by selecting a suitable device or server, accomplishing low average latency and service completion time to meet the deadline constraints of sub-divided services. In this regard, an adaptive online status predictive model is significant for prognosticating the resource requirements of arriving services in order to make offloading decisions. Consequently, a reinforcement learning-based flexible x-scheduling (RFXS) approach, where x = service/resource, resolves the service offloading issues to produce low latency and high network performance. The theoretical bound and computational complexity of our approach are derived by formulating the system efficiency. A quadratic restraint mechanism is employed to formulate the service optimization issue according to a set of measurements, as well as the behavioural association rate and adulation factor. Our system achieved an average service offloading rate of 0.89%, with 39 ms of delay in complex scenarios (using three servers with a 50% service arrival rate). The simulation outcomes confirm that the proposed scheme attained low offloading uncertainty and is suitable for simulating heterogeneous CPS frameworks.
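The RFXS scheduler itself is beyond the abstract, but the offloading decision it automates can be sketched with a simple transmit-plus-compute latency model: execute locally, or ship the payload to whichever edge/cloud server yields the lowest predicted completion time. All parameters, names, and numbers below are hypothetical; this greedy rule is a stand-in for the learned policy, not the paper's algorithm.

```python
def completion_time(size_mb, cpu_cycles, link_mbps, server_gcps):
    """Predicted completion time (ms) on a remote server: transmit the
    payload over the link, then execute it at the server's cycle rate."""
    transmit_ms = size_mb * 8 / link_mbps * 1000   # MB -> Mb over the link
    compute_ms = cpu_cycles / (server_gcps * 1e9) * 1000
    return transmit_ms + compute_ms

def choose_target(task, local_gcps, servers):
    """Greedy offloading decision: pick local execution or the server
    with the lowest predicted completion time."""
    best_name = "local"
    best_ms = task["cycles"] / (local_gcps * 1e9) * 1000  # no transmit cost
    for name, (link_mbps, gcps) in servers.items():
        t = completion_time(task["size_mb"], task["cycles"], link_mbps, gcps)
        if t < best_ms:
            best_name, best_ms = name, t
    return best_name, best_ms

# Hypothetical workload and servers: (link bandwidth Mbps, capacity Gcycles/s).
task = {"size_mb": 2.0, "cycles": 4e9}
servers = {"es1": (100.0, 3.0), "cloud": (20.0, 10.0)}
target, t_ms = choose_target(task, local_gcps=0.5, servers=servers)
```

The example also shows the trade-off the abstract alludes to: the nearby ES has the faster link, but for this compute-heavy task the better-provisioned cloud server still wins despite its slower link.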