Security is a key problem for the development of cloud computing, and a common service security architecture is a basic abstraction that supports security research. Authorization in service security must cope with increasingly complex and variable users and environments. From a multidimensional viewpoint, the service security architecture is described along three dimensions: service security requirements, security attributes, and service layers. An attribute-based dynamic access control model is presented to detail the relationships among subjects, objects, roles, attributes, context, and other factors. The model uses dynamic control policies to support multiple roles and flexible authorization. Finally, the access control and policy execution mechanisms are studied as implementation suggestions.
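The attribute-based model described above can be illustrated with a minimal sketch (the request fields, policy rules, and attribute names here are hypothetical, invented for illustration rather than taken from the paper): an access request carries subject, object, and context attributes, and a set of dynamic policies is evaluated against it.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject_attrs: dict   # e.g. roles held by the requesting user
    object_attrs: dict    # e.g. sensitivity of the target resource
    context: dict         # e.g. network zone, time of day

def evaluate(policies, request):
    """Grant access only if every policy evaluates to True
    (a deny-overrides combination of predicate policies)."""
    return all(p(request) for p in policies)

# Hypothetical dynamic policies: a role/sensitivity check plus a context check.
policies = [
    lambda r: "admin" in r.subject_attrs.get("roles", [])
              or r.object_attrs.get("sensitivity") == "public",
    lambda r: r.context.get("zone") == "trusted",
]

req = Request(
    subject_attrs={"roles": ["admin"]},
    object_attrs={"sensitivity": "confidential"},
    context={"zone": "trusted"},
)
print(evaluate(policies, req))  # True: role and context are both satisfied
```

Because the policies are plain predicates held in a list, they can be swapped at runtime, which is the flexibility the dynamic-policy idea aims at.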
Cloud Computing has become one of the most popular buzzwords in IT since Web 2.0. It is not a new technology but a concept that binds together existing, mature technologies, including grid computing, utility computing, distributed systems, and virtualization. Business Process Management (BPM) applies IT infrastructure to business management, focusing on process modeling, monitoring, and management. BPM combines business processes, business information, and IT resources, helping to build a real-time intelligent system based on business management and IT technologies. This paper reviews the theory of cloud computing and proposes a BPM implementation for cloud environments.
To address the lag of traditional storage technology in mass data storage and management, an application platform for big data is designed and built on a platform integrating Hadoop with a data warehouse, which eases the management and use of data. To break through the master-node bottleneck, a higher-performance storage system is designed by introducing cloud computing technology; it adopts a master-slave distribution pattern in which network access follows the nearest-node principle, reducing the burden of single-point access on the master node. A file block update strategy and a fault recovery mechanism are also provided to resolve the management bottlenecks of traditional storage systems in data update and fault recovery, offering feasible technical solutions for big data storage management.
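The nearest-node read path described above can be sketched roughly as follows (class names, node names, and the hop-count metric are hypothetical): the master serves only block metadata, and the client fetches the block from the closest replica, so data traffic bypasses the master.

```python
class Master:
    """Metadata-only master: maps block ids to the nodes holding replicas."""
    def __init__(self):
        self.block_locations = {}  # block_id -> set of node ids

    def add_replica(self, block_id, node):
        self.block_locations.setdefault(block_id, set()).add(node)

    def locate(self, block_id):
        return self.block_locations.get(block_id, set())

def read_block(master, block_id, distance):
    """Pick the replica closest to the client by the given distance
    metric; the block data itself is then fetched from that slave node,
    so the master never sits on the data path."""
    nodes = master.locate(block_id)
    if not nodes:
        raise KeyError(block_id)
    return min(nodes, key=distance)

m = Master()
m.add_replica("blk-1", "rackA-node1")
m.add_replica("blk-1", "rackB-node3")
hops = {"rackA-node1": 1, "rackB-node3": 4}  # hypothetical hop counts
print(read_block(m, "blk-1", hops.get))  # rackA-node1, the nearer replica
```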
Cloud computing is becoming the development trend of the information field and is causing many transformations in related fields. To adapt to these changes, computer forensics must evolve and integrate into the new environment. Starting from this point, this paper proposes a computer forensic service framework based on the security architecture of cloud computing and the requirements of the cloud computing environment. The framework introduces the honey farm technique and emphasizes active forensics, which can improve case-handling efficiency and reduce cost.
Ubiquitous networks, with their broad range of applications and massive volumes of data, require efficient information processing technology. This paper analyzes the ubiquitous network application platform and its requirements for various kinds of computing and network resources, and assesses the distributed processing capabilities of current cloud computing technology. It thereby provides a reference for future applications of cloud computing technology and for new application fields.
Cloud computing is one of the main topics of interest to the spatial data community. A cloud refers to computing infrastructure presented as a network. From the providers' perspective, the main characteristics of cloud computing are its dynamic nature and its high computing and storage power. Cloud computing is also a cost-effective way to present web-based spatial data and perform complex analysis, and it facilitates distributed computing and the storage of diverse data; one of its main features is powerful computing and dynamic storage at an affordable cost over a secure web. In this paper we investigate the methodologies, services, issues, and deployed techniques of cloud computing; probe its past, present, and future; and discuss some security concerns. Cloud computing is undoubtedly vital for spatial data infrastructure, and it can expand the interactions of spatial data infrastructure in the future.
Developments in service-oriented architecture (SOA) have brought us close to the once fictional dream of forming and running an online business, a commercial activity in which most or all of the business roles are outsourced to online services. The novel concept of cloud computing offers a realization of SOA in which information technology assets are provided as services that are more flexible, inexpensive, and attractive to businesses. In this paper, we concisely survey developments in cloud computing and discuss the advantages of using cloud services for businesses along with the trade-offs they must consider. We then present a layered architecture for online business, followed by a conceptual architecture for a complete online business operating environment. Moreover, we discuss the prospects and research challenges ahead of us in realizing the technical components of this conceptual architecture. We conclude with the outlook and impact of cloud services on both large and small businesses.
Big Data applications are pervading more and more aspects of our life, encompassing commercial and scientific uses at increasing rates as we move towards exascale analytics. Examples of Big Data applications include storing and accessing user data in commercial clouds, mining social data, and analyzing large-scale simulations and experiments such as the Large Hadron Collider. An increasing number of such data-intensive applications and services rely on clouds to process and manage the enormous amounts of data required for continuous operation. It can be difficult to decide which of the many options for cloud processing is suitable for a given application; the aim of this paper is therefore to provide an interested user with an overview of the most important concepts of cloud computing as they relate to the processing of Big Data.
The data center network (DCN), an important component of data centers, consists of a large number of hosted servers and switches connected by high-speed communication links. A DCN enables the centralized deployment of resources and on-demand user access to the information and services of data centers. In recent years, the scale of DCNs has constantly increased with the widespread use of cloud-based services and the unprecedented amount of data delivered within and between data centers, whereas the traditional DCN architecture lacks the aggregate bandwidth, scalability, and cost effectiveness needed to cope with tenants' increasing demands for accessing cloud data center services. Therefore, a novel DCN architecture that is scalable, low-cost, robust, and energy-conserving is required. This paper reviews recent research findings and technologies of DCN architectures to identify the issues in existing DCN architectures for cloud computing. We develop a taxonomy for classifying current DCN architectures and qualitatively analyze traditional and contemporary architectures. Moreover, the DCN architectures are compared on significant characteristics such as bandwidth, fault tolerance, scalability, overhead, and deployment cost. Finally, we put forward open research issues in the deployment of scalable, low-cost, robust, and energy-efficient DCN architectures for data centers in computational clouds.
Since service-oriented architecture (SOA) exhibits the black-box nature of services, heterogeneity, service dynamism, and service evolvability, managing services is known to be a challenging problem. Autonomic computing (AC) is a way of designing systems that can manage themselves without direct human intervention, so applying the key disciplines of AC to service management is appealing. A key task of service management is to identify probable causes for detected symptoms and to devise actuation methods that can remedy those causes. In SOA, there are a number of target elements for service remedies, and a number of causes can be associated with each target element. However, there is not yet a widely accepted, comprehensive taxonomy of causes, and this lack limits the possibility of remedying problems in an autonomic way. In this paper, we first present a meta-model, extract all target elements for service fault management, and present a computing model for autonomously managing service faults. We then define a fault taxonomy for each target element and the inter-relationships among the elements. Finally, we show a prototype implementation using the cause taxonomy and conduct experiments with the prototype to validate its applicability and effectiveness.
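The symptom-to-cause-to-remedy chain that such a taxonomy enables can be sketched as follows (the taxonomy entries, symptom names, and remedy names are invented for illustration, not taken from the paper): each (target element, symptom) pair maps to probable causes, and each cause maps to an actuation method.

```python
# Hypothetical cause taxonomy: (target element, symptom) -> probable causes.
CAUSE_TAXONOMY = {
    ("service", "timeout"): ["overload", "network_partition"],
    ("service", "wrong_result"): ["version_mismatch"],
}

# Hypothetical remedy catalog: cause -> actuation method.
REMEDIES = {
    "overload": "scale_out",
    "network_partition": "reroute",
    "version_mismatch": "redeploy",
}

def plan_remedies(target, symptom):
    """Look up probable causes for a detected symptom and return the
    corresponding remedies, mirroring an autonomic analyze/plan step."""
    causes = CAUSE_TAXONOMY.get((target, symptom), [])
    return [REMEDIES[c] for c in causes]

print(plan_remedies("service", "timeout"))  # ['scale_out', 'reroute']
```

The point of a shared taxonomy is exactly this lookup: once causes are enumerated per target element, remedy selection becomes mechanical rather than ad hoc.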
Mobile edge computing (MEC) assists clouds in handling enormous numbers of tasks from mobile devices in close proximity. When edge servers are not allocated efficiently according to the dynamic nature of the network, processing is delayed and tasks are dropped due to time limits. Researchers find it difficult and complex to determine the offloading decision because of the uncertain, dynamic load on the edge nodes; the challenge lies in making the offloading decision, i.e., selecting edge nodes for offloading, in a centralized manner. This study focuses on minimizing task-processing time while simultaneously increasing the success rate of the service provided by edge servers. First, a task-offloading problem is formulated in terms of communication and processing. The offloading decision problem is then solved through deep analysis of task flow in the network and feedback from the devices on edge services. The model is strengthened by the Deep Mobile-X architecture and a bidirectional long short-term memory (b-LSTM) network. The simulation is carried out in the EdgeCloudSim environment, and the outcomes show the significance of the proposed idea; the processing time of the proposed model is 6.6 s. Performance metrics including server utilization, the ratio of dropped tasks, and the number of offloaded tasks are evaluated and compared with existing learning approaches, and the proposed model shows a better trade-off than existing approaches.
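A much-simplified version of the offloading decision, replacing the paper's learned b-LSTM policy with a plain latency comparison, can be sketched as follows (all rates and the queueing term are hypothetical): offload a task when the estimated edge-side time (transmit, queue, process) beats local execution, which is the quantity a learned policy would be optimizing.

```python
def should_offload(task_bits, local_rate, uplink_rate, edge_rate, edge_queue):
    """Offload iff estimated edge completion time is lower than local.

    task_bits   -- task size in bits
    local_rate  -- local processing rate, bits/s
    uplink_rate -- wireless uplink rate, bits/s
    edge_rate   -- edge server processing rate, bits/s
    edge_queue  -- estimated queueing delay at the edge node, seconds
    """
    local_time = task_bits / local_rate
    edge_time = task_bits / uplink_rate + edge_queue + task_bits / edge_rate
    return edge_time < local_time

# Hypothetical numbers: a 10 Mbit task, a slow device, a lightly loaded edge.
print(should_offload(10e6, local_rate=1e6, uplink_rate=20e6,
                     edge_rate=50e6, edge_queue=0.5))  # True
```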
The cloud computing environment is attracting growing interest as a new trend in data management. Data replication has been widely applied to improve data access in distributed systems such as grids and clouds. However, due to the finite storage capacity of each site, copies that are useful for future jobs can be wastefully deleted and replaced with less valuable ones. It is therefore important to have a replication strategy that can dynamically store replicas while satisfying quality-of-service (QoS) requirements and storage capacity constraints. In this paper, we present a dynamic replication algorithm named hierarchical data replication strategy (HDRS). HDRS consists of replica creation, which can adaptively increase replicas based on an exponential growth or decay rate; replica placement, based on access load and a labeling technique; and replica replacement, based on the future value of a file. We evaluate different dynamic data replication methods using CloudSim simulation. Experiments demonstrate that HDRS reduces response time and bandwidth usage compared with other algorithms: HDRS can identify a popular file and replicate it to the best site, avoiding useless replications and decreasing access latency by balancing the load across sites.
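The exponential growth-or-decay rule for replica creation can be sketched as follows (the threshold, growth, and decay values are hypothetical tuning parameters, not the paper's): a popular file multiplies its replica count, while a cold file sheds replicas down to a floor of one.

```python
import math

def target_replicas(current, access_rate, threshold, growth=2.0, decay=0.5):
    """Return the next replica count for a file.

    If the observed access rate is at or above the popularity threshold,
    grow the replica count exponentially; otherwise decay it, never
    dropping below a single copy.
    """
    if access_rate >= threshold:
        return max(1, math.ceil(current * growth))   # popular: multiply replicas
    return max(1, math.floor(current * decay))       # cold: shed replicas

print(target_replicas(2, access_rate=120, threshold=100))  # 4
print(target_replicas(4, access_rate=10, threshold=100))   # 2
```

Re-applying the rule each monitoring interval gives the exponential-growth/decay behavior: replica counts track popularity quickly without ever deleting the last copy.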
Cloud manufacturing has become a reality. It requires sensing and capturing heterogeneous manufacturing resources and extensive data analysis through the industrial internet. However, cloud computing and service-oriented architecture are somewhat inadequate for dynamic manufacturing resource management. This paper integrates edge computing and microservice technology and develops an intelligent edge gateway for Internet of Things (IoT)-based manufacturing. Distributed manufacturing resources can be accessed through the edge gateway, and cloud-edge collaboration can be realized. The intelligent edge gateway provides a solution for the ubiquitous perception of complex resources in current manufacturing scenarios. Finally, a prototype system is developed to verify the effectiveness of the intelligent edge gateway.
On the basis of a discussion of the main functions of logistics public information platforms, and a summary of the general functions of some typical cases at home and abroad, such as China Electronic Port and TradeLink, the architecture of a logistics public information platform is introduced from a technical point of view. The architecture contains multiple levels, such as a cloud computing platform and a data storage layer, and some newly available technologies are discussed. Finally, important trends in the development of logistics public information platforms are discussed.
The rapid advancements in hardware, software, and computer networks have facilitated the shift of the computing paradigm from mainframe to cloud computing, in which users can get their desired services anytime, anywhere, and by any means. However, cloud computing also presents many challenges, one of which is the difficulty of allowing users to freely obtain desired services, such as heterogeneous OSes and applications, via different lightweight devices. We have proposed a new paradigm, called transparent computing, that spatio-temporally extends the von Neumann architecture to centrally store and manage commodity programs, including OS code, while streaming them to run on non-state clients. This leads to a service-centric computing environment in which users can select desired services on demand, without concern for the administration of those services, such as their installation, maintenance, management, and upgrades. In this paper, we introduce a novel concept, namely Meta OS, to support such program streaming through a distributed 4VP~ platform. Based on this platform, a pilot system has been implemented that supports Windows and Linux environments. We verify the effectiveness of the platform through both real deployments and testbed experiments. The evaluation results suggest that the 4VP~ platform is a feasible and promising solution for the future computing infrastructure for cloud services.
The demand for 5G services and applications is driving change in network architecture. Mobile edge computing (MEC) combines mobile network technology with cloud computing and virtualization, and is one of the key technologies for 5G networks. Compared to network function virtualization (NFV), another critical enabler of 5G networks, MEC reduces latency and enhances the offered capacity. In this paper, we discuss the combination of the two technologies and propose a new architecture. Moreover, we list application scenarios using the proposed architecture.
Funding (service security architecture paper): supported by the National Information Security Program under Grant No. 2009A112.
Funding (computer forensics framework paper): sponsored by the National Social Science Fund of China (Grant No. 13CFX054) and the Project of Humanities and Social Science of the Chinese Ministry of Education (Grant No. 11YJCZH175).
Funding (data center network survey): supported by the Malaysian Ministry of Higher Education under the University of Malaya High Impact Research Grant (No. UM.C/HIR/MOHE/FCSIT/03).
Funding (autonomic service fault management paper): supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (Project No. 2011-0002534).
Funding (intelligent edge gateway paper): supported by the National Key Research and Development Program of China (No. 2020YFB1710500) and the Primary Research & Development Plan of Jiangsu Province (No. BE2021091).
Funding (transparent computing paper): supported in part by the National High-Tech Research and Development (863) Program of China (No. 2011AA01A203), the National Key Basic Research and Development Program (973) of China (No. 2012BAH13F04), and the research fund of the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.