Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using a surface vehicle, the swarm robot system is more efficient than the operation of a single object, as the former can reduce cost and save time. It is necessary to detect adjacent surface obstacles robustly to operate a cluster of unmanned surface vehicles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information for all directions, relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although the GPS (global positioning system) error range exists, obtaining measurements of the surface-vessel position can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to redefine the sparse 3D point cloud data as 2D image data with a connotative meaning and subsequently utilize this transformed data for object classification. Hence, we have proposed a descriptor that converts the 3D point cloud data into 2D image data. To use this descriptor effectively, it is necessary to perform a clustering operation that separates the point clouds for each object. We developed voxel-based clustering for the point cloud clustering. Furthermore, using the descriptor, 3D point cloud data can be converted into a 2D feature image, and the converted 2D image is provided as an input value to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in the simulator, and we explore the feasibility of real-time object classification within this framework.
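The voxel-based clustering step described in the abstract above can be sketched as a flood fill over occupied voxels. This is a minimal illustration under assumed parameters (voxel size, 26-connectivity); the function name and defaults are hypothetical, not the authors' implementation:

```python
from collections import defaultdict, deque

def voxel_clustering(points, voxel_size=0.5):
    """Group 3D points into clusters by flood-filling occupied voxels
    (26-neighbourhood). Returns a list of clusters of point indices."""
    # Assign every point to a voxel key.
    voxels = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append(i)

    # Breadth-first search over adjacent occupied voxels.
    seen, clusters = set(), []
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    for start in voxels:
        if start in seen:
            continue
        seen.add(start)
        queue, members = deque([start]), []
        while queue:
            v = queue.popleft()
            members.extend(voxels[v])
            for d in offsets:
                n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
                if n in voxels and n not in seen:
                    seen.add(n)
                    queue.append(n)
        clusters.append(members)
    return clusters

# Two well-separated blobs should yield two clusters.
pts = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (5.0, 5.0, 5.0), (5.1, 5.2, 5.0)]
print(len(voxel_clustering(pts)))  # 2
```

Each resulting cluster would then be fed to the descriptor that renders it as a 2D feature image for the network.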
In order to enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data was obtained with a 3D laser scanner and optimized with Autodesk ReCap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper will offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
Model reconstruction from points scanned on existing physical objects is very important in a variety of situations, such as reverse engineering for mechanical products, computer vision, and recovery of biological shapes from two-dimensional contours. With the development of measuring equipment, point clouds that contain more details of the object can be obtained conveniently. On the other hand, the large quantity of sampled points brings difficulties to model reconstruction methods. This paper first presents an algorithm to automatically reduce the number of cloud points under a given tolerance. A triangle mesh surface is reconstructed from the simplified data set by the marching cubes algorithm. For various reasons, the reconstructed mesh usually contains unwanted holes. An approach is proposed to create new triangles with optimized shape for covering the unexpected holes in triangle meshes. After hole filling, a watertight triangle mesh can be directly output in STL format, which is widely used in rapid prototype manufacturing. Practical examples are included to demonstrate the method.
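One common way to reduce a point cloud under a given tolerance, as the abstract above describes, is grid decimation: bucket points into cells of edge length equal to the tolerance and keep one centroid per cell. This is a generic sketch, not necessarily the paper's algorithm:

```python
from collections import defaultdict

def reduce_points(points, tolerance):
    """Decimate a point cloud: bucket points into cubic cells of edge
    `tolerance` and keep one centroid per occupied cell, so every
    discarded point lies within a cell diagonal of a survivor."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // tolerance) for c in p)
        cells[key].append(p)
    # Replace each bucket by the mean of its points.
    return [tuple(sum(c) / len(c) for c in zip(*bucket))
            for bucket in cells.values()]

dense = [(i * 0.01, 0.0, 0.0) for i in range(100)]  # 100 points along 1 m
print(len(reduce_points(dense, 0.1)))  # 10 representatives
```

The simplified set would then go to marching cubes for mesh reconstruction.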
Digital technology provides a method of quantitative investigation and data analysis for contemporary landscape spatial analysis, and related research is moving from image recognition to digital algorithmic analysis, providing a more scientific and macroscopic way of research. The key to refined design is to refine the spatial design process and the spatial improvement strategy system. Taking the ancient city of Zhaoyu in Qixian County, Shanxi Province as an example: (1) based on the integrated data of the ancient city obtained through drone oblique photography, the style and landscape of the ancient city are modeled; (2) the point cloud data with spatial information is imported into a point cloud analysis platform, and data analysis is carried out from the overall macroscopic style of the ancient city down to the refined level, which results in a more intuitive landscape design scheme, thus improving the precision and practicability of the landscape design; (3) based on spatial big data, a refined analysis of the site is achieved through an evaluation index system covering the spatial aggregation level, spatial distribution characteristics, and other indices. Digital technology and methods are used throughout the process to explore the refined design path.
Advanced cloud computing technology provides cost savings and flexibility of services for users. With the explosion of multimedia data, more and more data owners outsource their personal multimedia data to the cloud. In the meantime, some computationally expensive tasks are also undertaken by cloud servers. However, the outsourced multimedia data and its applications may reveal the data owner's private information, because the data owners lose control of their data. Recently, these concerns have aroused new research interest in privacy-preserving reversible data hiding over outsourced multimedia data. In this paper, two reversible data hiding schemes are proposed for encrypted image data in cloud computing: reversible data hiding by homomorphic encryption and reversible data hiding in the encrypted domain. In the former, additional bits are extracted after decryption; in the latter, they are extracted before decryption. Meanwhile, a combined scheme is also designed. This paper proposes a privacy-preserving outsourcing scheme of reversible data hiding over encrypted image data in cloud computing, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversible data hiding can be operated over encrypted images at the different stages. Theoretical analysis confirms the correctness of the proposed encryption model and justifies the security of the proposed scheme. The computation cost of the proposed scheme is acceptable and adjusts to different security levels.
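The "reversible data hiding by homomorphic encryption" idea in the abstract above relies on an additively homomorphic cipher: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a server can embed bits into encrypted pixels without decrypting them. A toy Paillier demonstration (deliberately tiny, insecure parameters, purely for illustration; not the paper's scheme):

```python
import math
import random

# Toy Paillier keypair (insecure primes, illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def encrypt(m, rng=random.Random(7)):
    """Paillier encryption: c = g^m * r^n mod n^2 with random r."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    def L(u):
        return (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

# Embed a watermark bit into an encrypted pixel by ciphertext multiplication.
pixel, watermark_bit = 117, 1
c = (encrypt(pixel) * encrypt(watermark_bit)) % n2  # no decryption needed
print(decrypt(c))  # 118
```

Extracting the bit after decryption (here, reading the least significant change) corresponds to the "extracted after decryption" variant the paper describes.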
Due to the increasing number of cloud applications, the amount of data in the cloud shows signs of growing faster than ever before. The nature of cloud computing requires cloud data processing systems that can handle huge volumes of data and deliver high performance. However, most current cloud storage systems adopt a hash-like approach to retrieving data that only supports simple keyword-based queries and lacks various forms of information search. Therefore, a scalable and efficient indexing scheme is clearly required. In this paper, we present a skip list-based cloud index, called SLC-index, a novel, scalable skip list-based indexing scheme for cloud data processing. The SLC-index offers a two-layered architecture for extending indexing scope and facilitating better throughput. Dynamic load balancing for the SLC-index is achieved by online migration of index nodes between servers. Furthermore, it is a flexible system thanks to its dynamic addition and removal of servers. The SLC-index is efficient for both point and range queries. Experimental results show the efficiency of the SLC-index and its usefulness as an alternative approach for cloud-suitable data structures.
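The advantage of a skip list over the hash-like retrieval the abstract above criticizes is that keys stay ordered, so range queries fall out naturally. A minimal single-node skip list sketch (class and method names are illustrative; the SLC-index itself is distributed and two-layered):

```python
import random

class SkipList:
    """Minimal skip list: expected O(log n) point lookups plus ordered
    traversal for range queries, which a plain hash index cannot offer."""
    MAX_LEVEL = 16

    class Node:
        __slots__ = ("key", "value", "forward")
        def __init__(self, key, value, level):
            self.key, self.value = key, value
            self.forward = [None] * level

    def __init__(self, seed=0):
        self.head = self.Node(None, None, self.MAX_LEVEL)
        self.level = 1
        self.rng = random.Random(seed)

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and self.rng.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key, value):
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = self.Node(key, value, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def get(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node.value if node and node.key == key else None

    def range(self, lo, hi):
        """All (key, value) pairs with lo <= key <= hi, in key order."""
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < lo:
                node = node.forward[i]
        node, out = node.forward[0], []
        while node and node.key <= hi:
            out.append((node.key, node.value))
            node = node.forward[0]
        return out

sl = SkipList()
for k in [5, 1, 9, 3, 7]:
    sl.insert(k, f"row-{k}")
print(sl.get(7))       # row-7
print(sl.range(2, 8))  # [(3, 'row-3'), (5, 'row-5'), (7, 'row-7')]
```

In a distributed setting such as the SLC-index, contiguous key ranges of such a structure can be migrated between servers for load balancing.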
To reduce energy consumption in cloud data centres, in this paper we propose two algorithms: the Energy-aware Scheduling algorithm using a Workload-aware Consolidation Technique (ESWCT) and the Energy-aware Live Migration algorithm using a Workload-aware Consolidation Technique (ELMWCT). As opposed to traditional energy-aware scheduling algorithms, which often focus on only a one-dimensional resource, the two algorithms are based on the fact that multiple resources (such as CPU, memory and network bandwidth) are shared by users concurrently in cloud data centres, and that heterogeneous workloads have different resource consumption characteristics. Both algorithms investigate the problem of consolidating heterogeneous workloads. They try to execute all Virtual Machines (VMs) with the minimum number of Physical Machines (PMs), and then power off unused physical servers to reduce power consumption. Simulation results show that both algorithms efficiently utilise the resources in cloud data centres, and the multidimensional resources achieve well-balanced utilization, demonstrating their promising energy-saving capability.
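The consolidation problem the abstract above describes is a multidimensional bin-packing problem. A greedy first-fit-decreasing sketch over (CPU, memory, bandwidth) demands (a generic heuristic under assumed data shapes, not the ESWCT/ELMWCT algorithms themselves):

```python
def consolidate(vms, pm_capacity):
    """Greedy workload consolidation: sort VMs by dominant resource
    demand, then first-fit them onto PMs so every dimension stays
    within capacity; PMs never opened can be powered off."""
    dims = len(pm_capacity)
    order = sorted(vms, reverse=True,
                   key=lambda v: max(v[d] / pm_capacity[d] for d in range(dims)))
    pms = []  # each PM tracked by residual capacity and placed VMs
    for vm in order:
        for pm in pms:
            if all(pm["free"][d] >= vm[d] for d in range(dims)):
                pm["free"] = [pm["free"][d] - vm[d] for d in range(dims)]
                pm["vms"].append(vm)
                break
        else:  # no running PM fits: power on a new one
            pms.append({"free": [pm_capacity[d] - vm[d] for d in range(dims)],
                        "vms": [vm]})
    return pms

# (cpu, memory, bandwidth) demands, normalised against a (1, 1, 1) PM.
vms = [(0.5, 0.2, 0.1), (0.4, 0.7, 0.2), (0.3, 0.1, 0.6), (0.2, 0.2, 0.2)]
print(len(consolidate(vms, (1.0, 1.0, 1.0))))  # 2 PMs needed
```

Sorting by the dominant (largest normalised) dimension is one simple way to keep the multiple resource dimensions balanced, in the spirit of the workload-aware consolidation described.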
This study concerns a Ka-band solid-state transmitter cloud radar, made in China, which can operate in three different work modes, with different pulse widths and coherent and incoherent integration numbers, to meet the requirements for cloud remote sensing over the Tibetan Plateau. Specifically, the design of the three operational modes of the radar (i.e., boundary mode M1, cirrus mode M2, and precipitation mode M3) is introduced. Also, a cloud radar data merging algorithm for the three modes is proposed. Using one month's continuous measurements during summertime at Naqu on the Tibetan Plateau, we analyzed the consistency between the cloud radar measurements of the three modes. The number of occurrences of radar detections of hydrometeors and the percentage contributions of the different modes' data to the merged data were estimated. The performance of the merging algorithm was evaluated. The results indicated that the minimum detectable reflectivity for each mode was consistent with theoretical results. Merged data provided measurements with a minimum reflectivity of -35 dBZ at the height of 5 km, and obtained information above the height of 0.2 km. Measurements of radial velocity by the three operational modes agreed very well, and systematic errors in measurements of reflectivity were less than 2 dB. However, large discrepancies existed in the measurements of the linear depolarization ratio taken from the different operational modes. The percentage of radar detections of hydrometeors in mid- and high-level clouds increased by 60% through application of pulse compression techniques. In conclusion, the merged data are appropriate for cloud and precipitation studies over the Tibetan Plateau.
Various types of radars with different horizontal and vertical detection ranges are deployed in China, particularly over complex terrain where radar blind zones are common. In this study, a new variational method is developed to correct three-dimensional radar reflectivity data based on hourly ground precipitation observations. The aim of this method is to improve the quality of observations of various types of radar and effectively assimilate operational Doppler radar observations. A mudslide-inducing local rainstorm is simulated by the WRF model with assimilation of radar reflectivity and radial velocity data using LAPS (Local Analysis and Prediction System). Experiments with different radar data assimilated by LAPS are performed. It is found that when radar reflectivity data are corrected using this variational method and assimilated by LAPS, the atmospheric conditions and cloud physics processes are reasonably described. The temporal evolution of radar reflectivity corrected by the variational method corresponds well to observed rainfall. It can better describe the cloud water distribution over the rainfall area and improve the cloud water analysis results over the central rainfall region. The LAPS cloud analysis system can update cloud microphysical variables and represent well the hydrometeors associated with strong convective activity over the rainfall area. Model performance is improved, and the simulation of the dynamical processes and moisture transport is more consistent with observations.
With increasingly complex website structures and continuously advancing web technologies, accurate recognition of user clicks from massive HTTP data, which is critical for web usage mining, becomes more difficult. In this paper, we propose a dependency graph model to describe the relationships between web requests. Based on this model, we design and implement a heuristic parallel algorithm to distinguish user clicks with the assistance of cloud computing technology. We evaluate the proposed algorithm with real massive data. The dataset, collected from a mobile core network, is 228.7 GB in size and covers more than three million users. The experimental results demonstrate that the proposed algorithm can achieve higher accuracy than previous methods.
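The core intuition behind the dependency-graph approach in the abstract above is that a request whose Referer points to a page fetched moments earlier is an embedded resource, not a click. A much-simplified, single-threaded heuristic (field names, the time window, and the log format are assumptions; the paper's algorithm is parallel and graph-based):

```python
def detect_clicks(requests, window=2.0):
    """Classify each (timestamp, url, referer) request: it is a user
    click unless its referer was itself fetched within `window` seconds,
    in which case it is a dependent (embedded) request."""
    recent = {}  # url -> timestamp of its last fetch
    clicks = []
    for ts, url, referer in requests:
        if referer in recent and ts - recent[referer] <= window:
            pass  # dependent request triggered by the referring page
        else:
            clicks.append(url)
        recent[url] = ts
    return clicks

log = [
    (0.0, "/news", None),
    (0.1, "/style.css", "/news"),
    (0.2, "/logo.png", "/news"),
    (9.0, "/article/42", "/news"),  # outside the window: a real click
]
print(detect_clicks(log))  # ['/news', '/article/42']
```

A dependency graph generalizes this: edges connect requests to the pages that triggered them, and clicks are the roots of the resulting subgraphs.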
Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, as well as unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies' efficiency in using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, computing and storage are bound together, and data is stored as image files. This limits the flexibility and expandability of systems and is insufficient for meeting customers' requirements in different scenarios.
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by using point cloud division, and the ground points are then removed by defining an elevation value width of 0.5 m. Next, a z-score method is introduced to detect and remove the outliers. Because the tunnel cross-section's standard shape is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at the angles of 0°, 30° and 150°. The proposed approach's feasibility is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, which is also in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring, and can also be extended to other engineering applications.
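The least-squares circle fitting step in the abstract above can be sketched with the classic Kåsa formulation, which is linear in the unknowns: writing the circle as x² + y² = a·x + b·y + c gives centre (a/2, b/2) and radius √(c + a²/4 + b²/4). A minimal illustration (not the authors' exact pipeline, which also includes PCA orientation and z-score outlier removal beforehand):

```python
import math

def solve3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    sols = []
    for i in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = b[r]
        sols.append(det(m) / d)
    return sols

def fit_circle(points):
    """Kasa least-squares circle fit over 2D cross-section points."""
    # Accumulate the 3x3 normal equations for x^2+y^2 = a*x + b*y + c.
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    a, b, c = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]],
                     [Sxz, Syz, Sz])
    return (a / 2, b / 2, math.sqrt(c + a * a / 4 + b * b / 4))

# Noise-free cross-section points on a radius-2.75 m circle centred at (1, -3).
pts = [(1 + 2.75 * math.cos(t), -3 + 2.75 * math.sin(t))
       for t in [0.1 * k for k in range(40)]]
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # 1.0 -3.0 2.75
```

Convergence at 0°, 30° and 150° can then be read off by comparing the fitted radius against point distances from the centre at those angles.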
Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated in relation to traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into different portions, approximately corresponding to the stump, butt logs and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustums of paraboloid and neiloid formulae were the closest to reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of paraboloid and neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
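The frustum formulae compared in the abstract above are standard in forest mensuration: for end cross-sectional areas A_b and A_t over a length L, the paraboloid frustum (Smalian's formula) gives L·(A_b + A_t)/2, the cone frustum gives L/3·(A_b + √(A_b·A_t) + A_t), and the neiloid frustum gives L/4·(A_b + ∛(A_b²·A_t) + ∛(A_b·A_t²) + A_t). A sketch with an assumed example log (the measurements are illustrative, not the study's data):

```python
import math

def area(circumference):
    """Cross-sectional area from a tape-measured circumference."""
    return circumference ** 2 / (4 * math.pi)

def frustum_paraboloid(A_b, A_t, length):
    """Smalian's formula: frustum of a paraboloid."""
    return length * (A_b + A_t) / 2

def frustum_cone(A_b, A_t, length):
    return length / 3 * (A_b + math.sqrt(A_b * A_t) + A_t)

def frustum_neiloid(A_b, A_t, length):
    return length / 4 * (A_b + (A_b ** 2 * A_t) ** (1 / 3)
                         + (A_b * A_t ** 2) ** (1 / 3) + A_t)

# A 4 m log tapering from 1.2 m to 0.9 m circumference.
A_b, A_t = area(1.2), area(0.9)
print(round(frustum_paraboloid(A_b, A_t, 4.0), 4))  # 0.3581 (m^3)
```

For a tapering log (A_t < A_b) the three estimates are ordered neiloid < cone < paraboloid, which is why the choice of solid matters for each stem portion.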
We propose a dynamic automated infrastructure model for the cloud data centre, aimed at efficient service provision for an enormous number of users. Data centre and cloud computing technologies have recently attracted major research and development efforts by companies, governments, and academic and other research institutions. A difficult task in this area is to enable the infrastructure to make information available to application-driven services and support business-smart decisions. A further remaining challenge is the provision of dynamic infrastructure for applications and information anywhere. In addition, developing technologies to handle private cloud computing infrastructure and operations in a completely automated and secure way has been critical. As a result, the focus of this article is on service and infrastructure life cycle management. We also show how cloud users interact with the cloud, how they request services from the cloud, how they select cloud strategies to deliver the desired service, and how they analyze their cloud consumption.
Large application latency brings revenue loss to cloud infrastructure providers in the cloud data center. The existing controllers of software-defined networking architecture can fetch and process traffic information in the network; therefore, the controllers can only optimize the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing will cause large serving latency if arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round trip time for every type of request, considering both the congestion in the network and the workload in the virtual machines (VMs). This paper finally provides an evaluation of the proposed algorithms in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with the existing approaches.
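The routing objective in the abstract above, minimising round trip time as network latency plus serving latency, can be sketched as follows (a toy single-controller heuristic with assumed VM metrics; not the paper's algorithms):

```python
def route(request_size, vms):
    """Route a request to the VM minimising estimated round trip time:
    network latency to the VM plus serving delay implied by its queue."""
    def rtt(vm):
        serve = (vm["queued"] + request_size) / vm["rate"]
        return vm["net_latency"] + serve
    best = min(vms, key=rtt)
    best["queued"] += request_size  # account for the newly placed work
    return best["name"]

vms = [
    # A nearby but overloaded VM loses to a farther, idle one.
    {"name": "vm-a", "net_latency": 5.0, "rate": 10.0, "queued": 200.0},
    {"name": "vm-b", "net_latency": 20.0, "rate": 10.0, "queued": 0.0},
]
print(route(10.0, vms))  # vm-b: 20 + 1 beats 5 + 21
```

A latency-only controller would have picked vm-a here; folding VM workload into the cost is exactly what "workload-aware" buys.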
Multi-view laser radar (ladar) data registration in obscured environments is an important research field for obscured target detection from air to ground. There are few overlap regions between the observational data from different views because of the occluder, so multi-view data registration is rather difficult. Through in-depth analyses of the typical methods and problems, it is found that sequential registration is more appropriate but needs improved registration accuracy. On this basis, a multi-view data registration algorithm is proposed based on aggregating the adjacent frames that are already registered. It increases the overlap region between the frames pending registration through aggregation, and thereby further improves the registration accuracy. The experimental results show that the proposed algorithm can effectively register multi-view ladar data in an obscured environment; compared with sequential registration at equivalent operating efficiency, it is also more robust and achieves higher registration accuracy.
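Aggregation-based registration as described above still bottoms out in pairwise rigid alignment of a new frame against the aggregated reference cloud. The core alignment step, given correspondences, is the Kabsch/SVD solution, sketched here as a generic illustration (not the paper's full pipeline, which must also establish correspondences, e.g. via ICP iterations):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) aligning point set P onto Q, given
    row-wise correspondences, via SVD of the cross-covariance."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(0)
P = rng.random((20, 3))                       # one ladar frame
theta = 0.3                                   # known test rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])  # the aggregated reference view
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```

Aggregating already-registered frames enlarges Q, giving the occluded new frame more overlap to lock onto, which is the accuracy gain the paper reports.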
Many organizations apply cloud computing to store and effectively process data for various applications. However, data uploaded to the cloud by users has weak security guarantees due to unreliable verification of data integrity. In this research, an enhanced Merkle hash tree method for an effective authentication model is proposed in the multi-owner cloud to increase the security of cloud data. In a Merkle hash tree, each leaf node carries a hash tag and each non-leaf node stores the hash information of its children, enabling efficient protection of large data. The Merkle hash tree provides efficient mapping of data and, owing to its proper structure, easily identifies changes made to the data. The developed model supports privacy-preserving public auditing to provide a secure cloud storage system. The data owners upload the data to the cloud and edit the data using a private key. The enhanced Merkle hash tree method stores the data in the cloud server and splits it into batches. The data files requested by the data owner are audited by a third-party auditor, and the multi-owner authentication method is applied during the modification process to authenticate the user. The results show that the proposed method reduces the encryption and decryption time for cloud data storage by 2–167 ms compared with the existing Advanced Encryption Standard and Blowfish.
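A plain Merkle hash tree, the structure the abstract above builds on, lets an auditor verify one data batch against a small root hash using only the sibling hashes along its path. A minimal sketch (standard construction, not the paper's enhanced variant):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Bottom-up Merkle root: hash the leaves, then hash concatenated
    child pairs level by level (an odd last node is promoted as-is)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with side flags) proving leaves[index] is in the tree."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sib, sib_is_right in proof:
        acc = h(acc + sib) if sib_is_right else h(sib + acc)
    return acc == root

blocks = [b"batch-0", b"batch-1", b"batch-2", b"batch-3"]
root = merkle_root(blocks)
print(verify(b"batch-2", merkle_proof(blocks, 2), root))   # True
print(verify(b"tampered", merkle_proof(blocks, 2), root))  # False
```

Any change to a batch changes the root, which is why the structure "easily identifies changes made to the data": the third-party auditor only needs the root and a logarithmic-size proof per audited batch.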
Data mining is a procedure for extracting hidden, unknown, but potentially valuable information from massive data. Big Data has significant impacts on scientific discovery and value creation. Data mining (DM) with Big Data has been broadly applied in the lifecycle of electronic products, ranging from the design and production stages to the service stage. A comprehensive study of DM with Big Data, together with a survey of its application across the stages of this lifecycle, will help researchers conduct solid research. Recently, big data has become a buzzword, which has compelled researchers to extend existing data mining techniques to cope with the evolving nature of data and to develop new analytical procedures. In this paper, we develop an empirical evaluation method based on the principles of Design of Experiments. We apply this method to evaluate data mining tools and machine learning algorithms towards building big data analytics for telecommunication monitoring data. Two case studies are conducted to provide insights into the relations between the requirements of data analysis and the choice of a tool or algorithm in the context of data analysis workflows.
Despite the multifaceted advantages of cloud computing, concerns about data leakage or abuse impede its adoption for security-sensitive tasks. Recent investigations have revealed that the risk of unauthorized data access is one of the biggest concerns of users of cloud-based services. Transparency and accountability for data managed in the cloud are necessary. Specifically, when using a cloud-hosted service, a user typically has to trust both the cloud service provider and the cloud infrastructure provider to properly handle private data. This is a multi-party system. Three particular trust models can be used according to the credibility of these providers. This paper describes techniques for preventing data leakage that can be used with these different models.
Funding: supported by the Future Challenge Program through the Agency for Defense Development, funded by the Defense Acquisition Program Administration (No. UC200015RD).
Funding: supported by the Innovation and Entrepreneurship Training Program for College Students of North China University of Technology in 2023.
Funding: This work was supported by the National Natural Science Foundation of China (No. 61702276), the Startup Foundation for Introducing Talent of Nanjing University of Information Science and Technology under Grant 2016r055, and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. The authors are grateful to the anonymous reviewers whose constructive comments improved the paper.
Abstract: Advanced cloud computing technology provides cost savings and service flexibility for users. With the explosion of multimedia data, more and more data owners outsource their personal multimedia data to the cloud. In the meantime, some computationally expensive tasks are also undertaken by cloud servers. However, the outsourced multimedia data and its applications may reveal the data owner's private information, because data owners lose control of their data. Recently, this concern has aroused new research interest in privacy-preserving reversible data hiding over outsourced multimedia data. In this paper, two reversible data hiding schemes are proposed for encrypted image data in cloud computing: reversible data hiding by homomorphic encryption and reversible data hiding in the encrypted domain. In the former, the additional bits are extracted after decryption; in the latter, they are extracted before decryption. A combined scheme is also designed. This paper proposes a privacy-preserving outsourcing scheme for reversible data hiding over encrypted image data in cloud computing, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversible data hiding can be performed over encrypted images at different stages. Theoretical analysis confirms the correctness of the proposed encryption model and justifies the security of the proposed scheme. The computation cost of the proposed scheme is acceptable and adjusts to different security levels.
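The paper's encrypted-domain schemes are not reproduced here. For intuition about the "reversible" property alone, below is a sketch of classic difference expansion (a standard plaintext-domain reversible data hiding primitive, not the paper's method): one bit is hidden in a pixel pair, and extraction recovers both the bit and the original pair exactly.

```python
def embed_bit(x, y, b):
    """Difference expansion: hide bit b (0 or 1) in pixel pair (x, y)."""
    l, d = (x + y) // 2, x - y           # pair average and difference
    d2 = 2 * d + b                        # expand the difference, append the bit
    return l + (d2 + 1) // 2, l - d2 // 2

def extract_bit(x2, y2):
    """Recover the hidden bit and restore the original pair exactly."""
    l, d2 = (x2 + y2) // 2, x2 - y2
    b, d = d2 & 1, d2 >> 1                # arithmetic shift = floor division
    return b, l + (d + 1) // 2, l - d // 2
```

Overflow handling (pairs whose expanded difference leaves the valid pixel range) is omitted; practical schemes carry a location map for such pairs, and the paper performs the hiding over ciphertexts rather than raw pixels.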
Funding: Projects 61363021, 61540061, and 61663047 supported by the National Natural Science Foundation of China; Project 2017SE206 supported by the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province, China.
Abstract: Due to the increasing number of cloud applications, the amount of data in the cloud is growing faster than ever before. The nature of cloud computing requires cloud data processing systems that can handle huge volumes of data with high performance. However, most current cloud storage systems adopt a hash-like approach to retrieving data that supports only simple keyword-based queries and lacks richer forms of search. A scalable and efficient indexing scheme is therefore clearly required. In this paper, we present the SLC-index, a novel and scalable skip list-based index for cloud data processing. The SLC-index offers a two-layered architecture for extending the indexing scope and facilitating better throughput. Dynamic load balancing for the SLC-index is achieved by online migration of index nodes between servers. Furthermore, the system is flexible, supporting dynamic addition and removal of servers. The SLC-index is efficient for both point and range queries. Experimental results show the efficiency of the SLC-index and its usefulness as an alternative approach for cloud-suitable data structures.
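The SLC-index's two-layer, multi-server architecture is not reproduced here, but the underlying single-node structure can be sketched as a plain skip list supporting both point and range queries (a minimal sketch, not the paper's implementation):

```python
import random

class SkipNode:
    def __init__(self, key, value, level):
        self.key, self.value = key, value
        self.forward = [None] * level        # forward[i] = next node at level i

class SkipList:
    """Minimal skip list: expected O(log n) point queries; a range query
    descends to the first match, then walks level 0."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head = SkipNode(None, None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key, value):
        update = [self.head] * self.MAX_LEVEL    # rightmost node per level < key
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, value, lvl)
        for i in range(lvl):                     # splice in at each level
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node.value if node and node.key == key else None

    def range(self, lo, hi):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < lo:
                node = node.forward[i]
        node, out = node.forward[0], []
        while node and node.key <= hi:           # walk the bottom level
            out.append((node.key, node.value))
            node = node.forward[0]
        return out

sl = SkipList()
for k in [5, 1, 9, 3]:
    sl.insert(k, f"v{k}")
```

In the SLC-index this structure is partitioned across servers, with node migration providing the dynamic load balancing the abstract describes.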
Funding: Supported by the Opening Project of the State Key Laboratory of Networking and Switching Technology under Grant No. SKLNST-2010-1-03; the National Natural Science Foundation of China under Grants No. U1333113 and No. 61303204; the Sichuan Province seedling project under Grant No. 2012ZZ036; and the Scientific Research Fund of Sichuan Normal University under Grant No. 13KYL06.
Abstract: To reduce energy consumption in cloud data centres, in this paper we propose two algorithms: the Energy-aware Scheduling algorithm using Workload-aware Consolidation Technique (ESWCT) and the Energy-aware Live Migration algorithm using Workload-aware Consolidation Technique (ELMWCT). As opposed to traditional energy-aware scheduling algorithms, which often focus on only one resource dimension, the two algorithms are based on the fact that multiple resources (such as CPU, memory, and network bandwidth) are shared by users concurrently in cloud data centres, and that heterogeneous workloads have different resource consumption characteristics. Both algorithms investigate the problem of consolidating heterogeneous workloads. They try to execute all virtual machines (VMs) on the minimum number of physical machines (PMs), and then power off unused physical servers to reduce power consumption. Simulation results show that both algorithms efficiently utilise the resources in cloud data centres and achieve well-balanced utilisation across the multidimensional resources, demonstrating their promising energy-saving capability.
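The exact ESWCT/ELMWCT algorithms are not given in the abstract; as a hedged illustration of the underlying idea, multi-resource consolidation can be sketched as first-fit-decreasing bin packing over (CPU, memory, bandwidth) demand vectors, powering on a new PM only when no existing one fits:

```python
def consolidate(vms, pm_capacity):
    """Pack VMs (each a (cpu, mem, bandwidth) demand tuple) onto as few
    physical machines as possible, first-fit decreasing on total demand.
    A generic consolidation heuristic, not the paper's exact algorithms."""
    order = sorted(vms, key=sum, reverse=True)  # largest aggregate demand first
    free = []        # per-PM remaining (cpu, mem, bandwidth)
    placement = []   # (vm, pm_index)
    for vm in order:
        for i, rem in enumerate(free):
            if all(r >= d for r, d in zip(rem, vm)):     # VM fits on PM i
                free[i] = [r - d for r, d in zip(rem, vm)]
                placement.append((vm, i))
                break
        else:                                            # power on a new PM
            free.append([c - d for c, d in zip(pm_capacity, vm)])
            placement.append((vm, len(free) - 1))
    return len(free), placement

vms = [(0.5, 0.3, 0.2), (0.4, 0.6, 0.3), (0.3, 0.2, 0.5), (0.2, 0.4, 0.1)]
n_pms, placement = consolidate(vms, (1.0, 1.0, 1.0))  # normalised capacities
```

Here four VMs consolidate onto two PMs; the unused servers would then be powered off, which is the energy-saving step the abstract describes.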
Funding: Funded by the National Sciences Foundation of China (Grant No. 91337103) and the China Meteorological Administration Special Public Welfare Research Fund (Grant No. GYHY201406001).
Abstract: This study concerns a Ka-band solid-state transmitter cloud radar, made in China, which can operate in three different work modes, with different pulse widths and coherent and incoherent integration numbers, to meet the requirements of cloud remote sensing over the Tibetan Plateau. Specifically, the design of the three operational modes of the radar (boundary mode M1, cirrus mode M2, and precipitation mode M3) is introduced, and a cloud radar data merging algorithm for the three modes is proposed. Using one month of continuous summertime measurements at Naqu on the Tibetan Plateau, we analyzed the consistency between the cloud radar measurements of the three modes. The number of radar detections of hydrometeors and the percentage contributions of the different modes' data to the merged data were estimated, and the performance of the merging algorithm was evaluated. The results indicated that the minimum detectable reflectivity for each mode was consistent with theoretical results. The merged data provided measurements with a minimum reflectivity of -35 dBZ at a height of 5 km, and retained information above a height of 0.2 km. Measurements of radial velocity by the three operational modes agreed very well, and systematic errors in measurements of reflectivity were less than 2 dB. However, large discrepancies existed in the measurements of the linear depolarization ratio taken from the different operational modes. The percentage of radar detections of hydrometeors in mid- and high-level clouds increased by 60% through the application of pulse compression techniques. In conclusion, the merged data are appropriate for cloud and precipitation studies over the Tibetan Plateau.
Funding: Supported by a National Department of Public Benefit Research Foundation of China project (Grant No. GYHY201406001), NSFC (National Natural Science Foundation of China) projects (Grant Nos. 41105072, 41130960, 41375057, and 41375041), and a Hubei Meteorological Bureau project (Grant No. 2016S02).
Abstract: Various types of radars with different horizontal and vertical detection ranges are deployed in China, particularly over complex terrain where radar blind zones are common. In this study, a new variational method is developed to correct three-dimensional radar reflectivity data based on hourly ground precipitation observations. The aim of this method is to improve the quality of observations from various types of radar and to effectively assimilate operational Doppler radar observations. A mudslide-inducing local rainstorm is simulated by the WRF model with assimilation of radar reflectivity and radial velocity data using LAPS (Local Analysis and Prediction System). Experiments with different radar data assimilated by LAPS are performed. It is found that when radar reflectivity data are corrected using this variational method and assimilated by LAPS, the atmospheric conditions and cloud physics processes are reasonably described. The temporal evolution of radar reflectivity corrected by the variational method corresponds well to observed rainfall. It better describes the cloud water distribution over the rainfall area and improves the cloud water analysis over the central rainfall region. The LAPS cloud analysis system can update cloud microphysical variables and represent well the hydrometeors associated with strong convective activity over the rainfall area. Model performance is improved, and the simulation of the dynamical processes and moisture transport is more consistent with observations.
Funding: Supported in part by the Fundamental Research Funds for the Central Universities under Grant No. 2013RC0114 and the 111 Project of China under Grant No. B08004.
Abstract: With increasingly complex website structures and continuously advancing web technologies, accurate user click recognition from massive HTTP data, which is critical for web usage mining, becomes more difficult. In this paper, we propose a dependency graph model to describe the relationships between web requests. Based on this model, we design and implement a heuristic parallel algorithm to distinguish user clicks with the assistance of cloud computing technology. We evaluate the proposed algorithm on real massive data. The dataset, collected from a mobile core network, is 228.7 GB in size and covers more than three million users. The experimental results demonstrate that the proposed algorithm achieves higher accuracy than previous methods.
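The paper's dependency-graph heuristics are not detailed in the abstract. As a hedged sketch of the core idea only: a request whose Referer matches the URL of a recent earlier request can be linked to it as an embedded resource fetch, while requests with no such parent are roots of the dependency graph, i.e. candidate user clicks. The time window and field names below are illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_user_clicks(requests, window=timedelta(seconds=5)):
    """Link each request to the most recent earlier request whose URL
    matches its Referer within a time window; unlinked requests are
    treated as user clicks (roots of the dependency graph)."""
    clicks, seen = [], []                 # seen: (timestamp, url) history
    for ts, url, referer in requests:     # assumed sorted by timestamp
        parent = next((u for t, u in reversed(seen)
                       if u == referer and ts - t <= window), None)
        if parent is None:
            clicks.append((ts, url))
        seen.append((ts, url))
    return clicks

t0 = datetime(2024, 1, 1, 12, 0, 0)
log = [
    (t0, "/index.html", None),                                  # user click
    (t0 + timedelta(seconds=1), "/style.css", "/index.html"),   # embedded fetch
    (t0 + timedelta(seconds=1), "/logo.png", "/index.html"),    # embedded fetch
    (t0 + timedelta(seconds=20), "/news.html", "/index.html"),  # too late: click
]
clicks = find_user_clicks(log)
```

The paper parallelises this kind of linking over massive logs; the sequential version above only shows the dependency criterion.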
Abstract: Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, together with unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies' efficiency in using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, computing and storage are bound together, and data is stored as image files. This limits the flexibility and expandability of such systems and is insufficient for meeting customers' requirements in different scenarios.
Funding: National Natural Science Foundation of China (No. 41801379); Fundamental Research Funds for the Central Universities (No. 2019B08414); National Key R&D Program of China (No. 2016YFC0401801).
Abstract: Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), as an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which makes it promising for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation width of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, convergence analysis is performed at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS to tunnel deformation monitoring, and it can also be extended to other engineering applications.
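Two of the named steps, z-score outlier removal and least-squares circle fitting, can be sketched as follows. The circle fit shown is the algebraic (Kasa) formulation, one common least-squares variant; the paper may use a different one, and the thresholds are illustrative.

```python
import numpy as np

def zscore_filter(values, threshold=3.0):
    """Boolean mask keeping values whose |z-score| is below the threshold."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return np.abs(z) < threshold

def fit_circle(x, y):
    """Algebraic least-squares circle fit: minimise the residual of
    x^2 + y^2 + a*x + b*y + c = 0 over (a, b, c), then recover
    centre (-a/2, -b/2) and radius sqrt(a^2/4 + b^2/4 - c)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    return cx, cy, np.sqrt(cx**2 + cy**2 - c)

# Synthetic cross-section: centre (1.5, 3.0), radius 2.75 m
theta = np.linspace(0.0, 2.0 * np.pi, 200)
x = 1.5 + 2.75 * np.cos(theta)
y = 3.0 + 2.75 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
```

In the monitoring pipeline, the fitted radii at 0°, 30°, and 150° across epochs give the convergence values the abstract reports.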
Funding: The work was supported by the International Foundation for Science (Grant No. I-1-D-60661).
Abstract: Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated against traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of the sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid, and their respective frustums. Each tree was also photographed, and its stem volume was determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into portions approximately corresponding to the stump, butt logs, and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustum-of-paraboloid and frustum-of-neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and of a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and of a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were treated as a composite of at least three geometric solids.
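The standard frustum formulae referred to in the abstract can be written directly from the end cross-sectional areas A1 and A2 and the section length h; the area itself follows from a measured girth via A = C²/(4π). The helper names and the sample girths below are illustrative, not the paper's data.

```python
import math

def area_from_circumference(c):
    """Cross-sectional area of a circular stem from its girth C: A = C^2 / (4*pi)."""
    return c**2 / (4 * math.pi)

def frustum_paraboloid(A1, A2, h):
    """Smalian's formula (frustum of a paraboloid): V = h*(A1 + A2)/2."""
    return h * (A1 + A2) / 2

def frustum_cone(A1, A2, h):
    """Frustum of a cone: V = h*(A1 + sqrt(A1*A2) + A2)/3."""
    return h * (A1 + math.sqrt(A1 * A2) + A2) / 3

def frustum_neiloid(A1, A2, h):
    """Frustum of a neiloid: V = h*(A1 + (A1^2*A2)^(1/3) + (A1*A2^2)^(1/3) + A2)/4."""
    return h * (A1 + (A1**2 * A2) ** (1 / 3) + (A1 * A2**2) ** (1 / 3) + A2) / 4

# Illustrative 2 m log section with girths 1.2 m and 0.9 m at the two ends
A1 = area_from_circumference(1.2)
A2 = area_from_circumference(0.9)
v_paraboloid = frustum_paraboloid(A1, A2, 2.0)
```

For the same end areas the three frustums order as paraboloid > cone > neiloid, which is why matching the right solid to each stem portion matters for the bias the study measures.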
Funding: This research work was fully supported by King Khalid University, Abha, Kingdom of Saudi Arabia, through a Large Research Project under grant number RGP/161/42.
Abstract: We propose a dynamically automated infrastructure model for the cloud data centre, aimed at efficient service provisioning for an enormous number of users. Data centre and cloud computing technologies are currently the focus of major research and development efforts by companies, governments, and academic and other research institutions. Within this, the difficult task is enabling the infrastructure to make information available to application-driven services and to support business-smart decisions. Remaining challenges include the provision of dynamic infrastructure for applications and information anywhere. Furthermore, developing technologies to handle private cloud computing infrastructure and operations in a completely automated and secure way has been critical. As a result, the focus of this article is on service and infrastructure life cycle management. We also show how cloud users interact with the cloud, how they request services from it, how they select cloud strategies to deliver the desired service, and how they analyze their cloud consumption.
Funding: Supported by the National Postdoctoral Science Foundation of China (2014M550068).
Abstract: Large application latency brings revenue loss to cloud infrastructure providers in the cloud data center. Existing controllers in software-defined networking architectures can fetch and process traffic information in the network; therefore, these controllers can only optimize the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing causes large serving latency when arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round-trip time for every type of request by considering both congestion in the network and the workload in the virtual machines (VMs). Finally, the proposed algorithms are evaluated in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with existing approaches.
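The abstract's routing objective, network latency plus serving latency, can be illustrated with a deliberately simplified cost model (an M/M/1-style queueing delay; this is an assumption for illustration, not the paper's exact algorithm): a nearby but overloaded VM should lose to a farther, lightly loaded one.

```python
def route_request(net_latency, vm_load, service_rate):
    """Pick the VM minimising estimated round-trip time: network delay
    both ways plus a serving delay 1/(mu - lambda) that blows up as the
    VM approaches saturation."""
    def rtt(vm):
        residual = service_rate[vm] - vm_load[vm]   # spare capacity (req/s)
        serving = 1.0 / residual if residual > 0 else float("inf")
        return 2.0 * net_latency[vm] + serving
    return min(net_latency, key=rtt)

net_latency = {"vm1": 0.01, "vm2": 0.05}    # one-way network delay (s)
vm_load = {"vm1": 9.5, "vm2": 2.0}          # current arrival rate (req/s)
service_rate = {"vm1": 10.0, "vm2": 10.0}   # serving capacity (req/s)
best = route_request(net_latency, vm_load, service_rate)
```

Here vm1 is closer on the network but nearly saturated, so the workload-aware cost routes the request to vm2, which is the behaviour the abstract argues a purely network-aware controller cannot achieve.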
Abstract: Multi-view laser radar (ladar) data registration in obscured environments is an important research field in obscured target detection from air to ground. There are few overlapping regions in the observational data from different views because of the occluder, so multi-view data registration is rather difficult. Through in-depth analysis of typical methods and their problems, we find that sequential registration is more appropriate but needs improved registration accuracy. On this basis, a multi-view data registration algorithm is proposed that aggregates the adjacent, already registered frames. It increases the overlap region between the frames pending registration through aggregation, further improving registration accuracy. The experimental results show that the proposed algorithm can effectively register multi-view ladar data in obscured environments, and that it offers greater robustness and higher registration accuracy than sequential registration at equivalent operating efficiency.
Funding: This research was funded by the Universiti Kebangsaan Malaysia (UKM) Research Grant Scheme FRGS/1/2020/ICT03/UKM/02/6 and GGPM-2020-028.
Abstract: Many organizations apply cloud computing to store and effectively process data for various applications. However, data uploaded to the cloud by users has weak security because the verification process for data integrity is unreliable. In this research, an effective authentication model based on an enhanced Merkle hash tree method is proposed for the multi-owner cloud to increase the security of cloud data. In a Merkle hash tree, each leaf node carries the hash tag of a data block and each non-leaf node contains a table of the hash information of its children, allowing large data to be protected efficiently. The Merkle hash tree provides efficient mapping of data and, owing to its regular structure, easily identifies changes made to the data. The developed model supports privacy-preserving public auditing to provide a secure cloud storage system. Data owners upload data to the cloud and edit it using their private keys. The enhanced Merkle hash tree method stores the data in the cloud server and splits it into batches. Data files requested by a data owner are audited by a third-party auditor, and the multi-owner authentication method is applied during the modification process to authenticate the user. The results show that the proposed method reduces the encryption and decryption time for cloud data storage by 2–167 ms compared with the existing Advanced Encryption Standard and Blowfish.
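The plain (non-enhanced) Merkle hash tree underlying the scheme can be sketched in a few lines: leaves hash the data blocks, each parent hashes the concatenation of its children, and any change to any block changes the root, which is what lets an auditor detect tampering. This sketch assumes a non-empty block list and duplicates the last node on odd levels.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build the tree bottom-up and return the 32-byte root hash."""
    level = [h(b) for b in blocks]               # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                       # odd count: duplicate last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])      # parent = hash of child pair
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block0", b"block1", b"block2", b"block3"])
```

In the auditing setting, the owner publishes the root; the auditor can then verify any single block against it using only a logarithmic-length path of sibling hashes rather than the whole file.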
Abstract: Data mining is the process of extracting hidden, unknown, but potentially valuable information from massive data. Big data has a profound impact on scientific discovery and value creation. Data mining (DM) with big data has been widely used in the lifecycle of electronic products, ranging from the design and production stages to the service stage. A comprehensive examination of DM with big data, and a survey of its application across the stages of this lifecycle, will help researchers conduct solid research. Recently, big data has become a buzzword, which has pushed researchers to extend existing data mining methods to cope with the evolving nature of data and to develop new analytical procedures. In this paper, we develop an empirical evaluation method based on the principles of Design of Experiments. We apply this method to assess data mining tools and machine learning algorithms for building big data analytics over telecommunication monitoring data. Two case studies are conducted to provide insights into the relationship between the requirements of data analysis and the choice of tool or algorithm in the context of data analysis workflows.
Funding: Supported by the National Basic Research (973) Program of China (2011CB302505); the Natural Science Foundation of China (61373145, 61170210); the National High-Tech R&D (863) Program of China (2012AA012600, 2011AA01A203); and the Chinese Special Project of Science and Technology (2012ZX01039001).
Abstract: Despite the multifaceted advantages of cloud computing, concerns about data leakage or abuse impede its adoption for security-sensitive tasks. Recent investigations have revealed that the risk of unauthorized data access is one of the biggest concerns of users of cloud-based services. Transparency and accountability for data managed in the cloud are necessary. Specifically, when using a cloud-hosted service, a user typically has to trust both the cloud service provider and the cloud infrastructure provider to properly handle private data. This is a multi-party system, and three particular trust models can be used according to the credibility of these providers. This paper describes techniques for preventing data leakage that can be used with these different models.