Abstract: With the improvement of current online communication schemes, it is now possible to distribute and transport secured digital content via the communication channel at a faster transmission rate. Traditional steganography and cryptography concepts are used to achieve the goal of concealing secret content in a medium and encrypting it before transmission. Both techniques help preserve the confidentiality of the hidden content. The proposed approach embeds secret content in selected pixels of the digital image layers, namely Red, Green, and Blue. The private content originates from a medical client and is forwarded to a medical practitioner on the server end through the internet. The K-Means clustering principle uses a contouring approach to frame the pixel clusters on the image layers. The embedding procedure is performed on the selected pixel groups of all layers of the image using the Least Significant Bit (LSB) substitution technique to build the secret-content-embedded image, known as the stego image, which is subsequently transmitted across the internet to the server end. The experimental results are computed using inputs from the "Open-Access Medical Image Repositories (aylward.org)" and demonstrate the scheme's effectiveness as the content-concealing procedure progresses.
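To make the pipeline concrete, the sketch below embeds a bit string into the least significant bits of pixels selected per colour layer. It is a minimal illustration, not the paper's implementation: the abstract does not specify the contouring rule or which clusters receive the payload, so clustering on raw intensities and targeting the largest cluster are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_lsb(image: np.ndarray, secret_bits: list[int], k: int = 4) -> np.ndarray:
    """Embed bits into LSBs of pixels chosen per colour layer by K-Means.

    Clustering on raw intensities and writing into the largest cluster are
    illustrative assumptions, not the paper's exact contouring scheme.
    """
    stego = image.copy()                       # never modify the cover image
    bit_idx = 0
    for layer in range(3):                     # Red, Green, Blue layers
        channel = stego[:, :, layer]           # a view into stego
        w = channel.shape[1]
        intensities = channel.reshape(-1, 1).astype(float)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(intensities)
        target = np.bincount(labels).argmax()  # assumed: use the largest cluster
        for flat in np.flatnonzero(labels == target):
            if bit_idx >= len(secret_bits):
                return stego                   # payload fully embedded
            r, c = divmod(flat, w)
            # LSB substitution: clear the lowest bit, then set the secret bit.
            channel[r, c] = (channel[r, c] & 0xFE) | secret_bits[bit_idx]
            bit_idx += 1
    return stego
```

Extraction on the server end would rerun the same clustering in the same order and read `channel & 1` from each selected pixel.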
Funding: This work was supported by the Postgraduate Research Grants Scheme (PGRS) under Grant No. PGRS190360.
Abstract: Publishing big data and making it accessible to researchers is important for knowledge building, as it helps in applying highly efficient methods to plan, conduct, and assess scientific research. However, publishing and processing big data poses a privacy concern: individuals' sensitive information must be protected while the usability of the published data is maintained. Several anonymization methods, such as slicing and merging, have been designed as solutions to the privacy concerns of publishing big data. However, the major drawback of merging and slicing is the random permutation procedure, which does not always guarantee complete protection against attribute or membership disclosure. Moreover, merging procedures may generate many fake tuples, leading to a loss of data utility and subsequent erroneous knowledge extraction. This study therefore proposes a slicing-based enhanced method for privacy-preserving big data publishing that maintains data utility. In particular, the proposed method distributes the data into horizontal and vertical partitions. Lower and upper protection levels are then used to identify unique and identical attribute values, and these values are swapped to ensure the published big data is protected from disclosure risks. The experimental results demonstrate that the proposed method maintains data utility while providing stronger privacy preservation.
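A simplified sketch of the swap step is given below, assuming the lower and upper protection levels act as plain frequency thresholds on a single attribute; the paper's actual procedure applies such tests across the horizontal and vertical partitions of the sliced table.

```python
import pandas as pd

def protect_by_swapping(df: pd.DataFrame, attribute: str,
                        lower: int, upper: int) -> pd.DataFrame:
    """Swap rare ('unique') values of an attribute with over-frequent
    ('identical') ones so that neither stands out after publication.

    Treating the protection levels as simple frequency thresholds on one
    attribute is a simplification of the paper's partition-based test.
    """
    out = df.copy()
    counts = out[attribute].value_counts()
    unique_vals = counts[counts <= lower].index     # too rare: re-identifiable
    identical_vals = counts[counts >= upper].index  # too common: homogeneous
    unique_rows = out.index[out[attribute].isin(unique_vals)].tolist()
    identical_rows = out.index[out[attribute].isin(identical_vals)].tolist()
    # Pairwise swaps break the link between a rare value and its owner while
    # keeping every value (and hence the marginal distribution) in the table.
    for u, i in zip(unique_rows, identical_rows):
        out.loc[u, attribute], out.loc[i, attribute] = (
            out.loc[i, attribute], out.loc[u, attribute],
        )
    return out
```

Because values are exchanged rather than suppressed or fabricated, no fake tuples are introduced, which is the utility advantage the abstract claims over merging.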
Funding: This work was supported by the National Social Science Foundation of China under Grant 16BTQ085.
Abstract: In recent years, with the explosive development of the Internet and of data storage and data processing technologies, privacy preservation has become one of the major concerns in data mining. A number of methods and techniques have been developed for privacy-preserving data mining. This paper provides a wide survey of privacy-preserving data mining algorithms and analyzes the representative techniques for privacy preservation. Existing problems and directions for future research are also discussed.
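As a concrete flavour of one representative family of techniques covered by such surveys, additive-noise randomization publishes perturbed values rather than the originals. The sketch below is illustrative only; the noise scale is arbitrary, and the distribution-reconstruction step used by real algorithms is simplified away.

```python
import numpy as np

# Additive-noise randomization: publish x + r instead of x, with r drawn
# from a known distribution. Individual records are masked, while aggregate
# statistics remain approximately recoverable. Values below are illustrative.
rng = np.random.default_rng(seed=0)
ages = np.array([23.0, 35.0, 47.0, 52.0, 61.0])
published = ages + rng.normal(loc=0.0, scale=5.0, size=ages.shape)

print("original mean: ", ages.mean())
print("published mean:", published.mean())  # close to the original on average
```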
Abstract: Data is humongous today because of the extensive use of the World Wide Web, social media, and intelligent systems. This data can be very important and useful if it is harnessed carefully and correctly. Useful information can be extracted from this massive data using the data mining process, and the extracted information can be used to make vital decisions in various industries. Clustering is a very popular data mining method that divides the data points into groups such that all similar data points form part of the same group. Clustering methods are of various types, and many parameters and indexes exist for their evaluation and comparison. In this paper, we compare the partitioning-based methods K-Means, Fuzzy C-Means (FCM), Partitioning Around Medoids (PAM), and Clustering Large Applications (CLARA) on securely perturbed data. The method that performs better for analyzing data perturbed using Extended NMF is identified on the basis of the values of various indexes such as the Dunn Index, Silhouette Index, Xie-Beni Index, and Davies-Bouldin Index.
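The sketch below shows the shape of such an index-based comparison for one of the four methods (K-Means), using the two indexes that ship with scikit-learn; synthetic blobs stand in for the Extended-NMF-perturbed data, and the Dunn and Xie-Beni indexes would need separate implementations.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Placeholder data: the Extended-NMF perturbation itself is not reproduced
# here, so synthetic blobs stand in for the securely perturbed dataset.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Higher Silhouette is better; lower Davies-Bouldin is better.
print("Silhouette:     ", silhouette_score(X, labels))
print("Davies-Bouldin: ", davies_bouldin_score(X, labels))
```

Repeating the same loop with FCM, PAM, and CLARA and tabulating the index values is what underlies the comparison reported here.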
Abstract: The need for access to historical Earth Observation (EO) data series has increased strongly in the last ten years, particularly for long-term science and environmental monitoring applications. This trend is likely to strengthen in the future, in particular given the growing interest in global-change monitoring, which is driving users to request time series of data spanning 20 years and more, and the need to support the United Nations Framework Convention on Climate Change (UNFCCC). While many satellite observations are accessible from different data centers, analyzing measurements collected from various instruments as time series is both difficult and critical. Climate research is a big data problem that involves high volumes of measurements, methods for on-the-fly extraction and reduction to keep up with the speed and volume of the data, and the ability to address uncertainties arising from data collection, processing, and analysis. The content of EO data archives is extending from a few years to decades, and their value as scientific time series is therefore continuously increasing. Hence there is a strong need to preserve EO space data without time constraints and to keep them accessible and exploitable. The preservation of EO space data can also be considered a responsibility of the space agencies or data owners, as these data constitute a humankind asset. This publication describes the activities supported by the European Space Agency relating to long time-series generation, with all relevant best practices and models needed to organise and measure the preservation and stewardship processes. The Data Stewardship Reference Model has been defined to give an overview and to help data owners and space agencies preserve and curate space datasets so that they are ready for long time-series composition and analysis.