Wireless sensor networks (WSNs) consist of a large number of sensor nodes with limited power, computation, storage, sensing, and communication capabilities. Data aggregation is an important technique designed to substantially reduce the communication overhead and energy expenditure of sensor nodes during data collection in a WSN. However, privacy preservation is especially challenging in data aggregation, where aggregators need to perform aggregation operations on the sensing data they receive. We present a state-of-the-art survey of privacy-preserving data aggregation in WSNs. First, we classify the existing privacy-preserving data aggregation schemes into categories according to the core privacy-preserving technique used in each scheme. We then compare and contrast the different algorithms on the basis of performance measures such as privacy protection ability, communication consumption, power consumption, and data accuracy. Furthermore, based on the existing work, we discuss a number of open issues that may interest researchers in future work.
The Internet of Things (IoT) has profoundly impacted our lives and greatly changed our lifestyle. The terminal devices in an IoT data aggregation application sense real-time data for the remote cloud server to achieve intelligent decisions. However, the high frequency of collecting user data raises concerns about personal privacy. In recent years, many privacy-preserving data aggregation schemes have been proposed. Unfortunately, most existing schemes cannot support arbitrary aggregation functions, dynamic user group management, or fault tolerance. In this paper, we propose an efficient and privacy-preserving data aggregation scheme. In the scheme, we design a lightweight encryption method that protects user privacy by using a ring topology and a random location sequence. On this basis, the proposed scheme supports not only arbitrary aggregation functions but also flexible dynamic user management. Furthermore, the scheme achieves fault tolerance by utilizing a future data buffering mechanism. Security analysis reveals that the scheme achieves the desired security properties, and experimental evaluation results show the scheme's efficiency in terms of computational and communication overhead.
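A common way to realize private summation over a ring topology is to let the first node blind its reading with a random offset known only to itself, let every node add its own value as the token travels around the ring, and have the first node remove the offset at the end, so no intermediate node ever sees a partial sum of real readings. The sketch below illustrates this idea; it is a minimal stand-in for the lightweight encryption described above and does not implement the paper's random location sequence, and the modulus is an assumed public constant.

```python
import random

MODULUS = 2 ** 32  # public modulus for all arithmetic (assumed)

def ring_aggregate(readings):
    """Privately sum readings of nodes arranged in a ring.

    Node 0 adds a random blinding offset before the token circulates,
    so no intermediate node ever sees a partial sum of real readings.
    """
    offset = random.randrange(MODULUS)          # known only to node 0
    token = (readings[0] + offset) % MODULUS    # node 0 starts the ring
    for value in readings[1:]:                  # each node adds its own value
        token = (token + value) % MODULUS
    return (token - offset) % MODULUS           # node 0 removes the offset

if __name__ == "__main__":
    data = [17, 4, 23, 8, 11]
    print(ring_aggregate(data), sum(data))      # both print 63
```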
In the realm of vehicular ad hoc networks (VANETs), data aggregation plays a pivotal role in bringing together data from multiple vehicles for further processing and sharing. Erroneous data feedback can significantly impact vehicle operation, control, and overall safety, necessitating the assurance of security in vehicular data aggregation. Addressing the security risks and challenges inherent in data aggregation within VANETs, this paper introduces a blockchain-based scheme for secure and anonymous data aggregation. The proposed scheme integrates cloud computing with blockchain technology, presenting a novel blockchain-based data aggregation system that robustly supports efficient and secure data collection in VANETs. Leveraging key escrow resilience mechanisms, the solution ensures the security of system keys and prevents the security problems previously caused by keys generated by a third party alone. Furthermore, through secondary aggregation, fine-grained data aggregation is achieved, providing effective support for cloud services in VANETs. The effectiveness of the proposed scheme is confirmed through security analysis and performance evaluations, demonstrating superior computational and communication efficiency compared with existing alternatives.
The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by the task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes a privacy-preserving data reliability evaluation for SC in IoT, named PARE. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while overcoming privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximate correct value region without knowing the exact values. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of performance and privacy protection.
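The Paillier cryptosystem is additively homomorphic: combining two ciphertexts yields an encryption of the sum of the plaintexts, which is what makes aggregation and filtering over encrypted readings possible without exposing individual values. The snippet below is a minimal illustration using the third-party python-paillier package (imported as phe), assuming it is installed; the key length and readings are arbitrary placeholders, and this is not the PARE protocol itself.

```python
from phe import paillier  # third-party python-paillier package (assumed installed)

# Generate a keypair; the 2048-bit modulus length is an arbitrary illustrative choice.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

readings = [12, 30, 7]                        # plaintext readings from three workers
ciphertexts = [public_key.encrypt(r) for r in readings]

# Additive homomorphism: summing ciphertexts yields an encryption of the sum,
# so an untrusted aggregator never sees the individual readings.
encrypted_sum = ciphertexts[0] + ciphertexts[1] + ciphertexts[2]

print(private_key.decrypt(encrypted_sum))     # 49
```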
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with sensitivity to long-term constraint violations is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and it dynamically adjusts GAN learning rates and global aggregation weights based on energy consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
In a smart grid, a huge amount of data is collected for various applications, such as load monitoring and demand response. These data are used for analyzing the power state and formulating the optimal dispatching strategy. However, these big energy data in terms of volume, velocity and variety raise concern over consumers' privacy. For instance, in order to optimize energy utilization and support demand response, numerous smart meters are installed at a consumer's home to collect energy consumption data at a fine granularity, but these fine-grained data may contain information on the appliances and thus the consumer's behaviors at home. In this paper, we propose a privacy-preserving data aggregation scheme based on secret sharing with fault tolerance in a smart grid, which ensures that the control center obtains the integrated data without compromising privacy. Meanwhile, we also consider fault tolerance and resistance to differential attack during the data aggregation. Finally, we perform a security analysis and performance evaluation of our scheme in comparison with the other similar schemes. The analysis shows that our scheme can meet the security requirement, and it also shows better performance than other popular methods.
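Threshold secret sharing in the style of Shamir is one standard way to obtain both privacy and fault tolerance in aggregation: each meter splits its reading into shares, each aggregation server adds the shares it receives locally, and any threshold-sized subset of servers can reconstruct the aggregate even if the remaining servers fail. The sketch below is a simplified illustration of that principle, not the paper's exact construction; the prime, threshold, and server count are arbitrary assumptions.

```python
import random

PRIME = 2 ** 61 - 1   # public prime larger than any possible aggregate (assumed)

def make_shares(secret, threshold, n_shares):
    """Split a meter reading into n_shares Shamir shares; any `threshold` recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret (or a sum of secrets)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

if __name__ == "__main__":
    readings = [52, 31, 78]                                   # three smart meters
    # Each meter shares its reading among 5 aggregation servers with threshold 3.
    per_server = [make_shares(r, 3, 5) for r in readings]
    # Each server locally adds the shares it received (shares are additively homomorphic).
    summed = [(s + 1, sum(per_server[m][s][1] for m in range(3)) % PRIME)
              for s in range(5)]
    # Any 3 surviving servers reconstruct the aggregate, tolerating 2 server failures.
    print(reconstruct(summed[:3]), sum(readings))             # both print 161
```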
As an emergent architecture, mobile edge computing (MEC) shifts cloud services to the edge of networks and can satisfy several desirable characteristics of IoT systems. To reduce the communication pressure from IoT devices, data aggregation is a good candidate. However, data processing in MEC may suffer from many challenges, such as unverifiability of aggregated data, privacy violation, and lack of fault tolerance. To address these challenges, we propose PVF-DA: a privacy-preserving, verifiable, and fault-tolerant data aggregation scheme for MEC based on aggregator-oblivious encryption and zero-knowledge proofs. The proposed scheme not only provides privacy protection for the reported data but also resists collusion between the MEC server and corrupted IoT devices. Furthermore, the proposed scheme has two outstanding features: verifiability and strong fault tolerance. Verifiability enables an IoT device to verify whether the reported sensing data is correctly aggregated. Strong fault tolerance allows the aggregator to compute an aggregate even if one or several IoT devices fail to report their data. Finally, detailed security proofs show that the proposed scheme achieves the required security and privacy-preservation properties in MEC.
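Aggregator-oblivious encryption guarantees that the aggregator learns nothing beyond the per-period sum. A simple way to convey the idea is pairwise masking, where each pair of devices shares a seed and derives masks that cancel when all ciphertexts are added. The sketch below is only an illustrative stand-in for the aggregator-oblivious scheme used in PVF-DA; the modulus, seed derivation, and SHA-256-based PRF are assumptions made for the example.

```python
import hashlib
import itertools

MODULUS = 2 ** 64  # public modulus (assumed)

def prf(seed: bytes, t: int) -> int:
    """Pseudorandom mask derived from a pairwise shared seed and the time slot."""
    return int.from_bytes(hashlib.sha256(seed + t.to_bytes(8, "big")).digest(),
                          "big") % MODULUS

def encrypt_all(readings, seeds, t):
    """Each device blinds its reading with pairwise masks that cancel in the sum."""
    n = len(readings)
    ciphertexts = []
    for i in range(n):
        c = readings[i] % MODULUS
        for j in range(n):
            if i == j:
                continue
            mask = prf(seeds[frozenset((i, j))], t)
            c = (c + mask) % MODULUS if i < j else (c - mask) % MODULUS
        ciphertexts.append(c)
    return ciphertexts

if __name__ == "__main__":
    readings = [21, 5, 14, 9]
    seeds = {frozenset(p): bytes(f"seed-{min(p)}-{max(p)}", "ascii")
             for p in itertools.combinations(range(len(readings)), 2)}
    cts = encrypt_all(readings, seeds, t=42)
    # The aggregator only ever sees ciphertexts; their sum reveals the total alone.
    print(sum(cts) % MODULUS, sum(readings))   # both print 49
```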
With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks related to individuals. The support vector machine (SVM) is one of the most elementary learning models of machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate differential-privacy-compliant federated machine learning with dimensionality reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, after which the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. Besides, we train differentially private (DP) SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
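One common way to make a PCA phase differentially private is to perturb the sample covariance matrix with symmetric Gaussian noise before extracting the principal components. The sketch below illustrates that idea; the noise-scale formula, epsilon/delta values, and the assumption that rows are normalized to unit L2 norm are illustrative choices, not the exact parameters of FedDPDR-DPML.

```python
import numpy as np

def dp_pca(X, n_components, epsilon=1.0, delta=1e-5):
    """Differentially private PCA via Gaussian perturbation of the covariance.

    Assumes each row of X has L2 norm at most 1, so replacing a single row
    changes the empirical covariance by at most O(1/n).
    """
    n, d = X.shape
    cov = (X.T @ X) / n
    # Gaussian-mechanism noise scale for sensitivity 1/n (illustrative formula).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / (n * epsilon)
    noise = np.random.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0              # keep the perturbation symmetric
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ top                               # project onto private components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # enforce unit-norm rows
    print(dp_pca(X, n_components=5).shape)          # (500, 5)
```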
As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to the designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. First, to make sure all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, combining the MAB gives our algorithm higher efficiency in data aggregation. With a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to show the strength of our algorithm.
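A multi-armed bandit such as UCB1 lets each end device treat its candidate update rates as arms and learn, from observed feedback, which rate best balances freshness against channel contention. The following is a generic UCB1 sketch under an assumed Bernoulli success-reward model; it is not the specific algorithm of the paper, and the candidate rates and success probabilities are made up for the example.

```python
import math
import random

class UCB1:
    """UCB1 bandit: each arm is a candidate update rate for an end device."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm

    def select_arm(self):
        for arm, c in enumerate(self.counts):
            if c == 0:                 # play every arm once first
                return arm
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return ucb.index(max(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

if __name__ == "__main__":
    rates = [0.1, 0.2, 0.5, 1.0]                  # candidate update rates (assumed)
    success_prob = [0.3, 0.5, 0.8, 0.4]           # unknown channel feedback model
    bandit = UCB1(len(rates))
    for _ in range(5000):
        arm = bandit.select_arm()
        bandit.update(arm, 1.0 if random.random() < success_prob[arm] else 0.0)
    best = max(range(len(rates)), key=lambda a: bandit.values[a])
    print("learned best update rate:", rates[best])
```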
Medical data mining has become an essential task in the healthcare sector to secure the personal and medical data of patients using privacy policies. In this context, several authentication and accessibility issues emerge with the intention of protecting the sensitive details of patients from being published in the open domain. To solve this problem, a Multi Attribute Case based Privacy Preservation (MACPP) technique is proposed in this study to enhance the security of privacy-preserving data. Private information can be any attribute information that is categorized as sensitive logs in a patient's records. The semantic relation between transactional patient records and access rights is estimated based on the mean average value to distinguish sensitive and non-sensitive information. In addition, a crypto hidden policy is applied to encrypt the sensitive data through symmetric standard key log verification, which protects the personalized sensitive information. Further, linear integrity verification provides authentication rights to verify the data, improves the performance of the privacy-preserving technique against intruders, and assures high security in the healthcare setting.
Developing a privacy-preserving data publishing algorithm that prevents individuals from disclosing their identities while not ignoring data utility remains an important goal, because finding the trade-off between data privacy and data utility is an NP-hard problem and an active research area. When existing approaches are investigated, one of the most significant difficulties discovered is the presence of outlier data in the datasets. Outlier data have a negative impact on data utility, and k-anonymity algorithms, which are commonly used in the literature, do not provide adequate protection against outlier data. In this study, a new data anonymization algorithm is devised and tested for boosting data utility by incorporating an outlier detection mechanism into the Mondrian algorithm. The connectivity-based outlier factor (COF) algorithm is used to detect outliers. Mondrian is selected because of its capacity to anonymize multidimensional data while meeting the needs of real-world data, whereas COF is used to discover outliers in high-dimensional datasets with complicated structures. The proposed algorithm generates more equivalence classes than the Mondrian algorithm and provides greater data utility than previous algorithms based on k-anonymization. In addition, it outperforms other algorithms in the discernibility metric (DM), normalized average equivalence class size (Cavg), global certainty penalty (GCP), query error rate, classification accuracy (CA), and F-measure metrics. Moreover, the increase in the values of the GCP and error rate metrics demonstrates that the proposed algorithm facilitates obtaining higher data utility by grouping closer data points when compared to other algorithms.
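Mondrian-style k-anonymization recursively splits the dataset at the median of the widest-range attribute and stops splitting a partition once a further cut would leave fewer than k records on either side; each resulting partition becomes an equivalence class whose quasi-identifiers are generalized to ranges. The sketch below shows this core recursion on numeric quasi-identifiers; it is a simplified illustration and does not include the COF-based outlier handling of the proposed algorithm.

```python
def mondrian(records, k):
    """Greedy median-split Mondrian over numeric quasi-identifier tuples."""
    dims = range(len(records[0]))
    # Try the attribute with the widest value range in this partition first.
    spans = [(max(r[d] for r in records) - min(r[d] for r in records), d) for d in dims]
    for _, d in sorted(spans, reverse=True):
        values = sorted(r[d] for r in records)
        median = values[len(values) // 2]
        left = [r for r in records if r[d] < median]
        right = [r for r in records if r[d] >= median]
        if len(left) >= k and len(right) >= k:        # allowable cut
            return mondrian(left, k) + mondrian(right, k)
    return [records]                                  # no cut possible: one equivalence class

def generalize(partition):
    """Replace each attribute with its [min, max] range within the class."""
    dims = range(len(partition[0]))
    return tuple((min(r[d] for r in partition), max(r[d] for r in partition))
                 for d in dims)

if __name__ == "__main__":
    data = [(25, 47000), (27, 51000), (31, 62000), (33, 58000),
            (45, 90000), (47, 88000), (52, 99000), (55, 103000)]
    for cls in mondrian(data, k=2):
        print(len(cls), generalize(cls))
```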
In the analysis of big data, deep learning is a crucial technique. Big data analysis tasks are typically carried out on the cloud, since it offers strong computing capabilities and storage. Nevertheless, there is a contradiction between the open nature of the cloud and the demand that data owners maintain their privacy. To use cloud resources for privacy-preserving data training, a viable method must be found. A privacy-preserving deep learning model (PPDLM) is suggested in this research to address this issue. To preserve data privacy, we first encrypt the data using a homomorphic encryption (HE) approach. Moreover, the deep learning algorithm's activation function, the sigmoid function, is approximated with the least-squares method so as to handle the non-addition and non-multiplication operations that homomorphic encryption does not allow. Finally, experimental results show that PPDLM has a significant effect on the protection of data privacy information. Compared with a non-privacy-preserving deep learning model (NPPDLM), PPDLM has higher computational efficiency.
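Because additively and multiplicatively homomorphic schemes can only evaluate polynomials, the sigmoid activation is commonly replaced by a low-degree polynomial fitted by least squares over the expected input range. The sketch below fits such a polynomial with NumPy; the degree and fitting interval are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-3 polynomial to the sigmoid over [-8, 8] by least squares.
xs = np.linspace(-8.0, 8.0, 2000)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)      # least-squares fit
poly = np.poly1d(coeffs)

# Only additions and multiplications are needed to evaluate the polynomial,
# so it can be computed directly on homomorphically encrypted inputs.
max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print("fitted coefficients:", np.round(coeffs, 4))
print("max absolute error on [-8, 8]:", round(float(max_err), 4))
```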
As the Internet of Things (IoT) advances, machine-type devices are densely deployed and massive networks such as ultra-dense networks (UDNs) are formed. Various devices attach to the network to transmit data using machine-type communication (MTC), whereby numerous and varied data are generated. MTC devices generally have resource constraints and use wireless communication. In this kind of network, data aggregation is a key function for transmission efficiency: it reduces the number of transmitted data in the network, which leads to energy saving and reduced transmission delays. In order to operate data aggregation effectively in UDNs, it is important to select the aggregation point well, since the total number of transmitted data may vary depending on the aggregation point to which the data are delivered. Therefore, in this paper, we propose a novel data aggregation scheme that selects an appropriate aggregation point, and we describe the data transmission method applying the proposed aggregation scheme. In addition, we evaluate the proposed scheme with extensive computer simulations and achieve better performance than the conventional approach.
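Choosing an aggregation point that minimizes the total number of transmissions can be framed as picking the node whose summed hop distance to all data sources is smallest. The sketch below evaluates every candidate with breadth-first search on an undirected topology; the example topology is made up for illustration and this is not the scheme proposed in the paper.

```python
from collections import deque

def hop_counts(adj, src):
    """BFS hop distance from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def best_aggregation_point(adj, sources):
    """Pick the node whose total hop count from all data sources is minimal."""
    costs = {}
    for candidate in adj:
        dist = hop_counts(adj, candidate)
        costs[candidate] = sum(dist[s] for s in sources)
    return min(costs, key=costs.get), costs

if __name__ == "__main__":
    # Hypothetical topology: node -> list of neighbors.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    point, costs = best_aggregation_point(adj, sources=[0, 1, 2, 4])
    print("chosen aggregation point:", point, "total hop costs:", costs)
```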
Fog computing is a promising technology that has emerged to handle the growth of smart devices as well as the popularity of latency-sensitive and location-aware Internet of Things (IoT) services. After the emergence of IoT-based services, the industry of internet-based devices has grown; the number of these devices has risen from millions to billions, and it is expected to increase further in the near future. Thus, additional challenges will be added to the traditional centralized cloud-based architecture, as it will not be able to handle that growth and support all connected devices in real time without affecting the user experience. Conventional data aggregation models for fog-enabled IoT environments possess high computational complexity and communication cost. Therefore, in order to resolve these issues and improve the lifetime of the network, this study develops an effective hierarchical data aggregation technique with a chaotic barnacles mating optimizer (HDAG-CBMO). The HDAG-CBMO technique derives a fitness function from many relational matrices, such as residual energy, average distance to neighbors, and centroid degree of the target area. Besides, a chaos-theory-based population initialization technique is derived for the optimal initial positions of barnacles. Moreover, a learning-based data offloading method has been developed to reduce the response time to IoT user requests. A wide range of simulation analyses demonstrated that the HDAG-CBMO technique results in balanced energy utilization and a prolonged lifetime of the fog-assisted IoT networks.
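A fitness function of the kind described typically combines normalized residual energy, average distance to neighbors, and centroid degree into a single score used to rank candidate aggregation nodes. The sketch below is one plausible weighted-sum formulation under assumed weights and normalization bounds; the paper's exact relational matrices and weighting are not reproduced here.

```python
def fitness(residual_energy, avg_neighbor_dist, centroid_degree,
            max_energy, max_dist, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum fitness for a candidate aggregation node (illustrative).

    Higher residual energy and centroid degree are rewarded; a larger average
    distance to neighbors is penalized. All terms are normalized to [0, 1].
    """
    w_e, w_d, w_c = weights
    energy_term = residual_energy / max_energy
    distance_term = 1.0 - (avg_neighbor_dist / max_dist)
    return w_e * energy_term + w_d * distance_term + w_c * centroid_degree

if __name__ == "__main__":
    candidates = {
        "node_a": fitness(0.9, 12.0, 0.7, max_energy=1.0, max_dist=50.0),
        "node_b": fitness(0.6, 5.0, 0.9, max_energy=1.0, max_dist=50.0),
    }
    print(max(candidates, key=candidates.get), candidates)
```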
The conventional hospital environment is being transformed by a digital, patient-centric remote approach enabled by advanced technologies. Early diagnosis of many diseases improves patients' lives, and the cost of healthcare systems is reduced through the use of advanced technologies such as the Internet of Things (IoT), wireless sensor networks (WSNs), embedded systems, deep learning approaches, and optimization and aggregation methods. The data generated through these technologies place demands on the bandwidth, data rate, and latency of the network. In this work, an efficient discrete grey wolf optimization (DGWO)-based data aggregation scheme using elliptic-curve ElGamal with a message authentication code (ECEMAC) is used to aggregate the parameters generated from the patient's wearable sensor devices. Nodes that are far away from the edge node forward their data to a neighboring cluster head selected using DGWO. The aggregation scheme reduces the number of transmissions over the network. The aggregated data are preprocessed at the edge node to remove noise for better diagnosis, and the edge node reduces the overhead of the cloud server. The aggregated data are then forwarded to the cloud server for central storage and diagnosis. The proposed smart diagnosis reduces the transmission cost through the aggregation scheme, which reduces the energy consumption of the system. The energy cost of the proposed system for 300 nodes is 0.34 μJ, whereas the energy costs of existing approaches such as the secure privacy-preserving data aggregation scheme (SPPDA), the concealed data aggregation scheme for multiple applications (CDAMA), and the secure aggregation scheme (ASAS) are 1.3 μJ, 0.81 μJ, and 0.51 μJ, respectively. The optimization approaches and encryption method ensure data privacy.
Federated learning for edge computing is a promising solution in the data booming era: it leverages the computation ability of each edge device to train local models and shares only the model gradients with the central server. However, the frequently transmitted local gradients could still leak the participants' private data. To protect the privacy of local training data, many cryptographic privacy-preserving federated learning (PPFL) schemes have been proposed. However, due to the constrained resources of mobile devices and complex cryptographic operations, traditional PPFL schemes fail to provide efficient data confidentiality and lightweight integrity verification simultaneously. To tackle this problem, we propose a verifiable privacy-preserving federated learning scheme (VPFL) for edge computing systems to prevent local gradients from leaking during the transmission stage. First, we combine the distributed selective stochastic gradient descent (DSSGD) method with the Paillier homomorphic cryptosystem to achieve distributed encryption functionality and reduce the computation cost of the complex cryptosystem. Second, we present an online/offline signature method to realize lightweight gradient integrity verification, where the offline part can be securely outsourced to the edge server. Comprehensive security analysis demonstrates that the proposed VPFL achieves data confidentiality, authentication, and integrity. Finally, we evaluate both the communication overhead and the computation cost of the proposed VPFL scheme; the experimental results show that VPFL has low computation costs and communication overheads while maintaining high training accuracy.
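In distributed selective SGD, each participant uploads only a small fraction of its gradient entries per round, typically those with the largest magnitudes, which shrinks both the leakage surface and the amount of data that has to be encrypted. The sketch below shows the selection step with NumPy; the selection fraction is an assumed parameter, and the Paillier encryption of the selected entries is omitted.

```python
import numpy as np

def select_gradients(gradient, fraction=0.1):
    """Keep only the largest-magnitude fraction of gradient entries (DSSGD-style).

    Returns the flat indices and values to upload; everything else stays local.
    """
    flat = gradient.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]      # indices of top-k magnitudes
    return idx, flat[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grad = rng.normal(scale=0.01, size=(4, 8))        # a toy local gradient
    idx, vals = select_gradients(grad, fraction=0.1)
    print("uploading", len(idx), "of", grad.size, "entries")
    # In VPFL the selected values would then be Paillier-encrypted before upload.
```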
Advanced cloud computing technology provides cost savings and flexible services for users. With the explosion of multimedia data, more and more data owners outsource their personal multimedia data to the cloud, and some computationally expensive tasks are also undertaken by cloud servers. However, the outsourced multimedia data and its applications may reveal the data owner's private information, because the data owners lose control of their data. Recently, this concern has aroused new research interest in privacy-preserving reversible data hiding over outsourced multimedia data. In this paper, two reversible data hiding schemes are proposed for encrypted image data in cloud computing: reversible data hiding by homomorphic encryption, in which the additional bits are extracted after decryption, and reversible data hiding in the encrypted domain, in which they are extracted before decryption. A combined scheme is also designed. This paper thus proposes a privacy-preserving outsourcing scheme for reversible data hiding over encrypted image data in cloud computing, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversible data hiding can be operated over encrypted images at the different stages. Theoretical analysis confirms the correctness of the proposed encryption model and justifies the security of the proposed scheme. The computation cost of the proposed scheme is acceptable and adjusts to different security levels.
With the increasing popularity of cloud computing, privacy has become one of the key problems in cloud security. When data are outsourced to the cloud, data owners need to ensure the security of their privacy; cloud service providers need some information about the data to provide high-QoS services; and authorized users need access to the true values of the data. Existing privacy-preserving methods cannot meet all the needs of the three parties at the same time. To address this issue, we propose a retrievable data perturbation method and use it for privacy preservation in data outsourcing in cloud computing. Our scheme proceeds in four steps. First, an improved random generator is proposed to generate an accurate "noise". Next, a perturbation algorithm is introduced to add noise to the original data; by doing this, the private information is hidden, but the mean and covariance of the data, which the service providers may need, remain unchanged. Then, a retrieval algorithm is proposed to recover the original data from the perturbed data. Finally, we combine the retrievable perturbation with an access control process to ensure that only authorized users can retrieve the original data. Experiments show that our scheme perturbs data correctly, efficiently, and securely.
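The essence of retrievable perturbation is that the noise is generated deterministically from a secret seed, so an authorized user holding the seed can regenerate and subtract it, while everyone else sees only the perturbed values. The sketch below shows this with a seeded NumPy generator and zero-mean noise; the seed handling, noise distribution, and the paper's exact mean- and covariance-preserving construction are simplified assumptions.

```python
import numpy as np

def perturb(data, seed, scale=1.0):
    """Add zero-mean, seed-determined noise; the seed acts as the retrieval key."""
    noise = np.random.default_rng(seed).normal(0.0, scale, size=data.shape)
    noise -= noise.mean(axis=0)          # force the added noise to have zero sample mean
    return data + noise

def retrieve(perturbed, seed, scale=1.0):
    """Regenerate the same noise from the seed and subtract it to recover the data."""
    noise = np.random.default_rng(seed).normal(0.0, scale, size=perturbed.shape)
    noise -= noise.mean(axis=0)
    return perturbed - noise

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    original = rng.normal(size=(1000, 3))
    secret_seed = 123456                          # shared only with authorized users
    published = perturb(original, secret_seed)
    print(np.allclose(original.mean(axis=0), published.mean(axis=0)))   # True: mean preserved
    print(np.allclose(retrieve(published, secret_seed), original))      # True: data recovered
```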
In scenarios of real-time data collection in long-term deployed wireless sensor networks (WSNs), low-latency data collection with a long network lifetime becomes a key issue. In this paper, we present a data aggregation scheduling scheme with guaranteed lifetime and efficient latency in WSNs. We first construct a Guaranteed Lifetime Minimum Radius Data Aggregation Tree (GLMRDAT), which reduces scheduling latency while providing a guaranteed network lifetime, and then design a Greedy Scheduling algorithM (GSM), based on finding the maximum independent set in the conflict graph, to schedule the transmissions of nodes in the aggregation tree. Finally, simulations show that our proposed approach not only outperforms state-of-the-art solutions in terms of schedule latency, but also provides a longer, guaranteed network lifetime.
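Greedy scheduling over a conflict graph repeatedly extracts a maximal independent set of nodes that can transmit simultaneously, assigns them one time slot, removes them, and continues until every node is scheduled. The sketch below is a generic version of this idea; the conflict graph is a made-up example and the lowest-degree-first greedy choice is an assumed heuristic, not necessarily the GSM of the paper.

```python
def greedy_schedule(conflicts):
    """Assign time slots by repeatedly extracting a greedy maximal independent set.

    conflicts: dict mapping node -> set of nodes it interferes with.
    Returns: dict mapping node -> assigned slot number.
    """
    unscheduled = set(conflicts)
    schedule, slot = {}, 0
    while unscheduled:
        chosen = set()
        # Lowest-conflict-degree first is a common greedy heuristic for a large MIS.
        for node in sorted(unscheduled, key=lambda n: len(conflicts[n] & unscheduled)):
            if conflicts[node].isdisjoint(chosen):
                chosen.add(node)
        for node in chosen:
            schedule[node] = slot
        unscheduled -= chosen
        slot += 1
    return schedule

if __name__ == "__main__":
    # Hypothetical conflict graph: an edge means two nodes cannot send in the same slot.
    conflicts = {
        "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
        "d": {"c", "e"}, "e": {"d"},
    }
    print(greedy_schedule(conflicts))   # e.g. {'e': 0, 'a': 0, 'b': 1, 'd': 1, 'c': 2}
```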
By integrating the traditional power grid with information and communication technology, smart grid achieves dependable, efficient, and flexible grid data processing. The smart meters deployed on the user side of the smart grid collect the users' power usage data on a regular basis and upload it to the control center to complete the smart grid data acquisition. The control center can evaluate the supply and demand of the power grid through aggregated data from users and then dynamically adjust the power supply and price, etc. However, since the grid data collected from users may disclose the user's electricity usage habits and daily activities, privacy concern has become a critical issue in smart grid data aggregation. Most of the existing privacy-preserving data collection schemes for smart grid adopt homomorphic encryption or randomization techniques, which are either impractical because of the high computation overhead or unrealistic for requiring a trusted third party.