Funding: Supported in part by the Zhejiang Provincial Natural Science Foundation of China under Grant nos. LZ22F020002 and LY22F020003, the National Natural Science Foundation of China under Grant nos. 61772018 and 62002226, and the Key Project of Humanities and Social Sciences in Colleges and Universities of Zhejiang Province under Grant no. 2021GH017.
Abstract: The fast proliferation of edge devices for the Internet of Things (IoT) has led to an explosion in data volumes. The generated data is collected and shared through edge-based IoT architectures at a considerably high frequency. Privacy exposure during data sharing thus becomes an increasingly serious threat when IoT devices make malicious requests to steal sensitive information from a cloud storage system through edge nodes. To address this issue, we present evolutionary privacy-preservation learning strategies for an edge computing-based IoT data sharing scheme. In particular, we introduce evolutionary game theory and construct a payoff matrix to model the interaction between IoT devices and edge nodes, which are the two parties of the game. IoT devices may make malicious requests to steal private data; accordingly, edge nodes should deny malicious requests to prevent IoT data from being disclosed. Both parties dynamically adjust their strategies in response to the opponent's strategy in order to maximize their payoffs. Built upon a developed application framework that illustrates the concrete data sharing architecture, a novel algorithm is proposed that derives the optimal evolutionary learning strategy. Furthermore, we numerically simulate the evolutionarily stable strategies, and the results experimentally verify the correctness of the IoT data sharing privacy-preservation scheme. The proposed model can therefore effectively defend against malicious intrusions and protect sensitive information from leaking when IoT data is shared.
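To make the game dynamics concrete, here is a minimal sketch of two-population replicator dynamics for the device/edge-node game, assuming a 2x2 strategy space (malicious vs. normal requests for devices, deny vs. accept for edge nodes). The payoff values, step size, and iteration count are illustrative placeholders, not the matrix or algorithm derived in the paper.

```python
import numpy as np

# Hypothetical payoff matrices for the asymmetric game.
# Device strategies: 0 = malicious request, 1 = normal request.
# Edge strategies:   0 = deny request,      1 = accept request.
# All payoff values are illustrative placeholders.
A = np.array([[-2.0, 3.0],   # payoffs to the IoT device
              [ 1.0, 2.0]])
B = np.array([[ 2.0, -3.0],  # payoffs to the edge node
              [ 1.0,  2.0]])

def replicator_step(x, y, dt=0.01):
    """One Euler step of two-population replicator dynamics.

    x: fraction of devices playing 'malicious'
    y: fraction of edge nodes playing 'deny'
    """
    px = np.array([x, 1 - x])      # device population profile
    py = np.array([y, 1 - y])      # edge-node population profile
    fx = A @ py                    # expected device payoff per strategy
    fy = B.T @ px                  # expected edge payoff per strategy
    dx = x * (fx[0] - px @ fx)     # replicator equation for devices
    dy = y * (fy[0] - py @ fy)     # replicator equation for edge nodes
    return x + dt * dx, y + dt * dy

x, y = 0.5, 0.5
for _ in range(20000):
    x, y = replicator_step(x, y)
print(f"fraction malicious ~ {x:.3f}, fraction denying ~ {y:.3f}")
```

Fixed points of these dynamics, where neither population keeps shifting, are the candidate evolutionarily stable strategies that such a numerical simulation would examine.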
Abstract: Most visual privacy protection methods only hide the identity information of face images, but the expression, behavior, and other information, which are of great significance in live broadcasting and other scenarios, are also destroyed by the privacy protection process. To this end, this paper introduces a method that removes identity information while preserving expression information by performing multi-mode discriminant analysis on images normalized with the AAM algorithm. The face images are decomposed into mutually orthogonal subspaces corresponding to face attributes such as gender, race, and expression, each of which owns related characteristic parameters. The expression parameter is then preserved to keep the facial expression information, while the other parameters, including gender and race, are modified to protect face privacy. Experiments show that this method performs well on both data utility and privacy protection.
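The subspace manipulation can be sketched as follows, assuming the attribute subspaces have already been learned and are mutually orthogonal. The bases, dimensions, and the zero-vector "neutral" reference are illustrative assumptions, not the parameters produced by the AAM fitting and discriminant analysis in the paper.

```python
import numpy as np

# A minimal sketch: a normalized face vector is decomposed onto orthogonal
# attribute subspaces; the expression component is kept, while the
# identity-bearing components (gender, race) come from a neutral reference.
rng = np.random.default_rng(0)
d = 64                                    # dimension of the normalized face vector
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
B_expr, B_gender, B_race = Q[:, :8], Q[:, 8:12], Q[:, 12:16]  # orthogonal bases

def anonymize(face, neutral):
    """Keep the expression component of `face`; take the other attribute
    components from a neutral reference vector instead."""
    expr_part   = B_expr @ (B_expr.T @ face)        # preserved
    gender_part = B_gender @ (B_gender.T @ neutral) # replaced, hides gender
    race_part   = B_race @ (B_race.T @ neutral)     # replaced, hides race
    return expr_part + gender_part + race_part

face    = rng.normal(size=d)
neutral = np.zeros(d)        # e.g. a population-mean appearance
protected = anonymize(face, neutral)
# Expression coordinates are unchanged by the protection step:
print(np.allclose(B_expr.T @ protected, B_expr.T @ face))  # True
```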
Abstract: Due to the dramatically increasing amount of information published in social networks, privacy issues have given rise to public concerns. Although differential privacy provides privacy protection with theoretical foundations, the trade-off between privacy and data utility still demands further improvement. However, most existing studies do not consider the quantitative impact of the adversary when measuring data utility. In this paper, we first propose a personalized differential privacy method based on social distance. Then, we analyze the maximum data utility when users and adversaries are blind to each other's strategy sets. We formalize all the payoff functions in the differential privacy sense, followed by the establishment of a static Bayesian game. The trade-off is calculated by deriving the Bayesian Nash equilibrium with a modified reinforcement learning algorithm. The proposed method achieves fast convergence by reducing the cardinality from n to 2. In addition, the in-place trade-off can maximize the user's data utility if the action sets of the user and the adversary are public while the strategy sets remain unrevealed. Our extensive experiments on a real-world dataset show that the proposed model is effective and feasible.
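As a rough illustration of the first step, the sketch below maps a hop-count social distance to a personalized privacy budget and releases a value through the Laplace mechanism. The linear mapping and the budget bounds are assumptions for illustration only, and the Bayesian game and reinforcement learning components are not modeled here.

```python
import numpy as np

# Illustrative bounds: closer contacts get a larger budget (less noise).
EPS_MIN, EPS_MAX, SENSITIVITY = 0.1, 1.0, 1.0

def epsilon_for(distance, max_distance=6):
    """Map a hop-count social distance to a privacy budget epsilon."""
    closeness = max(0.0, 1.0 - distance / max_distance)
    return EPS_MIN + (EPS_MAX - EPS_MIN) * closeness

def release(value, distance, rng=np.random.default_rng()):
    """Release `value` to a requester at the given social distance via the
    Laplace mechanism, with noise scaled by the personalized epsilon."""
    eps = epsilon_for(distance)
    return value + rng.laplace(scale=SENSITIVITY / eps)

true_value = 42.0
print("to a friend (distance 1):  ", release(true_value, 1))
print("to a stranger (distance 6):", release(true_value, 6))
```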
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFF0900400).
Abstract: The popularization of intelligent healthcare devices and big data analytics significantly boosts the development of Smart Healthcare Networks (SHNs). To enhance the precision of diagnosis, different participants in SHNs share health data that contain sensitive information. The data exchange process therefore raises privacy concerns, especially when the integration of health data from multiple sources (a linkage attack) results in further leakage. The linkage attack is a dominant type of attack in the privacy domain, as it can leverage various data sources for private data mining. Furthermore, adversaries may launch poisoning attacks to falsify health data, which can lead to misdiagnosis or even physical harm. To protect private health data, we propose a personalized differential privacy model based on the trust levels among users. Trust is evaluated by a defined community density, while the corresponding privacy protection level is mapped to controllable randomized noise constrained by differential privacy. To avoid linkage attacks under personalized differential privacy, we design a noise correlation decoupling mechanism using a Markov stochastic process. In addition, we build the community model on a blockchain, which mitigates the risk of poisoning attacks during differentially private data transmission over SHNs. Extensive experiments and analysis on real-world datasets validate the proposed model, which achieves better performance than existing research in terms of privacy protection and effectiveness.
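A minimal sketch of the trust-to-noise idea follows, assuming trust is the edge density of a user's community subgraph and that it scales the Laplace budget linearly. The mapping, the example graph, and the function names are illustrative assumptions; the Markov decoupling mechanism and the blockchain layer are not modeled here.

```python
import numpy as np

def community_density(members, edges):
    """Edge density of the subgraph induced by `members`."""
    m = sum(1 for u, v in edges if u in members and v in members)
    n = len(members)
    return 0.0 if n < 2 else 2.0 * m / (n * (n - 1))

def epsilon_from_trust(density, eps_min=0.1, eps_max=1.0):
    """Denser (more trusted) communities get a larger privacy budget."""
    return eps_min + (eps_max - eps_min) * density

def share_health_record(value, members, edges, sensitivity=1.0,
                        rng=np.random.default_rng()):
    """Add trust-calibrated Laplace noise before sharing a health value."""
    eps = epsilon_from_trust(community_density(members, edges))
    return value + rng.laplace(scale=sensitivity / eps)

members = {"a", "b", "c", "d"}
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]  # density 4/6
print(share_health_record(120.0, members, edges))  # e.g. a blood-pressure reading
```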
Abstract: The Artificial Intelligence of Things (AIoT) is experiencing remarkably rapid growth with the popularization of end devices and advanced machine learning and data processing techniques. An increasing volume of data is being collected every second to enable Artificial Intelligence (AI) on the Internet of Things (IoT). The explosion of data brings significant benefits to various intelligent industries, which provide predictive services, and to research institutes, which advance human knowledge in data-intensive fields. To make the best use of the collected data, various data mining techniques have been deployed to extract data patterns. In classic scenarios, the data collected from IoT devices is sent directly to cloud servers for processing using diverse methods, such as training machine learning models.