Funding: The National Social Science Foundation of China (No. 17BGL196).
Abstract: Because consumers' privacy data sharing has multifaceted and complex effects on an e-commerce platform and its two-sided agents, consumers and sellers, a game-theoretic model of a monopoly e-market is set up to study the equilibrium strategies of the three agents (the platform, the seller on it, and consumers) under privacy data sharing. The equilibrium decisions show that after sharing consumers' privacy data once, the platform can collect more privacy data from consumers. Meanwhile, privacy data sharing pushes the seller to reduce the product price. Moreover, the platform will increase the transaction fee if the privacy data sharing value is high. It is also shown that privacy data sharing always benefits consumers and the seller. However, the platform's profit decreases if the privacy data sharing value is low and the privacy data sharing level is high. Finally, an extended model considering an incomplete-information game among the agents is discussed. The results show that neither the platform nor the seller can obtain a high profit from privacy data sharing. Factors including the seller's probability of buying privacy data, the privacy data sharing value, and the privacy data sharing level affect the two agents' payoffs. If the platform wishes to benefit from privacy data sharing, it should increase the probability that the seller buys privacy data or increase the privacy data sharing value.
Funding: Doctoral research start-up fund of Guangxi Normal University; Guangzhou Research Institute of Communication University of China Common Construction Project, Sunflower, the Aging Intelligent Community; Guangxi project for improving middle-aged/young teachers' ability, Grant/Award Number: 2020KY020323.
Abstract: Overgeneralisation may occur because most studies on data publishing for multiple sensitive attributes (SAs) have not considered personalised privacy requirements. Furthermore, sensitive information disclosure may also be caused by these personalised requirements. To address the matter, this article develops a personalised data publishing method for multiple SAs. According to the requirements of individuals, the new method partitions SA values into two categories, private values and public values, and breaks the association between them to guarantee privacy. The private values are anonymised, while the public values are released without this process. An algorithm is designed to achieve the privacy model, where the selectivity is determined by the sensitive value frequency and undesirable objects. The experimental results show that the proposed method provides more information utility than previous methods. The theoretical analyses and experiments also indicate that privacy can be guaranteed even if the public values are known to an adversary. The overgeneralisation and privacy breaches caused by personalised requirements can be avoided by the new method.
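To make the partition step above concrete, the following Python sketch splits one record's sensitive-attribute values into private and public categories according to an individual's own requirement and generalises only the private ones. The function names, the toy generaliser, and the example record are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): partition each record's
# sensitive-attribute (SA) values into private and public categories based on
# a per-individual requirement, and generalise only the private ones.

def publish_record(record, private_sas, generalise):
    """record: dict of SA name -> value.
    private_sas: set of SA names this individual marked as private.
    generalise: callable mapping a concrete value to a coarser category."""
    published = {}
    for sa, value in record.items():
        if sa in private_sas:
            published[sa] = generalise(value)   # anonymised private value
        else:
            published[sa] = value               # public value released as-is
    return published

# Toy usage with hypothetical data and a trivial generaliser.
generalise_disease = lambda v: "respiratory" if v in {"flu", "pneumonia"} else "other"
record = {"disease": "flu", "occupation": "teacher"}
print(publish_record(record, private_sas={"disease"}, generalise=generalise_disease))
# -> {'disease': 'respiratory', 'occupation': 'teacher'}
```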
Abstract: Advances in technology require upgrades in the law. One such area involves data brokers, which have thus far gone unregulated. Data brokers use artificial intelligence to aggregate information into data profiles about individual Americans, derived from consumer use of the internet and connected devices. Data profiles are then sold for profit. Government investigators use a legal loophole to purchase this data instead of obtaining a search warrant, which the Fourth Amendment would otherwise require. Consumers have lacked a reasonable means to fight or correct the information data brokers collect. Americans may not even be aware of the risks of data aggregation, which upends the test of reasonable expectations used in a search-warrant analysis. Data aggregation should be controlled and regulated, which is the direction some privacy laws take. Legislatures must step forward to safeguard against shadowy data-profiling practices, whether abroad or at home. In the meantime, courts can modify their search-warrant analysis by including data privacy principles.
Funding: Funded by the High-Quality and Cutting-Edge Discipline Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Multi-source data plays an important role in the evolution of media convergence. Its fusion processing enables further mining of data and utilization of data value, and broadens the path for the sharing and dissemination of media data. However, it also faces serious problems in terms of protecting user and data privacy. Many privacy protection methods have been proposed to solve the problem of privacy leakage during data sharing, but they suffer from two flaws: 1) the lack of algorithmic frameworks for specific scenarios such as dynamic datasets in the media domain; 2) the inability to address the high computational complexity of ciphertext in multi-source data privacy protection, resulting in long encryption and decryption times. In this paper, we propose a multi-source data privacy protection method based on homomorphic encryption and blockchain technology, which solves the privacy protection problem of multi-source heterogeneous data in media dissemination and reduces ciphertext processing time. We deployed the proposed method on the Hyperledger platform for testing and compared it with privacy protection schemes based on k-anonymity and differential privacy. The experimental results show that the key generation, encryption, and decryption times of the proposed method are lower than those of data privacy protection methods based on k-anonymity and differential privacy. This significantly reduces the processing time of multi-source data, which gives it potential for use in many applications.
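The abstract does not specify which homomorphic scheme is used, so the sketch below uses textbook Paillier encryption only to illustrate the property that makes such schemes attractive for multi-source fusion: sums can be computed directly on ciphertexts. The toy primes and helper names are assumptions, and the code is not secure for real use.

```python
# Minimal Paillier sketch illustrating additive homomorphism; toy parameters,
# NOT the paper's scheme and not safe for production.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1_000_000_007, q=998_244_353):     # well-known primes, toy size
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                        # valid because g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n              # the L(x) = (x - 1) / n step
    return (l * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 41), encrypt(pk, 1)
print(decrypt(sk, (c1 * c2) % (pk[0] ** 2)))    # 42: sum computed on ciphertexts
```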
Funding: This work was supported by the National Natural Science Foundation of China (U2133208, U20A20161).
Abstract: Traditional air traffic control information-sharing data has weak security protection for personal privacy data and performs poorly, which easily leads to the data being usurped. Starting from the application of the ATC (air traffic control) network, this paper focuses on zero trust and the zero-trust access strategy, and on the tamper-proof method for information-sharing network data. Through improvements to ATC's zero-trust physical-layer authentication and the distributed feature-differentiation calculation of network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method for ATC information sharing on the Internet. Moving from a single management authority to unified management of data units, a systematic algorithmic improvement of the shared-network-data tamper-prevention method is realized, and RDTP (Reliable Data Transfer Protocol) is selected for the network data of information-sharing resources to ensure tamper prevention of air traffic control data during transmission. The results show that this method can reasonably prevent tampering with information shared on the Internet and maintain the security of air traffic control information sharing, while the Central Processing Unit (CPU) utilization rate is only 4.64%, which effectively improves the performance of the comprehensive security protection system for air traffic control data.
Funding: Supported by the National Natural Science Foundation of China, No. 61977006.
Abstract: Nowadays, smart wearable devices are widely used in the Social Internet of Things (IoT) and record human physiological data in real time. To protect the data privacy of smart devices, researchers are paying more attention to federated learning. Although the data leakage problem is somewhat solved, a new challenge has emerged. Asynchronous federated learning shortens the convergence time, but it suffers from time delay and data heterogeneity problems, both of which harm accuracy. To overcome these issues, we propose an asynchronous federated learning scheme based on double compensation. The scheme improves the Delay Compensated Asynchronous Stochastic Gradient Descent (DC-ASGD) algorithm, using a second-order Taylor expansion as the delay compensation, and adds the FedProx operator to the objective function as the heterogeneity compensation. Besides, the proposed scheme motivates the federated learning process by adjusting the importance of the participants and the central server. We conduct multiple sets of experiments in both conventional and heterogeneous scenarios. The experimental results show that our scheme improves the accuracy by about 5% while keeping the complexity constant. Numerical experiments also show that our scheme converges more smoothly during training and adapts better to heterogeneous environments. The proposed double-compensation-based federated learning scheme is highly accurate, flexible in terms of participants, and smooths the training process; hence it is well suited for data privacy protection of smart wearable devices.
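A rough numpy sketch of the two compensations named above may help: a DC-ASGD-style correction that approximates the staleness term of a Taylor expansion with the element-wise product of the stale gradient, and a FedProx-style proximal gradient that pulls local updates toward the global model. The hyper-parameters, the toy quadratic loss, and the update loop are assumptions rather than the authors' exact rule.

```python
# Sketch of delay compensation (DC-ASGD style) plus heterogeneity compensation
# (FedProx style) under simplified, illustrative assumptions.
import numpy as np

def delay_compensated_grad(grad_stale, w_now, w_at_send, lam=0.5):
    # Approximate the Hessian-vector correction for staleness with the
    # element-wise term  lam * g * g * (w_now - w_at_send).
    return grad_stale + lam * grad_stale * grad_stale * (w_now - w_at_send)

def fedprox_grad(grad_local, w_local, w_global, mu=0.1):
    # Gradient of the proximal term (mu / 2) * ||w_local - w_global||^2
    # added to the local objective.
    return grad_local + mu * (w_local - w_global)

# Toy usage on a quadratic loss 0.5 * ||w - target||^2 (hypothetical client).
target, w_global = np.array([1.0, -2.0]), np.zeros(2)
w = w_global.copy()
for _ in range(50):
    g = fedprox_grad(w - target, w, w_global, mu=0.1)            # local gradient
    g = delay_compensated_grad(g, w_now=w, w_at_send=w_global, lam=0.5)
    w -= 0.1 * g
print(w)   # moves toward target, pulled slightly back toward w_global by mu
```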
Funding: Supported by the National Key R&D Program of China (No. 2020YFC2006602); National Natural Science Foundation of China (Nos. 62172324, 62072324, 61876217, 6187612); University Natural Science Foundation of Jiangsu Province (No. 21KJA520005); Primary Research and Development Plan of Jiangsu Province (No. BE2020026); Natural Science Foundation of Jiangsu Province (No. BK20190942).
Abstract: Most studies have conducted experiments on predicting energy consumption by integrating data for model training. However, centralizing data can cause data leakage. Meanwhile, many laws and regulations on data security and privacy have been enacted, making it difficult to centralize data, which can lead to a data-silo problem. Thus, to train the model while maintaining user privacy, we adopt a federated learning framework. However, in the secure aggregation of classical federated learning frameworks, the Federated Averaging (FedAvg) method simply averages the model parameters, which may have an adverse effect on the model. Therefore, we propose the Federated Reinforcement Learning (FedRL) model, in which multiple users collaboratively train the model. Each household trains a local model on local data. These local data never leave the local area, and only the encrypted parameters are uploaded to the central server to participate in the secure aggregation of the global model. We improve FedAvg by incorporating a Q-learning algorithm to assign weights to each locally uploaded model, which improves predictive performance. We validate the performance of the FedRL model by testing it on a real-world dataset and compare the experimental results with other models. The performance of our proposed method in most of the evaluation metrics is improved compared to both the centralized and distributed models.
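As a hedged illustration of replacing FedAvg's plain mean with learned weights, the sketch below keeps one Q-value per client, turns the Q-values into softmax aggregation weights, and nudges a client's value with a reward such as the change in global validation accuracy. The reward signal, the softmax mapping, and all names are illustrative assumptions, not the FedRL design itself.

```python
# Sketch: Q-value-weighted aggregation instead of a plain FedAvg mean.
import numpy as np

def aggregate(client_params, q_values, temperature=1.0):
    q = np.asarray(q_values, dtype=float)
    weights = np.exp(q / temperature)
    weights /= weights.sum()                       # softmax over Q-values
    stacked = np.stack(client_params)              # shape: (clients, dims)
    return weights @ stacked, weights

def update_q(q_values, client_idx, reward, alpha=0.5):
    # One-step Q-learning-style update: move the chosen client's value toward
    # the observed reward (e.g. change in global validation accuracy).
    q_values[client_idx] += alpha * (reward - q_values[client_idx])

# Toy round with three hypothetical clients.
params = [np.array([1.0, 0.0]), np.array([0.8, 0.1]), np.array([5.0, 5.0])]
q = [0.0, 0.0, 0.0]
global_model, w = aggregate(params, q)
update_q(q, client_idx=2, reward=-1.0)             # client 2 hurt accuracy
global_model, w = aggregate(params, q)
print(w)                                           # client 2 now down-weighted
```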
Funding: Supported by the National Natural Science Foundation of China (NSFC) under the National Natural Science Foundation Youth Fund program (J. Hao, No. 62101275).
Abstract: In the financial sector, data are highly confidential and sensitive, and ensuring data privacy is critical. Sample fusion is the basis of horizontal federated learning, but it is suitable only for scenarios where customers have the same format but different targets, namely scenarios with strong feature overlap and weak user overlap. To overcome this limitation, this paper proposes a federated learning-based model with local data sharing and differential privacy. The indexing mechanism of differential privacy is used to obtain privacy budgets of different degrees, which are applied to the gradient according to the contribution degree, ensuring privacy without affecting accuracy. In addition, data sharing is performed to improve the utility of the global model. Further, the distributed prediction model is used to predict customers' loan propensity while protecting user privacy. Using an aggregation mechanism based on federated learning helps train the model on distributed data without exposing local data. The proposed method is verified by experiments, and the results show that for non-IID data, it can effectively improve accuracy and reduce the impact of sample tilt. The proposed method can be extended to edge computing, blockchain, and the Industrial Internet of Things (IIoT). The theoretical analysis and experimental results show that the proposed method ensures the privacy and accuracy of the federated learning process and improves model utility for non-IID data by 7% compared to the federated averaging method (FedAvg).
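One loose reading of "privacy budgets applied to the gradient according to the contribution degree" is sketched below: a total budget is split across gradient components in proportion to their contribution, and Laplace noise is calibrated to each share, so high-contribution components are perturbed less. The allocation rule, sensitivity value, and use of the Laplace mechanism are assumptions for illustration only.

```python
# Loose sketch: contribution-proportional budget allocation for gradient noise.
import numpy as np

def allocate_budgets(contributions, total_epsilon):
    c = np.asarray(contributions, dtype=float)
    return total_epsilon * c / c.sum()          # larger contribution -> larger eps

def noisy_gradient(grad, epsilons, sensitivity, rng):
    scales = sensitivity / epsilons             # Laplace scale b = sensitivity / eps
    return grad + rng.laplace(scale=scales)

rng = np.random.default_rng(0)
grad = np.array([0.8, -0.2, 0.05])
eps = allocate_budgets(contributions=[0.7, 0.2, 0.1], total_epsilon=1.0)
print(noisy_gradient(grad, eps, sensitivity=0.1, rng=rng))
# the high-contribution component gets more budget and hence less noise
```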
Funding: Supported in part by the National Key R&D Program of China under Project 2020YFB1006004; the Guangxi Natural Science Foundation under Grants 2019GXNSFFA245015 and 2019GXNSFGA245004; the National Natural Science Foundation of China under Projects 62162017, 61862012, 61962012, and 62172119; the Major Key Project of PCL under Grants PCL2021A09, PCL2021A02, and PCL2022A03; and the Innovation Project of Guangxi Graduate Education YCSW2021175.
Abstract: The unmanned aerial vehicle (UAV) self-organizing network is composed of multiple UAVs with autonomous capabilities organized according to a certain structure and scale, and it can quickly and accurately complete complex tasks such as path planning, situational awareness, and information transmission. Due to the openness of the network, the UAV cluster is more vulnerable to passive eavesdropping, active interference, and other attacks, which exposes the system to serious security threats. This paper proposes a Blockchain-Based Data Acquisition (BDA) scheme with privacy protection to address the data privacy and identity authentication problems in the UAV-assisted data acquisition scenario. Each UAV cluster has an aggregate unmanned aerial vehicle (AGV) that can batch-verify the acquisition reports within its administrative domain. After successful verification, the AGV adds its signcrypted ciphertext to the aggregation and uploads it to the blockchain for storage. The blockchain contains two chains that store the public-key information of registered entities and the aggregated reports, respectively. The security analysis shows that the BDA construction can protect the privacy and authenticity of acquisition data and effectively resist a malicious key generation center and the public-key substitution attack. It also provides unforgeability of acquisition reports under the Elliptic Curve Discrete Logarithm Problem (ECDLP) assumption. The performance analysis demonstrates that, compared with other schemes, the proposed BDA construction has lower computational complexity and is more suitable for UAV cluster networks with limited computing power and storage capacity.
Funding: Supported by the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia, with the Grant Code: IFP22UUQU4281768DSR205.
Abstract: Unmanned aerial vehicles (UAVs), or drones, have revolutionized a wide range of industries, including monitoring, agriculture, surveillance, and supply chains. However, their widespread use also poses significant challenges, such as public safety, privacy, and cybersecurity. Cyberattacks targeting UAVs have become more frequent, which highlights the need for robust security solutions. Blockchain technology, the foundation of cryptocurrencies, has the potential to address these challenges. This study proposes a platform that utilizes blockchain technology to manage drone operations securely and confidentially. By incorporating blockchain technology, the proposed method aims to increase the security and privacy of drone data. The suggested platform stores information on a public blockchain located on Ethereum and leverages the Ganache platform to ensure secure and private blockchain transactions. The MetaMask wallet, with an Eth balance, is necessary for BCT transactions. The present research findings show that the proposed approach's efficiency and security features are superior to those of existing methods. This study contributes to the development of a secure and efficient system for managing drone operations that could have significant applications in various industries. The proposed platform's security measures could mitigate privacy concerns, minimize cybersecurity risk, and enhance public safety, ultimately promoting the widespread adoption of UAVs. The results of the study demonstrate that the blockchain can ensure the fulfillment of core security needs such as authentication, privacy preservation, confidentiality, integrity, and access control.
Abstract: Imagine numerous clients, each with personal data; individual inputs are severely corrupted, and a server is concerned only with the collective, statistically essential facets of this data. In several data mining methods, privacy has become highly critical. As a result, various privacy-preserving data analysis technologies have emerged. Hence, we use a randomization process to reconstruct composite data attributes accurately. We also use privacy measures to estimate how much distortion is required to guarantee privacy. There are several viable privacy protections; however, determining which one is best is still a work in progress. This paper discusses the difficulty of measuring privacy while also offering numerous random sampling procedures and results on statistical and categorical data. Furthermore, this paper investigates the use of arbitrary (random) perturbations in privacy preservation. According to the research, arbitrary objects (most notably random matrices) have "predictable" frequency patterns. It shows how to recover crucial information from a sample damaged by random noise using an arbitrary lattice spectral selection strategy. This filtration system's conceptual framework posits, and extensive practical findings indicate, that sparse data distortions preserve only relatively modest privacy protection in various situations. As a result, the research framework is efficient and effective in maintaining data privacy and security.
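The recovery claim above can be illustrated with a small spectral-filtering experiment: a low-rank data matrix is perturbed with additive Gaussian noise, and the eigenvalues of the perturbed covariance that exceed a random-matrix noise bound are used to re-project the data onto the signal subspace. The dimensions, noise level, and Marchenko-Pastur-style bound are assumptions; the point is only that random perturbation alone leaves much of the structure recoverable.

```python
# Illustrative spectral-filtering sketch (assumptions: Gaussian noise, low-rank signal).
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 2000, 20, 1.0
signal = rng.normal(size=(n, 2)) @ rng.normal(size=(2, d)) * 3   # rank-2 data
perturbed = signal + rng.normal(scale=sigma, size=(n, d))        # published copy

cov = perturbed.T @ perturbed / n
eigval, eigvec = np.linalg.eigh(cov)
noise_bound = sigma**2 * (1 + np.sqrt(d / n))**2      # Marchenko-Pastur edge
keep = eigvec[:, eigval > noise_bound]                # keep the signal subspace
reconstructed = perturbed @ keep @ keep.T

err_before = np.linalg.norm(perturbed - signal) / np.linalg.norm(signal)
err_after = np.linalg.norm(reconstructed - signal) / np.linalg.norm(signal)
print(err_before, err_after)   # filtering recovers much of the structure
```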
Abstract: The advent of Industry 5.0 marks a transformative era in which Cyber-Physical Systems (CPSs) seamlessly integrate physical processes with advanced digital technologies. However, as industries become increasingly interconnected and reliant on smart digital technologies, the intersection of the physical and cyber domains introduces novel security considerations, endangering the entire industrial ecosystem. The transition towards a more cooperative setting that includes humans and machines in Industry 5.0, together with the growing intricacy and interconnection of CPSs, presents distinct and diverse security and privacy challenges. In this regard, this study provides a comprehensive review of security and privacy concerns pertaining to CPSs in the context of Industry 5.0. The review begins by outlining the role of CPSs in Industry 5.0 and then conducts a thorough review of the different security risks associated with CPSs in this context. Afterward, the study presents the privacy implications inherent in these systems, particularly in light of the massive data collection and processing required. In addition, the paper delineates potential avenues for future research and provides countermeasures to surmount these challenges. Overall, the study underscores the imperative of adopting comprehensive security and privacy strategies within the context of Industry 5.0.
Funding: The Sichuan Provincial Science and Technology Department Project under Grant 2019YFN0104; the Yibin Science and Technology Plan Project under Grant 2021GY008; and the Sichuan University of Science and Engineering Postgraduate Innovation Fund Project under Grant Y2022154.
Abstract: As a distributed machine learning method, federated learning (FL) has the advantage of naturally protecting data privacy: data are kept locally, and local models are trained on local data, protecting the privacy of that data. The federated learning method effectively addresses the data-island and privacy protection problems in artificial intelligence. However, existing research shows that attackers may still steal user information by analyzing the parameters uploaded during the federated learning training process and the aggregation parameters on the server side. To solve this problem, differential privacy (DP) techniques are widely used for privacy protection in federated learning. However, adding Gaussian noise perturbations to the data degrades the model's learning performance. To address these issues, this paper proposes a differential privacy federated learning scheme based on adaptive Gaussian noise (DPFL-AGN). To protect the data privacy and security of the federated learning training process, adaptive Gaussian noise is added during training to hide the real parameters uploaded by the client. In addition, this paper proposes an adaptive noise reduction method: as the model converges, the Gaussian noise in the later stages of the federated learning training process is reduced adaptively. This paper conducts a series of simulation experiments on the real MNIST and CIFAR-10 datasets, and the results show that the DPFL-AGN algorithm performs better than the other algorithms.
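A simplified sketch of the adaptive-noise idea, under assumed names and a hypothetical decay rule: each client clips its update to bound sensitivity, adds Gaussian noise, and the noise scale is reduced once the loss trace suggests the model is converging, mirroring the "reduce noise in later stages" behaviour described above.

```python
# Simplified DPFL-AGN-style sketch: clipped updates, Gaussian noise, and an
# adaptive (hypothetical) noise-decay rule tied to convergence.
import numpy as np

def privatize_update(update, clip_norm, sigma, rng):
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    return clipped + rng.normal(scale=sigma * clip_norm, size=update.shape)

def adaptive_sigma(sigma0, round_idx, loss_history, decay=0.9):
    # Reduce noise once the loss has (roughly) stopped improving: a proxy for
    # the "later stages of training" mentioned in the abstract.
    if len(loss_history) >= 2 and loss_history[-2] - loss_history[-1] < 1e-3:
        return max(sigma0 * (decay ** round_idx), 0.1 * sigma0)
    return sigma0

rng = np.random.default_rng(1)
sigma0, losses = 1.0, [1.0, 0.5, 0.4995, 0.499]        # toy loss trace
for t in range(len(losses)):
    sigma_t = adaptive_sigma(sigma0, t, losses[: t + 1])
    noisy = privatize_update(np.ones(4), clip_norm=1.0, sigma=sigma_t, rng=rng)
    print(t, round(sigma_t, 3))                        # noise scale shrinks late
```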
Abstract: The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to single points of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to validate devices with smart contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms existing approaches in terms of accuracy, precision, sensitivity, recall, and F-measure, at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Abstract: This study investigates university English teachers' acceptance of and willingness to use learning management system (LMS) data analysis tools in their teaching practices. The research employs a mixed-methods approach, combining quantitative surveys and qualitative interviews, to understand teachers' perceptions and attitudes and the factors influencing their adoption of LMS data analysis tools. The findings reveal that perceived usefulness, perceived ease of use, technical literacy, organizational support, and data privacy concerns significantly impact teachers' willingness to use these tools. Based on these insights, the study offers practical recommendations for educational institutions to enhance the effective adoption of LMS data analysis tools in English language teaching.
Funding: This paper was partially supported by the National Basic Research Program of China under Grant No. 2007CB307100; the National Natural Science Foundation of China under Grant No. 60972010; and the Fundamental Research Funds for the Central Universities under Grant No. 2011JBM018.
Abstract: To investigate the enhancement of data privacy achieved by distributing data packets over multiple paths, this paper formulates a security model and analyzes the privacy problem in multipath scenarios using information-theoretic concepts. Based on the proposed model, a privacy function related to the number of paths is discussed. We heuristically recommend the optimal path number and analyze the trade-off among performance, resource consumption, and privacy. To reduce information leakage, data scheduling algorithms are also proposed. The analytical model can provide guidelines for multipath protocol design.
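Since the abstract does not give the exact privacy function, the following illustrative formulation (an assumption, not the paper's model) shows how such a function can depend on the path number k: a message of entropy H(M) is split into k shares over node-disjoint paths, each eavesdropped independently with probability p, and the adversary learns nothing without all shares.

```latex
% Illustrative formulation only, not the paper's exact model.
\[
  I(M;\,\text{intercepted shares}) = p^{k} H(M),
  \qquad
  \mathcal{P}(k) = 1 - \frac{I(M;\,\text{intercepted shares})}{H(M)} = 1 - p^{k},
\]
% so the privacy function P(k) increases with the path number k, while resource
% consumption also grows with k, which is the trade-off the abstract analyzes.
```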
Abstract: Privacy protection for big data linking is discussed here in relation to Ireland's Central Statistics Office (CSO) big data linking project titled the 'Structure of Earnings Survey - Administrative Data Project' (SESADP). The result of the project was the creation of datasets and statistical outputs for the years 2011 to 2014 to meet Eurostat's annual earnings statistics requirements and the Structure of Earnings Survey (SES) Regulation. Record linking across the Census and various public-sector datasets enabled the necessary information to be acquired to meet the Eurostat earnings requirements. However, the risk of statistical disclosure (i.e. identifying an individual in the dataset) is high unless privacy and confidentiality safeguards are built into the data matching process. This paper looks at the three methods of linking records on big datasets employed in the SESADP, and how to anonymise the data to protect the identity of individuals where potentially disclosive variables exist.
Funding: Supported by the National Key Basic Research Program of China (973 Program) under Grant No. 2009CB320505; the Fundamental Research Funds for the Central Universities under Grant No. 2011RC0508; the National Natural Science Foundation of China under Grant No. 61003282; the China Next Generation Internet Project "Research and Trial on Evolving Next Generation Network Intelligence Capability Enhancement"; and the National Science and Technology Major Project "Research about Architecture of Mobile Internet" under Grant No. 2011ZX03002-001-01.
Abstract: This paper aims to find a practical way of quantitatively representing the privacy of network data. A method of quantifying the privacy of network data anonymization, based on similarity distance and entropy, is proposed for the scenario in which multiple parties share network data through a Trusted Third Party (TTP). Simulations are then conducted using network data from different sources, and they show that the measurement indicators defined in this paper can adequately quantify the privacy of the network. In particular, they can indicate the effect of the adversary's auxiliary information on privacy.
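A toy version of an entropy-style indicator in the same spirit is shown below: privacy is scored by the normalised Shannon entropy of the adversary's posterior over candidate identities, so auxiliary information that sharpens the posterior visibly lowers the score. The candidate set and posteriors are hypothetical, and the paper's similarity-distance component is not reproduced.

```python
# Toy entropy-based privacy indicator for an anonymised network entity.
import numpy as np

def privacy_entropy(posterior):
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                                   # drop zero-probability hosts
    h = -np.sum(p * np.log2(p))                    # Shannon entropy in bits
    return h / np.log2(len(posterior))             # normalised to [0, 1]

no_aux = [0.25, 0.25, 0.25, 0.25]        # adversary cannot distinguish 4 hosts
with_aux = [0.85, 0.05, 0.05, 0.05]      # auxiliary info sharpens the guess
print(privacy_entropy(no_aux), privacy_entropy(with_aux))   # 1.0 vs ~0.42
```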
Funding: National Natural Science Foundation of China, Grant Number: 62062020; National Natural Science Foundation of China, Grant Number: 72161005; Technology Foundation of Guizhou Province, Grant Number: QianKeHeJiChu-ZK[2022]-General184.
Abstract: The incentive mechanism of federated learning has been a hot topic, but little research has been done on compensating for privacy loss. To this end, this study uses the Local SGD federated learning framework and gives a theoretical analysis under differential privacy protection. Based on the analysis, a multi-attribute reverse auction model is proposed for user selection and payment calculation for participation in federated learning. The model uses a mixture of economic and non-economic attributes when selecting users and is transformed into an optimisation problem to solve the user selection problem. In addition, a post-auction negotiation model is proposed that uses the Rubinstein bargaining model and optimisation equations to describe the negotiation process and theoretically demonstrates the improvement of social welfare. In the experiments, the authors find that their algorithm improves both the model accuracy and the F1-score relative to the comparison algorithms, to varying degrees.
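A hedged sketch of a multi-attribute reverse-auction selection rule consistent with the description above: each bid is scored from one economic attribute (the requested payment) and non-economic attributes (data quality and privacy cost), and users are chosen greedily by score per unit price under a payment budget. The attributes, weights, and greedy rule are illustrative assumptions, not the authors' optimisation model or the Rubinstein negotiation stage.

```python
# Sketch: multi-attribute reverse-auction scoring and greedy winner selection.
from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    price: float          # economic attribute: requested payment
    data_quality: float   # non-economic attribute in [0, 1]
    privacy_cost: float   # non-economic attribute in [0, 1] (lower is better)

def score(bid, w_quality=0.6, w_privacy=0.4):
    return w_quality * bid.data_quality + w_privacy * (1.0 - bid.privacy_cost)

def select(bids, budget):
    # Greedy score-per-price selection under a payment budget.
    chosen, spent = [], 0.0
    for b in sorted(bids, key=lambda b: score(b) / b.price, reverse=True):
        if spent + b.price <= budget:
            chosen.append(b.user)
            spent += b.price
    return chosen, spent

bids = [Bid("u1", 3.0, 0.9, 0.2), Bid("u2", 1.0, 0.5, 0.1), Bid("u3", 2.0, 0.4, 0.6)]
print(select(bids, budget=4.0))     # -> (['u2', 'u1'], 4.0)
```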
Funding: This work was supported by the Postgraduate Research Grants Scheme (PGRS) with Grant No. PGRS190360.
Abstract: Publishing big data and making it accessible to researchers is important for knowledge building, as it helps in applying highly efficient methods to plan, conduct, and assess scientific research. However, publishing and processing big data poses a privacy concern: protecting individuals' sensitive information while maintaining the usability of the published data. Several anonymization methods, such as slicing and merging, have been designed as solutions to the privacy concerns of publishing big data. However, the major drawback of merging and slicing is the random permutation procedure, which does not always guarantee complete protection against attribute or membership disclosure. Moreover, merging procedures may generate many fake tuples, leading to a loss of data utility and subsequent erroneous knowledge extraction. This study therefore proposes a slicing-based enhanced method for privacy-preserving big data publishing that maintains data utility. In particular, the proposed method distributes the data into horizontal and vertical partitions. Lower and upper protection levels are then used to identify unique and identical attribute values. The unique and identical attributes are swapped to ensure the published big data is protected from disclosure risks. The outcome of the experiments demonstrates that the proposed method can maintain data utility and provide stronger privacy preservation.
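The partition-and-swap procedure above can be sketched as follows, with hypothetical column groups, bucket size, and protection rule: rows are cut into horizontal buckets, attributes into vertical groups published as separate tables, and values that would be unique within a bucket are swapped so they no longer point to a single individual.

```python
# Simplified slicing-and-swapping sketch; thresholds and column groups are
# hypothetical, not the paper's exact protection levels.
import pandas as pd

def slice_and_swap(df, vertical_groups, bucket_size=2, swap_cols=("disease",)):
    out = df.copy()
    for start in range(0, len(out), bucket_size):             # horizontal slices
        bucket = out.iloc[start:start + bucket_size]
        for col in swap_cols:
            counts = bucket[col].value_counts()
            uniques = counts[counts == 1].index.tolist()      # uniquely identifying
            if len(uniques) >= 2:                             # swap them pairwise
                idx = bucket.index[bucket[col].isin(uniques)]
                out.loc[idx, col] = bucket.loc[idx, col].iloc[::-1].values
    # Publish each vertical group as its own table to break cross-column links.
    return [out[list(cols)] for cols in vertical_groups]

df = pd.DataFrame({"age": [34, 36, 51, 53], "zip": ["10001"] * 4,
                   "disease": ["flu", "hiv", "flu", "cancer"]})
tables = slice_and_swap(df, vertical_groups=[("age", "zip"), ("disease",)])
for t in tables:
    print(t)
```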