Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have recently modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in real sensor networks for information collection. The deployment problem of SDFWSN is modeled as a multi-objective optimization problem: the network lifetime, spatiotemporal coverage, detection rate, and false alarm rate of the SDFWSN are used as optimization objectives for node deployment. This paper proposes an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) to solve the SDFWSN deployment problem. First, to overcome the shortcomings of the DMOA algorithm, such as slow convergence and a tendency to become trapped in local optima, an encircling and hunting strategy is introduced into the original algorithm, yielding the EDMOA algorithm. The EDMOA algorithm is then extended to the multi-objective EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify its effectiveness, EMODMOA was tested on the CEC 2020 benchmark functions and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). A comparative analysis of the performance evaluation metrics and the optimized objective-function values shows that the proposed algorithm outperforms the other algorithms on the SDFWSN deployment problem. Simulations of diverse test cases were also performed to further demonstrate the superiority of the algorithm, and good results were obtained.
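Since the deployment is evaluated on several competing objectives (lifetime, spatiotemporal coverage, detection rate, false alarm rate), candidate node layouts are compared by Pareto dominance rather than by a single score. The following Python sketch is a minimal, hypothetical illustration of that comparison step, not the authors' EMODMOA implementation; it assumes each candidate deployment has already been scored and that all objectives are oriented so that larger values are better.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """Return True if solution a Pareto-dominates b (all objectives maximized)."""
    return np.all(a >= b) and np.any(a > b)

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows in an (n_solutions, n_objectives) array."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        for j in range(n):
            if i != j and keep[j] and dominates(scores[j], scores[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical raw objective values per candidate deployment:
    # lifetime, spatiotemporal coverage, detection rate, false alarm rate.
    objs = rng.random((50, 4))
    objs[:, 3] = -objs[:, 3]   # false alarm rate is minimized, so negate it
    front = pareto_front(objs)
    print(f"{front.size} non-dominated deployments out of {objs.shape[0]}")
```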
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous data processing demands of multi-flow integrated energy aggregation dispatch. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitiveness is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and it dynamically adjusts GAN learning rates and global aggregation weights based on energy consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
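The aggregation step described above can be pictured as a weighted federated average whose weights shrink for edges that violate their long-term energy constraints. The sketch below is illustrative only: the exponential down-weighting rule and the learning-rate adjustment are assumptions, not the paper's exact update equations.

```python
import numpy as np

def federated_aggregate(params: list[np.ndarray],
                        violations: np.ndarray,
                        sensitivity: float = 1.0) -> np.ndarray:
    """Weighted average of per-edge GAN parameter vectors.

    Edges with larger long-term energy-constraint violations receive smaller
    aggregation weights (an illustrative rule, not the paper's exact one).
    """
    raw = np.exp(-sensitivity * violations)      # down-weight violating edges
    weights = raw / raw.sum()
    stacked = np.stack(params)                   # (n_edges, n_params)
    return weights @ stacked

def adjust_learning_rate(base_lr: float, violation: float,
                         sensitivity: float = 0.5) -> float:
    """Shrink an edge's local GAN learning rate as its violation backlog grows."""
    return base_lr / (1.0 + sensitivity * violation)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    local_params = [rng.normal(size=8) for _ in range(4)]   # 4 edge nodes
    energy_violations = np.array([0.0, 0.2, 1.5, 0.1])      # hypothetical backlogs
    global_params = federated_aggregate(local_params, energy_violations)
    print("global parameters:", np.round(global_params, 3))
    print("lr at edge 2:", adjust_learning_rate(1e-3, energy_violations[2]))
```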
The Internet of Things (IoT) is a wireless network designed to perform specific tasks and plays a crucial role in various fields such as environmental monitoring, surveillance, and healthcare. To address the limitations imposed by inadequate resources, energy, and network scalability, this type of network relies heavily on data aggregation and clustering algorithms. Although various conventional studies have aimed to enhance the lifespan of a network through robust systems, they do not always provide optimal efficiency for real-time applications. This paper presents an approach based on state-of-the-art machine-learning methods. In this study, we employed a novel approach that combines an extended version of principal component analysis (PCA) and a reinforcement learning algorithm to achieve efficient clustering and data reduction. The primary objectives of this study are to enhance the service life of a network, reduce energy usage, and improve data aggregation efficiency. We evaluated the proposed methodology using data collected from sensors deployed in agricultural fields for crop monitoring. Our proposed approach (PQL) was compared with previous studies that utilized adaptive Q-learning (AQL) and regional energy-aware clustering (REAC). Our approach outperformed both in terms of network longevity and energy consumption and established a fault-tolerant network.
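A minimal sketch of the two ingredients named above, PCA-based data reduction and a tabular Q-learning update, is given below; the state/action encoding for cluster-head selection and the reward value are hypothetical, and the code is not the authors' PQL implementation.

```python
import numpy as np

def pca_reduce(readings: np.ndarray, k: int) -> np.ndarray:
    """Project sensor readings (n_samples, n_features) onto the top-k principal components."""
    centered = readings - readings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def q_update(q: np.ndarray, state: int, action: int, reward: float,
             next_state: int, alpha: float = 0.1, gamma: float = 0.9) -> None:
    """Standard tabular Q-learning update."""
    best_next = q[next_state].max()
    q[state, action] += alpha * (reward + gamma * best_next - q[state, action])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    raw = rng.normal(size=(200, 12))             # 200 readings, 12 correlated features
    compressed = pca_reduce(raw, k=3)            # data reduction before transmission
    print("compressed shape:", compressed.shape)

    # Hypothetical cluster-head selection: states = energy levels, actions = candidate heads.
    q_table = np.zeros((5, 4))
    q_update(q_table, state=2, action=1, reward=0.8, next_state=3)
    print("Q[2, 1] =", q_table[2, 1])
```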
With the popularisation of intelligent power systems, power devices vary in shape, number, and specification. As a result, power data exhibit distributional variability, and the model learning process cannot extract data features sufficiently, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To address the distributional variability of power data, a sliding window-based data adjustment method is developed for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparative ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in accuracy but also reduces the amount of parameter computation during feature matching and improves detection speed.
The accurate estimation of parameters is the premise for establishing a high-fidelity simulation model of a valve-controlled cylinder system. Bench test data are easily obtained, but it is challenging to emulate actual loads in research on parameter estimation of valve-controlled cylinder systems. Although the operating data of the control valve contain actual load information, acquiring that information remains challenging. This paper proposes a method that fuses bench test and operating data for parameter estimation to address these problems. The proposed method is based on Bayesian theory, and its core is a pooled fusion of prior information from bench test and operating data. Firstly, a system model is established, and the parameters in the model are analysed. Secondly, the bench and operating data of the system are collected. Then, the model parameters and weight coefficients are estimated using the data fusion method. Finally, the estimation results of the data fusion method, the Bayesian method, and the particle swarm optimisation (PSO) algorithm for the system model parameters are compared. The research shows that the weight coefficient represents the contribution of different prior information to the parameter estimation result. Parameter estimation based on the data fusion method outperforms both the Bayesian method and the PSO algorithm. Increasing load complexity leads to a decrease in model accuracy, highlighting the crucial role of the data fusion method in parameter estimation studies.
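As a worked illustration of pooling prior information with a weight coefficient, the sketch below combines a bench-test prior and an operating-data prior for a single parameter using a logarithmic (precision-weighted) opinion pool of Gaussians; this is a generic pooling formula under assumed priors, not the paper's exact estimator.

```python
import numpy as np

def pool_gaussian_priors(means, variances, weights):
    """Logarithmic opinion pool of independent Gaussian priors.

    Each source i contributes N(means[i], variances[i]) with weight weights[i];
    the pooled prior is again Gaussian, with precision sum_i w_i / var_i.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = np.asarray(weights, dtype=float)
    precisions = weights / variances
    pooled_var = 1.0 / precisions.sum()
    pooled_mean = pooled_var * (precisions * means).sum()
    return pooled_mean, pooled_var

if __name__ == "__main__":
    # Hypothetical priors for one valve parameter (e.g., a flow gain):
    # a bench-test prior and an operating-data prior, with weight coefficient w.
    w = 0.7
    mean, var = pool_gaussian_priors(means=[1.8, 2.1],
                                     variances=[0.04, 0.10],
                                     weights=[w, 1.0 - w])
    print(f"pooled prior: N({mean:.3f}, {var:.4f})")
```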
Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing both the overall and localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is used to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest-neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through a distance-weighted average to construct the 3D model. Finally, in an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Compared with a modeling method using the K-Medoids clustering algorithm alone, the new method significantly improves computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
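Two of the steps above, picking the cluster number with the elbow rule and adjusting matched coordinates with a distance-weighted average, can be sketched compactly; the curvature-based elbow pick and the inverse-distance weighting below are illustrative choices on synthetic numbers, not the study's implementation.

```python
import numpy as np

def elbow_k(inertias: np.ndarray) -> int:
    """Pick K at the point of maximum curvature (largest second difference)
    of a monotonically decreasing within-cluster distance curve (K = 1, 2, ...)."""
    second_diff = np.diff(inertias, 2)
    return int(np.argmax(second_diff)) + 2        # +2 maps back to the K index

def idw_adjust(point: np.ndarray, neighbors: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Inverse-distance-weighted average of matched neighbor coordinates."""
    d = np.linalg.norm(neighbors - point, axis=1)
    w = 1.0 / (d + eps)
    return (w[:, None] * neighbors).sum(axis=0) / w.sum()

if __name__ == "__main__":
    # Hypothetical within-cluster distance sums for K = 1..10.
    inertias = np.array([100, 60, 38, 26, 21, 18.5, 17, 16, 15.4, 15])
    print("elbow at K =", elbow_k(inertias))

    p = np.array([10.0, 5.0, 2.0])                    # laser-scan feature point
    matched = np.array([[10.1, 5.0, 2.0],             # matched photogrammetry points
                        [9.9, 5.1, 2.1],
                        [10.0, 4.9, 1.9]])
    print("adjusted coordinates:", np.round(idw_adjust(p, matched), 3))
```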
By integrating the traditional power grid with information and communication technology, the smart grid achieves dependable, efficient, and flexible grid data processing. The smart meters deployed on the user side of the smart grid collect the users' power usage data on a regular basis and upload it to the control center to complete smart grid data acquisition. The control center can evaluate the supply and demand of the power grid through the aggregated data from users and then dynamically adjust the power supply, price, and so on. However, since the grid data collected from users may disclose their electricity usage habits and daily activities, privacy has become a critical concern in smart grid data aggregation. Most existing privacy-preserving data collection schemes for the smart grid adopt homomorphic encryption or randomization techniques, which are either impractical because of their high computation overhead or unrealistic because they require a trusted third party.
In scenarios of real-time data collection in long-term deployed Wireless Sensor Networks (WSNs), low-latency data collection with a long network lifetime becomes a key issue. In this paper, we present a data aggregation scheduling scheme with guaranteed lifetime and efficient latency in WSNs. We first construct a Guaranteed Lifetime Minimum Radius Data Aggregation Tree (GLMRDAT), which is conducive to reducing scheduling latency while providing a guaranteed network lifetime, and then design a Greedy Scheduling algorithM (GSM) based on finding the maximum independent set in the conflict graph to schedule the transmission of nodes in the aggregation tree. Finally, simulations show that our proposed approach not only outperforms state-of-the-art solutions in terms of schedule latency, but also provides a longer, guaranteed network lifetime.
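The GSM idea, repeatedly scheduling a conflict-free set of transmissions per time slot, can be illustrated with a greedy maximal-independent-set pass over the conflict graph. The sketch below is a generic version of that idea on a toy graph, not the paper's GSM.

```python
from collections import defaultdict

def greedy_schedule(nodes, conflicts):
    """Assign each node a time slot so that no two conflicting nodes share a slot.

    Each round greedily builds a maximal independent set of the remaining
    conflict graph and schedules it in the next slot (an illustrative version
    of independent-set-based aggregation scheduling, not the paper's GSM).
    """
    adj = defaultdict(set)
    for u, v in conflicts:
        adj[u].add(v)
        adj[v].add(u)

    remaining = set(nodes)
    schedule = {}
    slot = 0
    while remaining:
        # Prefer nodes with many remaining conflicts so they are cleared early.
        chosen, blocked = [], set()
        for node in sorted(remaining, key=lambda n: -len(adj[n] & remaining)):
            if node not in blocked:
                chosen.append(node)
                blocked.add(node)
                blocked |= adj[node]
        for node in chosen:
            schedule[node] = slot
        remaining -= set(chosen)
        slot += 1
    return schedule

if __name__ == "__main__":
    # Hypothetical conflict graph: edges join transmissions that would collide.
    nodes = ["a", "b", "c", "d", "e"]
    conflicts = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "e")]
    print(greedy_schedule(nodes, conflicts))
```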
Wireless sensor networks (WSNs) consist of a great number of sensor nodes with limited power, computation, storage, sensing, and communication capabilities. Data aggregation is a very important technique designed to substantially reduce the communication overhead and energy expenditure of sensor nodes during data collection in a WSN. However, privacy preservation is especially challenging in data aggregation, where the aggregators need to perform aggregation operations on the sensing data they receive. We present a state-of-the-art survey of privacy-preserving data aggregation in WSNs. First, we classify the existing privacy-preserving data aggregation schemes into different categories according to the core privacy-preserving techniques used in each scheme. We then compare and contrast the different algorithms on the basis of performance measures such as privacy protection ability, communication consumption, power consumption, and data accuracy. Furthermore, based on the existing work, we discuss a number of open issues that may interest researchers in future work.
The Internet of Things (IoT) has profoundly impacted our lives and has greatly revolutionized our lifestyle. The terminal devices in an IoT data aggregation application sense real-time data for the remote cloud server to achieve intelligent decisions. However, the high frequency of collecting user data raises concerns about personal privacy. In recent years, many privacy-preserving data aggregation schemes have been proposed. Unfortunately, most existing schemes cannot simultaneously support arbitrary aggregation functions, dynamic user group management, and fault tolerance. In this paper, we propose an efficient and privacy-preserving data aggregation scheme. In the scheme, we design a lightweight encryption method to protect user privacy by using a ring topology and a random location sequence. On this basis, the proposed scheme supports not only arbitrary aggregation functions but also flexible dynamic user management. Furthermore, the scheme achieves fault tolerance by utilizing a future data buffering mechanism. Security analysis reveals that the scheme achieves the desired security properties, and experimental evaluation results show the scheme's efficiency in terms of computational and communication overhead.
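One common way to realize lightweight, aggregation-friendly protection in a ring of users is pairwise masking, where blinding terms shared between ring neighbors cancel in the sum. The sketch below shows that generic ring-masking idea only; it is not the paper's encryption method, and the modulus and mask generation are assumptions.

```python
import secrets

MOD = 2 ** 32          # arithmetic is done modulo a fixed word size

def ring_masks(n: int) -> list[int]:
    """One shared random mask per ring edge (node i shares masks[i] with node i+1)."""
    return [secrets.randbelow(MOD) for _ in range(n)]

def mask_reading(i: int, reading: int, masks: list[int]) -> int:
    """Node i blinds its reading with the masks shared with its ring neighbors."""
    n = len(masks)
    return (reading + masks[i] - masks[(i - 1) % n]) % MOD

if __name__ == "__main__":
    readings = [12, 7, 31, 25, 9]                 # hypothetical per-user values
    masks = ring_masks(len(readings))
    blinded = [mask_reading(i, x, masks) for i, x in enumerate(readings)]
    # The aggregator sees only blinded values; the pairwise masks cancel in the sum.
    aggregate = sum(blinded) % MOD
    assert aggregate == sum(readings) % MOD
    print("individual blinded uploads:", blinded)
    print("recovered aggregate:", aggregate)
```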
Sensor nodes in a wireless sensor network (WSN) are typically powered by batteries, so their energy is constrained. Our design goal is to efficiently utilize the energy of each sensor node to extend its lifetime and thus prolong the lifetime of the whole WSN. In this paper, we propose a path-based data aggregation scheme (PBDAS) for grid-based wireless sensor networks. In order to extend the lifetime of a WSN, we construct a grid infrastructure by partitioning the whole sensor field into a grid of cells. Each cell has a head responsible for aggregating its own data with the data sensed by the other nodes in the same cell and then transmitting the result. In order to efficiently and rapidly transmit the data to the base station (BS), we link the cell heads to form a chain. Each cell head on the chain takes turns becoming the chain leader responsible for transmitting data to the BS. Aggregated data moves from head to head along the chain, and finally the chain leader transmits it to the BS. In PBDAS, only the cell heads need to transmit data toward the BS; therefore, the number of data transmissions to the BS decreases substantially. Besides, the cell heads and the chain leader are designated in turn according to their energy levels so that the energy depletion of nodes is evenly distributed. Simulation results show that the proposed PBDAS extends the lifetime of sensor nodes and thus makes the lifetime of the whole network longer.
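A minimal sketch of a PBDAS-style round is shown below: cell heads pass an aggregate along the chain and the head with the most residual energy acts as chain leader for the transmission to the base station. The energy costs and the sum aggregation function are illustrative assumptions, not the paper's parameters.

```python
from dataclasses import dataclass

@dataclass
class CellHead:
    name: str
    energy: float          # residual energy (hypothetical units)
    reading: float         # aggregated value for this cell

def chain_round(chain: list[CellHead], tx_cost: float = 0.05,
                leader_tx_cost: float = 0.2) -> tuple[str, float]:
    """One round: the head with the most residual energy leads, data is
    aggregated hop by hop along the chain, and the leader sends to the BS."""
    leader = max(chain, key=lambda h: h.energy)
    aggregate = 0.0
    for head in chain:
        aggregate += head.reading          # in-network aggregation (sum as an example)
        if head is not leader:
            head.energy -= tx_cost         # forwarding along the chain
    leader.energy -= leader_tx_cost        # long-range transmission to the BS
    return leader.name, aggregate

if __name__ == "__main__":
    heads = [CellHead("c1", 1.00, 21.5), CellHead("c2", 0.80, 19.8),
             CellHead("c3", 0.95, 20.7)]
    for rnd in range(3):
        leader, value = chain_round(heads)
        print(f"round {rnd}: leader={leader}, aggregate={value:.1f}, "
              f"energies={[round(h.energy, 2) for h in heads]}")
```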
Data aggregation technology reduces the traffic overhead of a wireless sensor network and extends the effective working time of the network, yet continued operation of wireless sensor networks increases the probability of aggregation nodes being captured and of aggregated data being tampered with, which seriously affects the security performance of the network. To address these network security issues, a stateful public-key-based secure data aggregation model (SDAM) is proposed for wireless sensor networks (WSNs), which employs a new stateful public key encryption scheme to provide efficient end-to-end security. Moreover, the secure aggregation model imposes no restriction on the properties of the aggregation function, realizing low cost and a high security level at the same time.
With the popularity of sensor-rich mobile devices, mobile crowdsensing (MCS) has emerged as an effective method for data collection and processing. However, the MCS platform usually needs workers' precise locations for optimal task execution and collects sensing data from workers, which raises severe concerns about privacy leakage. To preserve workers' locations and sensing data from the untrusted MCS platform, a differentially private data aggregation method based on worker partition and location obfuscation (DP-DAWL) is proposed in this paper. DP-DAWL first uses an improved K-means algorithm to divide workers into groups and assigns a different privacy budget to each group according to its size (the number of workers). Then each worker's location is obfuscated and his/her sensing data is perturbed by adding Laplace noise before uploading to the platform. In the data aggregation stage, DP-DAWL adopts an improved Kalman filter algorithm to filter out the added noise (including both the noise added to the sensing data and the system noise in the sensing process). Through optimal estimation of the noisy aggregated sensing data, the platform can finally gain better utility from the aggregated data while preserving workers' privacy. Extensive experiments on synthetic datasets demonstrate the effectiveness of the proposed method.
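The perturbation and filtering stages described above can be sketched as follows: each group receives a privacy budget that grows with its size, workers add Laplace noise calibrated to that budget, and the platform smooths the noisy aggregates with a scalar Kalman filter. The budget rule, noise sensitivity, and filter constants below are assumptions, not the paper's exact DP-DAWL settings.

```python
import numpy as np

def laplace_perturb(values: np.ndarray, epsilon: float, sensitivity: float = 1.0,
                    rng=None) -> np.ndarray:
    """Add Laplace(sensitivity / epsilon) noise to each worker's sensing value."""
    rng = rng or np.random.default_rng()
    return values + rng.laplace(0.0, sensitivity / epsilon, size=values.shape)

def group_budget(group_size: int, total_epsilon: float = 1.0, max_size: int = 50) -> float:
    """Illustrative rule: larger groups get a larger share of the privacy budget."""
    return total_epsilon * min(group_size, max_size) / max_size

def kalman_smooth(series: np.ndarray, q: float = 1e-3, r: float = 0.5) -> np.ndarray:
    """Scalar Kalman filter: state = true aggregate, observation = noisy aggregate."""
    x, p = series[0], 1.0
    out = np.empty_like(series)
    for t, z in enumerate(series):
        p += q                             # predict
        k = p / (p + r)                    # Kalman gain
        x += k * (z - x)                   # update with the noisy observation
        p *= (1.0 - k)
        out[t] = x
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_mean = 25.0                                   # e.g., temperature (hypothetical)
    group = rng.normal(true_mean, 0.5, size=30)        # one worker group's readings
    eps = group_budget(group.size)

    aggregates = []
    for _ in range(20):                                # 20 aggregation rounds
        noisy = laplace_perturb(group, eps, rng=rng)
        aggregates.append(noisy.mean())
    smoothed = kalman_smooth(np.array(aggregates))
    print("epsilon per worker:", round(eps, 3))
    print("raw last aggregate:", round(aggregates[-1], 2),
          "| smoothed:", round(float(smoothed[-1]), 2))
```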
In order to avoid internal attacks during data aggregation in wireless sensor networks, a grid-based network architecture fit for monitoring is designed, and the algorithms for network division, initialization, and grid tree construction are presented. The characteristics of on-off attacks are first studied, and monitoring mechanisms are then designed for sensor nodes. A Fast Detection and Slow Recovery (FDSR) algorithm is proposed to prevent on-off attacks by observing the behaviors of the nodes and computing reputations. A recovery mechanism is designed to isolate malicious nodes by identifying the new roles of nodes and updating the grid tree. In the experiments, several on-off attack scenarios are simulated and the results are compared with other approaches. The experimental results indicate that our approach can detect malicious nodes effectively and guarantee secure data aggregation with acceptable energy consumption.
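The fast-detection / slow-recovery idea can be captured by an asymmetric reputation update: misbehavior cuts reputation multiplicatively, while good behavior restores it only additively and slowly. The constants and attack pattern in the sketch below are hypothetical, not the paper's FDSR parameters.

```python
def update_reputation(rep: float, well_behaved: bool,
                      fast_drop: float = 0.5, slow_gain: float = 0.02) -> float:
    """Fast-detection / slow-recovery reputation update (illustrative constants).

    Misbehavior cuts the reputation sharply; good behavior restores it slowly,
    so an on-off attacker cannot quickly regain trust between attack bursts.
    """
    if well_behaved:
        return min(1.0, rep + slow_gain)
    return max(0.0, rep * fast_drop)

if __name__ == "__main__":
    rep, threshold = 1.0, 0.4
    # Hypothetical on-off attack pattern: 5 good rounds, 2 bad, then good again.
    behavior = [True] * 5 + [False] * 2 + [True] * 10
    for rnd, good in enumerate(behavior):
        rep = update_reputation(rep, good)
        status = "isolated" if rep < threshold else "trusted"
        print(f"round {rnd:2d}: reputation={rep:.2f} ({status})")
```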
As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique and provides its users with a series of fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, the data aggregation from the end side to the designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. Firstly, to make sure all the end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. With a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to show the strength of our algorithm.
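A standard way to let each end device learn its update rate is a UCB-style bandit over a small set of candidate rates. The sketch below uses plain UCB1 with an assumed channel-contention model and normalized rewards; it illustrates the MAB component only and is not the paper's algorithm.

```python
import math
import random

def ucb1_select(counts, rewards):
    """Pick the arm maximizing the UCB1 index; try each arm once first."""
    total = sum(counts)
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(2.0 * math.log(total) / counts[a]))

if __name__ == "__main__":
    random.seed(4)
    # Hypothetical candidate update rates (updates per second) for one end device,
    # and assumed success probabilities of each rate on the contended channel.
    rates = [0.5, 1.0, 2.0, 4.0]
    success_prob = [0.9, 0.8, 0.5, 0.2]

    counts = [0] * len(rates)
    rewards = [0.0] * len(rates)
    for _ in range(2000):
        arm = ucb1_select(counts, rewards)
        # Reward = normalized delivered data if the transmission survives contention.
        reward = rates[arm] / max(rates) if random.random() < success_prob[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    best = max(range(len(rates)), key=lambda a: rewards[a] / counts[a])
    print("estimated best update rate:", rates[best], "Hz")
    print("pulls per rate:", counts)
```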
In monitoring systems, multiple sensor nodes can detect a single target of interest simultaneously, and the data collected are usually highly correlated and redundant. If each node sends its data to the base station, energy will be wasted and the network energy will be depleted quickly. Data aggregation is an important paradigm for compressing data so that the energy of the network is spent efficiently. In this paper, a novel data aggregation algorithm called Redundancy Elimination for Accurate Data Aggregation (READA) is proposed. By exploiting the range of spatial correlations of data in the network, READA applies a grouping and compression mechanism to remove duplicate data in the aggregated set of data to be sent to the base station without largely losing the accuracy of the final aggregated data. One peculiarity of READA is that it uses a prediction model derived from cached values to confirm whether any outlier is actually an event that has occurred. From the various simulations conducted, it was observed that READA highly preserves the accuracy of the data while taking into consideration the energy dissipated in aggregating the data.
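A toy version of READA's two mechanisms, removing near-duplicate readings and confirming outliers against a prediction built from cached values, is sketched below; the tolerance, the mean-based prediction, and the threshold are assumptions, not the paper's model.

```python
import statistics

def deduplicate(readings, tol=0.1):
    """Group near-identical readings and keep one representative per group."""
    kept = []
    for value in sorted(readings):
        if not kept or abs(value - kept[-1]) > tol:
            kept.append(value)
    return kept

def confirm_event(candidate, cache, threshold=3.0):
    """Confirm an outlier as a real event only if it deviates strongly from a
    prediction built from cached values (here: the cached mean, an assumption)."""
    predicted = statistics.fmean(cache)
    spread = statistics.pstdev(cache) or 1e-9
    return abs(candidate - predicted) > threshold * spread

if __name__ == "__main__":
    # Hypothetical temperature readings from nodes observing the same target.
    readings = [24.9, 25.0, 25.02, 25.1, 24.95, 31.8]
    cache = [24.8, 25.1, 25.0, 24.9, 25.05]        # recent aggregated values

    unique = deduplicate(readings)
    print("after redundancy elimination:", unique)
    for value in unique:
        if confirm_event(value, cache):
            print(f"{value} confirmed as an event and forwarded")
```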
In Wireless Sensor Networks (WSNs), sensor nodes are deployed densely. They have limited processing capability and low power resources; thus, energy is one of the most important constraints in these networks. In some sensor network applications, sensor nodes sense data from the environment periodically and transmit these data to the sink node. In order to decrease energy consumption and thus increase the network's lifetime, the volume of transmitted data should be decreased. One suggested solution is aggregation. In aggregation mechanisms, nodes aggregate received data and send the aggregated result instead of the raw data to the sink, so the volume of transmitted data is decreased. Aggregation algorithms construct an aggregation tree and transmit data to the sink based on this tree. In this paper, we propose an automaton-based algorithm to construct the aggregation tree using energy and distance parameters. An automaton is a decision-making machine that is able to learn. Since the network's topology is dynamic, the algorithm must reconstruct the aggregation tree periodically. To make nodes aware of the topology so that they can select optimal paths, routing packets would have to be flooded across the entire network, which leads to high energy consumption. By using an automaton machine that interacts with the environment, we solve this problem through automaton learning. With this strategy, the aggregation tree is reconstructed locally, which results in decreased energy consumption. Simulation results show that the proposed algorithm has better performance in terms of energy efficiency, which increases the network lifetime and supports better coverage.
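Learning automata of the kind referred to above typically maintain a probability vector over actions and reinforce choices that are rewarded by the environment. The sketch below uses a standard linear reward-inaction (L_RI) update for choosing a parent node by an assumed energy/distance quality score; it is illustrative and not the paper's tree construction algorithm.

```python
import random

def lri_update(probs: list[float], chosen: int, rewarded: bool,
               a: float = 0.1) -> None:
    """Linear reward-inaction (L_RI) update of an automaton's action probabilities.

    On reward, shift probability mass toward the chosen action; on penalty,
    leave the probabilities unchanged (the 'inaction' part).
    """
    if not rewarded:
        return
    for i in range(len(probs)):
        if i == chosen:
            probs[i] += a * (1.0 - probs[i])
        else:
            probs[i] -= a * probs[i]

if __name__ == "__main__":
    random.seed(5)
    # Hypothetical candidate parents for one node, scored by a combined
    # energy/distance quality in [0, 1]; the automaton must learn the best one.
    quality = [0.3, 0.85, 0.5]
    probs = [1.0 / len(quality)] * len(quality)
    for _ in range(500):
        parent = random.choices(range(len(quality)), weights=probs)[0]
        rewarded = random.random() < quality[parent]
        lri_update(probs, parent, rewarded)
    print("learned parent-selection probabilities:",
          [round(p, 3) for p in probs])
```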
Wireless Sensor Networks (WSNs) typically use in-network processing to reduce the communication overhead. Because data items sourced at different nodes are fused into a single one during in-network processing, the sanctity of the aggregated data needs to be ensured. In particular, the data integrity of the aggregated result is critical, as any malicious update to it can jeopardize not one, but many sensor readings. In this paper, we analyse three different approaches to providing integrity support for secure data aggregation (SDA) in WSNs. The first is the traditional MAC, in which each leaf node and intermediate node shares a symmetric key with its parent. The second is the aggregate MAC (AMAC), in which the base station shares a unique key with every other sensor node. The third is the homomorphic MAC (Homo MAC), which is a purely symmetric-key-based approach. These approaches exhibit diverse trade-offs in resource consumption and security assumptions. In addition, we propose a probabilistic and improved variant of the homomorphic MAC that improves the security strength of secure data aggregation in WSNs. We carry out simulations in the TinyOS environment to experimentally evaluate the impact of each of these approaches on resource consumption in WSNs.
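For the first (traditional per-hop MAC) approach, each child shares a symmetric key with its parent and the parent verifies every child's tag before aggregating. The sketch below shows that idea with the standard HMAC-SHA256 primitive; the key handling and message encoding are simplified assumptions rather than a sensor-grade implementation.

```python
import hmac
import hashlib

def make_tag(key: bytes, node_id: bytes, value: int) -> bytes:
    """Leaf computes a MAC over its reading with the key shared with its parent."""
    return hmac.new(key, node_id + value.to_bytes(4, "big"), hashlib.sha256).digest()

def verify(key: bytes, node_id: bytes, value: int, tag: bytes) -> bool:
    """Parent recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(make_tag(key, node_id, value), tag)

if __name__ == "__main__":
    # Hypothetical symmetric keys shared pairwise between each leaf and the parent.
    keys = {b"leaf-1": b"k1-secret", b"leaf-2": b"k2-secret"}
    uploads = [(b"leaf-1", 21, make_tag(keys[b"leaf-1"], b"leaf-1", 21)),
               (b"leaf-2", 25, make_tag(keys[b"leaf-2"], b"leaf-2", 25))]

    # The parent only aggregates readings whose MACs verify.
    aggregate = sum(v for nid, v, t in uploads if verify(keys[nid], nid, v, t))
    print("aggregate of authenticated readings:", aggregate)
```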
This paper describes an empirical study aiming at identifying the main differences between the logistic regression models and collision data aggregation methods that are commonly applied in the road safety literature for modeling collision severity. In particular, the research compares three popular multilevel logistic models (i.e., sequential binary logit models, ordered logit models, and multinomial logit models) as well as three data aggregation methods (i.e., occupant-based, vehicle-based, and collision-based). Six years of collision data (2001-2006) from 31 highway routes across the province of Ontario, Canada were used for this analysis. It was found that a multilevel multinomial logit model fits the data better than the other two models, while the results obtained from occupant-based data are more reliable than those from vehicle- and collision-based data. More importantly, while the factors found to be significant are generally consistent across different models and data aggregation methods, the effect size of each factor differs substantially, which could have significant implications for evaluating the effects of different safety-related policies and countermeasures.