Continuous response to range queries on streaming data provides useful information for many practical applications, but also carries the risk of privacy disclosure. Existing research on differentially private streaming data publication mostly pays close attention to boosting query accuracy, but pays less attention to query efficiency and ignores the effect of timeliness on data weight. In this paper, we propose an effective algorithm for differentially private streaming data publication under an exponential decay mode. First, by introducing the Fenwick tree to divide and reorganize data items in the stream, we achieve constant time complexity for inserting a new item and getting the prefix sum, and time complexity linear in the number of data items for building the tree. After that, we use the matrix mechanism to handle correlated queries and reduce the global sensitivity. In addition, we choose a proper diagonal matrix to further improve range query accuracy. Finally, to account for exponential decay, every data item is weighted by the decay factor. By putting the Fenwick tree and matrix optimization together, we present a complete algorithm for differentially private real-time streaming data publication. Experiments compare the proposed algorithm with similar algorithms for streaming data release under exponential decay. The results show that the proposed algorithm effectively improves query efficiency while ensuring query quality.
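As a point of reference, here is a minimal sketch of the Fenwick (binary indexed) tree primitive the algorithm builds on, with point updates and prefix sums in O(log n); the paper's constant-time stream append, privacy noise, and decay weighting are not reproduced here.

```python
class FenwickTree:
    """Binary indexed tree: point updates and prefix sums in O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (n + 1)  # 1-indexed internal array

    def add(self, i, delta):
        """Add delta to item i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node covering i

    def prefix_sum(self, i):
        """Sum of items 1..i."""
        s = 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # drop the lowest set bit
        return s

# a range query [l, r] is a difference of two prefix sums
ft = FenwickTree(8)
for pos, val in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    ft.add(pos, val)
print(ft.prefix_sum(5) - ft.prefix_sum(2))  # sum of items 3..5 -> 10.0
```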
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proved helpful for the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine (SVM)-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs a bidirectional long short-term memory (BiLSTM) network for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
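A hedged sketch of the pipeline stages named above — SVM-SMOTE oversampling followed by a BiLSTM classifier trained with RMSProp — using imbalanced-learn and Keras; the synthetic dataset, layer sizes, and training budget are placeholders, not the paper's settings.

```python
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense
from tensorflow.keras.optimizers import RMSprop

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # placeholder feature matrix
y = (rng.random(1000) < 0.1).astype(int)   # imbalanced labels (~10% anomalies)

X_res, y_res = SVMSMOTE(random_state=0).fit_resample(X, y)  # balance classes
X_seq = X_res[:, :, None]                  # treat 20 features as 20 timesteps x 1

model = Sequential([
    Bidirectional(LSTM(32), input_shape=(20, 1)),
    Dense(1, activation="sigmoid"),        # binary anomaly score
])
model.compile(optimizer=RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=3, batch_size=64, verbose=0)
```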
The co-frequency vibration fault is one of the common faults in the operation of rotating equipment, and realizing real-time diagnosis of the co-frequency vibration fault is of great significance for monitoring the health state of the equipment and carrying out vibration suppression. In engineering scenarios, co-frequency vibration faults manifest at the rotational frequency and are difficult to identify, and existing intelligent methods demand more hardware resources and are excessively time-consuming. Therefore, a lightweight convolutional neural network (LW-CNN) algorithm is proposed in this paper to achieve real-time fault diagnosis. The critical parameters of the sliding-window data augmentation method are discussed and verified with simulated and experimental signals. Based on the LW-CNN and data augmentation, real-time intelligent diagnosis of co-frequency faults is realized. Moreover, a real-time detection method is proposed that covers the whole chain from data acquisition to fault diagnosis. Experiments verify that the LW-CNN and sliding-window methods achieve high accuracy and real-time performance.
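The sliding-window augmentation idea can be illustrated in a few lines: overlapping windows cut from one long signal multiply the number of training samples. The window and stride values below are illustrative.

```python
import numpy as np

def sliding_windows(signal, window, stride):
    """Slice a 1-D vibration signal into overlapping windows.

    Overlapping windows multiply the number of training samples,
    which is the data augmentation idea described in the abstract.
    """
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

sig = np.sin(np.linspace(0, 100, 10_000))        # stand-in for a measured signal
X = sliding_windows(sig, window=1024, stride=256)
print(X.shape)                                    # (36, 1024) training samples
```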
Data stream clustering is integral to contemporary big data applications. However, addressing the ongoing influx of data streams efficiently and accurately remains a primary challenge in current research. This paper aims to elevate the efficiency and precision of data stream clustering. Leveraging the TEDA (Typicality and Eccentricity Data Analysis) algorithm as a foundation, we introduce improvements by integrating a nearest-neighbor search algorithm to enhance both the efficiency and accuracy of the algorithm. The original TEDA algorithm, grounded in the concept of typicality and eccentricity data analytics, is an evolving and recursive method that requires no prior knowledge. While the algorithm autonomously creates and merges clusters as new data arrive, its efficiency is significantly hindered by the need to traverse all existing clusters whenever further data arrive. This work presents the NS-TEDA (Neighbor Search Based Typicality and Eccentricity Data Analysis) algorithm, which incorporates a KD-Tree (K-Dimensional Tree) integrated with a Scapegoat Tree. This ensures that, upon arrival, new data points interact solely with clusters in very close proximity, which significantly enhances efficiency while preventing a single data point from joining too many clusters and mitigating, to some extent, the merging of clusters with high overlap. We apply the NS-TEDA algorithm to several well-known datasets, comparing its performance with other data stream clustering algorithms and the original TEDA algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy, and its runtime exhibits almost linear dependence on the volume of data, making it more suitable for large-scale data stream analysis.
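For concreteness, here is a minimal recursive eccentricity/typicality tracker following the published TEDA recursions; this is the unmodified base algorithm, not the NS-TEDA variant.

```python
import numpy as np

class TEDA:
    """Recursive eccentricity/typicality per the standard TEDA recursions."""

    def __init__(self):
        self.k = 0          # number of samples seen
        self.mean = None
        self.var = 0.0      # mean squared distance to the mean

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mean = x.copy()
            return 1.0, 0.0               # first point: eccentricity 1, typicality 0
        self.mean = (self.k - 1) / self.k * self.mean + x / self.k
        self.var = (self.k - 1) / self.k * self.var + \
                   np.sum((x - self.mean) ** 2) / (self.k - 1)
        ecc = 1.0 / self.k + np.sum((self.mean - x) ** 2) / (self.k * self.var)
        return ecc, 1.0 - ecc             # (eccentricity, typicality)

teda = TEDA()
for point in [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]:
    ecc, typ = teda.update(point)
    print(f"ecc={ecc:.3f} typ={typ:.3f}")  # the outlier (5,5) gets high eccentricity
```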
In petroleum engineering, real-time lithology identification is very important for reservoir evaluation, drilling decisions, and petroleum geological exploration. A lithology identification method while drilling, based on machine learning and mud logging data, is studied in this paper. This method can effectively utilize downhole parameters collected in real time during drilling to identify lithology and provide a reference for the optimization of drilling parameters. Given the imbalance of lithology samples, the synthetic minority over-sampling technique (SMOTE) and Tomek links were used to balance the sample numbers of five lithologies. Meanwhile, this paper introduces the Tent map, random opposition-based learning, and dynamic perceived probability into the original crow search algorithm (CSA), establishing an improved crow search algorithm (ICSA). ICSA is then used to optimize the hyperparameter combinations of random forest (RF), extremely randomized trees (ET), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM) models. In addition, this study combines the recognition advantages of the four models: the accuracy of lithology identification by the weighted average probability model reaches 0.877. This study realizes a high-precision real-time lithology identification method that can provide a lithology reference for the drilling process.
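A hedged sketch of two stages named above — SMOTE + Tomek-link rebalancing and a weighted average of class probabilities across models — with RF and ET standing in for the full four-model ensemble, and made-up weights in place of the ICSA-optimized ones.

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

# synthetic imbalanced stand-in for the lithology samples
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=0)
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X, y)  # rebalance

models = [RandomForestClassifier(random_state=0).fit(X_bal, y_bal),
          ExtraTreesClassifier(random_state=0).fit(X_bal, y_bal)]
weights = np.array([0.6, 0.4])          # per-model weights (placeholders)

proba = sum(w * m.predict_proba(X) for w, m in zip(weights, models))
y_pred = proba.argmax(axis=1)           # weighted average probability decision
```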
Predicting the mechanical behaviors of a structure and perceiving anomalies in advance are essential to ensuring the safe operation of infrastructure in the long run. Existing studies not only consider influencing factors incompletely but also work at a coarse prediction time scale. Therefore, this study develops a real-time prediction model that couples the spatio-temporal correlation with external load through an autoencoder network (ATENet) based on structural health monitoring (SHM) data. An autoencoder mechanism is used to acquire a high-level representation of raw monitoring data at different spatial positions, and a recurrent neural network is applied to capture the temporal correlation in the time series. The obtained temporal-spatial information is then coupled with dynamic loads through a fully connected layer to predict structural performance in the next 12 h. As a case study, the proposed model is formulated on SHM data collected from a representative underwater shield tunnel. A robustness study is carried out to verify the reliability and prediction capability of the proposed model. Finally, the ATENet model is compared with some typical models, and the results indicate that it has the best performance. The ATENet model is of great value for predicting the real-time evolution trend of tunnel structures.
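A speculative Keras sketch of the described architecture — per-position encoding, an RNN over time, and a fully connected fusion with external load — with invented layer sizes and an hourly discretization of the 12 h horizon.

```python
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense, LSTM, TimeDistributed, Concatenate

sensors = Input(shape=(24, 16))            # 24 past steps x 16 sensor channels
load = Input(shape=(4,))                   # external (dynamic) load features

encoded = TimeDistributed(Dense(8, activation="relu"))(sensors)  # per-step encoding
temporal = LSTM(32)(encoded)               # temporal correlation across the window
fused = Concatenate()([temporal, load])    # couple with external load
out = Dense(12)(fused)                     # next 12 hourly predictions

atenet = Model([sensors, load], out)
atenet.compile(optimizer="adam", loss="mse")
atenet.summary()
```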
Due to the significant correlation and redundancy of multimedia data, conventional block cipher cryptosystems are not efficient in encrypting it. Stream ciphers based on Cellular Automata (CA) can provide a more effective solution. CA have recently gained recognition as a robust cryptographic primitive, being used as pseudorandom number generators in hash functions, block ciphers, and stream ciphers. CA can perform parallel transformations, resulting in high throughput performance, and they exhibit a natural tendency to resist fault attacks. A few stream cipher schemes based on CA have been proposed in the literature; however, their encryption/decryption throughput is relatively low, which makes them unsuitable for multimedia communication. Trivium and Grain are efficient stream ciphers that were selected as finalists in the eSTREAM project, but they have proven to be vulnerable to differential fault attacks. This work introduces a novel and scalable stream cipher named CeTrivium, whose design is based on CA. CeTrivium is a 5-neighborhood CA-based stream cipher inspired by the designs of Trivium and Grain. It is constructed from three building blocks: the Trivium (Tr) block, the Nonlinear-CA (NCA) block, and the Nonlinear Mixing (NM) block. The NCA block is a 64-bit nonlinear hybrid 5-neighborhood CA, while the Tr block has the same structure as the Trivium stream cipher. The NM block is a nonlinear, balanced, and reversible Boolean function that mixes the outputs of the Tr and NCA blocks to produce a keystream. Cryptanalysis of CeTrivium indicates that it can resist various attacks, including correlation, algebraic, fault, cube, Meier and Staffelbach, and side-channel attacks. Moreover, the scheme is evaluated using histogram and spectrogram analysis, as well as several different measurements, including the correlation coefficient, number of samples change rate, signal-to-noise ratio, entropy, and peak signal-to-noise ratio. The performance of CeTrivium is evaluated and compared with other state-of-the-art techniques, which it outperforms in terms of encryption throughput while maintaining high security. CeTrivium has high encryption and decryption speeds, is scalable, and resists various attacks, making it suitable for multimedia communication.
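To illustrate the CA primitive underlying CeTrivium, here is a toy elementary (3-neighborhood) CA keystream generator; the actual cipher's 64-bit nonlinear hybrid 5-neighborhood CA, Trivium block, and mixing function are not reproduced.

```python
def ca_keystream(seed_bits, rule=30, nbits=16):
    """Evolve a circular elementary CA; tap the center cell each step."""
    state = list(seed_bits)
    n = len(state)
    table = [(rule >> i) & 1 for i in range(8)]   # rule lookup table
    out = []
    for _ in range(nbits):
        out.append(state[n // 2])                  # output bit = center cell
        state = [table[(state[(i - 1) % n] << 2)
                       | (state[i] << 1)
                       | state[(i + 1) % n]]
                 for i in range(n)]                # synchronous parallel update
    return out

print(ca_keystream([0] * 15 + [1] * 16 + [0] * 16))  # 16 keystream bits
```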
This paper examines how cyber security is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends that they are not the same. The concept of cyber security is explored, which goes beyond protecting information resources to include a wider variety of assets, including people [1]. Protecting information assets is the main goal of traditional information security, with consideration of the human element and how people fit into the security process. Cyber security, on the other hand, adds a new level of complexity, as people might unintentionally contribute to or become targets of cyberattacks. This aspect raises moral questions, since it is becoming more widely accepted that society has a duty to protect weaker members of society, including children [1]. The study emphasizes how important cyber security is on a larger scale, with many countries creating plans and laws to counteract cyberattacks. Nevertheless, many of these sources neglect to define the differences or the relationship between information security and cyber security [1]. This paper focuses on differentiating between cyber security and information security on a larger scale. The study also highlights other areas of cyber security, which include defending people, social norms, and vital infrastructure from threats that arise online, in addition to protecting information and technology. It contends that ethical issues and the human factor are becoming more and more important in protecting assets in the digital age, and that cyber security represents a paradigm shift in this regard [1].
Real-time health data monitoring is pivotal for bolstering the safety, intelligence, and efficiency of road services within the Internet of Health Things (IoHT) framework. Yet delays in data retrieval can markedly hinder the efficacy of big data awareness detection systems. To combat this, we advocate a collaborative caching approach involving edge devices and cloud networks, devised to streamline the data retrieval path and diminish network strain. Crafting an adept cache processing scheme poses its own set of challenges, especially given the transient nature of monitoring data and the imperative for swift data transmission, intertwined with resource allocation tactics. This paper unveils a novel mobile healthcare solution that harnesses this collaborative caching approach, facilitating nuanced health monitoring via edge devices. The system capitalizes on cloud computing for intricate health data analytics, especially in pinpointing health anomalies. Given dynamic locational shifts and possible connection disruptions, we have architected a hierarchical detection system, particularly for crises. This system caches data efficiently and incorporates a detection utility to assess data freshness and potential lag in response times. Furthermore, we introduce the Cache-Assisted Real-Time Detection (CARD) model, crafted to optimize utility. Addressing the inherent complexity of the NP-hard CARD model, we adopt a greedy algorithm as a solution. Simulations reveal that our collaborative caching technique markedly elevates the Cache Hit Ratio (CHR) and data freshness, outshining contemporaneous benchmark algorithms. The empirical results underscore the strength and efficiency of our IoHT-based health monitoring solution. To encapsulate, this paper tackles the nuances of real-time health data monitoring in the IoHT landscape, presenting a joint edge-cloud caching strategy paired with a hierarchical detection system. Our methodology yields enhanced cache efficiency and data freshness, and the corroborative numerical results accentuate the feasibility and relevance of our model, casting a beacon for the future trajectory of real-time health data monitoring systems.
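The greedy flavor of the solution can be illustrated with a toy utility-per-size selection under an edge cache budget; the real CARD utility couples freshness and response lag, and these names and numbers are placeholders.

```python
items = [  # (name, utility, size) -- all values invented for illustration
    ("hr_stream", 8.0, 2.0), ("ecg_win", 6.0, 3.0),
    ("gps_trace", 3.0, 1.0), ("spo2_hist", 5.0, 4.0),
]
capacity = 6.0
cached, used = [], 0.0
for name, utility, size in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
    if used + size <= capacity:      # greedy: best utility density first
        cached.append(name)
        used += size
print(cached)                        # ['hr_stream', 'gps_trace', 'ecg_win']
```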
Due to advancements in information technologies, massive quantities of data are being produced by social media, smartphones, and sensor devices. The investigation of data streams using machine learning (ML) approaches to address regression, prediction, and classification problems has received considerable interest. At the same time, the detection of anomalies or outliers and the feature selection (FS) process become important. This study develops an outlier detection with feature selection technique for streaming data classification, named ODFST-SDC. Initially, streaming data is pre-processed in two ways, namely categorical encoding and null value removal. In addition, the Local Correlation Integral (LOCI) is used, which is significant in the detection and removal of outliers. Besides, a red deer algorithm (RDA)-based FS approach is employed to derive an optimal subset of features. Finally, a kernel extreme learning machine (KELM) classifier is used for streaming data classification. The design of LOCI-based outlier detection and RDA-based FS shows the novelty of the work. In order to assess the classification outcomes of the ODFST-SDC technique, a series of simulations were performed using three benchmark datasets. The experimental results report the promising outcomes of the ODFST-SDC technique over recent approaches.
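A minimal kernel extreme learning machine (KELM) classifier, the final stage of the pipeline, using the standard closed form beta = (I/C + K)^-1 Y; the RBF width and regularization C below are arbitrary.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def kelm_fit(X, y, C=10.0, gamma=0.5):
    Y = np.eye(y.max() + 1)[y]                      # one-hot targets
    K = rbf(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, Y)  # beta

def kelm_predict(X_train, beta, X_new, gamma=0.5):
    return rbf(X_new, X_train, gamma) @ beta        # class scores

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y = np.array([0, 0, 1, 1])
beta = kelm_fit(X, y)
print(kelm_predict(X, beta, np.array([[0.2, 0.5], [5.1, 5.4]])).argmax(1))  # [0 1]
```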
In the era of big data, huge volumes of data are generated from online social networks, sensor networks, mobile devices, and organizations' enterprise systems. This phenomenon provides organizations with unprecedented opportunities to tap into big data to mine valuable business intelligence. However, traditional business analytics methods may not be able to cope with the flood of big data. The main contribution of this paper is the illustration of the development of a novel big data stream analytics framework named BDSASA that leverages a probabilistic language model to analyze the consumer sentiments embedded in hundreds of millions of online consumer reviews. In particular, an inference model is embedded into the classical language modeling framework to enhance the prediction of consumer sentiments. The practical implication of our research is that organizations can apply our big data stream analytics framework to analyze consumers' product preferences, and hence develop more effective marketing and production strategies.
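A toy of the classical framework the paper builds on: scoring a review with class-conditional, Laplace-smoothed unigram language models. The paper's embedded inference model is not reproduced, and the two-line corpora are invented.

```python
from collections import Counter

def train_lm(docs):
    """Return a Laplace-smoothed unigram probability function."""
    counts = Counter(w for d in docs for w in d.split())
    total, vocab = sum(counts.values()), len(counts)
    return lambda w: (counts[w] + 1) / (total + vocab)  # add-one smoothing

pos_lm = train_lm(["great battery love the screen", "fast and reliable"])
neg_lm = train_lm(["battery died fast", "screen cracked terrible build"])

def score(review):
    p_pos = p_neg = 1.0
    for w in review.split():
        p_pos *= pos_lm(w)
        p_neg *= neg_lm(w)
    return "positive" if p_pos > p_neg else "negative"

print(score("love the battery"))   # positive under these toy corpora
```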
Opinion (sentiment) analysis on big data streams, from the constantly generated text streams on social media networks to hundreds of millions of online consumer reviews, provides many organizations in every field with opportunities to discover valuable intelligence from massive user-generated text streams. However, traditional content analysis frameworks are inefficient at handling the unprecedentedly big volume of unstructured text streams and the complexity of text analysis tasks required for real-time opinion analysis on big data streams. In this paper, we propose a parallel real-time sentiment analysis system, the Social Media Data Stream Sentiment Analysis Service (SMDSSAS), which performs multiple phases of sentiment analysis of social media text streams effectively in real time, with two fully analytic opinion mining models to combat the scale of text data streams and the complexity of sentiment analysis processing on unstructured text streams. We propose two aspect-based opinion mining models, Deterministic and Probabilistic sentiment models, for real-time sentiment analysis on user-given, topic-related data streams. Experiments on Twitter stream traffic captured during the pre-election weeks of the 2016 Presidential election, analyzing public opinion toward the two presidential candidates in real time, showed that the proposed system was able to correctly predict Donald Trump as the winner of the 2016 Presidential election. The cross-validation results showed that the proposed sentiment models with the real-time streaming components in our framework delivered effective analysis of opinion on the two presidential candidates, with average accuracies of 81% for the Deterministic model and 80% for the Probabilistic model, improvements of 1%-22% over the results in the existing literature.
The interleaving/multiplexing technique was used to realize a 200 MHz real-time data acquisition system. Two 100 MHz ADC modules work in parallel, and each ADC outputs data in ping-pong fashion. The design improves the system conversion rate to 200 MHz while reducing the speed of data transport and storage to 50 MHz. High-speed HDPLD and ECL logic parts were used to control the system timing and the memory address. A multilayer printed circuit board and shielding were used to decrease the interference produced by the high-speed circuit, and the system timing was designed carefully. The interleaving/multiplexing technique can greatly improve the system conversion rate while greatly reducing the speed of the external digital interfaces, effectively resolving the difficulties of high-speed system design. Experiments proved that the data acquisition system is stable and accurate.
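The interleaving idea is easy to simulate: two converters clocked at half rate on alternate instants are multiplexed back into one full-rate stream, as sketched below with an illustrative test tone.

```python
import numpy as np

fs = 200e6                          # effective sample rate after interleaving
t = np.arange(64) / fs
x = np.sin(2 * np.pi * 5e6 * t)     # 5 MHz test tone

adc_a = x[0::2]                     # ADC A samples even instants (100 MS/s)
adc_b = x[1::2]                     # ADC B samples odd instants  (100 MS/s)

merged = np.empty_like(x)
merged[0::2] = adc_a                # multiplexer re-interleaves the two streams
merged[1::2] = adc_b
assert np.allclose(merged, x)       # reconstruction matches the original
```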
In order to improve the precision of super point detection and control measurement resource consumption, this paper proposes a super point detection method based on sampling and data streaming algorithms (SDSD), and proves that only sources or destinations with a large number of flows can be sampled probabilistically using the SDSD algorithm. The SDSD algorithm uses both an IP table and a flow Bloom filter (BF) data structure to maintain the IP and flow information. The IP table is used to judge whether an IP address has been recorded: if the IP exists, all its subsequent flows are recorded into the flow BF; otherwise, the IP flow is sampled. This paper also analyzes the accuracy and memory requirements of the SDSD algorithm, and tests them using the CERNET trace. The theoretical analysis and experimental tests demonstrate that most of the relative errors of the super points estimated by the SDSD algorithm are less than 5%, whereas the results of other algorithms are about 10%. Because of the BF structure, the SDSD algorithm is also better than previous algorithms in terms of memory consumption.
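A hedged sketch of the SDSD bookkeeping: an IP table decides whether a source has been admitted, and admitted sources record flows in a Bloom filter so repeated flows are not double-counted. The hash choices and sizes are illustrative, not the paper's parameters, and the probabilistic sampling step is stubbed out.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)

    def __contains__(self, item):
        return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(item))

ip_table, flow_bf, fan_out = set(), BloomFilter(), {}
for src, dst in [("10.0.0.1", "a"), ("10.0.0.1", "b"), ("10.0.0.1", "a")]:
    ip_table.add(src)                      # sampling decision omitted: admit all
    if (src, dst) not in flow_bf:          # new flow for this source?
        flow_bf.add((src, dst))
        fan_out[src] = fan_out.get(src, 0) + 1
print(fan_out)                             # {'10.0.0.1': 2} distinct flows
```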
Offshore waters provide resources for human beings but, on the other hand, threaten them with marine disasters. Ocean stations are part of offshore observation networks, and the quality of their data is of great significance for exploiting and protecting the ocean. We used hourly mean wave height, temperature, and pressure real-time observation data taken at the Xiaomaidao station (in Qingdao, China) from June 1, 2017, to May 31, 2018, to explore the data quality using eight quality control methods, and to determine the most effective methods for the Xiaomaidao station. After applying the eight quality control methods, the percentages of the mean wave height, temperature, and pressure data that passed the tests were 89.6%, 88.3%, and 98.6%, respectively. Combined with the marine disaster (wave alarm report) data, the values that failed the tests were mainly attributable to aging observation equipment and missed data transmissions. The mean wave height is often affected by dynamic marine disasters, so the continuity test method is not effective for it; a correlation test with other related parameters would be more useful for the mean wave height.
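Two of the quality control tests mentioned above, a range test and a continuity (rate-of-change) test, can be sketched as follows; the thresholds are invented for illustration.

```python
def qc_flags(series, lo, hi, max_step):
    """Flag each value against a climatological range and a step limit."""
    flags = []
    for i, v in enumerate(series):
        if not (lo <= v <= hi):
            flags.append("range_fail")
        elif i > 0 and abs(v - series[i - 1]) > max_step:
            flags.append("continuity_fail")
        else:
            flags.append("pass")
    return flags

wave_height = [0.8, 0.9, 4.9, 1.0, -0.2]        # hourly mean wave height (m)
print(qc_flags(wave_height, lo=0.0, hi=20.0, max_step=2.0))
# ['pass', 'pass', 'continuity_fail', 'continuity_fail', 'range_fail']
```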
In the era of Big Data, a typical architecture for distributed real-time stream processing systems is the combination of Flume, Kafka, and Storm. As a distributed message system, Kafka has the characteristics of horizontal scalability and high throughput, and is deployed in many areas to address the speed mismatch between message producers and consumers. When using Kafka, we need to quickly receive data sent by producers and to quickly send data to consumers, so the performance of Kafka is of critical importance to the performance of the whole stream processing system. In this paper, we propose an improved design for real-time stream processing systems, focusing on improving Kafka's data loading process. We use kafkacat to transfer data from the source to the Kafka topic directly, which reduces network transmission. We also utilize a memory file system to accelerate data loading, which addresses the bottleneck and performance problems caused by disk I/O. Extensive experiments are conducted to evaluate the performance and show the superiority of our improved design.
Clustering high-dimensional data is challenging, as increasing dimensionality increases the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams, which are not only high-dimensional but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature on data stream clustering, there is currently no specific review of subspace clustering algorithms for high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles in the final analysis. The analysis focused on two research questions related to the general clustering process and to dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters and outliers using projected microclusters. Most algorithms adapt implicitly to the evolving nature of the data stream by using a time-fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
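The time-fading function most surveyed algorithms rely on is simple to state: the weight of a point decays exponentially with its age, e.g. f(t) = 2^(-lambda*t), as sketched below with an arbitrary decay rate.

```python
def fading_weight(age, lam=0.01):
    """Exponential time-fading weight f(t) = 2^(-lambda * age)."""
    return 2.0 ** (-lam * age)

for age in (0, 50, 100, 500):
    print(age, round(fading_weight(age), 3))   # 1.0, 0.707, 0.5, 0.031
```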
The application and development of wide-area measurement systems (WAMS) has enabled many applications and led to several requirements based on dynamic measurement data, which are transmitted as a big data information flow. To ensure effective transmission of wide-frequency electrical information by the communication protocol of a WAMS, this study performs real-time traffic monitoring and analysis of the data network of a power information system, and establishes corresponding network optimization strategies to solve existing transmission problems. The traffic analysis results obtained from the current real-time dynamic monitoring system are used to design an optimization strategy covering three progressive levels: the underlying communication protocol, the source data, and the transmission process. Optimization of the system structure and scheduling optimization of data information are validated to be feasible and practical via tests.
Glacier disasters occur frequently in alpine regions around the world, but conventional geological disaster measurement technology cannot be used directly for glacier disaster measurement. Hence, in this study, a distributed multi-sensor measurement system for glacier deformation was established by integrating piezoelectric sensing, coded sensing, attitude sensing, and wireless communication technologies. The traditional Modbus protocol was optimized to solve the problem of confused data identification across different acquisition nodes. Through indoor wireless transmission, adaptive performance analysis, error measurement experiments, and a landslide simulation experiment, the performance of the measurement system was analyzed and evaluated. Using unmanned aerial vehicle technology, the reliability and effectiveness of the measurement system were verified on site at the Galongla glacier in southeastern Tibet, China. The results show that the mean absolute percentage errors were only 1.13% and 2.09% for displacement and temperature, respectively. The distributed real-time glacier deformation measurement system provides a new means for assessing the development of glacier disasters and for disaster prevention and mitigation.
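For reference, the reported error metric, mean absolute percentage error (MAPE), computed on illustrative numbers:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

ref = [10.0, 12.0, 9.5]             # reference displacement (mm), illustrative
meas = [10.1, 11.9, 9.4]            # system readings
print(f"MAPE = {mape(ref, meas):.2f}%")   # ~0.96%
```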
Handling sentiment drift in real-time Twitter data streams is a challenging task in sentiment classification, because the sentiments of Twitter users change with respect to time. The growing volume of tweets with sentiment drift has led to the need for an adaptive approach to detect and handle this drift in real time. This work proposes an adaptive learning algorithm-based framework, Twitter Sentiment Drift Analysis-Bidirectional Encoder Representations from Transformers (TSDA-BERT), which introduces a sentiment drift measure to detect drift and a domain impact score to adaptively retrain the classification model with domain-relevant data in real time. The framework also works on static data by converting it to data streams using the Kafka tool. Experiments conducted on real-time and simulated tweets on sports, health care, and financial topics show that the proposed system is able to detect sentiment drift and maintain the performance of the classification model, with accuracies of 91%, 87%, and 90%, respectively. Though results are provided only for a few topics as a proof of concept, this framework can be applied to detect sentiment drift and perform sentiment classification on real-time data streams of any topic.
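A toy drift signal in the spirit of the described approach: compare the positive-tweet rate in a sliding window against a reference rate and flag drift when the gap exceeds a threshold. The paper's actual sentiment drift measure and domain impact score are not specified here; the window, threshold, and labels are invented.

```python
from collections import deque

def drift_detector(stream, ref_rate, window=5, threshold=0.3):
    win = deque(maxlen=window)
    for i, label in enumerate(stream):       # label: 1 = positive, 0 = negative
        win.append(label)
        if len(win) == window:
            rate = sum(win) / window
            if abs(rate - ref_rate) > threshold:
                yield i, rate                # index and drifted window rate

labels = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
for idx, rate in drift_detector(labels, ref_rate=0.8):
    print(f"drift at tweet {idx}: window rate {rate:.1f}")
```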
基金This work is supported,in part,by the National Natural Science Foundation of China under grant numbers 61300026in part,by the Natural Science Foundation of Fujian Province under grant numbers 2017J01754, 2018J01797.
文摘Continuous response of range query on steaming data provides useful information for many practical applications as well as the risk of privacy disclosure.The existing research on differential privacy streaming data publication mostly pay close attention to boosting query accuracy,but pay less attention to query efficiency,and ignore the effect of timeliness on data weight.In this paper,we propose an effective algorithm of differential privacy streaming data publication under exponential decay mode.Firstly,by introducing the Fenwick tree to divide and reorganize data items in the stream,we achieve a constant time complexity for inserting a new item and getting the prefix sum.Meanwhile,we achieve time complicity linear to the number of data item for building a tree.After that,we use the advantage of matrix mechanism to deal with relevant queries and reduce the global sensitivity.In addition,we choose proper diagonal matrix further improve the range query accuracy.Finally,considering about exponential decay,every data item is weighted by the decay factor.By putting the Fenwick tree and matrix optimization together,we present complete algorithm for differentiate private real-time streaming data publication.The experiment is designed to compare the algorithm in this paper with similar algorithms for streaming data release in exponential decay.Experimental results show that the algorithm in this paper effectively improve the query efficiency while ensuring the quality of the query.
文摘Recently,anomaly detection(AD)in streaming data gained significant attention among research communities due to its applicability in finance,business,healthcare,education,etc.The recent developments of deep learning(DL)models find helpful in the detection and classification of anomalies.This article designs an oversampling with an optimal deep learning-based streaming data classification(OS-ODLSDC)model.The aim of the OSODLSDC model is to recognize and classify the presence of anomalies in the streaming data.The proposed OS-ODLSDC model initially undergoes preprocessing step.Since streaming data is unbalanced,support vector machine(SVM)-Synthetic Minority Over-sampling Technique(SVM-SMOTE)is applied for oversampling process.Besides,the OS-ODLSDC model employs bidirectional long short-term memory(Bi LSTM)for AD and classification.Finally,the root means square propagation(RMSProp)optimizer is applied for optimal hyperparameter tuning of the Bi LSTM model.For ensuring the promising performance of the OS-ODLSDC model,a wide-ranging experimental analysis is performed using three benchmark datasets such as CICIDS 2018,KDD-Cup 1999,and NSL-KDD datasets.
基金Supported by National Natural Science Foundation of China(Grant Nos.51875031,52242507)Beijing Municipal Natural Science Foundation of China(Grant No.3212010)Beijing Municipal Youth Backbone Personal Project of China(Grant No.2017000020124 G018).
文摘The co-frequency vibration fault is one of the common faults in the operation of rotating equipment,and realizing the real-time diagnosis of the co-frequency vibration fault is of great significance for monitoring the health state and carrying out vibration suppression of the equipment.In engineering scenarios,co-frequency vibration faults are highlighted by rotational frequency and are difficult to identify,and existing intelligent methods require more hardware conditions and are exclusively time-consuming.Therefore,Lightweight-convolutional neural networks(LW-CNN)algorithm is proposed in this paper to achieve real-time fault diagnosis.The critical parameters are discussed and verified by simulated and experimental signals for the sliding window data augmentation method.Based on LW-CNN and data augmentation,the real-time intelligent diagnosis of co-frequency is realized.Moreover,a real-time detection method of fault diagnosis algorithm is proposed for data acquisition to fault diagnosis.It is verified by experiments that the LW-CNN and sliding window methods are used with high accuracy and real-time performance.
基金This research was funded by the National Natural Science Foundation of China(Grant No.72001190)by the Ministry of Education’s Humanities and Social Science Project via the China Ministry of Education(Grant No.20YJC630173)by Zhejiang A&F University(Grant No.2022LFR062).
文摘Data stream clustering is integral to contemporary big data applications.However,addressing the ongoing influx of data streams efficiently and accurately remains a primary challenge in current research.This paper aims to elevate the efficiency and precision of data stream clustering,leveraging the TEDA(Typicality and Eccentricity Data Analysis)algorithm as a foundation,we introduce improvements by integrating a nearest neighbor search algorithm to enhance both the efficiency and accuracy of the algorithm.The original TEDA algorithm,grounded in the concept of“Typicality and Eccentricity Data Analytics”,represents an evolving and recursive method that requires no prior knowledge.While the algorithm autonomously creates and merges clusters as new data arrives,its efficiency is significantly hindered by the need to traverse all existing clusters upon the arrival of further data.This work presents the NS-TEDA(Neighbor Search Based Typicality and Eccentricity Data Analysis)algorithm by incorporating a KD-Tree(K-Dimensional Tree)algorithm integrated with the Scapegoat Tree.Upon arrival,this ensures that new data points interact solely with clusters in very close proximity.This significantly enhances algorithm efficiency while preventing a single data point from joining too many clusters and mitigating the merging of clusters with high overlap to some extent.We apply the NS-TEDA algorithm to several well-known datasets,comparing its performance with other data stream clustering algorithms and the original TEDA algorithm.The results demonstrate that the proposed algorithm achieves higher accuracy,and its runtime exhibits almost linear dependence on the volume of data,making it more suitable for large-scale data stream analysis research.
基金supported by CNPC-CZU Innovation Alliancesupported by the Program of Polar Drilling Environmental Protection and Waste Treatment Technology (2022YFC2806403)。
文摘In petroleum engineering,real-time lithology identification is very important for reservoir evaluation,drilling decisions and petroleum geological exploration.A lithology identification method while drilling based on machine learning and mud logging data is studied in this paper.This method can effectively utilize downhole parameters collected in real-time during drilling,to identify lithology in real-time and provide a reference for optimization of drilling parameters.Given the imbalance of lithology samples,the synthetic minority over-sampling technique(SMOTE)and Tomek link were used to balance the sample number of five lithologies.Meanwhile,this paper introduces Tent map,random opposition-based learning and dynamic perceived probability to the original crow search algorithm(CSA),and establishes an improved crow search algorithm(ICSA).In this paper,ICSA is used to optimize the hyperparameter combination of random forest(RF),extremely random trees(ET),extreme gradient boosting(XGB),and light gradient boosting machine(LGBM)models.In addition,this study combines the recognition advantages of the four models.The accuracy of lithology identification by the weighted average probability model reaches 0.877.The study of this paper realizes high-precision real-time lithology identification method,which can provide lithology reference for the drilling process.
基金This work is supported by the National Natural Science Foundation of China(Grant No.51991392)Key Deployment Projects of Chinese Academy of Sciences(Grant No.ZDRW-ZS-2021-3-3)the Second Tibetan Plateau Scientific Expedition and Research Program(STEP)(Grant No.2019QZKK0904).
文摘Predicting the mechanical behaviors of structure and perceiving the anomalies in advance are essential to ensuring the safe operation of infrastructures in the long run.In addition to the incomplete consideration of influencing factors,the prediction time scale of existing studies is rough.Therefore,this study focuses on the development of a real-time prediction model by coupling the spatio-temporal correlation with external load through autoencoder network(ATENet)based on structural health monitoring(SHM)data.An autoencoder mechanism is performed to acquire the high-level representation of raw monitoring data at different spatial positions,and the recurrent neural network is applied to understanding the temporal correlation from the time series.Then,the obtained temporal-spatial information is coupled with dynamic loads through a fully connected layer to predict structural performance in next 12 h.As a case study,the proposed model is formulated on the SHM data collected from a representative underwater shield tunnel.The robustness study is carried out to verify the reliability and the prediction capability of the proposed model.Finally,the ATENet model is compared with some typical models,and the results indicate that it has the best performance.ATENet model is of great value to predict the realtime evolution trend of tunnel structure.
文摘Due to their significant correlation and redundancy,conventional block cipher cryptosystems are not efficient in encryptingmultimedia data.Streamciphers based onCellularAutomata(CA)can provide amore effective solution.The CA have recently gained recognition as a robust cryptographic primitive,being used as pseudorandom number generators in hash functions,block ciphers and stream ciphers.CA have the ability to perform parallel transformations,resulting in high throughput performance.Additionally,they exhibit a natural tendency to resist fault attacks.Few stream cipher schemes based on CA have been proposed in the literature.Though,their encryption/decryption throughput is relatively low,which makes them unsuitable formultimedia communication.Trivium and Grain are efficient stream ciphers that were selected as finalists in the eSTREAM project,but they have proven to be vulnerable to differential fault attacks.This work introduces a novel and scalable stream cipher named CeTrivium,whose design is based on CA.CeTrivium is a 5-neighborhood CA-based streamcipher inspired by the designs of Trivium and Grain.It is constructed using three building blocks:the Trivium(Tr)block,the Nonlinear-CA(NCA)block,and the Nonlinear Mixing(NM)block.The NCA block is a 64-bit nonlinear hybrid 5-neighborhood CA,while the Tr block has the same structure as the Trivium stream cipher.The NM block is a nonlinear,balanced,and reversible Boolean function that mixes the outputs of the Tr and NCA blocks to produce a keystream.Cryptanalysis of CeTrivium has indicated that it can resist various attacks,including correlation,algebraic,fault,cube,Meier and Staffelbach,and side channel attacks.Moreover,the scheme is evaluated using histogramand spectrogramanalysis,aswell as several differentmeasurements,including the correlation coefficient,number of samples change rate,signal-to-noise ratio,entropy,and peak signal-to-noise ratio.The performance of CeTrivium is evaluated and compared with other state-of-the-art techniques.CeTrivium outperforms them in terms of encryption throughput while maintaining high security.CeTrivium has high encryption and decryption speeds,is scalable,and resists various attacks,making it suitable for multimedia communication.
文摘This paper examines how cybersecurity is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends that they are not the same. The concept of cyber security is explored, which goes beyond protecting information resources to include a wider variety of assets, including people [1]. Protecting information assets is the main goal of traditional information security, with consideration to the human element and how people fit into the security process. On the other hand, cyber security adds a new level of complexity, as people might unintentionally contribute to or become targets of cyberattacks. This aspect presents moral questions since it is becoming more widely accepted that society has a duty to protect weaker members of society, including children [1]. The study emphasizes how important cyber security is on a larger scale, with many countries creating plans and laws to counteract cyberattacks. Nevertheless, a lot of these sources frequently neglect to define the differences or the relationship between information security and cyber security [1]. The paper focus on differentiating between cybersecurity and information security on a larger scale. The study also highlights other areas of cybersecurity which includes defending people, social norms, and vital infrastructure from threats that arise from online in addition to information and technology protection. It contends that ethical issues and the human factor are becoming more and more important in protecting assets in the digital age, and that cyber security is a paradigm shift in this regard [1].
基金supported by National Natural Science Foundation of China(NSFC)under Grant Number T2350710232.
文摘Real-time health data monitoring is pivotal for bolstering road services’safety,intelligence,and efficiency within the Internet of Health Things(IoHT)framework.Yet,delays in data retrieval can markedly hinder the efficacy of big data awareness detection systems.We advocate for a collaborative caching approach involving edge devices and cloud networks to combat this.This strategy is devised to streamline the data retrieval path,subsequently diminishing network strain.Crafting an adept cache processing scheme poses its own set of challenges,especially given the transient nature of monitoring data and the imperative for swift data transmission,intertwined with resource allocation tactics.This paper unveils a novel mobile healthcare solution that harnesses the power of our collaborative caching approach,facilitating nuanced health monitoring via edge devices.The system capitalizes on cloud computing for intricate health data analytics,especially in pinpointing health anomalies.Given the dynamic locational shifts and possible connection disruptions,we have architected a hierarchical detection system,particularly during crises.This system caches data efficiently and incorporates a detection utility to assess data freshness and potential lag in response times.Furthermore,we introduce the Cache-Assisted Real-Time Detection(CARD)model,crafted to optimize utility.Addressing the inherent complexity of the NP-hard CARD model,we have championed a greedy algorithm as a solution.Simulations reveal that our collaborative caching technique markedly elevates the Cache Hit Ratio(CHR)and data freshness,outshining its contemporaneous benchmark algorithms.The empirical results underscore the strength and efficiency of our innovative IoHT-based health monitoring solution.To encapsulate,this paper tackles the nuances of real-time health data monitoring in the IoHT landscape,presenting a joint edge-cloud caching strategy paired with a hierarchical detection system.Our methodology yields enhanced cache efficiency and data freshness.The corroborative numerical data accentuates the feasibility and relevance of our model,casting a beacon for the future trajectory of real-time health data monitoring systems.
文摘Due to the advancements in information technologies,massive quantity of data is being produced by social media,smartphones,and sensor devices.The investigation of data stream by the use of machine learning(ML)approaches to address regression,prediction,and classification problems have received consid-erable interest.At the same time,the detection of anomalies or outliers and feature selection(FS)processes becomes important.This study develops an outlier detec-tion with feature selection technique for streaming data classification,named ODFST-SDC technique.Initially,streaming data is pre-processed in two ways namely categorical encoding and null value removal.In addition,Local Correla-tion Integral(LOCI)is used which is significant in the detection and removal of outliers.Besides,red deer algorithm(RDA)based FS approach is employed to derive an optimal subset of features.Finally,kernel extreme learning machine(KELM)classifier is used for streaming data classification.The design of LOCI based outlier detection and RDA based FS shows the novelty of the work.In order to assess the classification outcomes of the ODFST-SDC technique,a series of simulations were performed using three benchmark datasets.The experimental results reported the promising outcomes of the ODFST-SDC technique over the recent approaches.
文摘In the era of big data, huge volumes of data are generated from online social networks, sensor networks, mobile devices, and organizations’ enterprise systems. This phenomenon provides organizations with unprecedented opportunities to tap into big data to mine valuable business intelligence. However, traditional business analytics methods may not be able to cope with the flood of big data. The main contribution of this paper is the illustration of the development of a novel big data stream analytics framework named BDSASA that leverages a probabilistic language model to analyze the consumer sentiments embedded in hundreds of millions of online consumer reviews. In particular, an inference model is embedded into the classical language modeling framework to enhance the prediction of consumer sentiments. The practical implication of our research work is that organizations can apply our big data stream analytics framework to analyze consumers’ product preferences, and hence develop more effective marketing and production strategies.
文摘Opinion (sentiment) analysis on big data streams from the constantly generated text streams on social media networks to hundreds of millions of online consumer reviews provides many organizations in every field with opportunities to discover valuable intelligence from the massive user generated text streams. However, the traditional content analysis frameworks are inefficient to handle the unprecedentedly big volume of unstructured text streams and the complexity of text analysis tasks for the real time opinion analysis on the big data streams. In this paper, we propose a parallel real time sentiment analysis system: Social Media Data Stream Sentiment Analysis Service (SMDSSAS) that performs multiple phases of sentiment analysis of social media text streams effectively in real time with two fully analytic opinion mining models to combat the scale of text data streams and the complexity of sentiment analysis processing on unstructured text streams. We propose two aspect based opinion mining models: Deterministic and Probabilistic sentiment models for a real time sentiment analysis on the user given topic related data streams. Experiments on the social media Twitter stream traffic captured during the pre-election weeks of the 2016 Presidential election for real-time analysis of public opinions toward two presidential candidates showed that the proposed system was able to predict correctly Donald Trump as the winner of the 2016 Presidential election. The cross validation results showed that the proposed sentiment models with the real-time streaming components in our proposed framework delivered effectively the analysis of the opinions on two presidential candidates with average 81% accuracy for the Deterministic model and 80% for the Probabilistic model, which are 1% - 22% improvements from the results of the existing literature.
文摘The interleaving/multiplexing technique was used to realize a 200?MHz real time data acquisition system. Two 100?MHz ADC modules worked parallelly and every ADC plays out data in ping pang fashion. The design improved the system conversion rata to 200?MHz and reduced the speed of data transporting and storing to 50?MHz. The high speed HDPLD and ECL logic parts were used to control system timing and the memory address. The multi layer print board and the shield were used to decrease interference produced by the high speed circuit. The system timing was designed carefully. The interleaving/multiplexing technique could improve the system conversion rata greatly while reducing the speed of external digital interfaces greatly. The design resolved the difficulties in high speed system effectively. The experiment proved the data acquisition system is stable and accurate.
基金The National Basic Research Program of China(973Program)(No.2009CB320505)the Natural Science Foundation of Jiangsu Province(No. BK2008288)+1 种基金the Excellent Young Teachers Program of Southeast University(No.4009001018)the Open Research Program of Key Laboratory of Computer Network of Guangdong Province (No. CCNL200706)
文摘In order to improve the precision of super point detection and control measurement resource consumption, this paper proposes a super point detection method based on sampling and data streaming algorithms (SDSD), and proves that only sources or destinations with a lot of flows can be sampled probabilistically using the SDSD algorithm. The SDSD algorithm uses both the IP table and the flow bloom filter (BF) data structures to maintain the IP and flow information. The IP table is used to judge whether an IP address has been recorded. If the IP exists, then all its subsequent flows will be recorded into the flow BF; otherwise, the IP flow is sampled. This paper also analyzes the accuracy and memory requirements of the SDSD algorithm , and tests them using the CERNET trace. The theoretical analysis and experimental tests demonstrate that the most relative errors of the super points estimated by the SDSD algorithm are less than 5%, whereas the results of other algorithms are about 10%. Because of the BF structure, the SDSD algorithm is also better than previous algorithms in terms of memory consumption.
基金Supported by the National Key Research and Development Program of China(Nos.2016YFC1402000,2018YFC1407003,2017YFC1405300)
文摘Offshore waters provide resources for human beings,while on the other hand,threaten them because of marine disasters.Ocean stations are part of offshore observation networks,and the quality of their data is of great significance for exploiting and protecting the ocean.We used hourly mean wave height,temperature,and pressure real-time observation data taken in the Xiaomaidao station(in Qingdao,China)from June 1,2017,to May 31,2018,to explore the data quality using eight quality control methods,and to discriminate the most effective method for Xiaomaidao station.After using the eight quality control methods,the percentages of the mean wave height,temperature,and pressure data that passed the tests were 89.6%,88.3%,and 98.6%,respectively.With the marine disaster(wave alarm report)data,the values failed in the test mainly due to the influence of aging observation equipment and missing data transmissions.The mean wave height is often affected by dynamic marine disasters,so the continuity test method is not effective.The correlation test with other related parameters would be more useful for the mean wave height.
基金supported by the Research Fund of National Key Laboratory of Computer Architecture under Grant No.CARCH201501the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing under Grant No.2016A09
Abstract: In the era of big data, the typical architecture of a distributed real-time stream processing system is the combination of Flume, Kafka, and Storm. As a distributed message system, Kafka has the characteristics of horizontal scalability and high throughput, and is mainly deployed to address the speed mismatch between message producers and consumers. When using Kafka, we need both to receive data sent by producers quickly and to deliver data to consumers quickly, so the performance of Kafka is of critical importance to the performance of the whole stream processing system. In this paper, we propose an improved design of real-time stream processing systems, focusing on improving Kafka's data loading process. We use kafkacat to transfer data from the source to the Kafka topic directly, which reduces network transmission. We also utilize a memory file system to accelerate data loading, which addresses the bottleneck and performance problems caused by disk I/O. Extensive experiments are conducted to evaluate the performance and show the superiority of our improved design.
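As a hedged sketch of the loading path, the snippet below streams a source file into a topic with the kafka-python client; the broker address, topic, file names, and tmpfs sizing are illustrative assumptions, and the paper's actual setup may differ.

```python
# On the broker side, pointing the log directory at a tmpfs mount keeps
# the loading path in memory and avoids the disk I/O bottleneck, e.g.:
#   mount -t tmpfs -o size=8G tmpfs /mnt/kafka-ram
#   # server.properties: log.dirs=/mnt/kafka-ram
# A kafkacat equivalent of the producer below (one message per line):
#   kafkacat -P -b localhost:9092 -t ingest-topic -l source.log
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    linger_ms=5,             # small batching delay to raise throughput
    batch_size=64 * 1024,
)

with open("source.log", "rb") as src:    # stream the source directly
    for line in src:
        producer.send("ingest-topic", line.rstrip(b"\n"))
producer.flush()
```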
Abstract: Clustering high-dimensional data is challenging because increasing dimensionality enlarges the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding the relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams, which are not only high-dimensional but also unbounded and evolving. This necessitates subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature on data stream clustering, there is currently no specific review of subspace clustering algorithms for high-dimensional data streams. Therefore, this article systematically reviews the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodology and includes 18 articles in the final analysis, focusing on two research questions related to the general clustering process and to dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: the clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it can distinguish true clusters from outliers using projected micro-clusters. Most algorithms adapt implicitly to the evolving nature of the data stream by using a time-fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
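To make the time-fading idea concrete, here is a minimal sketch of a faded micro-cluster in the style of density-based stream clustering; the decay constant and the class design are generic illustrations, not any specific algorithm from the reviewed articles.

```python
def fading_weight(t_now, t_then, decay=0.01):
    """Common exponential fading function f(dt) = 2 ** (-decay * dt),
    so older points contribute less to a micro-cluster's weight."""
    return 2.0 ** (-decay * (t_now - t_then))

class MicroCluster:
    """Faded sufficient statistics for one projected micro-cluster."""
    def __init__(self, dim):
        self.weight = 0.0
        self.linear_sum = [0.0] * dim
        self.last_update = 0.0

    def insert(self, point, t):
        fade = fading_weight(t, self.last_update)  # decay old statistics
        self.weight = self.weight * fade + 1.0
        self.linear_sum = [s * fade + x
                           for s, x in zip(self.linear_sum, point)]
        self.last_update = t

    def center(self):
        return [s / self.weight for s in self.linear_sum]

mc = MicroCluster(dim=2)
mc.insert([1.0, 2.0], t=0.0)
mc.insert([1.2, 2.2], t=50.0)   # the earlier point is down-weighted
print(mc.center())
```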
Abstract: The application and development of wide-area measurement systems (WAMS) has enabled many applications and introduced several requirements based on dynamic measurement data, which is transmitted as a big-data information flow. To ensure effective transmission of wide-frequency electrical information by the communication protocol of a WAMS, this study performs real-time traffic monitoring and analysis of the data network of a power information system, and establishes corresponding network optimization strategies to solve the existing transmission problems. Using the traffic analysis results obtained from the current real-time dynamic monitoring system, an optimization strategy is designed covering three progressive levels: the underlying communication protocol, the source data, and the transmission process. The optimization of the system structure and the scheduling optimization of data information are validated to be feasible and practical via tests.
Funding: Funded by the National Key Research and Development Program of China (Nos. 2022YFC3003403 and 2018YFC1505203), the Key Research and Development Program of the Tibet Autonomous Region (No. XZ202301ZY0039G), the Natural Science Foundation of Hebei Province (No. F2021201031), and the Geological Survey Project of the China Geological Survey (No. DD20221747).
Abstract: Glacier disasters occur frequently in alpine regions around the world, but conventional geological disaster measurement technology cannot be used directly for glacier disaster measurement. Hence, in this study, a distributed multi-sensor measurement system for glacier deformation was established by integrating piezoelectric sensing, coded sensing, attitude sensing, and wireless communication technology. The traditional Modbus protocol was optimized to solve the problem of confused data identification across different acquisition nodes. The performance of the measurement system was analyzed and evaluated through indoor wireless transmission, adaptive performance analysis, error measurement experiments, and a landslide simulation experiment. Using unmanned aerial vehicle technology, the reliability and effectiveness of the measurement system were verified on the site of the Galongla glacier in southeastern Tibet, China. The results show that the mean absolute percentage errors were only 1.13% and 2.09% for displacement and temperature, respectively. The distributed real-time glacier deformation measurement system provides a new means for assessing the development process of glacier disasters and for disaster prevention and mitigation.
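For reference, the reported error metric is the standard mean absolute percentage error; the sketch below shows how it is computed, with purely illustrative numbers rather than the Galongla field data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, as reported for the field test."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Illustrative numbers only, not the Galongla field data
ref_disp  = [10.0, 20.0, 30.0, 40.0]   # reference displacement (mm)
meas_disp = [10.1, 19.7, 30.4, 39.6]   # system measurement (mm)
print(f"MAPE = {mape(ref_disp, meas_disp):.2f}%")   # ~1.21%
```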
Abstract: Handling sentiment drift in real-time Twitter data streams is a challenging task in sentiment classification, because the sentiments of Twitter users change with time. The growing volume of tweets exhibiting sentiment drift has led to the need for an adaptive approach that detects and handles this drift in real time. This work proposes an adaptive-learning-algorithm-based framework, Twitter Sentiment Drift Analysis-Bidirectional Encoder Representations from Transformers (TSDA-BERT), which introduces a sentiment drift measure to detect drifts and a domain impact score to adaptively retrain the classification model with domain-relevant data in real time. The framework also works on static data by converting it to data streams using the Kafka tool. Experiments conducted on real-time and simulated tweets on sports, healthcare, and financial topics show that the proposed system is able to detect sentiment drifts and maintain the performance of the classification model, with accuracies of 91%, 87%, and 90%, respectively. Though results are provided for only a few topics as a proof of concept, this framework can be applied to detect sentiment drifts and perform sentiment classification on real-time data streams of any topic.
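The abstract does not define the sentiment drift measure, so the sketch below is only a generic stand-in: it flags drift when the positive-label rate in a recent window diverges from that of a longer reference window. The window sizes and threshold are assumptions, not TSDA-BERT's actual measure.

```python
from collections import deque

class SentimentDriftDetector:
    """Toy drift detector: compares the positive-label rate in the most
    recent window against a longer reference window. A generic stand-in
    for the (unspecified) TSDA-BERT drift measure."""
    def __init__(self, ref_size=1000, cur_size=200, threshold=0.15):
        self.ref = deque(maxlen=ref_size)   # long-horizon history
        self.cur = deque(maxlen=cur_size)   # recent window
        self.threshold = threshold

    def update(self, label):                # label: 1 = positive, 0 = negative
        self.cur.append(label)
        self.ref.append(label)

    def drift_score(self):
        if not self.cur or not self.ref:
            return 0.0
        return abs(sum(self.cur) / len(self.cur)
                   - sum(self.ref) / len(self.ref))

    def drifted(self):
        return self.drift_score() > self.threshold   # retrain when True
```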