Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is imbalanced, the support vector machine-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
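A minimal sketch of the SMOTE-style interpolation step that underlies such oversampling (plain Euclidean nearest neighbors; the helper names are illustrative and not the paper's implementation):

```python
import random

def smote_oversample(minority, n_synthetic, seed=0):
    """Generate synthetic minority samples by interpolating each chosen
    sample toward its nearest minority-class neighbor (basic SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # nearest neighbor among the other minority samples
        nn = min((p for p in minority if p is not x),
                 key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))
        u = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + u * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote_oversample(minority, 4)
```

SVM-SMOTE, as used above, additionally restricts interpolation to samples near the SVM decision boundary; this sketch keeps only the interpolation core.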
Integrating Tiny Machine Learning (TinyML) with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level. Constrained devices efficiently implement a Binary Neural Network (BNN) for road feature extraction, utilizing quantization and compression through a pruning strategy. The modifications resulted in a 28-fold decrease in memory usage and a 25% enhancement in inference speed, with only a 2.5% decrease in accuracy. The model showcases its superiority over conventional detection algorithms in different road image scenarios. Although constrained by computing resources and training datasets, our results indicate opportunities for future research, demonstrating that quantization and focused optimization can significantly improve the accuracy and operational efficiency of machine learning models. Deploying our optimized BNN model on the ARM Cortex-M0, a low-power device, demonstrates the practical feasibility and substantial benefits of advanced machine learning in edge computing. This work also delves into the educational significance of TinyML and its essential function in analyzing road networks using remote sensing, suggesting ways to improve smart city frameworks in road network assessment, traffic management, and autonomous vehicle navigation systems, and emphasizing the importance of new technologies for maintaining and safeguarding road networks.
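A toy sketch of the weight-binarization step behind a BNN (the sign-plus-scaling scheme shown here is a common choice, assumed for illustration rather than taken from the paper's exact quantizer):

```python
def binarize(weights):
    """Binarize real-valued weights to {-alpha, +alpha}, where alpha is the
    mean absolute weight (a common BNN scaling choice)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha if w >= 0 else -alpha for w in weights]

w = [0.4, -0.2, 0.1, -0.5]
bw = binarize(w)  # alpha ≈ 0.3, signs preserved
```

Each weight then needs only one bit of storage plus one shared scale per tensor, which is where the large memory reduction reported above comes from.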
Structural Health Monitoring (SHM) systems have become a crucial tool for the operational management of long tunnels. For immersed tunnels exposed to both traffic loads and the effects of the marine environment, efficiently identifying abnormal conditions from the extensive unannotated SHM data presents a significant challenge. This study proposed a model-based approach for anomaly detection and conducted validation and comparative analysis of two distinct temporal predictive models using SHM data from a real immersed tunnel. Firstly, a dynamic predictive model-based anomaly detection method is proposed, which utilizes a rolling time window for modeling to achieve dynamic prediction. Leveraging the assumption of temporal data similarity, the deviation from an interval prediction value was employed to determine whether the data are abnormal. Subsequently, dynamic predictive models were constructed based on the Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models. The hyperparameters of these models were optimized and selected using monitoring data from the immersed tunnel, yielding viable static and dynamic predictive models. Finally, the models were applied to the same segment of SHM data to validate the effectiveness of the anomaly detection approach based on dynamic predictive modeling. A detailed comparative analysis discusses the discrepancies in temporal anomaly detection between the ARIMA- and LSTM-based models. The results demonstrated that the dynamic predictive model-based anomaly detection approach was effective for dealing with unannotated SHM data. In a comparison between ARIMA and LSTM, ARIMA demonstrated higher modeling efficiency, rendering it suitable for short-term predictions. In contrast, the LSTM model exhibited a greater capacity to capture long-term performance trends and enhanced early warning capabilities, thereby resulting in superior overall performance.
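The rolling-window interval check can be sketched as follows (a simplified stand-in: the window mean ± k standard deviations serves as the prediction interval instead of an ARIMA/LSTM forecast, so this is an assumption for illustration):

```python
from statistics import mean, stdev

def rolling_anomalies(series, window=5, k=3.0):
    """Flag points falling outside the prediction interval built from the
    preceding rolling window (mean +/- k * std)."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        flags.append(abs(series[i] - mu) > k * sigma)
    return flags

data = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 5.0]  # last point is a spike
flags = rolling_anomalies(data, window=5)     # only the spike is flagged
```

The rolling window is what makes the prediction dynamic: each new point is judged only against its immediate past, so slow drifts in the monitored structure are absorbed into the baseline.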
Predictive maintenance has emerged as an effective tool for curbing maintenance costs, yet prevailing research predominantly concentrates on the abnormal phases. Within the ostensibly stable healthy phase, the reliance on anomaly detection to preempt equipment malfunctions faces the challenge of discerning sudden anomalies. To address this challenge, this paper proposes a dual-task learning approach for bearing anomaly detection and state evaluation of safe regions. The proposed method transforms the execution of the two tasks into an optimization problem over the hypersphere center. By leveraging the monotonicity and distinguishability pertinent to the tasks as the foundation for optimization, it reconstructs the Support Vector Data Description (SVDD) model to ensure equilibrium in the model's performance across the two tasks. Subsequent experiments verify the proposed method's effectiveness, which is interpreted from the perspectives of parameter adjustment and enveloping trade-offs. In the meantime, the experimental results also reveal two deficiencies, in anomaly detection accuracy and in state evaluation metrics; their theoretical analysis inspires us to focus on feature extraction and data collection to achieve improvements. The proposed method lays the foundation for realizing predictive maintenance in the healthy stage by improving condition awareness in safe regions.
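A minimal hypersphere-based detector in the spirit of SVDD (here the center is simply the mean of healthy samples and the radius a distance quantile; the full method instead optimizes the center jointly for both tasks, so treat this as an illustrative assumption):

```python
import math

def fit_hypersphere(points, radius_quantile=0.95):
    """Fit a naive hypersphere: center = mean of healthy samples,
    radius = the chosen quantile of training distances."""
    dim = len(points[0])
    center = tuple(sum(p[i] for p in points) / len(points) for i in range(dim))
    dists = sorted(math.dist(p, center) for p in points)
    radius = dists[min(int(radius_quantile * len(dists)), len(dists) - 1)]
    return center, radius

def is_anomalous(x, center, radius):
    """A sample outside the hypersphere is treated as anomalous."""
    return math.dist(x, center) > radius

healthy = [(0.0, 0.0), (0.1, -0.1), (-0.1, 0.1), (0.05, 0.0)]
center, radius = fit_hypersphere(healthy)
```

The distance to the center also yields a continuous state-evaluation signal inside the safe region, which is what lets one model serve both tasks.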
In the IoT (Internet of Things) domain, the increased use of encryption protocols such as SSL/TLS, VPN (Virtual Private Network), and Tor has led to a rise in attacks leveraging encrypted traffic. While research on anomaly detection using AI (Artificial Intelligence) is actively progressing, the encrypted nature of the data poses challenges for labeling, resulting in data imbalance and biased feature extraction toward specific nodes. This study proposes a reconstruction error-based anomaly detection method using an autoencoder (AE) that utilizes packet metadata excluding specific node information. The proposed method omits biased packet metadata such as IP and Port and trains the detection model using only normal data, leveraging a small amount of packet metadata. This makes it well-suited for direct application in IoT environments due to its low resource consumption. In experiments comparing feature extraction methods for AE-based anomaly detection, we found that using flow-based features significantly improves accuracy, precision, F1 score, and AUC (Area Under the Receiver Operating Characteristic Curve) score compared to packet-based features. Additionally, for flow-based features, the proposed method showed a 30.17% increase in F1 score and improved false positive rates compared to Isolation Forest and One-Class SVM. Furthermore, the proposed method demonstrated a 32.43% higher AUC when using packet features and a 111.39% higher AUC when using flow features, compared to previously proposed oversampling methods. This study highlights the impact of feature extraction methods on attack detection in imbalanced, encrypted traffic environments and emphasizes that the one-class method using an AE is more effective for attack detection and reducing false positives than traditional oversampling methods.
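The decision rule behind reconstruction error-based detection, reduced to its core (assuming per-sample reconstruction errors have already been produced by some autoencoder; the threshold-as-quantile choice is an illustrative assumption):

```python
def fit_threshold(normal_errors, quantile=0.99):
    """Pick an anomaly threshold as a high quantile of reconstruction
    errors observed on normal (training-only) traffic."""
    s = sorted(normal_errors)
    return s[min(int(quantile * len(s)), len(s) - 1)]

def detect(errors, threshold):
    """Flag samples whose reconstruction error exceeds the threshold."""
    return [e > threshold for e in errors]

normal_errors = [0.01, 0.02, 0.015, 0.03, 0.025]
thr = fit_threshold(normal_errors)
flags = detect([0.02, 0.5], thr)  # normal sample passes, attack is flagged
```

Because the AE is trained only on normal traffic, attacks tend to reconstruct poorly, which is why a simple threshold on the error suffices as the one-class decision.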
The management of network intelligence in Beyond 5G (B5G) networks encompasses the complex challenges of scalability, dynamicity, interoperability, privacy, and security. Addressing these is an essential step towards realizing truly ubiquitous Artificial Intelligence (AI)-based analytics, empowering seamless integration across the entire Continuum (Edge, Fog, Core, Cloud). This paper introduces a Federated Network Intelligence Orchestration approach aimed at scalable and automated Federated Learning (FL)-based anomaly detection in B5G networks. By leveraging a horizontal federated learning approach based on the FedAvg aggregation algorithm, which employs a deep autoencoder model trained on non-anomalous traffic samples to recognize normal behavior, the system orchestrates network intelligence to detect and prevent cyber-attacks. Integrated into a B5G Zero-touch Service Management (ZSM)-aligned security framework, the proposal utilizes multi-domain and multi-tenant orchestration to automate and scale the deployment of FL agents and AI-based anomaly detectors, enhancing reaction capabilities against cyber-attacks. The proposed FL architecture can be dynamically deployed across the B5G Continuum, utilizing a hierarchy of Network Intelligence orchestrators for real-time anomaly and security threat handling. The implementation includes FL enforcement operations for interoperability and extensibility, enabling dynamic deployment, configuration, and reconfiguration on demand. Performance validation of the proposed solution was conducted through dynamic orchestration, FL, and real-time anomaly detection processes using a practical test environment. Analysis of key performance metrics, leveraging the 5G-NIDD dataset, demonstrates the system's capability for automatic and near real-time handling of anomalies and attacks, including real-time network monitoring and countermeasure implementation for mitigation.
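FedAvg's aggregation step is straightforward to sketch (client model weights averaged in proportion to their local sample counts; the flat-list parameter representation is a simplifying assumption):

```python
def fed_avg(client_params, client_sizes):
    """Aggregate per-client parameter vectors into a global model,
    weighting each client by its number of training samples (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# two clients, one small parameter vector each; the larger client dominates
global_params = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# → [2.5, 3.5]
```

In the orchestration described above, each FL agent would send only such parameter vectors upward, never raw traffic, which is what preserves privacy across domains.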
Due to their simple hardware, sensor nodes in IoT are vulnerable to attack, leading to data routing blockages or malicious tampering, which significantly disrupts secure data collection. An Intelligent Active Probing and Trace-back Scheme for IoT Anomaly Detection (APTAD) is proposed to collect integrated IoT data by recruiting Mobile Edge Users (MEUs). (a) An intelligent unsupervised learning approach is used to identify anomalous data from the data collected by MEUs and to help identify anomalous nodes. (b) MEUs are recruited to trace back anomalies, and a series of trust calculation methods is proposed to determine the trust of nodes. (c) Finally, the number of active detection packets and the detection paths are designed so as to accurately identify the trust of nodes in the IoT at the minimum network cost. Extensive experimental results show that the recruiting cost and average anomaly detection time are reduced by a factor of 6.5 and by 34.33%, respectively, while the accuracy of trust identification is improved by 20%.
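One simple way to realize a probe-based trust score is a Laplace-smoothed success ratio over active detection packets (the paper's actual trust calculus is more elaborate, so this is only an illustrative assumption):

```python
def trust_score(successful_probes, total_probes):
    """Laplace-smoothed ratio of successful probe responses; a node
    with no probing history starts at a neutral 0.5."""
    return (successful_probes + 1) / (total_probes + 2)

def is_suspect(successful_probes, total_probes, threshold=0.3):
    """Flag nodes whose trust falls below the threshold."""
    return trust_score(successful_probes, total_probes) < threshold

good = trust_score(9, 10)   # mostly answered probes → high trust
bad = trust_score(0, 10)    # probes keep failing → low trust
```

The smoothing term keeps a single dropped probe from condemning an otherwise honest node, which matters when probing budgets are minimized as in point (c).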
With the popularisation of intelligent power systems, power devices vary in shape, number, and specification. This means that power data exhibits distributional variability, so the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To address the distributional variability of power data, we developed a sliding window-based data adjustment method for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the model's anomaly detection accuracy. To verify the effectiveness of the proposed method, we conducted effectiveness comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in accuracy but also reduces the amount of parameter computation during feature matching and improves detection speed.
The identification and mitigation of anomalous data, characterized by deviations from normal patterns or singularities, stand as critical endeavors in modern technological landscapes, spanning domains such as Non-Fungible Tokens (NFTs), cybersecurity, and the burgeoning metaverse. This paper presents a novel proposal aimed at refining anomaly detection methodologies, with a particular focus on continuous data streams. The essence of the proposed approach lies in analyzing the rate of change within such data streams, leveraging this dynamic aspect to discern anomalies with heightened precision and efficacy. Through empirical evaluation, our method demonstrates a marked improvement over existing techniques, showcasing more nuanced and sophisticated results. Moreover, we envision a trajectory of continuous research and development, wherein iterative refinement and supplementation will tailor our approach to various anomaly detection scenarios, ensuring adaptability and robustness in real-world applications.
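The rate-of-change idea reduces to flagging large first differences between consecutive stream readings (the fixed threshold here is an illustrative simplification of the paper's approach):

```python
def rate_of_change_anomalies(stream, max_rate):
    """Return indices where the absolute change between consecutive
    readings exceeds max_rate."""
    return [
        i for i in range(1, len(stream))
        if abs(stream[i] - stream[i - 1]) > max_rate
    ]

readings = [10.0, 10.2, 10.1, 14.9, 10.3]
anomalies = rate_of_change_anomalies(readings, max_rate=1.0)  # [3, 4]
```

Note that a single spike is flagged twice, on the jump up and on the return; stream-oriented refinements typically de-duplicate such pairs.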
With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built relying on security experts' empirical knowledge and attack features. Although this approach can achieve higher detection performance, it requires huge human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing dependence on prior knowledge. However, current semantic features still suffer from singularity of semantic expression, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve the above problems. The method first preprocesses web requests and then extracts word-level and character-level semantic features of URLs via a convolutional neural network (CNN), respectively. Three loss functions are constructed to reduce the losses between features, labels, and categories. Experiments on the HTTP CSIC 2010, Malicious URLs, and HttpParams datasets verify the proposed method. The results show that, compared with machine learning and deep learning methods and the BERT model, the proposed method has better detection performance, achieving the best detection rate of 99.16% on the HttpParams dataset.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
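A compact genetic-algorithm sketch for the adaptive feature-selection step (toy fitness function, tournament selection with elitism; all details here are hypothetical and not the GAADPSDNN configuration):

```python
import random

def ga_select_features(fitness, n_features, pop_size=20, generations=30, seed=1):
    """Evolve binary feature masks toward higher fitness using tournament
    selection, single-point crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():  # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = [best[:]]  # elitism: keep the best mask unchanged
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:  # occasional bit-flip mutation
                j = rng.randrange(n_features)
                child[j] ^= 1
            children.append(child)
        pop = children
        best = max(pop, key=fitness)
    return best

# toy fitness: features 0 and 2 are informative; each active feature costs a little
useful = {0, 2}
def fitness(mask):
    return sum(1.0 for i, b in enumerate(mask) if b and i in useful) \
           - 0.1 * sum(mask)

best_mask = ga_select_features(fitness, n_features=6)
```

In a real detector the fitness would wrap a validation-set evaluation of the DNN on the masked features, trading accuracy against the per-feature cost on the IoT device.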
In the Industrial Internet of Things (IIoT), sensors generate time series data that reflect the working state of a system. When systems are attacked, timely identification of outliers in these time series is critical to ensure security. Although many anomaly detection methods have been proposed, the temporal correlation of the time series from the same sensor and the state (spatial) correlation between different sensors are rarely considered simultaneously. Owing to the superior capability of the Transformer in learning time series features, this paper proposes a time series anomaly detection method based on a spatial-temporal network and an improved Transformer. Additionally, methods based on graph neural networks typically include a graph structure learning module and an anomaly detection module, which are interdependent. However, in the initial phase of training, since neither module has reached an optimal state, their performance may influence each other. This makes it hard for an end-to-end training approach to effectively direct the learning trajectory of each module. This interdependence between the modules, coupled with the initial instability, may prevent the model from finding the optimal solution during training, resulting in unsatisfactory results. We therefore introduce an adaptive graph structure learning method to obtain the optimal model parameters and graph structure. Experiments on two publicly available datasets demonstrate that the proposed method attains better anomaly detection results than other methods.
System logs, serving as a pivotal data source for performance monitoring and anomaly detection, play an indispensable role in assuring service stability and reliability. Despite this, the majority of existing log-based anomaly detection methodologies predominantly depend on the sequence or quantity attributes of logs, utilizing solely a single Recurrent Neural Network (RNN) or one of its variant sequence models for detection. These approaches have not thoroughly exploited the semantic information embedded in logs, exhibit limited adaptability to novel logs, and a single model struggles to fully unearth the potential features within a log sequence. Addressing these challenges, this article proposes a hybrid architecture based on a multi-scale convolutional neural network, efficient channel attention, and Mogrifier gated recurrent unit networks (LogCEM), which amalgamates multiple neural network technologies. Capitalizing on the superior performance of the robustly optimized BERT approach (RoBERTa) in the realm of natural language processing, we employ RoBERTa to extract the original word vectors from each word in the log template. In conjunction with the enhanced Smooth Inverse Frequency (SIF) algorithm, we generate more precise log sentence vectors, thereby achieving an in-depth representation of log semantics. Subsequently, these log vector sequences are fed into a hybrid neural network that fuses a 1D Multi-Scale Convolutional Neural Network (MSCNN), an Efficient Channel Attention (ECA) mechanism, and a Mogrifier Gated Recurrent Unit (GRU). This amalgamation enables the model to concurrently capture the local and global dependencies of the log sequence and autonomously learn the significance of different log sequences, thereby markedly enhancing the efficacy of log anomaly detection. To validate the effectiveness of the LogCEM model, we conducted evaluations on two authoritative open-source datasets. The experimental results demonstrate that LogCEM not only exhibits excellent accuracy and robustness but also outperforms the current mainstream log anomaly detection methods.
The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which impacts the normal operation of the IoT. To mitigate the impact caused by the high false positive rate of ADSs, this paper proposes an ADS management scheme for clustered IoT. First, we model the data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. In the presence of a high false positive rate in ADSs, to deal with the trade-off between the security and availability of data, we develop a linear programming model referred to as the security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve it on an IoT simulation platform. Last, we reveal the effect of several factors on the maximum combined detection rate through theoretical analysis. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by the high false positive rates in ADSs.
In this paper, we propose a novel anomaly detection method for data centers based on a combination of graph structure and an abnormal attention mechanism. The method leverages sensor monitoring data from target power substations to construct multidimensional time series. These time series are subsequently transformed into graph structures, and the corresponding adjacency matrices are obtained. By incorporating the adjacency matrices and additional weights associated with the graph structure, an aggregation matrix is derived. The aggregation matrix is then fed into a pre-trained graph convolutional neural network (GCN) to extract graph structure features. Moreover, both the multidimensional time series segments and the graph structure features are input into a pre-trained anomaly detection model, yielding anomaly detection results that help identify abnormal data. The anomaly detection model consists of a multi-level encoder-decoder module, wherein each level includes a transformer encoder and decoder based on correlation differences. The attention module in the encoding layer adopts an abnormal attention module with a dual-branch structure. Experimental results demonstrate that our proposed method significantly improves the accuracy and stability of anomaly detection.
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
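The time-to-frequency conversion step can be illustrated with a naive discrete Fourier transform (CAFFN itself uses a fast Fourier transform; the O(N²) DFT below is chosen only for readability):

```python
import math

def dft_magnitudes(x):
    """Magnitude spectrum of a real-valued series via a naive DFT."""
    n = len(x)
    mags = []
    for k in range(n):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# a pure tone with 2 cycles over 16 samples concentrates energy at bin 2
x = [math.sin(2 * math.pi * 2 * t / 16) for t in range(16)]
mags = dft_magnitudes(x)
```

Stacking such spectra over sliding windows is one way to obtain the 2D representation in which periodic structure, and deviations from it, become spatially local features.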
In recent years, anomaly detection has attracted much attention in industrial production. As traditional anomaly detection methods usually rely on direct comparison of samples, they often ignore the intrinsic relationships between samples, resulting in poor accuracy in recognizing anomalous samples. To address this problem, a knowledge distillation anomaly detection method based on feature reconstruction was proposed in this study. Knowledge distillation was performed after inverting the structure of the teacher-student network, to avoid the teacher and student networks sharing the same inputs and similar structures. Representability was improved by using feature splicing to unify features at different levels, and the merged features were processed and reconstructed using an improved Transformer. The experimental results show that the proposed method achieves better performance on the MVTec dataset, verifying its effectiveness and feasibility in anomaly detection tasks. This study provides a new idea for improving the accuracy and efficiency of anomaly detection.
The increasing amount and intricacy of network traffic in the modern digital era have worsened the difficulty of identifying abnormal behaviours that may indicate potential security breaches or operational interruptions. Conventional detection approaches face challenges in keeping up with the ever-changing strategies of cyber-attacks, resulting in heightened susceptibility and significant harm to network infrastructures. To tackle this urgent issue, this project focused on developing an effective anomaly detection system that utilizes machine learning technology. The suggested model utilizes contemporary machine learning algorithms and frameworks to autonomously detect deviations from typical network behaviour, promptly identifying anomalous activities that may indicate security breaches or performance difficulties. The solution entails a multi-faceted approach encompassing data collection, preprocessing, feature engineering, model training, and evaluation. The model is trained on a wide range of datasets that include both regular and abnormal network traffic patterns, ensuring that it can adapt to numerous scenarios. The main priority is to ensure that the system is functional and efficient, with a particular emphasis on reducing false positives to avoid unwanted alerts. Additionally, efforts are directed toward improving anomaly detection accuracy so that the model can consistently distinguish between potentially harmful and benign activity. This project aims to greatly strengthen network security against emerging cyber threats and to improve network resilience and reliability.
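Evaluation of such a detector typically revolves around confusion-matrix metrics, with the false-positive rate tracked explicitly; a minimal sketch using the standard definitions (not tied to this project's tooling):

```python
def confusion_metrics(y_true, y_pred):
    """Precision, recall, F1 and false-positive rate for binary labels
    (1 = anomaly, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, f1, fpr

p, r, f1, fpr = confusion_metrics([1, 0, 1, 0, 0], [1, 0, 0, 1, 0])
```

Monitoring FPR alongside F1 is what makes the "reduce unwanted alerts" goal measurable rather than anecdotal.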
Solar arrays are important and indispensable parts of spacecraft, providing the energy needed to operate in orbit and complete on-orbit missions. When a spacecraft is in orbit, its solar array is exposed to the harsh space environment, and with increasing working time, the performance of its internal electronic components gradually degrades until abnormal damage occurs. This damage leaves solar array power generation unable to fully meet the energy demand of the spacecraft. Therefore, timely and accurate detection of solar array anomalies is of great significance for the on-orbit operation and maintenance management of spacecraft. In this paper, we propose an anomaly detection method for spacecraft solar arrays based on an integrated least squares support vector machine (ILS-SVM) model. The method selects correlated telemetry data from spacecraft solar arrays to form a training set and extracts n training subsets from this set; it then obtains n corresponding least squares support vector machine (LS-SVM) submodels by training on these subsets, respectively. The ILS-SVM model is obtained by integrating these submodels through a weighting operation to increase the prediction accuracy. Finally, based on the obtained ILS-SVM model, a parameter-free and unsupervised anomaly determination method is proposed to assess the health status of solar arrays. We use a telemetry data set from a satellite in orbit for experimental verification and find that the proposed method can diagnose solar array anomalies in time and can capture the signs before an anomaly occurs, reflecting the applicability of the method.
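The integration step, weighting the n submodel predictions into one ensemble output, can be sketched as follows (error-inverse weights are an illustrative choice; the paper's weighting operation may differ):

```python
def integrate_submodels(predictions, training_errors):
    """Combine submodel predictions with weights inversely proportional
    to each submodel's training error."""
    eps = 1e-9  # guard against division by zero for a perfect submodel
    weights = [1.0 / (e + eps) for e in training_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# three submodels predict the next telemetry value; the noisy one counts less
pred = integrate_submodels([10.0, 10.4, 12.0], training_errors=[0.1, 0.1, 0.8])
```

A large residual between this ensemble prediction and the actual telemetry reading is then the raw signal on which the unsupervised anomaly determination operates.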
Despite the big success of transfer learning techniques in anomaly detection,it is still challenging to achieve good transition of detection rules merely based on the preferred data in the anomaly detection with one-c...Despite the big success of transfer learning techniques in anomaly detection,it is still challenging to achieve good transition of detection rules merely based on the preferred data in the anomaly detection with one-class classification,especially for the data with a large distribution difference.To address this challenge,a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper.First,by integrating a hypersphere adaptation constraint into domainadversarial neural network,a new hypersphere adversarial training mechanism is designed.Second,an alternative optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source domain and target domain to be as identical as possible.Through transferring oneclass detection rule in the adaptive extraction of domain-invariant feature representation,the end-to-end anomaly detection with one-class classification is then enhanced.Furthermore,a theoretical analysis about the model reliability,as well as the strategy of avoiding invalid and negative transfer,is provided.Experiments are conducted on two typical anomaly detection problems,i.e.,image recognition detection and online early fault detection of rolling bearings.The results demonstrate that the proposed algorithm outperforms the state-of-the-art methods in terms of detection accuracy and robustness.展开更多
Abstract: Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proved helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model first undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs a bidirectional long short-term memory (BiLSTM) network for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
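The oversampling step above can be illustrated with a small, self-contained sketch. The paper uses SVM-SMOTE, which additionally restricts seed samples to those near the SVM decision boundary; the function below (`smote_oversample`, a hypothetical name) implements only the plain SMOTE interpolation core, not the authors' exact procedure.

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and one of its k nearest neighbours (plain SMOTE; SVM-SMOTE
    additionally restricts seeds to points near the SVM boundary)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by Euclidean distance (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Toy minority class: three nearby 2-D points.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote_oversample(minority, n_new=5)
```

Because each synthetic point lies on a segment between two minority samples, all generated points stay inside the minority class's bounding box.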
Funding: Supported by the National Natural Science Foundation of China (61170147), the Scientific Research Project of Zhejiang Provincial Department of Education in China (Y202146796), the Natural Science Foundation of Zhejiang Province in China (LTY22F020003), the Wenzhou Major Scientific and Technological Innovation Project of China (ZG2021029), and the Scientific and Technological Projects of Henan Province in China (202102210172).
Abstract: Integrating Tiny Machine Learning (TinyML) with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level. Constrained devices efficiently implement a Binary Neural Network (BNN) for road feature extraction, utilizing quantization and compression through a pruning strategy. The modifications resulted in a 28-fold decrease in memory usage and a 25% enhancement in inference speed, while accuracy decreased by only 2.5%. The approach shows its superiority over conventional detection algorithms in different road image scenarios. Although constrained by computing resources and training datasets, our results indicate opportunities for future research, demonstrating that quantization and focused optimization can significantly improve machine learning models' accuracy and operational efficiency. Deploying our optimized BNN model on the low-power ARM Cortex-M0 demonstrates the practical feasibility and substantial benefits of advanced machine learning in edge computing. The analysis delves into the educational significance of TinyML and its essential function in analyzing road networks using remote sensing, suggesting ways to improve smart city frameworks in road network assessment, traffic management, and autonomous vehicle navigation systems, and emphasizing the importance of new technologies for maintaining and safeguarding road networks.
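The memory savings of a BNN come from sign-binarizing weights to ±1 and replacing multiply-accumulate with XNOR-and-popcount on packed bits. The sketch below is a generic illustration of that identity, not the authors' model; all function names are hypothetical.

```python
def binarize(w):
    """Sign-binarize real-valued weights to +/-1 (the core BNN step)."""
    return [1 if v >= 0 else -1 for v in w]

def to_bits(b):
    """Pack a +/-1 vector into an integer bitmask (+1 -> bit set)."""
    mask = 0
    for i, v in enumerate(b):
        if v == 1:
            mask |= 1 << i
    return mask

def xnor_popcount_dot(a_bits, w_bits, n):
    """Dot product of two +/-1 vectors over n bits:
    matches = popcount(~(a XOR w)); dot = 2*matches - n."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = binarize([0.3, -1.2, 0.8, -0.1])   # -> [1, -1, 1, -1]
w = binarize([0.5, -0.7, -0.2, 0.9])   # -> [1, -1, -1, 1]
n = len(a)
dot_ref = sum(x * y for x, y in zip(a, w))        # plain dot product
dot_fast = xnor_popcount_dot(to_bits(a), to_bits(w), n)
```

On hardware, the packed form stores 32 weights per word, which is where the order-of-magnitude memory reduction reported above originates.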
Funding: Supported by the Research and Development Center of Transport Industry of New Generation of Artificial Intelligence Technology (Grant No. 202202H), the National Key R&D Program of China (Grant No. 2019YFB1600702), and the National Natural Science Foundation of China (Grant Nos. 51978600 & 51808336).
Abstract: Structural Health Monitoring (SHM) systems have become a crucial tool for the operational management of long tunnels. For immersed tunnels exposed to both traffic loads and the effects of the marine environment, efficiently identifying abnormal conditions from the extensive unannotated SHM data presents a significant challenge. This study proposed a model-based approach for anomaly detection and conducted validation and comparative analysis of two distinct temporal predictive models using SHM data from a real immersed tunnel. First, a dynamic predictive model-based anomaly detection method is proposed, which utilizes a rolling time window for modeling to achieve dynamic prediction. Leveraging the assumption of temporal data similarity, the deviation of a value from its interval prediction was employed to determine the abnormality of the data. Subsequently, dynamic predictive models were constructed based on the Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models. The hyperparameters of these models were optimized and selected using monitoring data from the immersed tunnel, yielding viable static and dynamic predictive models. Finally, the models were applied to the same segment of SHM data to validate the effectiveness of the anomaly detection approach based on dynamic predictive modeling. A detailed comparative analysis discusses the discrepancies in temporal anomaly detection between the ARIMA- and LSTM-based models. The results demonstrated that the dynamic predictive model-based anomaly detection approach was effective for dealing with unannotated SHM data. In a comparison between ARIMA and LSTM, ARIMA demonstrated higher modeling efficiency, rendering it suitable for short-term predictions. In contrast, the LSTM model exhibited a greater capacity to capture long-term performance trends and enhanced early-warning capabilities, thereby yielding superior overall performance.
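The rolling-window, interval-deviation idea above can be sketched with the simplest possible predictor: a rolling mean with a k-sigma prediction interval standing in for the ARIMA/LSTM forecasts. The function name and the k = 3 rule are illustrative assumptions, not the paper's exact formulation.

```python
from statistics import mean, stdev

def rolling_interval_anomalies(series, window=5, k=3.0):
    """Flag each point whose deviation from the rolling-window forecast
    (here, the window mean) exceeds k sample standard deviations of the
    window -- a minimal stand-in for the dynamic predictive models."""
    flags = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = mean(hist), stdev(hist)
        half_width = k * max(sigma, 1e-9)  # guard against zero variance
        flags.append(abs(series[t] - mu) > half_width)
    return flags

# A stable signal with one injected spike at index 6.
data = [10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 25.0, 10.0]
flags = rolling_interval_anomalies(data, window=5)
```

Note that once the spike enters the window, it inflates the interval, so the point after it is not flagged; this is the kind of masking effect that motivates more careful dynamic modeling.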
Funding: Supported by the Sichuan Provincial Key Research and Development Program of China (Grant No. 2023YFG0351) and the National Natural Science Foundation of China (Grant No. 61833002).
Abstract: Predictive maintenance has emerged as an effective tool for curbing maintenance costs, yet prevailing research predominantly concentrates on the abnormal phases. Within the ostensibly stable healthy phase, the reliance on anomaly detection to preempt equipment malfunctions faces the challenge of sudden anomaly discernment. To address this challenge, this paper proposes a dual-task learning approach for bearing anomaly detection and state evaluation of safe regions. The proposed method transforms the execution of the two tasks into an optimization problem for the hypersphere center. By leveraging the monotonicity and distinguishability pertinent to the tasks as the foundation for optimization, it reconstructs the SVDD model to ensure equilibrium in the model's performance across the two tasks. Subsequent experiments verify the proposed method's effectiveness, which is interpreted from the perspectives of parameter adjustment and enveloping trade-offs. Meanwhile, the experimental results also reveal two deficiencies, in anomaly detection accuracy and state evaluation metrics. Their theoretical analysis inspires us to focus on feature extraction and data collection to achieve improvements. The proposed method lays the foundation for realizing predictive maintenance in the healthy stage by improving condition awareness in safe regions.
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of Security Monitoring Technology Based Network Behavior against Encrypted Cyber Threats in ICT Convergence Environment).
Abstract: In the IoT (Internet of Things) domain, the increased use of encryption protocols such as SSL/TLS, VPN (Virtual Private Network), and Tor has led to a rise in attacks leveraging encrypted traffic. While research on anomaly detection using AI (Artificial Intelligence) is actively progressing, the encrypted nature of the data poses challenges for labeling, resulting in data imbalance and biased feature extraction toward specific nodes. This study proposes a reconstruction error-based anomaly detection method using an autoencoder (AE) that utilizes packet metadata excluding specific node information. The proposed method omits biased packet metadata such as IP and Port and trains the detection model using only normal data, leveraging a small amount of packet metadata. This makes it well-suited for direct application in IoT environments due to its low resource consumption. In experiments comparing feature extraction methods for AE-based anomaly detection, we found that using flow-based features significantly improves accuracy, precision, F1 score, and AUC (Area Under the Receiver Operating Characteristic Curve) score compared to packet-based features. Additionally, for flow-based features, the proposed method showed a 30.17% increase in F1 score and improved false positive rates compared to Isolation Forest and One-Class SVM. Furthermore, the proposed method demonstrated a 32.43% higher AUC when using packet features and a 111.39% higher AUC when using flow features, compared to previously proposed oversampling methods. This study highlights the impact of feature extraction methods on attack detection in imbalanced, encrypted traffic environments and emphasizes that the one-class method using an AE is more effective for attack detection and reducing false positives than traditional oversampling methods.
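The reconstruction-error decision rule described above can be sketched independently of any particular autoencoder. The threshold rule (mean + k·std of errors on normal-only data) is one common convention, assumed here rather than taken from the paper; the errors and function names are illustrative.

```python
from statistics import mean, stdev

def fit_threshold(normal_errors, k=3.0):
    """Set the anomaly threshold from reconstruction errors observed on
    normal-only training data (mean + k*std is one common rule)."""
    return mean(normal_errors) + k * stdev(normal_errors)

def is_anomalous(error, threshold):
    """A sample is flagged when its reconstruction error exceeds the
    threshold learned from normal traffic."""
    return error > threshold

# Errors an AE trained only on normal flows might produce: small on
# normal samples, large on attack samples it never learned to rebuild.
normal_errors = [0.10, 0.12, 0.09, 0.11, 0.10]
threshold = fit_threshold(normal_errors)
```

Because the threshold is fit purely on normal data, no attack labels are needed, which is what makes the one-class setup attractive for imbalanced encrypted traffic.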
Funding: Supported by the grants PID2020-112675RBC44 (ONOFRE-3), funded by MCIN/AEI/10.13039/501100011033; the Horizon Project RIGOUROUS, funded by the European Commission, GA: 101095933; and TSI-063000-2021-{36,44,45,62} (Cerberus), funded by MAETD's 2021 UNICO I+D Program.
Abstract: The management of network intelligence in Beyond 5G (B5G) networks encompasses the complex challenges of scalability, dynamicity, interoperability, privacy, and security. These are essential steps towards achieving the realization of truly ubiquitous Artificial Intelligence (AI)-based analytics, empowering seamless integration across the entire Continuum (Edge, Fog, Core, Cloud). This paper introduces a Federated Network Intelligence Orchestration approach aimed at scalable and automated Federated Learning (FL)-based anomaly detection in B5G networks. By leveraging a horizontal federated learning approach based on the FedAvg aggregation algorithm, which employs a deep autoencoder model trained on non-anomalous traffic samples to recognize normal behavior, the system orchestrates network intelligence to detect and prevent cyber-attacks. Integrated into a B5G Zero-touch Service Management (ZSM)-aligned security framework, the proposal utilizes multi-domain and multi-tenant orchestration to automate and scale the deployment of FL agents and AI-based anomaly detectors, enhancing reaction capabilities against cyber-attacks. The proposed FL architecture can be dynamically deployed across the B5G Continuum, utilizing a hierarchy of Network Intelligence orchestrators for real-time anomaly and security threat handling. The implementation includes FL enforcement operations for interoperability and extensibility, enabling dynamic deployment, configuration, and reconfiguration on demand. Performance validation of the proposed solution was conducted through dynamic orchestration, FL, and real-time anomaly detection processes using a practical test environment. Analysis of key performance metrics, leveraging the 5G-NIDD dataset, demonstrates the system's capability for automatic and near real-time handling of anomalies and attacks, including real-time network monitoring and countermeasure implementation for mitigation.
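The FedAvg aggregation step named above has a simple, well-known core: the server replaces the global model with the sample-size-weighted average of the client models. The sketch below shows only that step, on flattened parameter vectors; the function name is hypothetical.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted mean of client model parameters,
    weighted by each client's number of training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients sharing a 2-parameter model; the larger client (300
# samples) pulls the global model toward its own weights.
w_a, w_b = [1.0, 2.0], [3.0, 4.0]
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
```

In the architecture described above, each FL agent would send its locally trained autoencoder weights to the orchestrating aggregator, which applies exactly this averaging before redistributing the global model.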
Funding: Supported by the National Natural Science Foundation of China (62072475) and the Fundamental Research Funds for the Central Universities of Central South University (CX20230356).
Abstract: Due to their simple hardware, sensor nodes in the IoT are vulnerable to attack, leading to data-routing blockages or malicious tampering, which significantly disrupts secure data collection. An Intelligent Active Probing and Trace-back Scheme for IoT Anomaly Detection (APTAD) is proposed to collect integrated IoT data by recruiting Mobile Edge Users (MEUs). (a) An intelligent unsupervised learning approach is used to identify anomalous data from the data collected by MEUs and to help identify anomalous nodes. (b) MEUs are recruited to trace back anomalies, and a series of trust calculation methods is proposed to determine the trust of nodes. (c) Finally, the number of active detection packets and the detection paths are designed so as to accurately identify the trust of nodes in the IoT at minimum network cost. A large number of experimental results show that the recruiting cost and average anomaly detection time are reduced by a factor of 6.5 and by 34.33%, respectively, while the accuracy of trust identification is improved by 20%.
Abstract: With the popularization of intelligent power, power devices have different shapes, numbers, and specifications. This means that power data exhibits distributional variability, and the model learning process cannot achieve sufficient extraction of data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. Aiming at the distributional variability of power data, this paper develops a sliding window-based data adjustment method for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted effectiveness comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the method proposed in this paper not only has an advantage in model accuracy but also reduces the amount of parameter calculation in the feature matching process and improves the detection speed.
Funding: Supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2019S1A5B5A02041334).
Abstract: The identification and mitigation of anomalous data, characterized by deviations from normal patterns or singularities, stand as critical endeavors in modern technological landscapes, spanning domains such as Non-Fungible Tokens (NFTs), cyber-security, and the burgeoning metaverse. This paper presents a novel proposal aimed at refining anomaly detection methodologies, with a particular focus on continuous data streams. The essence of the proposed approach lies in analyzing the rate of change within such data streams, leveraging this dynamic aspect to discern anomalies with heightened precision and efficacy. Through empirical evaluation, our method demonstrates a marked improvement over existing techniques, producing more nuanced and sophisticated results. Moreover, we envision a trajectory of continuous research and development, wherein iterative refinement and supplementation will tailor our approach to various anomaly detection scenarios, ensuring adaptability and robustness in real-world applications.
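The rate-of-change idea above, in its simplest form, flags a point when the absolute first difference between consecutive readings exceeds a bound. The abstract does not specify its exact rule, so the threshold-on-first-difference sketch below is an assumption for illustration only.

```python
def rate_of_change_anomalies(stream, max_rate):
    """Flag stream positions where the absolute first difference
    (rate of change per step) exceeds max_rate."""
    return [
        abs(curr - prev) > max_rate
        for prev, curr in zip(stream, stream[1:])
    ]

# A slowly drifting reading with one abrupt jump and recovery; both the
# jump up and the drop back exceed the allowed per-step rate.
readings = [20.0, 20.5, 21.0, 35.0, 21.5]
flags = rate_of_change_anomalies(readings, max_rate=5.0)
```

Unlike a fixed-value threshold, this rule tolerates slow drift in the signal's level while still catching sudden transitions, which is why it suits continuous streams.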
Funding: Supported by a grant from the National Natural Science Foundation of China (Nos. 11905239, 12005248, and 12105303).
Abstract: With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built relying on security experts' empirical knowledge and attack features. Although this approach can achieve higher detection performance, it requires huge human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing dependence on prior knowledge. However, current semantic features still suffer from singularity of semantic expression, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve the above problems. The method first preprocesses web requests, then extracts word-level and character-level semantic features of URLs via a convolutional neural network (CNN), respectively, and constructs three loss functions to reduce the losses between features, labels, and categories. Experiments on the HTTP CSIC 2010, Malicious URLs, and HttpParams datasets verify the proposed method. Results show that, compared with machine learning methods, deep learning methods, and the BERT model, the proposed method has better detection performance, achieving the best detection rate of 99.16% on the HttpParams dataset.
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance of and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which achieves 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
Funding: This work is partly supported by the National Key Research and Development Program of China (Grant No. 2020YFB1805403), the National Natural Science Foundation of China (Grant No. 62032002), and the 111 Project (Grant No. B21049).
Abstract: In the Industrial Internet of Things (IIoT), sensors generate time series data to reflect the working state. When systems are attacked, timely identification of outliers in the time series is critical to ensure security. Although many anomaly detection methods have been proposed, the temporal correlation of the time series over the same sensor and the state (spatial) correlation between different sensors are rarely considered simultaneously in these methods. Owing to the superior capability of the Transformer in learning time series features, this paper proposes a time series anomaly detection method based on a spatial-temporal network and an improved Transformer. Additionally, methods based on graph neural networks typically include a graph structure learning module and an anomaly detection module, which are interdependent. However, in the initial phase of training, since neither module has reached an optimal state, their performance may influence each other. This scenario makes it hard for the end-to-end training approach to effectively direct the learning trajectory of each module. This interdependence between the modules, coupled with the initial instability, may make it hard for the model to find the optimal solution during training, resulting in unsatisfactory results. We therefore introduce an adaptive graph structure learning method to obtain the optimal model parameters and graph structure. Experiments on two publicly available datasets demonstrate that the proposed method attains better anomaly detection results than other methods.
Funding: Supported by the Science and Technology Program of State Grid Corporation of China (Grant SGSXDK00DJJS2250061).
Abstract: System logs, serving as a pivotal data source for performance monitoring and anomaly detection, play an indispensable role in assuring service stability and reliability. Despite this, the majority of existing log-based anomaly detection methodologies predominantly depend on the sequence or quantity attributes of logs, utilizing solely a single Recurrent Neural Network (RNN) or one of its variant sequence models for detection. These approaches have not thoroughly exploited the semantic information embedded in logs, exhibit limited adaptability to novel logs, and a single model struggles to fully unearth the potential features within the log sequence. Addressing these challenges, this article proposes a hybrid architecture based on a multi-scale convolutional neural network, efficient channel attention, and Mogrifier gated recurrent unit networks (LogCEM), which amalgamates multiple neural network technologies. Capitalizing on the superior performance of the Robustly Optimized BERT Approach (RoBERTa) in the realm of natural language processing, we employ RoBERTa to extract the original word vectors from each word in the log template. In conjunction with the enhanced Smooth Inverse Frequency (SIF) algorithm, we generate more precise log sentence vectors, thereby achieving an in-depth representation of log semantics. Subsequently, these log vector sequences are fed into a hybrid neural network, which fuses a 1D Multi-Scale Convolutional Neural Network (MSCNN), an Efficient Channel Attention mechanism (ECA), and a Mogrifier Gated Recurrent Unit (GRU). This amalgamation enables the model to concurrently capture the local and global dependencies of the log sequence and autonomously learn the significance of different log sequences, thereby markedly enhancing the efficacy of log anomaly detection. To validate the effectiveness of the LogCEM model, we conducted evaluations on two authoritative open-source datasets. The experimental results demonstrate that LogCEM not only exhibits excellent accuracy and robustness, but also outperforms the current mainstream log anomaly detection methods.
Funding: This study was funded by the Chongqing Normal University Startup Foundation for PhD (22XLB021) and was also supported by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2023B40).
Abstract: The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for the IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which will impact the normal operation of the IoT. To mitigate the impact caused by the high false positive rate of ADSs, this paper proposes an ADS management scheme for clustered IoT. First, we model data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. In the presence of a high false positive rate in ADSs, to deal with the trade-off between the security and availability of data, we develop a linear programming model referred to as the security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve the ST model on an IoT simulation platform. Last, we reveal the effect of several factors on the maximum combined detection rate through theoretical analysis. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by the high false positive rates in ADSs.
Funding: Supported by the Science and Technology Project of China Southern Power Grid Company, Ltd. (031200KK52200003) and the National Natural Science Foundation of China (Nos. 62371253, 52278119).
Abstract: In this paper, we propose a novel anomaly detection method for data centers based on a combination of graph structure and an abnormal attention mechanism. The method leverages the sensor monitoring data from target power substations to construct multidimensional time series. These time series are subsequently transformed into graph structures, and corresponding adjacency matrices are obtained. By incorporating the adjacency matrices and additional weights associated with the graph structure, an aggregation matrix is derived. The aggregation matrix is then fed into a pre-trained graph convolutional neural network (GCN) to extract graph structure features. Moreover, both the multidimensional time series segments and the graph structure features are input into a pre-trained anomaly detection model, resulting in corresponding anomaly detection results that help identify abnormal data. The anomaly detection model consists of a multi-level encoder-decoder module, wherein each level includes a transformer encoder and decoder based on correlation differences. The attention module in the encoding layer adopts an abnormal attention module with a dual-branch structure. Experimental results demonstrate that our proposed method significantly improves the accuracy and stability of anomaly detection.
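One common way to turn multidimensional sensor series into the graph structure mentioned above is to connect sensors whose series are strongly correlated. The abstract does not state how its adjacency matrices are built, so the correlation-threshold construction below is an assumption for illustration; function names are hypothetical.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def correlation_adjacency(series_list, tau=0.8):
    """Binary adjacency over sensors: add an edge between two sensors
    when the absolute correlation of their series exceeds tau."""
    n = len(series_list)
    return [
        [1 if i != j and abs(pearson(series_list[i], series_list[j])) > tau
         else 0
         for j in range(n)]
        for i in range(n)
    ]

sensors = [
    [1.0, 2.0, 3.0, 4.0],   # rising
    [2.0, 4.0, 6.0, 8.0],   # rising, perfectly correlated with sensor 0
    [5.0, 1.0, 4.0, 2.0],   # unrelated
]
adj = correlation_adjacency(sensors)
```

The resulting adjacency matrix (optionally weighted instead of binary) is what a GCN would consume to aggregate features across related sensors.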
Funding: Supported in part by the National Natural Science Foundation of China (Grants 62376172, 62006163, 62376043), in part by the National Postdoctoral Program for Innovative Talents (Grant BX20200226), and in part by the Sichuan Science and Technology Planning Project (Grants 2022YFSY0047, 2022YFQ0014, 2023ZYD0143, 2022YFH0021, 2023YFQ0020, 24QYCX0354, 24NSFTD0025).
Abstract: Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
Abstract: In recent years, anomaly detection has attracted much attention in industrial production. As traditional anomaly detection methods usually rely on direct comparison of samples, they often ignore the intrinsic relationships between samples, resulting in poor accuracy in recognizing anomalous samples. To address this problem, a knowledge distillation anomaly detection method based on feature reconstruction is proposed in this study. Knowledge distillation is performed after inverting the structure of the teacher-student network, to avoid the teacher and student networks sharing the same inputs and similar structures. Representability is improved by using feature splicing to unify features at different levels, and the merged features are processed and reconstructed using an improved Transformer. The experimental results show that the proposed method achieves better performance on the MVTec dataset, verifying its effectiveness and feasibility in anomaly detection tasks. This study provides a new idea for improving the accuracy and efficiency of anomaly detection.
Abstract: The increasing amount and intricacy of network traffic in the modern digital era have worsened the difficulty of identifying abnormal behaviours that may indicate potential security breaches or operational interruptions. Conventional detection approaches face challenges in keeping up with the ever-changing strategies of cyber-attacks, resulting in heightened susceptibility and significant harm to network infrastructures. In order to tackle this urgent issue, this project focused on developing an effective anomaly detection system that utilizes machine learning technology. The suggested model utilizes contemporary machine learning algorithms and frameworks to autonomously detect deviations from typical network behaviour. It promptly identifies anomalous activities that may indicate security breaches or performance difficulties. The solution entails a multi-faceted approach encompassing data collection, preprocessing, feature engineering, model training, and evaluation. By utilizing machine learning methods, the model is trained on a wide range of datasets that include both regular and abnormal network traffic patterns. This training ensures that the model can adapt to numerous scenarios. The main priority is to ensure that the system is functional and efficient, with a particular emphasis on reducing false positives to avoid unwanted alerts. Additionally, efforts are directed at improving anomaly detection accuracy so that the model can consistently distinguish between potentially harmful and benign activity. This project aims to greatly strengthen network security by addressing emerging cyber threats and improving network resilience and reliability.
Funding: Supported by the National Natural Science Foundation of China (71901210, 61973310).
Abstract: Solar arrays are important and indispensable parts of spacecraft, providing energy support for spacecraft to operate in orbit and complete on-orbit missions. When a spacecraft is in orbit, because the solar array is exposed to the harsh space environment, the performance of its internal electronic components gradually degrades with increasing working time until abnormal damage occurs. This damage leaves solar array power generation unable to fully meet the energy demand of the spacecraft. Therefore, timely and accurate detection of solar array anomalies is of great significance for the on-orbit operation and maintenance management of spacecraft. In this paper, we propose an anomaly detection method for spacecraft solar arrays based on the integrated least squares support vector machine (ILS-SVM) model. The method selects correlated telemetry data from spacecraft solar arrays to form a training set and extracts n groups of training subsets from this set; it then obtains n corresponding least squares support vector machine (LS-SVM) submodels by training on these subsets, respectively. After that, the ILS-SVM model is obtained by integrating these submodels through a weighting operation to increase the prediction accuracy. Finally, based on the obtained ILS-SVM model, a parameter-free and unsupervised anomaly determination method is proposed to detect the health status of solar arrays. We use a telemetry data set from a satellite in orbit to carry out experimental verification and find that the proposed method can diagnose solar array anomalies in time and can capture the signs before a solar array anomaly occurs, which reflects the applicability of the method.
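The "weighting operation" that fuses the n LS-SVM submodels is not detailed in the abstract; a normalized weighted average of the submodels' predictions, with weights set from each submodel's validation performance, is one natural reading and is what the sketch below assumes. Names are hypothetical.

```python
def weighted_ensemble_predict(submodel_preds, weights):
    """Combine submodel predictions by a normalized weighted average --
    one plausible form of the integration step that fuses the n LS-SVM
    submodels into the ILS-SVM model."""
    total = sum(weights)
    return [
        sum(w * p for w, p in zip(weights, preds_at_t)) / total
        for preds_at_t in zip(*submodel_preds)
    ]

# Three submodels predicting the same two telemetry time steps; weights
# could be derived from each submodel's validation error (higher weight
# for the more accurate submodel).
preds = [[10.0, 12.0], [11.0, 13.0], [9.0, 14.0]]
ensemble = weighted_ensemble_predict(preds, weights=[0.5, 0.3, 0.2])
```

Anomaly determination would then compare the observed telemetry against this ensemble prediction, flagging sustained large deviations.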
Funding: Supported by the National Natural Science Foundation of China (NSFC) (U1704158), the Henan Province Technologies Research and Development Project of China (212102210103), the NSFC Development Funding of Henan Normal University (2020PL09), and the University of Manitoba Research Grants Program (URGP).
Abstract: Despite the great success of transfer learning techniques in anomaly detection, it is still challenging to achieve a good transfer of detection rules merely based on the preferred data in one-class anomaly detection, especially for data with a large distribution difference. To address this challenge, a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper. First, by integrating a hypersphere adaptation constraint into a domain-adversarial neural network, a new hypersphere adversarial training mechanism is designed. Second, an alternating optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source domain and target domain to be as identical as possible. Through transferring the one-class detection rule in the adaptive extraction of domain-invariant feature representations, end-to-end anomaly detection with one-class classification is then enhanced. Furthermore, a theoretical analysis of the model reliability, as well as a strategy for avoiding invalid and negative transfer, is provided. Experiments are conducted on two typical anomaly detection problems, i.e., image recognition detection and online early fault detection of rolling bearings. The results demonstrate that the proposed algorithm outperforms state-of-the-art methods in terms of detection accuracy and robustness.