To study differences in industrial location across industries, this article tests spatial agglomeration across industries and firm sizes at the city level. Our research is based on a unique plant-level data set for Beijing and employs a distance-based approach, which treats space as continuous. Unlike previous studies, we set two sets of references for service and manufacturing industries respectively, adapted to investigation of the intra-urban area. Comparing eight types of industries and different firm sizes, we find that: 1) producer services, high-tech industries and labor-intensive manufacturing industries are more likely to cluster, whereas personal services and capital-intensive industries tend to be randomly dispersed in Beijing; 2) the spillover from the co-location of firms is more important to knowledge-intensive industries, and has a more significant impact on their location, than to business-oriented services in the intra-urban area; 3) the spatial agglomeration of service industries is driven by larger establishments, whereas the pattern for manufacturing industries is mixed.
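The abstract does not spell out which distance-based statistic is used; as a rough illustration only, the Python sketch below assumes a Duranton–Overman-style comparison of the kernel density of bilateral plant distances against an envelope built from random relocations over candidate sites. All coordinates and parameters are made up.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import gaussian_kde

def k_density(coords, grid):
    """Kernel density of bilateral plant-to-plant distances (km)."""
    d = pdist(coords)                      # all pairwise distances
    return gaussian_kde(d)(grid)

def clustering_envelope(industry_xy, all_xy, grid, n_sim=100, seed=0):
    """Compare an industry's distance density with densities obtained by
    randomly relocating the same number of plants over all candidate sites."""
    rng = np.random.default_rng(seed)
    observed = k_density(industry_xy, grid)
    sims = np.array([
        k_density(all_xy[rng.choice(len(all_xy), len(industry_xy), replace=False)], grid)
        for _ in range(n_sim)
    ])
    lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
    return observed, lo, hi   # observed above `hi` at short distances -> localization

# toy example with hypothetical coordinates (km)
rng = np.random.default_rng(1)
all_sites = rng.uniform(0, 50, size=(2000, 2))
industry = all_sites[:150] + rng.normal(0, 1.0, size=(150, 2))  # artificially clustered
grid = np.linspace(0.5, 30, 60)
obs, lo, hi = clustering_envelope(industry, all_sites, grid)
print("localized at short range:", bool(np.any(obs[:10] > hi[:10])))
```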
Nei's improved genetic distance (DA) and gene flow (Nm) were measured using sixteen microsatellite markers. Dendrograms based on the DA genetic distance, constructed with the neighbor-joining (NJ) method, and the STRUCTURE program were used to analyze the genetic structure and relationships among 10 Chinese indigenous chicken breeds. The DA-based NJ dendrogram divided the 10 chicken breeds into two main clusters: one consisted of breeds of low body weight (CHA, TIB, XIA, GUS and BAI), the other contained heavier breeds (LAN, DAG, YOU, XIS and LUY). Among the lighter breeds, TIB and CHA clustered together, as did XIA and GUS. Among the heavier breeds, XIS and LUY clustered together in one branch, whereas LAN, DAG and YOU formed independent branches. The results were consistent with Nm estimates among the 10 indigenous chicken breeds. The STRUCTURE program correctly inferred the presence of genetic structure despite not pre-defining the origin of individuals, and the genetic clusters it inferred were basically the same as those from the DA distance clustering method. An advantage of the STRUCTURE program was its ability to identify migrants and admixed individuals in the 10 chicken populations; this could not be achieved with the DA distance clustering method.
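For readers unfamiliar with the DA measure, a minimal sketch of Nei et al.'s (1983) DA distance computed from allele frequencies is given below; the two-locus frequencies are invented, and the dendrogram and STRUCTURE steps are not reproduced.

```python
import numpy as np

def nei_da(freq_x, freq_y):
    """Nei et al. (1983) DA distance between two populations.

    freq_x, freq_y: lists of 1-D arrays, one per locus, giving allele
    frequencies (each array sums to 1).
    DA = 1 - (1/L) * sum over loci of sum over alleles of sqrt(x * y).
    """
    L = len(freq_x)
    shared = sum(np.sum(np.sqrt(x * y)) for x, y in zip(freq_x, freq_y))
    return 1.0 - shared / L

# toy example: two populations typed at two hypothetical microsatellite loci
pop_a = [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.5])]
pop_b = [np.array([0.2, 0.5, 0.3]), np.array([0.4, 0.6])]
print(round(nei_da(pop_a, pop_b), 4))
```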
Detecting the boundaries of protein domains is an important and challenging task in both experimental and computational structural biology. In this paper, a promising method for detecting the domain structure of a protein from sequence information alone is presented. The method is based on analyzing multiple sequence alignments derived from a database search. Multiple measures are defined to quantify the domain information content of each position along the sequence; they are then combined into a single predictor using a support vector machine. More importantly, domain detection is treated for the first time as an imbalanced data learning problem, and a novel undersampling method based on distance-based maximal entropy in the feature space of the Support Vector Machine (SVM) is proposed. The overall precision is about 80%. Simulation results demonstrate that the method can help not only in predicting the complete 3D structure of a protein but also in building machine learning systems for general imbalanced datasets.
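As a hedged illustration of casting boundary detection as imbalanced learning, the sketch below undersamples the majority class with a simple distance-to-minority-centroid heuristic (a stand-in for the paper's distance-based maximal-entropy selection in SVM feature space) and trains an RBF SVM on synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Imbalanced toy data standing in for per-position "domain boundary" labels.
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Distance-based undersampling of the majority class: keep the majority
# examples closest to the minority centroid (an illustrative heuristic,
# not the paper's maximal-entropy selection).
maj, mino = X_tr[y_tr == 0], X_tr[y_tr == 1]
d = np.linalg.norm(maj - mino.mean(axis=0), axis=1)
keep = np.argsort(d)[: 3 * len(mino)]            # 3:1 majority-to-minority ratio
X_bal = np.vstack([maj[keep], mino])
y_bal = np.array([0] * len(keep) + [1] * len(mino))

clf = SVC(kernel="rbf", class_weight="balanced").fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```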
The urban transit fare structure and level can strongly affect passengers' travel behavior and route choices, and the fare policies commonly used in present transit networks can lead to unbalanced transit assignment and improper distribution of transit resources. In order to distribute transit passenger flow evenly and efficiently, this paper introduces a new distance-based fare pattern using Euclidean distance. A bi-level programming model is developed to determine the optimal distance-based fare pattern, with a path-based stochastic transit assignment (STA) problem with elastic demand formulated at the lower level. The upper level addresses a principal-agent game between transport authorities and transit enterprises, which pursue maximization of social welfare and financial interest, respectively. A genetic algorithm (GA) is implemented to solve the bi-level model, and a numerical example verifies that the proposed nonlinear distance-based fare pattern yields better financial performance and distribution effects than other fare structures.
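A toy sketch of the bi-level idea: a bare-bones genetic algorithm searches over a (base fare, per-km rate) pair at the upper level, while a crude logit assignment with elastic demand stands in for the path-based STA at the lower level. The demand figures, elasticities, and welfare weighting are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
dist = np.array([2.0, 5.0, 9.0, 14.0])        # path lengths (km), hypothetical
base_demand = np.array([800, 600, 400, 200])  # potential riders per path, hypothetical

def lower_level(params):
    """Crude stand-in for the path-based stochastic assignment with elastic
    demand: fares shrink total demand, and a logit model splits it over paths."""
    base, rate = params                        # fare = base + rate * distance
    fares = base + rate * dist
    demand = base_demand * np.exp(-0.08 * fares)
    util = -0.3 * fares - 0.1 * dist
    share = np.exp(util) / np.exp(util).sum()
    return fares, demand.sum() * share

def fitness(params):
    fares, flows = lower_level(params)
    revenue = float(fares @ flows)             # transit enterprise objective
    ridership = float(flows.sum())             # crude social-welfare proxy
    return 0.5 * revenue + 0.5 * ridership

# Bare-bones GA over (base fare, per-km rate): selection, crossover, mutation.
pop = rng.uniform([0.0, 0.0], [3.0, 1.0], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]
    pa = parents[rng.integers(0, 20, 40)]
    pb = parents[rng.integers(0, 20, 40)]
    mask = rng.random((40, 2)) < 0.5
    pop = np.clip(np.where(mask, pa, pb) + rng.normal(0, 0.05, (40, 2)),
                  [0.0, 0.0], [3.0, 1.0])
best = max(pop, key=fitness)
print("best (base fare, per-km rate):", np.round(best, 3))
```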
A new update strategy, the distance-based update strategy, is presented for Location Dependent Continuous Queries (LDCQ) under an error limitation. Moving objects at different distances from the query boundary have different probabilities of intersecting it, and therefore have different influences on the query result. We set different deviation limits for different moving objects according to their distances. A great number of unnecessary updates are eliminated and the load on the system is relieved.
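A minimal sketch of the distance-based update rule: the allowed deviation grows with an object's distance from the query boundary, so distant objects report less often. The linear limit and its constants are illustrative assumptions, not the paper's exact rule.

```python
import math

def deviation_limit(dist_to_boundary, base_eps=50.0, scale=0.2, cap=500.0):
    """Deviation limit (m) for a moving object: objects far from the query
    boundary are allowed to drift further before an update is required."""
    return min(base_eps + scale * dist_to_boundary, cap)

def needs_update(last_reported, current, dist_to_boundary):
    """True if the object's drift since its last report exceeds its limit."""
    drift = math.dist(last_reported, current)
    return drift > deviation_limit(dist_to_boundary)

# object last reported at (0, 0), now at (70, 0)
print(needs_update((0.0, 0.0), (70.0, 0.0), 800.0))  # far from boundary: no update needed
print(needs_update((0.0, 0.0), (70.0, 0.0), 20.0))   # near the boundary: update required
```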
Purpose: To address the "anomalies" that occur when scientific breakthroughs emerge, this study focuses on identifying early signs and nascent stages of breakthrough innovations from the perspective of outliers, aiming to achieve early identification of scientific breakthroughs in papers.
Design/methodology/approach: This study utilizes semantic technology to extract research entities from the titles and abstracts of papers to represent each paper's research content. Outlier detection methods are then employed to measure and analyze the anomalies in breakthrough papers during their early stages. The development and evolution process is traced using literature time tags. Finally, a case study is conducted using the key publications of the 2021 Nobel Prize laureates in Physiology or Medicine.
Findings: Through manual analysis of all identified outlier papers, the effectiveness of the proposed method for the early identification of potential scientific breakthroughs is verified.
Research limitations: The study's applicability has only been empirically tested in the biomedical field. More data from various fields are needed to validate the robustness and generalizability of the method.
Practical implications: This study provides a valuable supplement to current methods for the early identification of scientific breakthroughs, effectively supporting technological intelligence decision-making and services.
Originality/value: The study introduces a novel approach to the early identification of scientific breakthroughs by leveraging outlier analysis of research entities, offering a more sensitive, precise, and fine-grained alternative to traditional citation-based evaluations and enhancing the ability to identify nascent breakthrough innovations.
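A much-simplified sketch of the outlier-based idea: papers represented by (hypothetical) extracted-entity strings are vectorized with TF-IDF and scored with LocalOutlierFactor. The semantic entity extraction and time-tag tracing of the actual study are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical "research entity" strings extracted from titles/abstracts.
papers = [
    "mRNA vaccine lipid nanoparticle nucleoside modification",
    "capsaicin receptor TRPV1 heat sensation ion channel",
    "piezo1 piezo2 mechanosensitive ion channel touch",
    "tumor suppressor p53 apoptosis cell cycle",
    "p53 mutation cancer genome sequencing",
    "cell cycle checkpoint kinase inhibition",
]

X = TfidfVectorizer().fit_transform(papers).toarray()
lof = LocalOutlierFactor(n_neighbors=2)          # small k for the tiny toy corpus
labels = lof.fit_predict(X)                      # -1 marks an outlier
for paper, flag in zip(papers, labels):
    print("OUTLIER" if flag == -1 else "typical ", "|", paper[:50])
```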
This paper investigates the application of machine learning to develop a response model for cardiovascular problems, using AdaBoost combined with two outlier detection methodologies: Z-Score incorporated with Grey Wolf Optimization (GWO), and Interquartile Range (IQR) coupled with Ant Colony Optimization (ACO). Using a performance index, it is shown that, compared with Z-Score and GWO with AdaBoost, IQR and ACO with AdaBoost are less accurate (86.0% vs. 89.0%) and less discriminative (Area Under the Curve (AUC) score of 91.0% vs. 93.0%). The Z-Score and GWO method also outperformed the other in terms of precision, scoring 89.0%, and its recall was satisfactory at 90.0%. The paper thus reveals specific benefits and drawbacks of different outlier detection and feature selection techniques, which are important to consider in further improving diagnostics in cardiovascular health. Collectively, these findings can enhance knowledge of heart disease prediction and patient treatment using enhanced and innovative machine learning (ML) techniques. This work lays the groundwork for more precise diagnostic models by highlighting the benefits of combining multiple optimization methodologies; future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
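The sketch below shows the two outlier-screening steps (Z-Score and IQR) feeding an AdaBoost classifier; the GWO and ACO feature-selection stages are omitted, and the synthetic data is a stand-in rather than the cardiovascular dataset used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=2000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def zscore_mask(X, thr=3.0):
    """Keep rows whose every feature lies within `thr` standard deviations."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return (z < thr).all(axis=1)

def iqr_mask(X, k=1.5):
    """Keep rows whose every feature lies within the Tukey IQR fences."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return ((X >= lo) & (X <= hi)).all(axis=1)

for name, mask in [("Z-Score", zscore_mask(X_tr)), ("IQR", iqr_mask(X_tr))]:
    clf = AdaBoostClassifier(random_state=0).fit(X_tr[mask], y_tr[mask])
    proba = clf.predict_proba(X_te)[:, 1]
    print(name, "acc=%.3f" % accuracy_score(y_te, clf.predict(X_te)),
          "auc=%.3f" % roc_auc_score(y_te, proba))
```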
Changepoint detection faces challenges when outliers are present. This paper proposes a multivariate changepoint detection method based on a robust WPCA projection direction and the robust RFPOP method, the RWPCA-RFPOP method. Our method is doubly robust and is suitable for detecting mean changepoints in multivariate normal data with high correlations between variables and in the presence of outliers. Simulation results demonstrate that our method provides strong guarantees on both the number and the location of changepoints in the presence of outliers. Finally, the method is successfully applied to an aCGH dataset.
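A hedged sketch of the two-stage idea on synthetic data: a robust projection direction (here a crude MAD-scaled PCA direction standing in for WPCA) reduces the series to one dimension, and a penalized changepoint search with an L1 cost from the ruptures package stands in for RFPOP.

```python
import numpy as np
import ruptures as rpt   # pip install ruptures

rng = np.random.default_rng(0)
# correlated multivariate series with a mean shift at t=150 and a few outliers
n, p = 300, 5
base = rng.normal(size=(n, 1)) @ np.ones((1, p)) + 0.3 * rng.normal(size=(n, p))
base[150:] += 2.0
base[rng.choice(n, 6, replace=False)] += rng.normal(0, 10, size=(6, p))  # outliers

# Robust projection direction: leading eigenvector of a median-centred,
# MAD-scaled covariance (a simple stand-in for the paper's WPCA direction).
med = np.median(base, axis=0)
mad = 1.4826 * np.median(np.abs(base - med), axis=0)
Z = (base - med) / mad
w = np.linalg.eigh(np.cov(Z, rowvar=False))[1][:, -1]
signal = Z @ w

# Penalised changepoint search with an L1 cost (a robust stand-in for RFPOP).
bkps = rpt.Pelt(model="l1", min_size=10).fit(signal).predict(pen=15)
print("estimated changepoints:", bkps[:-1])   # expect a breakpoint near t=150
```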
Although quality assurance and quality control procedures are routinely applied in most air quality networks, outliers can still occur due to instrument malfunctions, the influence of harsh environments, and the limitations of measurement methods. Such outliers pose challenges for data-powered applications such as data assimilation, statistical analysis of pollution characteristics, and ensemble forecasting. Here, a fully automatic outlier detection method was developed based on the probability of residuals, which are the discrepancies between the observed and the estimated concentration values. The estimation can be conducted using filtering, or regressions when appropriate, to discriminate four types of outliers, characterized respectively by temporal and spatial inconsistency, instrument-induced low variances, periodic calibration exceptions, and PM10 concentrations lower than PM2.5. This probabilistic method was applied to detect all four types of outliers in hourly surface measurements of six pollutants (PM2.5, PM10, SO2, NO2, CO and O3) from 1436 stations of the China National Environmental Monitoring Network during 2014-16. Among the measurements, 0.65%-5.68% are marked as outliers, with PM10 and CO more prone to outliers. Our method successfully identifies a decreasing trend in outliers from 2014 to 2016, which corresponds to known improvements in the quality assurance and quality control procedures of the China National Environmental Monitoring Network. The outliers can have a significant impact on the annual mean concentrations of PM2.5, with differences exceeding 10 μg m^-3 at 66 sites.
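A simplified illustration of the residual-probability idea for a single station series: concentrations are estimated with a rolling median, and observations whose residuals are extremely unlikely under a robustly fitted normal model are flagged. The spatial-consistency, low-variance, calibration, and PM10/PM2.5 checks of the full method are not reproduced.

```python
import numpy as np
import pandas as pd
from scipy import stats

def flag_outliers(series, window=24, p_thresh=1e-4):
    """Flag hourly concentrations whose residual from a rolling-median
    estimate is very unlikely under a robustly fitted normal model."""
    est = series.rolling(window, center=True, min_periods=1).median()
    resid = series - est
    scale = 1.4826 * np.nanmedian(np.abs(resid - np.nanmedian(resid)))
    prob = 2 * stats.norm.sf(np.abs(resid - np.nanmedian(resid)), scale=scale)
    return prob < p_thresh

# toy hourly PM2.5 series with two injected spikes
rng = np.random.default_rng(0)
pm25 = pd.Series(40 + 10 * np.sin(np.arange(500) / 24 * 2 * np.pi)
                 + rng.normal(0, 3, 500))
pm25.iloc[[100, 350]] += 300
print(np.where(flag_outliers(pm25))[0])  # expected to flag the injected spikes
```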
With the development of the Global Positioning System (GPS), wireless technology and location-aware services, it is now possible to collect a large quantity of trajectory data, and in the field of data mining for moving objects, anomaly detection is a hot topic. Building on previous work on anomalous trajectory detection for moving objects, this paper introduces the classical trajectory outlier detection (TRAOD) algorithm and then proposes a density-based trajectory outlier detection (DBTOD) algorithm, which compensates for the disadvantage of the TRAOD algorithm that it is unable to detect anomalies when the trajectories are local and dense. The results of applying the proposed algorithm to the Elk1993 and Deer1995 datasets are also presented, which show the effectiveness of the algorithm.
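A much-simplified density-based sketch (not the segment-based DBTOD of the paper): trajectories are resampled to a common length, pairwise mean point distances are computed, and trajectories with too few close neighbours are flagged.

```python
import numpy as np

def resample(traj, k=30):
    """Resample an (n, 2) trajectory to k points by arc length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0, s[-1], k)
    return np.column_stack([np.interp(t, s, traj[:, 0]), np.interp(t, s, traj[:, 1])])

def density_outliers(trajs, eps=5.0, min_neighbors=2, k=30):
    """Flag trajectories with fewer than `min_neighbors` other trajectories
    whose mean point-wise distance is below `eps` (a simplified density idea)."""
    R = np.stack([resample(np.asarray(t, float), k) for t in trajs])
    n = len(R)
    D = np.array([[np.linalg.norm(R[i] - R[j], axis=1).mean() for j in range(n)]
                  for i in range(n)])
    neighbors = (D < eps).sum(axis=1) - 1          # exclude self
    return neighbors < min_neighbors

# toy data: a bundle of similar routes plus one stray route
rng = np.random.default_rng(0)
bundle = [np.column_stack([np.linspace(0, 100, 50),
                           np.linspace(0, 10, 50) + rng.normal(0, 1, 50)])
          for _ in range(8)]
stray = np.column_stack([np.linspace(0, 100, 50), np.linspace(0, 60, 50)])
print(density_outliers(bundle + [stray]))   # last entry expected to be True
```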
With the arrival of the data age, data quality has become an issue receiving much attention, and as a field of data mining, outlier detection is closely related to data quality. The isolation forest algorithm is one of the more prominent numerical-data outlier detection algorithms of recent years. In the process of constructing isolation trees, as trees are continuously generated, the differences between them gradually decrease or even disappear, which wastes memory and reduces the efficiency of outlier detection; moreover, some of the constructed isolation trees cannot detect outliers at all. In this paper, an improved iForest-based method, GA-iForest, is proposed. This method optimizes the isolation forest by selecting better isolation trees according to detection accuracy and tree diversity, thereby discarding duplicate, similar, and poorly performing isolation trees and improving the accuracy and stability of outlier detection. In the experiments, an Ubuntu system and the Spark platform are used to build the experimental environment, and outlier datasets provided by ODDS are used as tests. The performance of the proposed method is evaluated using indicators such as accuracy, recall, ROC curves, AUC, and execution time. Experimental results show that the proposed method not only improves the accuracy and stability of outlier detection but also reduces the number of isolation trees by 20%-40% compared with the original iForest method.
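GA-iForest itself selects individual isolation trees by a genetic search; the hedged sketch below captures only the accuracy-plus-diversity idea at the level of small forests, using scikit-learn's IsolationForest and a labelled validation split purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

# toy labelled outlier data (1 = outlier), standing in for an ODDS dataset
rng = np.random.default_rng(0)
X_in, _ = make_blobs(n_samples=950, centers=3, cluster_std=1.0, random_state=0)
X_out = rng.uniform(X_in.min(0) - 4, X_in.max(0) + 4, size=(50, 2))
X = np.vstack([X_in, X_out]); y = np.r_[np.zeros(950), np.ones(50)]

# Pool of small forests; keep the members that score best on a validation
# split and are least redundant (an ensemble-level analogue of GA-iForest's
# accuracy + diversity criterion; the genetic search itself is omitted).
val = rng.choice(len(X), 300, replace=False)
pool = [IsolationForest(n_estimators=10, random_state=s).fit(X) for s in range(12)]
scores = np.array([-m.score_samples(X) for m in pool])        # higher = more anomalous
aucs = np.array([roc_auc_score(y[val], s[val]) for s in scores])
keep = np.argsort(aucs)[-5:]                                  # accuracy criterion
corr = np.corrcoef(scores[keep])
keep = keep[np.argsort(corr.sum(axis=1))[:3]]                 # prefer less-correlated members
final = scores[keep].mean(axis=0)
print("ensemble AUC: %.3f" % roc_auc_score(y, final))
```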
In this paper, we present a cluster-based algorithm for time series outlier mining. We use the discrete Fourier transform (DFT) to transform time series from the time domain to the frequency domain, so that each time series can be mapped to a point in a k-dimensional space. For these points, a cluster-based algorithm is developed to mine the outliers: it first partitions the input points into disjoint clusters and then prunes the clusters that, by a judgment criterion, cannot contain outliers. Our algorithm has been run on electrical load time series from a steel enterprise and proved to be effective.
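A compact sketch of the pipeline described: each series is mapped to the magnitudes of its leading DFT coefficients, the resulting points are clustered, and members of very small clusters are reported as outliers. The paper's cluster-pruning step is only approximated here by the cluster-size rule, and the load data is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def dft_features(series, k=8):
    """Map each time series to the magnitudes of its first k DFT coefficients."""
    return np.abs(np.fft.rfft(series, axis=1))[:, :k]

rng = np.random.default_rng(0)
t = np.arange(96)
normal = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=(60, 96))   # daily load shape
odd = 0.5 * np.sin(2 * np.pi * t / 7) + 0.1 * rng.normal(size=(3, 96))  # deviant profiles
X = dft_features(np.vstack([normal, odd]))

# Cluster the DFT points, then call members of very small clusters outliers.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
sizes = np.bincount(labels)
is_outlier = sizes[labels] <= 3
print("flagged indices:", np.where(is_outlier)[0])  # expected to flag the 3 deviant series
```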
Security is a nonfunctional information system attribute that plays a crucial role in a wide range of sensor network application domains. Security risk can be quantified as the combination of the probability that a sensor network system may fail and an evaluation of the severity of the damage caused by the failure. In this paper, we devise a Rough Outlier Detection (ROD) methodology for detecting the security-based risk factor that originates from violations of attack requirements (namely, attack risks). The methodology elaborates a dimension reduction method to analyze the attack risk probability from high-dimensional and nonlinear data sets, and combines it with rough redundancy reduction and the kernel-function distance measure obtained using ROD. In this way, it is possible to determine the risky scenarios, and the analysis feedback can be used to improve the sensor network system design. We illustrate the methodology on the DARPA dataset using a step-by-step approach and show that the method is effective in lowering the false alarm rate.
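The rough-set reduction steps cannot be reconstructed from the abstract, but the kernel-function distance measure it mentions can be illustrated: the sketch below scores each record by its squared distance to the data mean in RBF kernel feature space, on synthetic data standing in for the DARPA records.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_distance_scores(X, gamma=0.5):
    """Squared distance of each point to the data mean in RBF kernel feature
    space:  k(x,x) - (2/n) * sum_i k(x, x_i) + (1/n^2) * sum_ij k(x_i, x_j).
    Large values indicate points far from the bulk of the data."""
    K = rbf_kernel(X, gamma=gamma)
    return np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()

# toy connection records: dense normal cluster plus a few extreme points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 4)), rng.normal(6, 1, size=(5, 4))])
scores = kernel_distance_scores(X)
print("top-scoring indices:", np.argsort(scores)[-5:])  # expected: the 5 extreme points
```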
Blast furnace data processing is prone to problems such as outliers. To overcome these problems and identify an improved method for processing blast furnace data, we conducted an in-depth study of such data. Based on data samples from selected iron and steel companies, data types were classified according to their characteristics, and appropriate methods were then selected to address the deficiencies and outliers of the original blast furnace data. Linear interpolation was used to fill in the divided continuation data, the K-nearest neighbor (KNN) algorithm was used to fill in correlated data with an internal law, and periodic statistical data were filled with averages. The error rate of the filling was low, and the fitting degree was over 85%. For the screening of outliers, corresponding indicator parameters were added according to the continuity, relevance, and periodicity of the different data, and a variety of algorithms were used for processing. Analysis of the screening results shows that a large amount of useful information in the data was retained while ineffective outliers were eliminated. Standardized processing of blast furnace big data, as the basis of applied research on blast furnace big data, can serve as an important means to improve data quality and retain data value.
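A minimal sketch of the three filling strategies on made-up blast furnace variables (the column names and noise model are hypothetical): linear interpolation for continuation data, KNN imputation for correlated data, and mean filling for periodic statistical data.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "hot_blast_temp": 1150 + np.cumsum(rng.normal(0, 1, n)),   # continuation-type signal
    "permeability":   2.0 + 0.3 * rng.normal(size=n),          # correlated-type variable
    "si_content":     0.45 + 0.05 * rng.normal(size=n),        # periodic statistical variable
})
df["permeability"] += 0.5 * (df["hot_blast_temp"] - 1150) / 10  # give KNN something to exploit
for col in df:                                                  # knock out 5% of each column
    df.loc[rng.choice(n, 10, replace=False), col] = np.nan

# 1) continuation data: linear interpolation over time
df["hot_blast_temp"] = df["hot_blast_temp"].interpolate(method="linear")
# 2) correlated data: K-nearest-neighbour imputation using the related column
df[["hot_blast_temp", "permeability"]] = KNNImputer(n_neighbors=5).fit_transform(
    df[["hot_blast_temp", "permeability"]])
# 3) periodic statistical data: fill with the column average
df["si_content"] = df["si_content"].fillna(df["si_content"].mean())

print(df.isna().sum())   # all zero after filling
```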
Funding (Beijing spatial agglomeration study): State Key Program of National Natural Science of China (No. 41230632); National Natural Science Foundation of China (Nos. 41301123, 41201169).
Funding (Chinese indigenous chicken breeds study): supported by the Program of National Technological Basis from the Ministry of Science and Technology of China (No. 2005DKA21101) and the National Natural Science Foundation of China (No. 30700572).
Funding (protein domain boundary detection study): National Natural Science Foundation of China (Grant Nos. 60433020, 60673099, 60673023); "985" Project of Jilin University.
Funding (distance-based transit fare study): the Humanities and Social Science Foundation of the Ministry of Education of China (Grant No. 20YJCZH121).
Funding (early identification of scientific breakthroughs study): supported by the major project of the National Social Science Foundation of China, "Big Data-driven Semantic Evaluation System of Science and Technology Literature" (Grant No. 21&ZD329).
Funding (multivariate changepoint detection study): supported by the National Natural Science Foundation of China (Grant No. 11201003), the Provincial Natural Science Research Project of Anhui Colleges (Grant No. KJ2016A263), the Natural Science Foundation of Anhui Province (Grant No. 1408085MA07), and the PhD Research Startup Foundation of Anhui Normal University (Grant No. 2014bsqdjj34).
Funding (air quality outlier detection study): supported by the National Natural Science Foundation (Grant Nos. 91644216 and 41575128), the CAS Information Technology Program (Grant No. XXH13506-302), and the Guangdong Provincial Science and Technology Development Special Fund (No. 2017B020216007).
Funding (trajectory outlier detection study): supported by the Aeronautical Science Foundation of China (20111052010), the Jiangsu Graduates Innovation Project (CXZZ120163), the "333" Project of Jiangsu Province, and the Qing Lan Project of Jiangsu Province.
Funding (GA-iForest outlier detection study): supported by State Grid Liaoning Electric Power Supply Co., Ltd. under the project "Key Technology and Application Research of the Self-Service Grid Big Data Governance" (No. SGLNXT00YJJS1800110).
Funding (sensor network security risk study): the Jiangsu 973 Scientific Project, the National Natural Science Foundation of China, the Jiangsu Natural Science Foundation, the Aerospace Innovation Fund, and the Lianyungang Science & Technology Project.
Funding (blast furnace data processing study): financially supported by the National Natural Science Foundation of China (No. 52004096), the Hebei Province High-End Iron and Steel Metallurgical Joint Research Fund Project, China (No. E2019209314), the Scientific Research Program Project of Hebei Education Department, China (No. QN2019200), and the Tangshan Science and Technology Planning Project, China (No. 19150241E).