The Circle algorithm was proposed for large datasets. The idea of the algorithm is to find a set of vertices that are close to each other and far from the other vertices. The algorithm exploits the connection between clustering aggregation and the problem of correlation clustering. The best deterministic approximation algorithm was provided for this variation of the correlation clustering problem, and it was shown how sampling can be used to scale the algorithms to large datasets. An extensive empirical evaluation demonstrated the usefulness of the problem and of the solutions. The results show that this method achieves more than a 50% reduction in running time without sacrificing clustering quality.
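As a rough illustration of the aggregation idea sketched above (a minimal sketch, not the authors' exact Circle algorithm), the following Python fragment grows a cluster around each unassigned object from all objects that most input clusterings keep together; the 0.5 disagreement threshold and the function names are illustrative assumptions:

```python
import numpy as np

def disagreement(labelings, i, j):
    """Fraction of the input clusterings that separate objects i and j."""
    labelings = np.asarray(labelings)           # shape: (n_clusterings, n_objects)
    return np.mean(labelings[:, i] != labelings[:, j])

def aggregate(labelings, threshold=0.5):
    """Greedy clustering aggregation: around each still-unassigned object,
    collect every unassigned object whose disagreement with it is below
    `threshold`, i.e. objects close to each other and far from the rest."""
    n = np.asarray(labelings).shape[1]
    label = -np.ones(n, dtype=int)
    next_id = 0
    for i in range(n):
        if label[i] >= 0:
            continue
        for j in range(n):
            if label[j] < 0 and disagreement(labelings, i, j) < threshold:
                label[j] = next_id
        next_id += 1
    return label

# Example: three clusterings of six objects vote on a consensus partition.
votes = [[0, 0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1, 1],
         [0, 0, 0, 0, 1, 1]]
print(aggregate(votes))   # -> [0 0 0 1 1 1]
```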
Raw data are classified using clustering techniques in a reasonable manner to create disjoint clusters. Many clustering algorithms based on specific parameters have been proposed to handle high-volume datasets. This paper focuses on cluster analysis based on neutrosophic set implication, i.e., a k-means algorithm combined with a threshold-based clustering technique. This algorithm addresses the shortcomings of the k-means clustering algorithm while overcoming the limitations of the threshold-based clustering algorithm. To evaluate the validity of the proposed method, several validity measures and validity indices are applied to the Iris dataset (from the University of California, Irvine, Machine Learning Repository), along with the k-means and threshold-based clustering algorithms. The proposed method produces better-segregated datasets with more compact clusters, and thus achieves higher validity indices. It also eliminates the limitations of the threshold-based clustering algorithm, and the validity measures and respective indices are evaluated alongside the k-means and threshold-based clustering algorithms.
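The abstract does not give the hybrid algorithm itself; the sketch below is one plausible reading of a threshold-seeded k-means, in which a threshold pass decides how many centers to open and standard k-means iterations then refine them. The function name and fixed iteration count are assumptions for illustration:

```python
import numpy as np

def threshold_kmeans(X, threshold, iters=10):
    """Seed centers with a threshold pass (open a new center whenever a point
    is farther than `threshold` from every existing center), then refine the
    partition with standard k-means iterations."""
    centers = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centers) > threshold:
            centers.append(x)
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the means.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers
```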
In conjunction with association rules for data mining, the connections between testing indices and strong and weak association rules were determined, and new derivative rules were obtained by further reasoning. Association rules were used to analyze correlation and check consistency between indices. This study shows that the judgment obtained from weak association rules or non-association rules is more accurate and more credible than that obtained from strong association rules. When the testing grades of two indices in a weak association rule are inconsistent, the testing grades of the indices are more likely to be erroneous, and such mistakes are often caused by human factors. Clustering data mining technology was used to analyze the reliability of a diagnosis, or to perform health diagnosis directly. The analysis showed that the clustering results depend on the indices selected: the more significant the selected indices, the more significant the characteristics of the clustering results, and the more credible the analysis or diagnosis. The indices and diagnosis analysis function produced by this study provide a necessary theoretical foundation and new ideas for the development of hydraulic metal structure health diagnosis technology.
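The strong/weak distinction is conventionally drawn with the support and confidence measures; a minimal sketch, using hypothetical inspection-index transactions and assumed thresholds:

```python
from itertools import combinations

# Hypothetical transactions: co-occurring defect indices per inspection.
transactions = [
    {"crack", "corrosion"}, {"crack", "leak"}, {"crack", "corrosion"},
    {"corrosion"}, {"crack", "corrosion", "leak"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Conditional frequency of rhs given lhs."""
    return support(lhs | rhs) / support(lhs)

# A rule is 'strong' if it clears both thresholds; otherwise it is weak.
min_sup, min_conf = 0.4, 0.7
for a, b in combinations(sorted({"crack", "corrosion", "leak"}), 2):
    s, c = support({a, b}), confidence({a}, {b})
    tag = "strong" if (s >= min_sup and c >= min_conf) else "weak"
    print(f"{a} -> {b}: support={s:.2f} confidence={c:.2f} ({tag})")
```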
Fuzzy clustering theory is widely used in data mining for full-face tunnel boring machines (TBMs). However, the traditional fuzzy clustering algorithm based on an objective function has difficulty clustering functional data effectively. We propose a new fuzzy clustering algorithm, the FCM-ANN algorithm, which replaces the clustering prototype of the FCM algorithm with the predicted value of an artificial neural network. This lets the algorithm not only perform clustering based on the traditional similarity criterion but also cluster functional data effectively. In this paper, we first use the t-test as an evaluation index and apply the FCM-ANN algorithm to synthetic datasets for validity testing. The algorithm is then applied to TBM operation data and combined with cross-validation to predict the tunneling speed. The predictions are evaluated by RMSE and R^(2). From the experimental results on the synthetic datasets, we obtain the relationship among the membership threshold, the number of samples, the number of attributes, and the noise; accordingly, the datasets can be adjusted effectively. Applying the FCM-ANN algorithm to the TBM operation data predicts the tunneling speed accurately. The FCM-ANN algorithm improves on the traditional fuzzy clustering algorithm and can be used not only for predicting the tunneling speed of a TBM but also for clustering or prediction of other functional data.
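For concreteness, a standard fuzzy c-means loop is sketched below; the comment marks the prototype update that, per the abstract, FCM-ANN replaces with an artificial neural network's prediction. The ANN itself is omitted, so this is the classical baseline, not the proposed algorithm:

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Classical fuzzy c-means on data X (n x d) with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))          # fuzzy memberships (n x k)
    for _ in range(iters):
        Um = U ** m
        # Prototype update: FCM-ANN would replace this weighted mean with the
        # artificial neural network's predicted value for each cluster.
        centers = Um.T @ X / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ij = 1 / sum_c (d_ij / d_ic)^(2/(m-1)).
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return U, centers
```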
With the rapid development of the economy, the scale of the power grid is expanding. The amount of power equipment constituting the grid has become very large, which makes the state data of power equipment grow explosively. These multi-source heterogeneous data exhibit differences that lead to data variation during transmission and preservation, producing the bad information of incomplete data. Research on data integrity has therefore become an urgent task. This paper starts from the randomness and spatio-temporal differences of the system. According to the characteristics and sources of the massive data generated by power equipment, a fuzzy mining model of power equipment data is established, the data are divided into numerical and non-numerical data, and the text records of power equipment defects are taken as the mining material. An array-based Apriori algorithm is then used for deep mining, and the strong association rules in the incomplete data of power equipment are obtained and analyzed. Judging from the trend of the NRMSE metric and the classification accuracy, most of the filling methods combined with the two frameworks in this approach show a relatively stable filling trend and do not fluctuate greatly as the missing rate grows. The experimental results show that the proposed model effectively improves the filling effect of existing filling methods on most datasets: as the missing rate increases, the model's improvement over the existing filling methods remains above 4.3%. The incomplete-data clustering technology studied in this paper enables a more innovative assessment of reliable smart grid operation, and has good research value and reference significance.
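A common definition of the NRMSE metric used above, normalizing the RMSE over the imputed entries by the standard deviation of the ground truth, is assumed in this sketch (normalization conventions vary between papers):

```python
import numpy as np

def nrmse(true_vals, filled_vals):
    """Normalized RMSE over the originally-missing entries: RMSE divided by
    the standard deviation of the true values, so 0 is a perfect fill and
    values near 1 are no better than imputing the mean."""
    true_vals = np.asarray(true_vals, dtype=float)
    filled_vals = np.asarray(filled_vals, dtype=float)
    rmse = np.sqrt(np.mean((true_vals - filled_vals) ** 2))
    return rmse / np.std(true_vals)

# Hypothetical example: ground truth vs. imputed values at missing positions.
print(nrmse([3.2, 4.1, 2.8], [3.0, 4.4, 2.9]))
```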
Data clustering is a significant information retrieval technique in today's data-intensive society. Over the last few decades, a vast variety of data clustering algorithms have been designed and implemented for almost all data types. The quality of cluster analysis results depends mainly on the clustering algorithm used. The architecture of a versatile, less user-dependent, dynamic, and scalable data clustering machine is presented. The machine selects the best available data clustering algorithm for the analysis on the basis of the credentials of the data and previously acquired domain knowledge. The domain knowledge is updated on completion of each session of data analysis.
Classical survival analysis assumes all subjects will experience the event of interest, but in some cases a portion of the population may never encounter the event. These survival methods further assume independent survival times, which is not valid for honey bees, which live in nests. This study introduces a semi-parametric marginal proportional hazards mixture cure (PHMC) model with an exchangeable correlation structure, using generalized estimating equations for survival data analysis. The model was tested on clustered right-censored bee survival data with a cured fraction, where two bee species were subjected to different entomopathogens to test the effect of the entomopathogens on the survival of the bee species. The Expectation-Solution algorithm is used to estimate the parameters. The study notes a weak positive association between cure statuses (ρ1 = 0.0007) and between survival times for uncured bees (ρ2 = 0.0890), emphasizing their importance. The odds of being uncured are higher for A. mellifera than for the species M. ferruginea. The bee species A. mellifera is more susceptible to the entomopathogens icipe 7, icipe 20, and icipe 69. The Cox-Snell residuals show that the proposed semiparametric PH model generally fits the data better than a model that assumes an independent correlation structure. Thus, the semiparametric marginal proportional hazards mixture cure model is a parsimonious model for correlated bee survival data.
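A standard formulation of a proportional hazards mixture cure model, which we assume matches the one intended here, combines a logistic incidence part with a PH latency part:

```latex
S_{\mathrm{pop}}(t \mid x, z) \;=\; 1 - \pi(z) \;+\; \pi(z)\, S_u(t \mid x),
\qquad
\operatorname{logit} \pi(z) = \gamma^{\top} z,
\qquad
S_u(t \mid x) = S_0(t)^{\exp(\beta^{\top} x)}
```

Here π(z) is the probability of being uncured and S_0 the baseline survival of uncured subjects; the marginal PHMC model estimates β and γ with generalized estimating equations under an exchangeable working correlation across bees in the same cluster.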
Graph-theoretical approaches have recently been widely used for data clustering and image segmentation. The goal of data clustering is to discover the underlying distribution and structural information of the given data, while image segmentation partitions an image into several non-overlapping regions. Two popular graph-theoretical clustering methods are therefore analyzed: directed tree based data clustering and minimum spanning tree based image segmentation. There are two contributions: (1) the directed tree based data clustering is improved for image segmentation, and (2) the minimum spanning tree based image segmentation is improved for data clustering. Extensive experiments using artificial and real-world data indicate that the improved directed tree based image segmentation can partition images well while preserving sufficient detail, and the improved minimum spanning tree based data clustering can cluster data with manifold structure well.
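As a concrete instance of minimum spanning tree based clustering (the classic scheme, not the authors' improved variant), removing the k-1 heaviest MST edges splits the tree into k clusters:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(X, k):
    """Classic MST clustering: build the minimum spanning tree of the
    pairwise-distance graph, delete the k-1 longest tree edges, and return
    the connected components as cluster labels."""
    D = squareform(pdist(X))
    T = minimum_spanning_tree(D).toarray()
    if k > 1:
        # Zero out the k-1 heaviest tree edges to split the tree into k pieces
        # (ties on equal weights may cut extra edges; fine for a sketch).
        for w in np.sort(T[T > 0])[-(k - 1):]:
            T[T == w] = 0.0
    _, labels = connected_components(T != 0, directed=False)
    return labels
```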
Clustered interval-censored failure time data often occur in a wide variety of research and application fields, such as cancer and AIDS studies. For such data, the failure times of interest are interval-censored and may be correlated for subjects coming from the same cluster. This paper presents a robust semiparametric transformation mixed effects model for analyzing such data and uses a U-statistic based on rank correlation to estimate the unknown parameters. The large-sample properties of the estimator are also established. In addition, the authors illustrate the performance of the proposed estimator with extensive simulations and two real data examples.
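One common form of a semiparametric transformation model with cluster-level random effects, assumed here only for concreteness (the paper's exact model class may differ), is

```latex
H(T_{ij}) \;=\; -\,\beta^{\top} X_{ij} + b_i + \varepsilon_{ij}
```

where H is an unspecified strictly increasing transformation, b_i a random effect shared within cluster i, and ε_ij an error term with a specified distribution.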
Today, Linear Mixed Models (LMMs) are mostly fitted by assuming that random effects and errors have Gaussian distributions, and therefore by using Maximum Likelihood (ML) or REML estimation. However, for many data sets that double assumption is unlikely to hold, particularly for the random effects, a crucial component of such models whose magnitude must be assessed. Alternative fitting methods that do not rely on that assumption (such as ANOVA methods and Rao's MINQUE) quite often apply only to the very constrained class of variance components models. In this paper, a new computationally feasible estimation methodology is designed, first for the widely used class of 2-level (or longitudinal) LMMs, with the only assumption (beyond the usual basic ones) that residual errors are uncorrelated and homoscedastic, and with no distributional assumption imposed on the random effects. A major asset of this new approach is that it yields nonnegative variance estimates and covariance matrix estimates that are symmetric and at least positive semi-definite. Furthermore, it is shown that when the LMM is indeed Gaussian, this new methodology differs from ML only through a slight variation in the denominator of the residual variance estimate. The new methodology actually generalizes to LMMs a well-known nonparametric fitting procedure for standard Linear Models. Finally, the methodology is also extended to ANOVA LMMs, generalizing an old method by Henderson for ML estimation in such models under normality.
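The 2-level LMM in question can be written, for cluster (or subject) i with n_i observations, as

```latex
y_i = X_i \beta + Z_i b_i + \varepsilon_i,
\qquad
\mathbb{E}[b_i] = 0,\quad \operatorname{Var}(b_i) = G,
\qquad
\operatorname{Var}(\varepsilon_i) = \sigma^2 I_{n_i}
```

with, per the abstract, no distributional assumption on the random effects b_i beyond these moment conditions, and residual errors that are uncorrelated and homoscedastic.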
This paper presents a novel medical image registration algorithm named total variation constrained graph regularization for non-negative matrix factorization (TV-GNMF). The method applies non-negative matrix factorization with a total variation constraint and graph regularization. The main contributions of our work are the following. First, total variation is incorporated into NMF to control the diffusion speed; the purpose is to denoise smooth regions while preserving features and details of the data in edge regions, using a diffusion coefficient based on gradient information. Second, we add graph regularization to NMF to reveal the intrinsic geometric and structural information of the features and thus enhance the discrimination power. Third, the multiplicative update rules and a proof of convergence of the TV-GNMF algorithm are given. Experiments conducted on datasets show that the proposed TV-GNMF method outperforms other state-of-the-art algorithms.
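TV-GNMF builds on the classical multiplicative NMF updates; the Lee-Seung baseline for the factorization V ≈ WH under Frobenius loss (element-wise products and quotients) is

```latex
W \leftarrow W \odot \frac{V H^{\top}}{W H H^{\top}},
\qquad
H \leftarrow H \odot \frac{W^{\top} V}{W^{\top} W H}
```

The TV-GNMF rules are assumed to add the total variation and graph-Laplacian regularization terms to these factors; their exact form is given in the paper.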
This paper proposes a clustering technique that minimizes the need for subjective human intervention and is based on elements of rough set theory (RST). The proposed algorithm is unified in its approach to clustering and makes use of both local and global data properties to obtain clustering solutions. It handles single-type and mixed-attribute data sets with ease. The results from three data sets of single and mixed attribute types are used to illustrate the technique and establish its efficiency.
Based on an analysis of the features of the grid-based clustering method CLIQUE (clustering in quest) and the density-based clustering method DBSCAN (density-based spatial clustering of applications with noise), a new clustering algorithm named cooperative clustering based on grid and density (CLGRID) is presented. The new algorithm adopts an equivalent rule for region queries and density unit identification. The central region of a class is computed by the grid-based method and the margin region by the density-based method. By clustering in two phases and using only a small number of seed objects in representative units to expand each cluster, the frequency of region queries can be decreased, and the time cost is consequently reduced. The new algorithm retains the positive features of both grid-based and density-based methods and avoids the difficulty of parameter searching. It can discover clusters of arbitrary shape with high efficiency and is not sensitive to noise. The application of CLGRID to test data sets demonstrates its validity and its higher efficiency compared with traditional DBSCAN with an R-tree.
Finding clusters based on density represents a significant class of clustering algorithms. These methods can discover clusters of various shapes and sizes. The most studied algorithm in this class is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It identifies clusters by grouping densely connected objects into one group and discarding noise objects. It requires two input parameters: epsilon (a fixed neighborhood radius) and MinPts (the lowest number of objects within epsilon). However, it cannot handle clusters of varying densities, since it uses a global value for epsilon. This article proposes an adaptation of DBSCAN that can discover clusters of varied densities while reducing the required input parameters to one: the only user input is MinPts, while epsilon is computed automatically from statistical information of the dataset. The proposed method finds the core distance for each object in the dataset, takes the average of these distances as the first value of epsilon, and finds the clusters satisfying this density level. The remaining unclustered objects are then clustered using a new value of epsilon equal to the average core distance of the unclustered objects. This process continues until all objects have been clustered or the remaining unclustered objects amount to less than 0.006 of the dataset's size. Benchmark datasets were used to evaluate the effectiveness of the proposed method, which produced promising results. Practical experiments demonstrate the outstanding ability of the proposed method to detect clusters of different densities even when there is no separation between them. The accuracy of the method ranges from 92% to 100% on the experimented datasets.
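The iteration described above translates almost directly into code; a sketch using scikit-learn, assuming the 0.006 leftover fraction from the abstract (the authors' implementation details may differ):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan(X, min_pts=5, leftover_frac=0.006):
    """Multi-density DBSCAN: epsilon is set to the average core distance
    (distance to the min_pts-th neighbor) of the still-unclustered points,
    one density level per round, until (almost) everything is clustered."""
    labels = np.full(len(X), -1)
    next_label = 0
    active = np.arange(len(X))
    while len(active) > max(leftover_frac * len(X), min_pts):
        pts = X[active]
        # Core distance of each point = distance to its min_pts-th neighbor.
        nn = NearestNeighbors(n_neighbors=min_pts).fit(pts)
        core_dist = nn.kneighbors(pts)[0][:, -1]
        eps = core_dist.mean()
        sub = DBSCAN(eps=eps, min_samples=min_pts).fit(pts).labels_
        clustered = sub != -1
        if not clustered.any():
            break                        # no denser structure left to find
        labels[active[clustered]] = sub[clustered] + next_label
        next_label = labels.max() + 1
        active = active[~clustered]
    return labels                        # -1 marks the remaining noise
```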
Recurrent event data often arise in biomedical studies, and individuals within a cluster might not be independent. We propose a semiparametric additive rates model for clustered recurrent event data, in which the covariates are assumed to add to the unspecified baseline rate. For inference on the model parameters, estimating equation approaches are developed, and both large- and finite-sample properties of the proposed estimators are established.
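A standard additive rates specification for the recurrent-event counting process N_ij(t) of subject j in cluster i, which we take to be the intended form, is

```latex
\mathbb{E}\{\, dN_{ij}(t) \mid Z_{ij} \,\} \;=\; d\mu_0(t) + \beta^{\top} Z_{ij}\, dt
```

where μ0 is the unspecified baseline mean function and the covariate effects β enter additively rather than multiplicatively as in rate-ratio models.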
The year 2013 is considered the first year of the smart city in China. With the development of informatization and urbanization in China, city diseases (traffic jams, medical problems, and unbalanced education) have become more and more apparent. The smart city is the key to solving these diseases. This paper presents the overall development of smart cities in China in terms of market scale and development stages, technology standards, and industry layout. The paper examines the issues and challenges facing smart city development in China and proposes making policies to support it.
In this study, we analyze Cluster observations of whistler-mode chorus and hiss waves during the event of August 19-21, 2006. Chorus is present outside the plasmasphere, and hiss occurs inside the plasmasphere. Using a recently constructed plasma boundary layer model, we perform a ray-tracing study of the propagation of chorus. Numerical results show that chorus can penetrate into the plasmasphere through the plasma boundary layer, evolving into hiss. The present data analysis and modeling provide further observational support for the previous finding that chorus is the origin of plasmaspheric hiss.
Attempts to determine the characteristics of astronomical objects have been one of the major and vibrant activities in both astronomy and data science. Instead of manual inspection, various automated systems have been invented to satisfy this need, including the classification of light-curve profiles. A specific Kaggle competition, the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC), was launched to gather new ideas for tackling this task using the data set collected by the Large Synoptic Survey Telescope (LSST) project. Almost all proposed methods fall into the supervised family, with the common aim of categorizing each object into one of several pre-defined types. As this challenge focuses on developing a predictive model that is robust when classifying unseen data, those previous attempts similarly suffer from a lack of discriminative features, since the distributions of the training and actual test datasets are largely different. As a result, well-known classification algorithms prove sub-optimal, while more complicated feature extraction techniques may help to slightly boost predictive performance. Given such a burden, this research explores an unsupervised alternative to this difficult quest, in which common classifiers fail to reach the 50% accuracy mark. A clustering technique is exploited to transform the space of the training data, from which a more accurate classifier can be built. In addition to a single-clustering framework that provides accuracy comparable to the front runners of supervised learning, a multiple-clustering alternative is also introduced, with improved performance. In fact, it yields a higher accuracy rate of 58.32%, up from the 51.36% obtained using a simple clustering. For this difficult problem, this is rather good compared with well-known models such as the support vector machine (SVM) at 51.80% and Naive Bayes (NB) at only 2.92%.
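One plausible single-clustering pipeline in the spirit described (the study's exact transformation is not given here) replaces raw features with distances to learned cluster centres before fitting a classifier; all data below are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical light-curve feature matrices and class labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 3, 200)
X_test = rng.normal(size=(50, 8))

def cluster_features(km, X):
    """Transform the feature space into distances to the k cluster centres."""
    return km.transform(X)

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_train)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(cluster_features(km, X_train), y_train)
pred = clf.predict(cluster_features(km, X_test))
```

A multiple-clustering variant would concatenate the transformed features from several clusterings (e.g., different k or different initializations) before classification.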
This paper proposes a distributed dynamic k-medoid clustering algorithm for wireless sensor networks (WSNs), DDKCAWSN. Unlike node-clustering algorithms and protocols for WSNs, the algorithm focuses on clustering the data in the network. By sending the sink clustered data instead of raw data, the algorithm can greatly reduce the size and time of data communication, thereby saving the energy of the nodes in the network and prolonging the system lifetime. Moreover, the algorithm improves the accuracy of the clustered data dynamically by updating the clusters periodically, for example each day. Simulation results demonstrate the effectiveness of our approach on different metrics.
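For reference, a minimal centralized k-medoids step (Voronoi iteration on a precomputed distance matrix); the paper's distributed, dynamic protocol additionally spreads this computation across sensor nodes and re-runs it periodically:

```python
import numpy as np

def k_medoids(D, k, iters=50, seed=0):
    """Voronoi-iteration k-medoids on a precomputed distance matrix D:
    assign points to the nearest medoid, then move each medoid to the
    member minimizing the total in-cluster distance."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        labels = D[:, medoids].argmin(axis=1)
        new = np.array([
            np.flatnonzero(labels == c)[
                D[np.ix_(labels == c, labels == c)].sum(axis=1).argmin()]
            for c in range(k)
        ])
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break                                # converged
        medoids = new
    return labels, medoids

# Example with a Euclidean distance matrix of synthetic sensor readings.
from scipy.spatial.distance import pdist, squareform
X = np.random.default_rng(1).normal(size=(30, 2))
labels, medoids = k_medoids(squareform(pdist(X)), k=3)
```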
Compressional and shear sonic logs (DTC and DTS, respectively) are effective means for determining petrophysical/geomechanical properties. However, the DTS log has limited availability, mainly due to high acquisition costs. This study introduces a hybrid machine learning approach to generating synthetic DTS logs. Five wireline logs, gamma ray (GR), density (RHOB), neutron porosity (NPHI), deep resistivity (Rt), and DTS, are used as input data for three supervised machine learning models: support vector machine for regression (SVR), deep neural network (DNN), and long short-term memory (LSTM). The hybrid machine learning model utilizes two additional techniques. First, as an unsupervised learning approach, data clustering is integrated with the general machine learning models to improve model accuracy. All machine learning models using the data-clustered approach show higher accuracy in predicting target (DTS) values than non-clustered models. Second, particle swarm optimization (PSO) is combined with the models to determine optimal hyperparameters. The PSO algorithm proves time-effective and automated, as it receives feedback from previous computations and is thus able to narrow down candidates for optimal hyperparameters. Compared with previous studies focusing on performance comparisons among machine learning algorithms, this study introduces an advanced approach that further improves performance by integrating the unsupervised learning technique and PSO optimization with the general models. Based on the results of this study, we recommend the hybrid machine learning approach for improving the reliability and efficiency of synthetic log generation.
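A minimal PSO loop over a hyperparameter box, illustrating the feedback mechanism the abstract credits for narrowing candidates; the particle count, inertia, and acceleration constants below are conventional defaults, not the study's settings:

```python
import numpy as np

def pso_minimize(loss, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box: each particle is pulled
    toward its own best position and the swarm's best, reusing feedback from
    previous evaluations to narrow the search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g, pbest_val.min()

# Hypothetical use: tune two hyperparameters of a DTS-prediction model by
# minimizing its cross-validation error (here a toy quadratic stand-in).
best, err = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                         bounds=[(0, 10), (-5, 5)])
```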